At run time, the HTML document is parsed into a DOM tree containing N elements with an average depth of D. The style sheets in use contain a total of S CSS rules.
Styles are applied to each element individually, so there is a direct relationship between N and overall complexity. Note that this can be partially offset by browser optimizations, such as rule caching and sharing computed styles between identical elements. For example, the following list items will all receive the same CSS properties (unless pseudo-classes like :nth-child apply):
<ul class="sample">
  <li>one</li>
  <li>two</li>
  <li>three</li>
</ul>
Selectors are matched right to left against each candidate rule: if the rightmost key does not match a given element, there is no need to process the rest of the selector, and it is discarded immediately. This means the rightmost key should match as few elements as possible. Below, the key selector p matches more elements, including paragraphs outside the target container (the rule will not apply to them, of course, but they still cause extra checks for this particular selector):
.custom-container p {}
.container .custom-paragraph {}
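As a rough sketch of this right-to-left behavior (the tiny DOM, class names, and matcher below are invented for illustration, not real engine code):

```python
class El:
    def __init__(self, tag, classes=(), parent=None):
        self.tag, self.classes, self.parent = tag, set(classes), parent

# A small tree: <div class="custom-container"><p/></div> plus a stray <p>.
root = El("html")
container = El("div", {"custom-container"}, root)
inner_p = El("p", parent=container)
stray_p = El("p", parent=root)

def matches_key(el, key):
    """Check only one compound selector (tag or class) against one element."""
    return key == el.tag or key.lstrip(".") in el.classes

def matches_descendant(el, ancestor_key, key):
    # Right to left: test the rightmost key first; bail out on a mismatch
    # without ever looking at the rest of the selector.
    if not matches_key(el, key):
        return False
    # Only now walk up the tree looking for the ancestor part.
    node = el.parent
    while node is not None:
        if matches_key(node, ancestor_key):
            return True
        node = node.parent
    return False

# ".custom-container p": both <p> elements pass the cheap key test,
# so both trigger the more expensive ancestor walk.
print(matches_descendant(inner_p, ".custom-container", "p"))  # True
print(matches_descendant(stray_p, ".custom-container", "p"))  # False, after walking to the root

# ".container .custom-paragraph": the key fails for both, so each
# element is discarded immediately with no tree traversal at all.
print(matches_descendant(inner_p, ".container", ".custom-paragraph"))  # False
```

The stray paragraph is what makes the first rule relatively expensive: it passes the key check, forces a walk all the way up to html, and only then gets rejected.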
Relationship selectors: a descendant selector may require walking up through as many as D elements. For example, a successful match for .container .content may take only one step if the elements are in a parent-child relationship, but the traversal may have to go all the way up to the html element before it can confirm there is no match and safely discard the rule. The same applies to chained descendant selectors, with some caveats.
On the other hand, the child combinator > , the adjacent-sibling combinator + , and :first-child still require one additional element to be evaluated, but they have an implied depth of one and never require further traversal of the tree.
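To make the cost difference concrete, here is a hypothetical comparison (the DOM shape and step-counting helpers are invented for the example): the descendant combinator may walk up to the root, while the child combinator inspects exactly one parent.

```python
class El:
    def __init__(self, tag, parent=None):
        self.tag, self.parent = tag, parent

# html > body > div > section > p
html = El("html")
body = El("body", html)
div = El("div", body)
section = El("section", div)
p = El("p", section)

def descendant_steps(el, ancestor_tag):
    """Count parent hops until the ancestor matches or we fall off the root."""
    steps, node = 0, el.parent
    while node is not None:
        steps += 1
        if node.tag == ancestor_tag:
            return steps, True
        node = node.parent
    return steps, False

def child_steps(el, parent_tag):
    """Child combinator: exactly one hop, regardless of tree depth."""
    return 1, el.parent is not None and el.parent.tag == parent_tag

print(descendant_steps(p, "article"))  # (4, False): walked all the way past <html>
print(descendant_steps(p, "div"))      # (2, True)
print(child_steps(p, "section"))       # (1, True)
```

The failing descendant match is the worst case: its cost grows with the element's depth, whereas the child combinator is constant-cost whether it matches or not.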
The way pseudo-elements such as :before and :after are defined implies that they are not part of the right-to-left paradigm. The logic assumes that the pseudo-element itself does not exist until a rule indicates it should be inserted before or after the element's content (which, in turn, requires additional DOM manipulation, but no extra computation to match the selector).
I could not find definitive information about pseudo-classes such as :nth-child() or :disabled . Checking an element's state requires additional computation, but from a selector-matching standpoint it would only make sense to exclude them from right-to-left processing.
Given these relationships, the computational complexity of O(N*D*S) should be reduced primarily by minimizing the depth of CSS selectors and addressing point 2 above. This yields significantly larger improvements than merely minimizing the number of CSS rules or HTML elements ^
Shallow, ideally single-level, specific selectors are processed fastest. Google takes this to a whole new level (programmatically, not manually!); for example, the key selectors and most of the rules on the search results page look like:
#gb {}
#gbz, #gbg {}
#gbz {}
#gbg {}
#gbs {}
.gbto #gbs {}
#gbx3, #gbx4 {}
#gbx3 {}
#gbx4 {}
^ - while this is true in terms of rendering-engine performance, there are always additional factors, such as network transfer overhead, DOM parsing, etc.
Sources: 1 2 3 4 5