Most of the answers here focus on selector performance, as if that were the only thing that matters. I will try to cover a few things about sprites (spoiler alert: they are not always a good idea), CSS used values, and the rendering performance of certain properties.
Before I get to the answer, let me get a personal opinion out of the way: I strongly disagree with the stated need for "evidence-based data". It merely makes a performance claim sound credible, while in reality the field of rendering engines is heterogeneous enough to make any such statistical conclusion inaccurate to measure and impractical to adopt or monitor.
As original findings quickly become outdated, I would rather see front-end developers understand the underlying principles and their relative value against maintainability/readability brownie points. After all, premature optimization is the root of all evil ;)
Let's start with selector performance:
Shallow, preferably single-level, specific selectors are processed faster. Explicit performance figures are missing from the original answer, but the key point remains: at runtime, the HTML document is parsed into a DOM tree containing N elements with average depth D, and a total of S CSS rules are applied to it. To lower the computational complexity of O(N*D*S), you should:
Let the right-most key match as few elements as possible - selectors are matched right to left^ when checking a rule's eligibility, so if the right-most key does not match a particular element, there is no need to process the selector any further and it is discarded (see the sketch below these points).
It is commonly accepted that the * selector should be avoided, but this point should be taken further: a "normal" CSS reset actually matches most elements - when this SO page is profiled, the reset is responsible for about 1/3 of all selector matching time, so you may prefer normalize.css instead (still, that only adds up to about 3.5 ms - the point against premature optimization stands strong).
Avoid descendant selectors, as they may require up to ~D elements to be iterated over. This mainly affects mismatch confirmations - for instance, a positive .container .content match may take only one step for elements in a parent-child relationship, but the DOM tree has to be traversed all the way up to html before a negative match can be confirmed.
Minimize the number of DOM elements, as their styles are applied individually (worth noting, this is offset by browser logic such as reference caching and reusing styles from identical elements - for instance, when styling identical siblings).
Remove unused rules, as the browser has to evaluate their applicability for every element it renders. Enough said - the fastest rule is the one that isn't there :)
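To make the selector-shape advice concrete, here is a minimal sketch (the id and class names are made up for illustration):

```css
/* Expensive: the right-most key "a" matches many elements, and each
   candidate forces an upward walk through its ancestors (up to ~D
   steps) before a mismatch can be ruled out. */
#sidebar ul li a { color: #333; }

/* Cheaper: a single-level, specific class selector - the right-most
   (and only) key immediately discards most elements. */
.sidebar-link { color: #333; }
```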
These points will give quantifiable (but, depending on the page, not necessarily perceivable) improvements from a rendering-engine performance standpoint; however, there are always additional factors such as traffic overhead, DOM parsing, and so on.
Next, CSS3 properties performance:
CSS3 brought us (among other things) rounded corners, background gradients and drop-shadow variations - and with them, a truckload of problems. Think about it: by definition, a pre-rendered image performs better than a set of CSS3 rules that has to be rendered first. From the webkit wiki:
Gradients, shadows, and other decorations in CSS should only be used when necessary (for example, when the shape is dynamic based on the content) - otherwise, static images are always faster.
If that is not bad enough, gradients and the like may have to be recalculated on every repaint/reflow event (more details below). Keep this in mind until the majority of users can browse a css3-heavy page like this one without noticeable lag.
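For illustration, the same kind of decoration expressed both ways - the class names and the image path are placeholders:

```css
/* Computed at paint time; the gradient and shadow may have to be
   recalculated on every repaint/reflow of the element. */
.button {
  border-radius: 4px;
  background: linear-gradient(#fdfdfd, #d6d6d6);
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
}

/* Pre-rendered once; the browser only blits an already-decoded bitmap.
   "button-bg.png" is a hypothetical pre-rendered asset. */
.button-static {
  background: url(button-bg.png) no-repeat;
}
```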
Next, sprite performance:
Avoid tall and wide sprites, even if their traffic footprint is relatively small. It is commonly forgotten that a rendering engine cannot work with gif/jpg/png directly - at runtime, all graphical assets are handled as uncompressed bitmaps. At least that is easy to calculate: this sprite's width times height times four bytes per pixel (RGBA) gives 238*1073*4 ≅ 1 MB. Use it on a few elements across several simultaneously open tabs, and it quickly adds up to a significant amount of memory.
A rather extreme case of this was picked up on mozilla webdev, but it is not at all unexpected when questionable practices like diagonal sprites are involved.
An alternative to consider is individual base64-encoded images embedded directly into the CSS.
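A minimal sketch of that approach - the base64 payload here is a truncated placeholder, not a real image:

```css
/* The data URI keeps the asset out of the sprite sheet and avoids an
   extra request; the payload below is illustrative only. */
.icon-search {
  width: 16px;
  height: 16px;
  background-image: url("data:image/png;base64,iVBORw0KGgo...");
}
```

The trade-off is that base64 encoding inflates the byte size by roughly a third, and the image can only be cached as part of the stylesheet.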
Next, reflows and repaints:
It is a misconception that reflow can only be triggered by JS DOM manipulation - in fact, any application of a layout-affecting style will trigger one, affecting the target element, its children, the elements following it, and so on. The only way to prevent unnecessary iterations is to try to avoid rendering dependencies. A straightforward example of this is table rendering:
Tables often require multiple passes before their layout is completely established, because they are one of the rare cases where elements can affect the display of other elements that came before them in the DOM. Imagine a cell at the end of a table with very wide content that forces the entire column to be resized. This is why tables are not rendered progressively in all browsers.
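One way to relax that particular dependency (a sketch, not something from the quoted source) is to opt into the fixed table layout algorithm, where column widths are derived from the table, column and first-row widths rather than from the content of every cell:

```css
/* ".report" is a made-up class for illustration. With table-layout: fixed,
   a very wide cell near the bottom can no longer force the whole column to
   be re-laid out; overflowing content is handled by the overflow rules. */
table.report {
  table-layout: fixed;
  width: 100%;
}
```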
I will make edits if I recall something important that was missed. Some links to finish with:
http://perfectionkills.com/profiling-css-for-fun-and-profit-optimization-notes/
http://jacwright.com/476/runtime-performance-with-css3-vs-images/
https://developers.google.com/speed/docs/best-practices/payload
https://trac.webkit.org/wiki/QtWebKitGraphics
https://blog.mozilla.org/webdev/2009/06/22/use-sprites-wisely/
http://dev.opera.com/articles/view/efficient-javascript/