Large-scale, long-term complexity analysis #254
bartveneman started this conversation in Ideas
I'd like to write a deep-dive article on the complexity of CSS: how it has changed over the years, and how we can use this analysis to inform spec writers and browser makers about whether their implementations are actually making CSS easier to write and maintain.
🪆 What?
With several thousand people tapping into Project Wallace to view their CSS Analytics, I think it's time to take it a step further.
What if we could tap into the HTTP Archive for historical data and analyze CSS complexity over the last 10 years? I know this is already done every year by the Web Almanac, but it doesn't seem to focus on overall CSS complexity, apart from a brief chapter about specificity.
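As a starting point, the raw stylesheet bodies could be pulled straight out of HTTP Archive's public BigQuery dataset. Here's a minimal sketch in TypeScript; the table and column names are assumptions based on the legacy `response_bodies` layout, so the current schema should be verified in the HTTP Archive docs before running (and note that scanning these tables incurs real query costs):

```ts
import { BigQuery } from "@google-cloud/bigquery";

const bigquery = new BigQuery();

// Fetch a sample of CSS response bodies from a single crawl.
// NOTE: the table and column names below are assumptions based on the
// legacy `response_bodies` layout; check the current HTTP Archive schema.
async function fetchCssBodies(): Promise<string[]> {
  const [rows] = await bigquery.query({
    query: `
      SELECT body
      FROM \`httparchive.response_bodies.2022_06_01_desktop\`
      WHERE url LIKE '%.css'
      LIMIT 100
    `,
  });
  return rows.map((row: { body: string }) => row.body);
}
```

Running the same query per crawl date would give the longitudinal series this proposal is after.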
My proposal is to create a place (in the Web Almanac, projectwallace.com, web.dev, etc.) and start digging into complexity breakdowns of stylesheets. Let's see what we can learn about average declarations per ruleset, complexity per selector, selectors per ruleset, vendor prefix usage, browser hacks in selectors/properties/values, and so on.
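To make a few of these metrics concrete, here is a minimal sketch using PostCSS to parse (Project Wallace's own analyzer already computes metrics like these; this is only to illustrate the shape of the numbers, and plain averages stand in for the distributions a real analysis would want):

```ts
import postcss from "postcss";

// Compute a few of the proposed complexity metrics for one stylesheet.
export function rulesetMetrics(css: string) {
  const root = postcss.parse(css);
  let rulesets = 0;
  let declarations = 0;
  let selectors = 0;
  let vendorPrefixed = 0;

  root.walkRules((rule) => {
    rulesets++;
    selectors += rule.selectors.length; // `.selectors` splits the selector list on commas
    rule.walkDecls((decl) => {
      declarations++;
      if (decl.prop.startsWith("-")) vendorPrefixed++; // -webkit-, -moz-, -ms-, ...
    });
  });

  return {
    avgDeclarationsPerRuleset: rulesets ? declarations / rulesets : 0,
    avgSelectorsPerRuleset: rulesets ? selectors / rulesets : 0,
    vendorPrefixShare: declarations ? vendorPrefixed / declarations : 0,
  };
}
```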
🧪 Why?
We can use these results to show spec writers and browser makers what hoops authors are jumping through to build their layouts. New specs for scoping, layers, nesting, etc. are emerging, so it would be interesting to see if and how developers are adopting these new methods, and to verify how their usage affects complexity. Think of it as a Plan-Do-Check-Act cycle, where we want to Check that new specs actually helped fix some kind of complexity issue.
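For the Check step, adoption of the newer features can be counted directly from parsed stylesheets. A rough sketch, again with PostCSS (what counts as "adoption" here is my assumption, and per-page weighting and deduplication are left out):

```ts
import postcss from "postcss";

// Count usage of newer CSS features in one stylesheet, so adoption can be
// tracked across successive HTTP Archive crawls.
export function featureUsage(css: string) {
  const counts = { layer: 0, scope: 0, supports: 0, property: 0, container: 0 };
  const root = postcss.parse(css);

  root.walkAtRules((atRule) => {
    const name = atRule.name.toLowerCase() as keyof typeof counts;
    if (name in counts) counts[name]++;
  });

  // Native nesting: a style rule whose parent is another rule, not the root.
  let nestedRules = 0;
  root.walkRules((rule) => {
    if (rule.parent?.type === "rule") nestedRules++;
  });

  return { ...counts, nestedRules };
}
```

Correlating these counts with the complexity metrics above, crawl over crawl, is what would let us say whether a new spec actually reduced complexity in the wild.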
📋 Specific tasks
- See how new CSS features (@supports, @property, custom properties, etc.) have influenced complexity

🏆 Success factors
When will this project be successful?