-
This is a crazy amount of work and a crazy result. Is anyone familiar with tools that would guard against ending up in a situation like this? Google's Include What You Use comes to mind, but I don't know of anything else.
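For anyone who hasn't used it: include-what-you-use analyzes each file and reports symbols that are used but only reachable through transitive includes. Here is a hypothetical header showing the kind of fragility it flags (whether `<map>` actually pulls in `<string>` varies by standard library implementation, which is exactly the problem):

```cpp
// widget.h -- hypothetical example of what IWYU would flag.
#pragma once

#include <map>
// NOTE: <string> is never included directly. std::string below only
// compiles if <map> happens to drag <string> in transitively, which
// differs between standard library implementations and versions.
// IWYU would report (paraphrased): "widget.h should add: #include <string>"

struct Widget {
    std::string name;                 // really needs <string>
    std::map<int, std::string> tags;  // needs <map> and <string>
};
```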
-
The older I get, the more I think #include in public headers needs a whitelisted-regex git push filter, where the whitelist of permitted includes is small and excludes most of the standard library. https://github.com/ned14/stl-header-heft, after all.
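A minimal sketch of such a filter, assuming it runs in a pre-push hook over the public include directory (the allowlist contents and the .hpp extension are placeholders, not anything prescribed by stl-header-heft):

```cpp
// check_includes.cpp -- hypothetical include-allowlist checker.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <regex>
#include <set>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: check_includes <public-include-dir>\n";
        return 2;
    }
    // Deliberately tiny allowlist; most of the standard library is absent.
    const std::set<std::string> allowed = {"cstddef", "cstdint", "type_traits"};
    const std::regex include_re{R"(^\s*#\s*include\s*[<"]([^">]+)[">])"};

    int violations = 0;
    for (const auto& entry :
         std::filesystem::recursive_directory_iterator(argv[1])) {
        if (entry.path().extension() != ".hpp") continue;
        std::ifstream in(entry.path());
        std::string line;
        for (int lineno = 1; std::getline(in, line); ++lineno) {
            std::smatch m;
            if (std::regex_search(line, m, include_re) &&
                allowed.count(m[1].str()) == 0) {
                std::cerr << entry.path().string() << ':' << lineno
                          << ": disallowed include '" << m[1].str() << "'\n";
                ++violations;
            }
        }
    }
    return violations == 0 ? 0 : 1;  // non-zero exit blocks the push
}
```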
-
C++ modules help with the parsing problem, much like precompiled headers, but they don't help with the problem of code execution at compile time. All your overload matching, free function lookup, SFINAE, concept matching, and consteval code needs executing, and that can take very considerable time. Other than JITing all that stuff, and maybe running an in-memory server like https://github.com/yrnkrn/zapcc, I don't know what more can be done here.
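To make the "code execution at compile time" point concrete, here is a toy example (mine, not from the thread): the compiler itself must run this recursion during translation before it can emit a single constant, and the program does no work at all at runtime:

```cpp
// consteval_cost.cpp -- toy illustration of compile-time execution cost.
#include <cstdio>

// consteval: this can ONLY run inside the compiler, never at runtime.
consteval long fib(int n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);  // deliberately unmemoized
}

int main() {
    // The compiler performs roughly 400k recursive calls here and then
    // emits one integer constant. Much larger arguments start hitting
    // constexpr step/depth limits (e.g. Clang's -fconstexpr-steps).
    constexpr long f = fib(26);
    std::printf("fib(26) = %ld\n", f);
}
```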
-
https://github.com/aras-p/ClangBuildAnalyzer is a very useful tool to quantify the cost of different headers (and other costly parts of the compile such as template instantiations). It doesn’t help with actually fixing such problems, but it’s a pretty good ruler to measure where the time is spent.
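For reference, the workflow is roughly: build with Clang's -ftime-trace, then aggregate and analyze the resulting JSON traces. The invocation in the comments below follows the project's README as I remember it, so verify it there; the translation unit is a contrived example meant to show up in the report:

```cpp
// slow_tu.cpp -- a deliberately expensive TU to see in the report.
//
// Sketch of the workflow (verify against the ClangBuildAnalyzer README):
//   clang++ -std=c++17 -ftime-trace -c slow_tu.cpp   # writes slow_tu.json
//   ClangBuildAnalyzer --all . capture.bin           # gather traces in this dir
//   ClangBuildAnalyzer --analyze capture.bin         # headers, instantiations...
//
#include <map>
#include <string>
#include <vector>

// A 64-deep instantiation chain that shows up under "templates that took
// longest to instantiate", alongside the per-header cost of <map> et al.
template <int N>
struct Deep {
    Deep<N - 1> child;                             // forces recursive instantiation
    std::vector<std::map<int, std::string>> data;  // drags in the heavy headers
};
template <>
struct Deep<0> {};

Deep<64> root;
```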
Related posts
- IRHash: Efficient Multi-Language Compiler Caching by IR-Level Hashing
- In 10 years, Clang has become 2x slower, but generates code that is 10-20% faster
- Zapcc: A caching C++ compiler based on Clang
- Distcc – distribute builds across multiple machines simultaneously
- We need to seriously think about what to do with C++ modules