Have you ever wanted to test and quantify whether your application works better as a static build or a shared build, stripped or unstripped, UPX-packed or not, gcc -O2 or gcc -O3, with a hash or a btree, etc., etc.? If so, then this is for you. There are hundreds of ways to customize an application, but how do we collect, organize, process, and visualize the results of each experiment?
I have been searching for several months for an open source profiling application/framework, similar in concept to Mozilla's Perftastic, with which I can develop/build/test/profile hundreds of implementations of various customization experiments.
Some requirements:
Platform
SUSE32 and SUSE64
Data format
Very flexible, compact, simple, and hierarchical. There are several options here.
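To make the idea concrete, here is a hypothetical sketch of one experiment record in a hierarchical, JSON-serialized form. The schema and field names are my own illustration, not a fixed format:

```python
# Hypothetical example of one experiment record; the schema and
# field names are illustrative only.
import json

record = {
    "experiment": "hash-vs-btree",
    "tags": {"gcc": "4.3.2", "opts": "-O2", "platform": "SUSE64"},
    "tests": {
        "test_0001": {
            "wall_time_s": 1.42,
            "sys_time_s": 0.31,
            "max_rss_kb": 51234,
        },
    },
}

print(json.dumps(record, indent=2))
```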
Data acquisition
Flexible and customizable plugins. There is a lot of data to collect from the application, including performance data from /proc, system time, wall time, CPU usage, memory profile, leaks, valgrind logs, arena fragmentation, I/O, local host data, binary size, etc., as well as some data from the host system. My language of choice for this is Python, and I would develop these plugins to monitor and/or parse the data in all its different formats and store it in the framework's data format.
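As a rough illustration, a /proc-based collection plugin could look like the sketch below. The function names and the sampling scheme are my assumptions, not an existing plugin API:

```python
import time

def read_proc_status(pid):
    """Parse selected memory/thread fields from /proc/<pid>/status."""
    wanted = {"VmRSS", "VmPeak", "Threads"}
    fields = {}
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in wanted:
                fields[key] = value.strip()
    return fields

def collect(pid, interval=1.0, samples=5):
    """Periodically sample /proc and return timestamped records
    ready to be stored in the framework's data format."""
    records = []
    for _ in range(samples):
        records.append({"ts": time.time(), "status": read_proc_status(pid)})
        time.sleep(interval)
    return records
```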
Tagging
All experiments will be tagged with metadata such as the GCC version and compilation flags, platform, host, application parameters, experiment name, build tag, etc.
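For instance, the tag set could be assembled automatically at build time; a minimal sketch (the function and its parameters are hypothetical):

```python
import platform
import subprocess

def build_tags(build_opts, app_params, build_tag):
    """Gather the metadata each experiment is tagged with."""
    gcc_version = subprocess.check_output(
        ["gcc", "-dumpversion"]).decode().strip()
    return {
        "gcc": gcc_version,
        "build_opts": build_opts,        # e.g. "-O3 -static"
        "platform": platform.machine(),  # e.g. "x86_64"
        "host": platform.node(),
        "app_params": app_params,
        "build_tag": build_tag,
    }
```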
Graphing
Historical, comparative, hierarchical, dynamic, and static.
- Application builds are produced by our CI server, which has been releasing new versions of the application several times a day for the past 3 years. That is why we need continuous trending: when we add new features, fix bugs, or change build parameters, we want profiling data to be collected automatically so we can watch the trend. This is where various static graphs come in.
- For analysis, Mozilla's dynamic graphs are great for comparative graphing. It would be great to have comparative graphs between different tags: for example, comparing N build versions, comparing platforms, comparing build options, etc.
- We have a suite of 3K tests. Data will be collected for each test and aggregated across tests: per test, per tagged group, and for the complete regression suite.
- Candidates include RRDTool, Orca, and Graphite (a Graphite sketch follows this list).
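Of those, Graphite is the easiest to feed programmatically: it accepts data points over a simple plaintext socket protocol (`metric value timestamp`, by default on port 2003). A minimal sketch, with a made-up host and metric path:

```python
import socket
import time

def send_to_graphite(metric, value, host="graphite.example.com", port=2003):
    """Push one data point using Graphite's plaintext protocol."""
    line = "%s %f %d\n" % (metric, value, int(time.time()))
    sock = socket.create_connection((host, port))
    try:
        sock.sendall(line.encode())
    finally:
        sock.close()

# e.g. per-build wall time of one test, keyed by build tag:
# send_to_graphite("app.build_1234.test_0001.wall_time_s", 1.42)
```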
Grouping Analysis
- Min
- Max
- Median
- Average
- Standard deviation
- etc.
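These per-group statistics are straightforward to compute in Python; a small sketch over one group's measurements, using the standard `statistics` module (Python 3.4+):

```python
import statistics

def summarize(samples):
    """Grouping statistics for one tagged group of measurements."""
    return {
        "min": min(samples),
        "max": max(samples),
        "median": statistics.median(samples),
        "average": statistics.mean(samples),
        "stddev": statistics.pstdev(samples),  # population std dev
    }

print(summarize([1.42, 1.38, 1.51, 1.47]))
```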
Presentation
All this will be presented and controlled via an application server; Django or TG (TurboGears) would be best.
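In Django terms, the data side of such a server could start as small as two models, one for tagged experiments and one for per-test measurements; a hypothetical sketch:

```python
from django.db import models

class Experiment(models.Model):
    """One profiled build/run, with its tag metadata."""
    build_tag = models.CharField(max_length=64)
    platform = models.CharField(max_length=32)
    gcc_version = models.CharField(max_length=16)
    build_opts = models.CharField(max_length=128)
    created = models.DateTimeField(auto_now_add=True)

class Measurement(models.Model):
    """A single named metric for one test within an experiment."""
    experiment = models.ForeignKey(Experiment, on_delete=models.CASCADE)
    test_name = models.CharField(max_length=128)
    metric = models.CharField(max_length=64)  # e.g. "wall_time_s"
    value = models.FloatField()
```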
Inspiration