A C-based modular pipeline to generate datasets, evaluate sorting algorithms, and visualize their performance in terms of execution time and number of comparisons. It supports automated testing, measurement, processing, and plotting of sorting results.
```
AnalysingAlgorithms
├── generate_dataset.c       # Generates input datasets in dataset/
├── sort.c                   # Implements sorting algorithms (Bubble, Insertion, Merge, Bucket, Heap, etc.)
├── measure_dataset.c        # Runs the sorts on the datasets, records time & comparisons in raw_measurements/
├── process_info.c           # Computes mean & standard deviation into processed_information/
├── plot.gp                  # Gnuplot script to visualize performance metrics
├── test_algorithm.c         # Tests sorting algorithms individually
├── reset.sh / reset.bat     # One-click scripts that run the entire flow
├── dataset/                 # Stores generated input files
├── raw_measurements/        # Stores raw output: time & comparisons for each sort run
├── processed_information/   # Stores processed metrics (mean, standard deviation)
└── graphs/                  # Output directory for plots
```
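As a rough illustration of what `sort.c` records, a sort can report its comparison count through a caller-supplied counter. The function name and signature below are assumptions for illustration, not the actual API in `sort.c`:

```c
#include <stddef.h>

/* Illustrative sketch (not the actual sort.c API): a bubble sort that
   increments a caller-supplied counter on every key comparison, so the
   measurement step can log comparisons alongside execution time. */
void bubble_sort(int *a, size_t n, long *comparisons)
{
    *comparisons = 0;
    for (size_t i = 0; i + 1 < n; i++) {
        for (size_t j = 0; j + 1 < n - i; j++) {
            (*comparisons)++;          /* one key comparison */
            if (a[j] > a[j + 1]) {
                int tmp = a[j];        /* swap out-of-order pair */
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}
```

For an array of length n, this variant always performs n(n-1)/2 comparisons, which is the kind of metric the pipeline aggregates per algorithm.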
You can either run each step manually or use the one-click script.
- On Linux/macOS:

  ```sh
  chmod +x reset.sh
  ./reset.sh
  ```

- On Windows: double-click reset.bat.
This will:
- Generate datasets
- Execute sorting algorithms on them
- Collect timing and comparison metrics
- Compute averages and deviations
- Plot graphs using Gnuplot
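The timing step can be sketched as a thin wrapper around `clock()` from `<time.h>`. This is an assumption about how `measure_dataset.c` measures CPU time, with the standard library's `qsort` standing in for the project's own sorts:

```c
#include <stdlib.h>
#include <time.h>

/* Comparator for qsort: returns <0, 0, or >0 without overflow risk. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Illustrative timing wrapper (assumed, not the actual measure_dataset.c
   code): sort the array once and return elapsed CPU time in seconds. */
double time_qsort(int *data, size_t n)
{
    clock_t start = clock();
    qsort(data, n, sizeof data[0], cmp_int);
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```

Note that `clock()` measures CPU time, not wall-clock time, and its resolution varies by platform, which is one reason the pipeline averages over repeated runs.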
## 📊 Output
- Raw measurements: /raw_measurements
- Averaged metrics: /processed_information
- Plots: /graphs (e.g., time vs algorithm, comparisons vs algorithm)
## 🧪 Test Sorting Algorithms

To test an individual sorting algorithm:
```sh
gcc sort.c test_algorithm.c -o test
./test
```
## 📦 Requirements
- GCC or any compatible C compiler
- Gnuplot (for plotting)