Commit ed516a1
Commit message: pre-commit
1 parent 336fcfa

24 files changed: +211 / -83 lines changed
The commit adds a new CLAUDE.md developer guide and applies pre-commit whitespace cleanup (trailing whitespace and end-of-file newlines) across the workflow, example, and README files below, which is why most changed lines look identical on both sides of the diff.

.github/workflows/test.yml
Lines changed: 3 additions & 3 deletions

@@ -84,7 +84,7 @@ jobs:
 run: python -m pip list

 - name: Test core package
-run: |
+run: |
 python -m pytest src/hyperactive -p no:warnings

 test-all-extras:

@@ -155,7 +155,7 @@ jobs:
 name: test-examples
 runs-on: ubuntu-latest
 timeout-minutes: 15
-
+
 steps:
 - uses: actions/checkout@v4
 with:

@@ -176,7 +176,7 @@ jobs:
 # For pull requests, compare with base branch
 CHANGED_FILES=$(git diff --name-only ${{ github.event.pull_request.base.sha }} ${{ github.sha }} | grep "^examples/" || true)
 fi
-
+
 if [ -n "$CHANGED_FILES" ]; then
 echo "examples_changed=true" >> $GITHUB_OUTPUT
 echo "Examples changed:"

CLAUDE.md
Lines changed: 128 additions & 0 deletions

@@ -0,0 +1,128 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

Hyperactive is an optimization and data collection toolbox for convenient and fast prototyping of computationally expensive models. It provides various optimization algorithms, including random search, grid search, Bayesian optimization, and many others, for hyperparameter tuning and general optimization problems.

## Architecture

### Core Components

- **`src/hyperactive/`** - Main package with src-layout structure
  - **`base/`** - Base classes for experiments and optimizers using the skbase framework
  - **`opt/`** - Optimization algorithms organized by backend:
    - **`gfo/`** - Gradient-Free-Optimizers backend algorithms
    - **`optuna/`** - Optuna backend integration
    - **`gridsearch/`** - Scikit-learn grid search integration
  - **`integrations/`** - Framework integrations (sklearn, sktime)
  - **`experiment/`** - Experiment definitions and benchmark functions
  - **`utils/`** - Utility functions for parallel computing and validation

### Key Design Patterns

- Uses the **skbase** framework for base classes, with a tagging system for metadata (sketched below)
- **Plugin architecture** - optimizers and experiments inherit from base classes
- **Multiple backends** - supports GFO, Optuna, and sklearn optimization backends
- **Integration layer** - separate modules for ML framework integrations
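
As a rough illustration of the tagging pattern, the snippet below uses `skbase` directly; the class name and tag keys are made up for illustration and are not taken from the hyperactive code base (its own tags live in `src/hyperactive/base/`):

```python
from skbase.base import BaseObject


class TaggedOptimizer(BaseObject):
    """Illustrative object that carries metadata via skbase class-level tags."""

    # hypothetical tag keys -- hyperactive defines its own tag vocabulary
    _tags = {
        "info:name": "TaggedOptimizer",
        "python_dependencies": None,
    }

    def __init__(self, n_trials=50):
        self.n_trials = n_trials  # skbase convention: store params unchanged
        super().__init__()


opt = TaggedOptimizer(n_trials=100)
print(opt.get_tag("info:name"))  # tags are queryable metadata on every object
```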

## Development Commands

### Build and Install
```bash
make build             # Build the package using python -m build
make install           # Install from built wheel
make install-editable  # Install in development mode
make reinstall         # Uninstall and reinstall
```

### Testing
```bash
make test            # Run all tests (src, pytest, local)
make test-pytest     # Run main test suite with pytest
make test-src        # Run tests within src/hyperactive/
make test-extensive  # Run comprehensive tests including examples
make test-examples   # Test all example files
```

### Code Quality
```bash
make lint          # Check code with ruff linter
make lint-fix      # Auto-fix linting issues
make format        # Format code with ruff
make format-check  # Check formatting without changes
make check         # Run all quality checks (lint + format + imports)
make fix           # Fix all auto-fixable issues
```

### Dependencies
```bash
make install-test-requirements    # Install test dependencies
make install-all-extras-for-test  # Install all optional dependencies for testing
```

## Testing Structure

- **`tests/`** - Main test directory (currently minimal, mostly in src/)
- **`src/hyperactive/*/tests/`** - Component-specific tests within source code
- **`examples/test_examples.py`** - Example validation tests
- Uses **pytest** as the test runner
- Test configuration in `pyproject.toml` under `[tool.test]` dependencies

## Key Configuration Files

- **`pyproject.toml`** - Modern Python packaging configuration with dependencies, build system, and ruff configuration
- **`Makefile`** - Comprehensive build, test, and quality commands
- **`.github/workflows/`** - CI/CD workflows for automated testing

## Code Style and Linting

- Uses **ruff** for linting and formatting (configured in pyproject.toml)
- Follows the **numpy docstring convention**
- Python 3.9+ support
- Line length: 88 characters
- Import sorting with ruff's isort functionality

## Dependencies

### Core Dependencies
- `numpy`, `pandas` - Data handling
- `tqdm` - Progress bars
- `gradient-free-optimizers` - Main optimization backend
- `scikit-base` - Base class framework
- `scikit-learn` - ML integration

### Optional Dependencies
- `sklearn-integration` - Enhanced sklearn support
- `sktime-integration` - Time series forecasting
- `test` - Testing dependencies (pytest, flake8, etc.)
- `all_extras` - All optional features

## Examples Structure

- **`examples/gfo/`** - Gradient-Free-Optimizers examples
- **`examples/optuna/`** - Optuna backend examples
- **`examples/sklearn/`** - Scikit-learn integration examples
- Each example directory has its own README.md

## Common Development Tasks

### Adding New Optimizers
1. Create the optimizer class in the appropriate `opt/` subdirectory
2. Inherit from `BaseOptimizer` and implement the required methods (see the skeleton sketched below)
3. Add appropriate tags for metadata
4. Register in `_registry/` if needed
5. Add tests in the corresponding `tests/` directory
6. Add a usage example in `examples/`
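
A hypothetical skeleton for steps 2 and 3; the import path and the name of the method to override are assumptions made for illustration, so check the actual abstract interface in `src/hyperactive/base/` before using this as a template:

```python
from hyperactive.base import BaseOptimizer  # assumed import path


class CoordinateSearch(BaseOptimizer):
    """Illustrative optimizer skeleton -- not part of the package."""

    _tags = {
        "info:name": "CoordinateSearch",  # step 3: metadata tags (key is made up)
    }

    def __init__(self, n_trials=100, step_size=1):
        self.n_trials = n_trials
        self.step_size = step_size
        super().__init__()

    def _solve(self, experiment):  # step 2: method name is an assumption
        """Run the search loop against the experiment's score function."""
        raise NotImplementedError
```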

### Running Single Tests
```bash
python -m pytest tests/specific_test.py -v
python -m pytest src/hyperactive/component/tests/test_file.py
```

### Version Management
- Version defined in `pyproject.toml`
- Uses `importlib.metadata` for runtime version access (see the snippet below)
- Current version: 4.8.1
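
For example, a runtime lookup can be done as follows (assuming the installed distribution is named `hyperactive`):

```python
from importlib.metadata import version

print(version("hyperactive"))  # expected to print "4.8.1" at this commit
```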

examples/gfo/README.md
Lines changed: 8 additions & 8 deletions

@@ -44,7 +44,7 @@ python bayesian_optimization_example.py
 - **RandomSearch**: Pure random sampling - excellent baseline
 - **GridSearch**: Exhaustive enumeration of discrete grids

-### Local Search Methods
+### Local Search Methods
 - **HillClimbing**: Greedy local search from starting points
 - **SimulatedAnnealing**: Hill climbing with probabilistic escapes
 - **StochasticHillClimbing**: Hill climbing with random moves

@@ -55,7 +55,7 @@ python bayesian_optimization_example.py
 - **DifferentialEvolution**: Evolution strategy for continuous spaces

 ### Advanced Methods
-- **BayesianOptimizer**: Gaussian Process-based optimization
+- **BayesianOptimizer**: Gaussian Process-based optimization
 - **TreeStructuredParzenEstimators**: TPE algorithm (similar to Optuna's TPE)
 - **ForestOptimizer**: Random forest-based surrogate optimization

@@ -64,7 +64,7 @@ python bayesian_optimization_example.py
 ### Quick Decision Tree

 1. **Need a baseline?** → RandomSearch
-2. **Small discrete space?** → GridSearch
+2. **Small discrete space?** → GridSearch
 3. **Expensive evaluations?** → BayesianOptimizer
 4. **Continuous optimization?** → ParticleSwarmOptimizer or BayesianOptimizer
 5. **Fast local optimization?** → HillClimbing

@@ -92,7 +92,7 @@ python bayesian_optimization_example.py
 All GFO algorithms support:
 - **Random state**: `random_state=42` for reproducibility
 - **Early stopping**: `early_stopping=10` trials without improvement
-- **Max score**: `max_score=0.99` stop when target reached
+- **Max score**: `max_score=0.99` stop when target reached
 - **Warm start**: `initialize={"warm_start": [points]}` initial solutions

 ## Advanced Usage

@@ -103,7 +103,7 @@ All GFO algorithms support:
 random_opt = RandomSearch(n_trials=20, ...)
 initial_results = random_opt.solve()

-# Phase 2: Local refinement
+# Phase 2: Local refinement
 hill_opt = HillClimbing(
     n_trials=30,
     initialize={"warm_start": [initial_results]}

@@ -117,13 +117,13 @@ final_results = hill_opt.solve()
 ```python
 param_space = {
     "learning_rate": (0.001, 0.1),  # Log scale recommended
-    "n_estimators": (10, 1000),  # Integer range
+    "n_estimators": (10, 1000),  # Integer range
     "regularization": (0.0, 1.0),  # Bounded continuous
 }
 ```

 **Discrete/Categorical:**
-```python
+```python
 param_space = {
     "algorithm": ["adam", "sgd", "rmsprop"],  # Categorical
     "layers": [1, 2, 3, 4, 5],  # Discrete integers

@@ -144,4 +144,4 @@ param_space = {
 - [Gradient-Free Optimization Overview](https://en.wikipedia.org/wiki/Derivative-free_optimization)
 - [Bayesian Optimization Tutorial](https://arxiv.org/abs/1807.02811)
 - [Evolutionary Algorithms Survey](https://ieeexplore.ieee.org/document/6900297)
-- [Hyperparameter Optimization Review](https://arxiv.org/abs/1502.02127)
+- [Hyperparameter Optimization Review](https://arxiv.org/abs/1502.02127)

examples/gfo/bayesian_optimization_example.py
Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@

 Bayesian optimization uses a probabilistic model (typically Gaussian Process) to
 model the objective function and an acquisition function to decide where to sample
-next. This approach is highly sample-efficient and particularly useful when
+next. This approach is highly sample-efficient and particularly useful when
 function evaluations are expensive.

 Characteristics:

examples/gfo/differential_evolution_example.py
Lines changed: 1 addition & 1 deletion

@@ -56,4 +56,4 @@
 # Results
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
-print("Differential evolution optimization completed successfully")
+print("Differential evolution optimization completed successfully")

examples/gfo/downhill_simplex_example.py
Lines changed: 1 addition & 1 deletion

@@ -61,4 +61,4 @@
 # Results
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
-print("Downhill simplex optimization completed successfully")
+print("Downhill simplex optimization completed successfully")

examples/gfo/evolution_strategy_example.py
Lines changed: 1 addition & 1 deletion

@@ -56,4 +56,4 @@
 # Results
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
-print("Evolution strategy optimization completed successfully")
+print("Evolution strategy optimization completed successfully")

examples/gfo/genetic_algorithm_example.py
Lines changed: 1 addition & 1 deletion

@@ -57,4 +57,4 @@
 # Results
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
-print("Genetic algorithm optimization completed successfully")
+print("Genetic algorithm optimization completed successfully")

examples/gfo/grid_search_example.py
Lines changed: 1 addition & 1 deletion

@@ -62,4 +62,4 @@
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
 print(f"Evaluated {total_combinations} parameter combinations")
-print("Grid search completed successfully")
+print("Grid search completed successfully")

examples/gfo/hill_climbing_example.py
Lines changed: 3 additions & 3 deletions

@@ -1,7 +1,7 @@
 """
 Hill Climbing Example - Local Search Optimization

-Hill climbing is a local search algorithm that starts from a random point and
+Hill climbing is a local search algorithm that starts from a random point and
 iteratively moves to neighboring solutions with better objective values. It's
 a simple but effective optimization strategy that can quickly find good local
 optima, especially when started from reasonable initial points.

@@ -33,7 +33,7 @@
 # Define search space - discrete values for hill climbing
 search_space = {
     "n_estimators": list(range(10, 201)),  # Discrete integer values
-    "max_depth": list(range(1, 21)),  # Discrete integer values
+    "max_depth": list(range(1, 21)),  # Discrete integer values
     "min_samples_split": list(range(2, 21)),  # Discrete integer values
     "min_samples_leaf": list(range(1, 11)),  # Discrete integer values
 }

@@ -60,4 +60,4 @@
 # Results
 print("\n=== Results ===")
 print(f"Best parameters: {best_params}")
-print("Hill climbing optimization completed successfully")
+print("Hill climbing optimization completed successfully")
