A comprehensive, enterprise-grade framework for automated testing of RESTful APIs with AI-powered constraint mining, multi-agent architecture, advanced caching, and sophisticated UI components. Built with modern Python practices and designed for scalability, maintainability, and extensibility.
- OpenAPI/Swagger Specification Parser: Deep analysis of API specifications with schema extraction and validation
- Multi-Agent Architecture: Modular agent-based system for distributed API testing and analysis
- Constraint Mining: AI-powered extraction of implicit and explicit API constraints using LLM integration
- Contract Testing: Automated validation against OpenAPI specifications
- Schema Validation: Comprehensive request/response schema validation with detailed error reporting
- Test Collection Management: Create, save, execute, and manage collections of API tests
- Advanced Reporting: Detailed test execution reports with metrics, visualizations, and export capabilities
- Extensible Caching System: Multi-tier caching with Memory, File, and Redis support
- Sophisticated Logging: Contextual logging with separate console/file levels and colored output
- Type-Safe Design: Comprehensive Pydantic models with full type safety throughout
- Asynchronous Processing: High-performance async/await patterns for concurrent operations
- Factory Patterns: Flexible component instantiation with dependency injection
- Streamlit GUI: Interactive web-based interface for API exploration and testing
- CLI Tools: Command-line utilities for automation and CI/CD integration
- Component Library: Reusable UI components for custom dashboard creation
- LLM Integration: OpenAI/Google AI integration for intelligent constraint analysis
- Dynamic Test Generation: AI-driven test case creation based on API specifications
- Intelligent Validation: Smart response validation using machine learning patterns
The framework follows a sophisticated multi-layered architecture:
```
┌───────────────────────────────────────────────────────────┐
│                      User Interfaces                      │
│  ┌─────────────┐  ┌─────────────┐  ┌───────────────────┐  │
│  │  Streamlit  │  │  CLI Tools  │  │     Component     │  │
│  │     GUI     │  │             │  │      Library      │  │
│  └─────────────┘  └─────────────┘  └───────────────────┘  │
└───────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────┐
│                      Business Logic                       │
│  ┌─────────────┐  ┌─────────────┐  ┌───────────────────┐  │
│  │  Services   │  │   Agents    │  │       Tools       │  │
│  └─────────────┘  └─────────────┘  └───────────────────┘  │
└───────────────────────────────────────────────────────────┘
┌───────────────────────────────────────────────────────────┐
│                      Infrastructure                       │
│  ┌─────────────┐  ┌─────────────┐  ┌───────────────────┐  │
│  │   Caching   │  │   Logging   │  │     Utilities     │  │
│  │   System    │  │   System    │  │                   │  │
│  └─────────────┘  └─────────────┘  └───────────────────┘  │
└───────────────────────────────────────────────────────────┘
```

- OpenAPIParserTool: Parse and analyze OpenAPI/Swagger specifications
- RestApiCallerTool: Execute HTTP requests with authentication and validation
- CodeExecutorTool: Safe Python code execution with sandboxing
- StaticConstraintMinerTool: Extract constraints from API specifications
- TestCaseGeneratorTool: Generate test cases from specifications
- TestCollectionGeneratorTool: Create comprehensive test collections
- TestSuiteGeneratorTool: Build complete test suites
- TestExecutionReporterTool: Generate detailed execution reports
- TestDataGeneratorTool: Create test data based on schemas
- OperationSequencerTool: Sequence API operations for dependency testing
- RestApiAgent: Coordinate API testing workflows
- SpecLoaderAgent: Manage specification loading and parsing
- TestExecutionService: Orchestrate test execution workflows
- TestCollectionService: Manage test collection lifecycle
- RestApiCallerFactory: Create endpoint-specific API callers
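For illustration, here is a minimal sketch of how endpoint-specific callers might be obtained from the factory. The `create_caller` method is an assumption inferred from the module layout (`utils/rest_api_caller_factory.py`), not a confirmed API; the parser usage mirrors the example later in this README.

```python
# Hypothetical sketch -- RestApiCallerFactory.create_caller is an assumed
# method name, not the framework's confirmed API.
from utils.rest_api_caller_factory import RestApiCallerFactory
from tools.openapi_parser import OpenAPIParserTool
from schemas.tools.openapi_parser import OpenAPIParserRequest

async def build_endpoint_callers():
    # Parse the specification to discover endpoints
    parser = OpenAPIParserTool()
    spec = await parser.execute(
        OpenAPIParserRequest(
            spec_source="data/toolshop/openapi.json",
            source_type="file",
        )
    )

    # One pre-configured caller per endpoint (assumed factory helper)
    factory = RestApiCallerFactory()
    return {
        endpoint.path: factory.create_caller(endpoint)
        for endpoint in spec.endpoints
    }
```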
- Explorer: Interactive API specification browser
- Tester: Test execution and validation interface
- Collections: Test collection management
- Common Components: Reusable UI elements (cards, badges, metrics, etc.)
```
restful-api-testing-framework/
├── src/                                  # Source code
│   ├── agents/                           # Multi-agent system
│   │   ├── rest_api_agent.py             # Main API testing agent
│   │   └── spec_loader/                  # Specification loading agents
│   ├── tools/                            # Core tool implementations
│   │   ├── openapi_parser.py             # OpenAPI specification parser
│   │   ├── rest_api_caller.py            # HTTP request executor
│   │   ├── code_executor.py              # Safe code execution
│   │   ├── static_constraint_miner.py    # Constraint extraction
│   │   ├── test_case_generator.py        # Test case creation
│   │   ├── test_collection_generator.py  # Collection management
│   │   ├── test_suite_generator.py       # Suite generation
│   │   ├── test_execution_reporter.py    # Report generation
│   │   ├── test_data_generator.py        # Test data creation
│   │   ├── operation_sequencer.py        # Operation sequencing
│   │   └── constraint_miner/             # Advanced constraint mining
│   ├── schemas/                          # Type-safe data models
│   │   ├── core/                         # Base schemas
│   │   ├── tools/                        # Tool-specific schemas
│   │   └── test_collection.py            # Test collection models
│   ├── core/                             # Core abstractions
│   │   ├── base_tool.py                  # Tool base class
│   │   ├── base_agent.py                 # Agent base class
│   │   ├── services/                     # Business services
│   │   └── repositories/                 # Data access layer
│   ├── utils/                            # Utility functions
│   │   ├── api_utils.py                  # API helpers
│   │   ├── schema_utils.py               # Schema utilities
│   │   ├── report_utils.py               # Reporting helpers
│   │   ├── llm_utils.py                  # LLM integration
│   │   └── rest_api_caller_factory.py    # Factory patterns
│   ├── ui/                               # User interface components
│   │   ├── components/                   # Reusable UI components
│   │   │   ├── common/                   # Common components
│   │   │   ├── explorer/                 # API explorer components
│   │   │   ├── tester/                   # Testing components
│   │   │   └── collections/              # Collection components
│   │   ├── explorer.py                   # API exploration interface
│   │   ├── tester.py                     # Testing interface
│   │   ├── collections.py                # Collection management
│   │   └── styles.py                     # UI styling
│   ├── common/                           # Common infrastructure
│   │   ├── cache/                        # Caching system
│   │   │   ├── cache_interface.py        # Cache contracts
│   │   │   ├── in_memory_cache.py        # Memory cache
│   │   │   ├── file_cache.py             # File-based cache
│   │   │   ├── redis_cache.py            # Redis cache
│   │   │   ├── cache_factory.py          # Cache factory
│   │   │   └── decorators.py             # Caching decorators
│   │   └── logger/                       # Logging system
│   │       ├── logger_interface.py       # Logger contracts
│   │       ├── standard_logger.py        # Standard logger
│   │       ├── print_logger.py           # Simple logger
│   │       └── logger_factory.py         # Logger factory
│   ├── config/                           # Configuration
│   │   ├── settings.py                   # Application settings
│   │   ├── constants.py                  # Constants
│   │   └── prompts/                      # AI prompt templates
│   ├── main.py                           # Main application entry
│   ├── api_test_gui.py                   # Streamlit GUI application
│   ├── openapi_parser_tool.py            # Parser demo
│   ├── rest_api_caller_tool.py           # API caller demo
│   ├── code_executor_tool.py             # Code execution demo
│   ├── constraint_miner_tool.py          # Constraint mining demo
│   ├── test_case_generator_tool.py       # Test generation demo
│   ├── api_test_runner.py                # Test runner demo
│   ├── cache_demo.py                     # Caching system demo
│   └── logger_demo.py                    # Logging system demo
├── data/                                 # Sample data and specifications
│   ├── RBCTest_dataset/                  # Research datasets
│   ├── toolshop/                         # Toolshop API examples
│   ├── example/                          # Example specifications
│   └── scripts/                          # Test scripts
├── output/                               # Generated outputs
├── docs/                                 # Documentation
│   └── Architecture.md                   # Architecture documentation
├── requirements.txt                      # Python dependencies
└── README.md                             # This file
```

Prerequisites:
- Python 3.8+
- pip package manager
- Git
Installation:

```bash
# Clone the repository
git clone https://github.com/your-username/restful-api-testing-framework.git
cd restful-api-testing-framework

# Create and activate a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

- requests: HTTP library for API calls
- pydantic: Data validation and settings management
- asyncio: Asynchronous programming support (Python standard library)
- pathlib: Modern path handling (Python standard library)
- streamlit: Web-based GUI framework
- plotly: Interactive visualizations
- pandas: Data manipulation and analysis
- google-adk: Google AI integration
- openai: OpenAI API integration (optional)
- redis: Redis cache support (optional)
- pytest: Testing framework
- httpx: Async HTTP client for testing
Parse and analyze OpenAPI/Swagger specifications:
```python
from tools.openapi_parser import OpenAPIParserTool
from schemas.tools.openapi_parser import OpenAPIParserRequest

# Initialize parser
parser = OpenAPIParserTool()

# Parse specification
request = OpenAPIParserRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file"
)
result = await parser.execute(request)

print(f"Found {len(result.endpoints)} endpoints")
print(f"API Info: {result.api_info.title} v{result.api_info.version}")
```

CLI Usage:
```bash
python src/openapi_parser_tool.py
```

Execute HTTP requests with advanced features:
```python
from tools.rest_api_caller import RestApiCallerTool
from schemas.tools.rest_api_caller import RestApiCallerRequest

# Initialize caller
caller = RestApiCallerTool()

# Make API call
request = RestApiCallerRequest(
    method="GET",
    url="https://api.example.com/users",
    headers={"Authorization": "Bearer token"},
    params={"page": 1, "limit": 10}
)
result = await caller.execute(request)

print(f"Status: {result.status_code}")
print(f"Response: {result.response_data}")
```

CLI Usage:
```bash
python src/rest_api_caller_tool.py
```

Extract constraints from API specifications using AI:
```python
from tools.static_constraint_miner import StaticConstraintMinerTool
from schemas.tools.constraint_miner import ConstraintMinerRequest

# Initialize constraint miner
miner = StaticConstraintMinerTool()

# Mine constraints
request = ConstraintMinerRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file",
    enable_llm=True
)
result = await miner.execute(request)

print(f"Found {len(result.constraints)} constraints")
for constraint in result.constraints:
    print(f"- {constraint.type}: {constraint.description}")
```

CLI Usage:
```bash
python src/constraint_miner_tool.py --spec data/toolshop/openapi.json
```

Generate comprehensive test collections:
```python
from tools.test_collection_generator import TestCollectionGeneratorTool
from schemas.tools.test_collection_generator import TestCollectionGeneratorRequest

# Initialize generator
generator = TestCollectionGeneratorTool()

# Generate test collection
request = TestCollectionGeneratorRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file",
    collection_name="Toolshop API Tests",
    include_positive_tests=True,
    include_negative_tests=True,
    include_edge_cases=True
)
result = await generator.execute(request)

print(f"Generated collection with {len(result.test_collection.test_cases)} test cases")
```

Execute Python code safely with context:
```python
from tools.code_executor import CodeExecutorTool
from schemas.tools.code_executor import CodeExecutorRequest

# Initialize executor
executor = CodeExecutorTool()

# Execute validation code (api_response comes from a prior API call)
request = CodeExecutorRequest(
    code="""
# Validate API response
assert response.status_code == 200
assert 'users' in response.json()
assert len(response.json()['users']) > 0
result = {'validation': 'passed', 'user_count': len(response.json()['users'])}
""",
    context={"response": api_response},
    timeout=10
)
result = await executor.execute(request)

print(f"Execution result: {result.result}")
```

Utilize the advanced caching system:
```python
from common.cache import CacheFactory, CacheType, cache_result

# Get cache instance
cache = CacheFactory.get_cache("api-cache", CacheType.MEMORY)

# Basic cache operations
cache.set("user:123", {"name": "John", "email": "john@example.com"}, ttl=300)
user_data = cache.get("user:123")

# Use caching decorators
@cache_result(ttl=300, cache_type=CacheType.MEMORY)
def expensive_api_call(endpoint: str):
    # Expensive operation
    return make_api_call(endpoint)

# The function result will be cached
result = expensive_api_call("/api/v1/users")
```

Cache Demo:
```bash
python src/cache_demo.py
```

Implement sophisticated logging:
```python
from common.logger import LoggerFactory, LoggerType, LogLevel

# Get logger instance
logger = LoggerFactory.get_logger(
    name="api-testing",
    logger_type=LoggerType.STANDARD,
    console_level=LogLevel.INFO,
    file_level=LogLevel.DEBUG,
    log_file="logs/api_tests.log"
)

# Add context
logger.add_context(test_suite="user_management", endpoint="/api/v1/users")

# Log with context
logger.info("Starting API test execution")
logger.debug("Request headers prepared")
logger.warning("Rate limit approaching")
logger.error("API call failed with 500 status")
```

Logging Demo:
```bash
python src/logger_demo.py
```

Launch the interactive web interface:
```bash
streamlit run src/api_test_gui.py
```

Features:
- API Explorer: Browse and analyze OpenAPI specifications
- Test Builder: Create and configure test cases
- Test Execution: Run tests and view real-time results
- Collection Management: Organize and manage test collections
- Reporting Dashboard: View detailed test reports and metrics
Coordinate complex testing workflows:
```python
from agents.rest_api_agent import RestApiAgent
from agents.spec_loader.agent import SpecLoaderAgent

# Initialize agents
spec_agent = SpecLoaderAgent()
api_agent = RestApiAgent()

# Load specification
spec_result = await spec_agent.load_specification("data/toolshop/openapi.json")

# Coordinate API testing (test_config defined elsewhere)
test_result = await api_agent.execute_test_suite(
    specification=spec_result.specification,
    test_configuration=test_config
)
```

Execute complex testing scenarios:
```python
from core.services.test_execution_service import TestExecutionService
from schemas.test_collection import TestCollection

# Initialize service
test_service = TestExecutionService()

# Load test collection
collection = TestCollection.load_from_file("collections/user_management.json")

# Execute with advanced options
execution_result = await test_service.execute_collection(
    collection=collection,
    parallel_execution=True,
    max_workers=5,
    retry_failed_tests=True,
    generate_report=True
)

print("Execution Summary:")
print(f"- Total Tests: {execution_result.total_tests}")
print(f"- Passed: {execution_result.passed_tests}")
print(f"- Failed: {execution_result.failed_tests}")
print(f"- Execution Time: {execution_result.execution_time}s")
```

The framework includes comprehensive demonstration scripts:
```bash
# OpenAPI parser demonstration
python src/openapi_parser_tool.py

# API caller with authentication
python src/rest_api_caller_tool.py

# Constraint mining with AI
python src/constraint_miner_tool.py

# Test case generation
python src/test_case_generator_tool.py

# Complete test runner workflow
python src/api_test_runner.py

# Caching system capabilities
python src/cache_demo.py

# Logging system features
python src/logger_demo.py
```

Application settings are defined in `config/settings.py`:

```python
# config/settings.py
class Settings:
    # API Configuration
    DEFAULT_TIMEOUT = 30
    MAX_RETRIES = 3

    # Cache Configuration
    CACHE_TYPE = "memory"
    CACHE_TTL = 300

    # Logging Configuration
    LOG_LEVEL = "INFO"
    LOG_FILE = "logs/app.log"

    # AI Configuration
    OPENAI_API_KEY = "your-api-key"
    ENABLE_LLM_FEATURES = True
```

Secrets and overrides can be supplied via a `.env` file:

```env
# .env file
OPENAI_API_KEY=your-openai-api-key
GOOGLE_AI_API_KEY=your-google-ai-key
REDIS_URL=redis://localhost:6379/0
LOG_LEVEL=DEBUG
CACHE_TYPE=redis
```

Run the test suite:
```bash
# Install test dependencies
pip install pytest pytest-asyncio httpx

# Run all tests
pytest tests/

# Run specific test categories
pytest tests/test_tools.py
pytest tests/test_caching.py
pytest tests/test_logging.py

# Run with coverage
pytest --cov=src tests/
```

The caching system offers three tiers:
- Memory Cache: Ultra-fast for frequently accessed data
- File Cache: Persistent storage for large datasets
- Redis Cache: Distributed caching for multi-instance deployments
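A minimal sketch of selecting each tier through the `CacheFactory` shown earlier. The `CacheType.FILE` and `CacheType.REDIS` members and the `cache_dir`/`redis_url` keywords are assumptions inferred from the module layout (`file_cache.py`, `redis_cache.py`), not a confirmed API:

```python
from common.cache import CacheFactory, CacheType

# In-memory cache: fastest, but scoped to the current process
memory_cache = CacheFactory.get_cache("hot-data", CacheType.MEMORY)

# File cache: persists across runs (assumed FILE member and cache_dir kwarg)
file_cache = CacheFactory.get_cache(
    "large-datasets", CacheType.FILE, cache_dir="output/cache"
)

# Redis cache: shared across instances (assumed REDIS member and redis_url kwarg)
redis_cache = CacheFactory.get_cache(
    "shared", CacheType.REDIS, redis_url="redis://localhost:6379/0"
)

# All tiers expose the same get/set interface shown earlier
file_cache.set("spec:toolshop", {"endpoints": 42}, ttl=3600)
print(file_cache.get("spec:toolshop"))
```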
Performance optimizations include:
- Non-blocking API calls with `asyncio`
- Concurrent test execution with worker pools (see the sketch after this list)
- Streaming responses for large datasets
- Automatic cleanup of temporary files
- Memory usage monitoring and optimization
- Connection pooling for HTTP requests
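As a self-contained illustration of the worker-pool pattern, here is a standard-library `asyncio` sketch; this is not the framework's internal implementation:

```python
import asyncio

async def run_with_worker_pool(test_cases, max_workers=5):
    """Run coroutine-based test cases concurrently, bounded by a semaphore."""
    semaphore = asyncio.Semaphore(max_workers)

    async def run_one(test_case):
        async with semaphore:  # at most max_workers tests in flight
            return await test_case()

    return await asyncio.gather(*(run_one(tc) for tc in test_cases))

async def main():
    # Example: five dummy "tests" executed at most three at a time
    async def dummy_test():
        await asyncio.sleep(0.1)
        return "passed"

    results = await run_with_worker_pool([dummy_test] * 5, max_workers=3)
    print(results)

asyncio.run(main())
```

The semaphore caps in-flight coroutines, which bounds concurrency without managing explicit worker threads.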
We welcome contributions! Please see our contributing guidelines:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Install pre-commit hooks
pre-commit install

# Run linting
flake8 src/
black src/
mypy src/
```

This project is licensed under the MIT License; see the LICENSE file for details.
- Built with modern Python best practices
- Inspired by enterprise testing frameworks
- Powered by OpenAI and Google AI technologies
- Streamlit for beautiful web interfaces
For questions, issues, or contributions:
- Email: support@api-testing-framework.com
- Issues: GitHub Issues
- Documentation: Wiki
- Discussions: GitHub Discussions
Happy API Testing!