RESTful API Testing Framework

A comprehensive, enterprise-grade framework for automated testing of RESTful APIs with AI-powered constraint mining, multi-agent architecture, advanced caching, and sophisticated UI components. Built with modern Python practices and designed for scalability, maintainability, and extensibility.

🚀 Key Features

Core Testing Capabilities

  • OpenAPI/Swagger Specification Parser: Deep analysis of API specifications with schema extraction and validation
  • Multi-Agent Architecture: Modular agent-based system for distributed API testing and analysis
  • Constraint Mining: AI-powered extraction of implicit and explicit API constraints using LLM integration
  • Contract Testing: Automated validation against OpenAPI specifications
  • Schema Validation: Comprehensive request/response schema validation with detailed error reporting
  • Test Collection Management: Create, save, execute, and manage collections of API tests
  • Advanced Reporting: Detailed test execution reports with metrics, visualizations, and export capabilities

Advanced Infrastructure

  • Extensible Caching System: Multi-tier caching with Memory, File, and Redis support
  • Sophisticated Logging: Contextual logging with separate console/file levels and colored output
  • Type-Safe Design: Comprehensive Pydantic models with full type safety throughout
  • Asynchronous Processing: High-performance async/await patterns for concurrent operations
  • Factory Patterns: Flexible component instantiation with dependency injection

User Interfaces

  • Streamlit GUI: Interactive web-based interface for API exploration and testing
  • CLI Tools: Command-line utilities for automation and CI/CD integration
  • Component Library: Reusable UI components for custom dashboard creation

AI-Powered Features

  • LLM Integration: OpenAI/Google AI integration for intelligent constraint analysis
  • Dynamic Test Generation: AI-driven test case creation based on API specifications
  • Intelligent Validation: Smart response validation using machine learning patterns

🏗️ Architecture Overview

The framework follows a sophisticated multi-layered architecture:

Core Architecture Layers

┌────────────────────────────────────────────────────────┐
│                    User Interfaces                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │  Streamlit   │  │  CLI Tools   │  │  Component   │  │
│  │     GUI      │  │              │  │   Library    │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
└────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────┐
│                     Business Logic                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │   Services   │  │    Agents    │  │    Tools     │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
└────────────────────────────────────────────────────────┘
┌────────────────────────────────────────────────────────┐
│                     Infrastructure                     │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │   Caching    │  │   Logging    │  │  Utilities   │  │
│  │    System    │  │    System    │  │              │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
└────────────────────────────────────────────────────────┘

Component Overview

🛠️ Tools (Self-Contained Components)

  • OpenAPIParserTool: Parse and analyze OpenAPI/Swagger specifications
  • RestApiCallerTool: Execute HTTP requests with authentication and validation
  • CodeExecutorTool: Safe Python code execution with sandboxing
  • StaticConstraintMinerTool: Extract constraints from API specifications
  • TestCaseGeneratorTool: Generate test cases from specifications
  • TestCollectionGeneratorTool: Create comprehensive test collections
  • TestSuiteGeneratorTool: Build complete test suites
  • TestExecutionReporterTool: Generate detailed execution reports
  • TestDataGeneratorTool: Create test data based on schemas
  • OperationSequencerTool: Sequence API operations for dependency testing

🤖 Agents (Intelligent Coordinators)

  • RestApiAgent: Coordinate API testing workflows
  • SpecLoaderAgent: Manage specification loading and parsing

⚙️ Services (Business Logic Management)

  • TestExecutionService: Orchestrate test execution workflows
  • TestCollectionService: Manage test collection lifecycle
  • RestApiCallerFactory: Create endpoint-specific API callers

🎨 UI Components

  • Explorer: Interactive API specification browser
  • Tester: Test execution and validation interface
  • Collections: Test collection management
  • Common Components: Reusable UI elements (cards, badges, metrics, etc.)

📁 Project Structure

restful-api-testing-framework/
├── src/                                  # Source code
│   ├── agents/                           # Multi-agent system
│   │   ├── rest_api_agent.py             # Main API testing agent
│   │   └── spec_loader/                  # Specification loading agents
│   ├── tools/                            # Core tool implementations
│   │   ├── openapi_parser.py             # OpenAPI specification parser
│   │   ├── rest_api_caller.py            # HTTP request executor
│   │   ├── code_executor.py              # Safe code execution
│   │   ├── static_constraint_miner.py    # Constraint extraction
│   │   ├── test_case_generator.py        # Test case creation
│   │   ├── test_collection_generator.py  # Collection management
│   │   ├── test_suite_generator.py       # Suite generation
│   │   ├── test_execution_reporter.py    # Report generation
│   │   ├── test_data_generator.py        # Test data creation
│   │   ├── operation_sequencer.py        # Operation sequencing
│   │   └── constraint_miner/             # Advanced constraint mining
│   ├── schemas/                          # Type-safe data models
│   │   ├── core/                         # Base schemas
│   │   ├── tools/                        # Tool-specific schemas
│   │   └── test_collection.py            # Test collection models
│   ├── core/                             # Core abstractions
│   │   ├── base_tool.py                  # Tool base class
│   │   ├── base_agent.py                 # Agent base class
│   │   ├── services/                     # Business services
│   │   └── repositories/                 # Data access layer
│   ├── utils/                            # Utility functions
│   │   ├── api_utils.py                  # API helpers
│   │   ├── schema_utils.py               # Schema utilities
│   │   ├── report_utils.py               # Reporting helpers
│   │   ├── llm_utils.py                  # LLM integration
│   │   └── rest_api_caller_factory.py    # Factory patterns
│   ├── ui/                               # User interface components
│   │   ├── components/                   # Reusable UI components
│   │   │   ├── common/                   # Common components
│   │   │   ├── explorer/                 # API explorer components
│   │   │   ├── tester/                   # Testing components
│   │   │   └── collections/              # Collection components
│   │   ├── explorer.py                   # API exploration interface
│   │   ├── tester.py                     # Testing interface
│   │   ├── collections.py                # Collection management
│   │   └── styles.py                     # UI styling
│   ├── common/                           # Common infrastructure
│   │   ├── cache/                        # Caching system
│   │   │   ├── cache_interface.py        # Cache contracts
│   │   │   ├── in_memory_cache.py        # Memory cache
│   │   │   ├── file_cache.py             # File-based cache
│   │   │   ├── redis_cache.py            # Redis cache
│   │   │   ├── cache_factory.py          # Cache factory
│   │   │   └── decorators.py             # Caching decorators
│   │   └── logger/                       # Logging system
│   │       ├── logger_interface.py       # Logger contracts
│   │       ├── standard_logger.py        # Standard logger
│   │       ├── print_logger.py           # Simple logger
│   │       └── logger_factory.py         # Logger factory
│   ├── config/                           # Configuration
│   │   ├── settings.py                   # Application settings
│   │   ├── constants.py                  # Constants
│   │   └── prompts/                      # AI prompt templates
│   ├── main.py                           # Main application entry
│   ├── api_test_gui.py                   # Streamlit GUI application
│   └── demo scripts/                     # Demonstration tools
│       ├── openapi_parser_tool.py        # Parser demo
│       ├── rest_api_caller_tool.py       # API caller demo
│       ├── code_executor_tool.py         # Code execution demo
│       ├── constraint_miner_tool.py      # Constraint mining demo
│       ├── test_case_generator_tool.py   # Test generation demo
│       ├── api_test_runner.py            # Test runner demo
│       ├── cache_demo.py                 # Caching system demo
│       └── logger_demo.py                # Logging system demo
├── data/                                 # Sample data and specifications
│   ├── RBCTest_dataset/                  # Research datasets
│   ├── toolshop/                         # Toolshop API examples
│   ├── example/                          # Example specifications
│   └── scripts/                          # Test scripts
├── output/                               # Generated outputs
├── docs/                                 # Documentation
│   └── Architecture.md                   # Architecture documentation
├── requirements.txt                      # Python dependencies
└── README.md                             # This file

🛠️ Installation

Prerequisites

  • Python 3.8+
  • pip package manager
  • Git

Quick Setup

# Clone the repository
git clone https://github.com/your-username/restful-api-testing-framework.git
cd restful-api-testing-framework

# Create and activate virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Dependencies

Core Dependencies

  • requests: HTTP library for API calls
  • pydantic: Data validation and settings management
  • asyncio: Asynchronous programming support (Python standard library)
  • pathlib: Modern path handling (Python standard library)

UI Dependencies

  • streamlit: Web-based GUI framework
  • plotly: Interactive visualizations
  • pandas: Data manipulation and analysis

AI/ML Dependencies

  • google-adk: Google AI integration
  • openai: OpenAI API integration (optional)

Caching Dependencies

  • redis: Redis cache support (optional)

Testing Dependencies

  • pytest: Testing framework
  • httpx: Async HTTP client for testing

📖 Usage

1. OpenAPI Specification Parser

Parse and analyze OpenAPI/Swagger specifications:

from tools.openapi_parser import OpenAPIParserTool
from schemas.tools.openapi_parser import OpenAPIParserRequest

# Initialize parser
parser = OpenAPIParserTool()

# Parse specification
request = OpenAPIParserRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file"
)
result = await parser.execute(request)

print(f"Found {len(result.endpoints)} endpoints")
print(f"API Info: {result.api_info.title} v{result.api_info.version}")

CLI Usage:

python src/openapi_parser_tool.py

2. REST API Caller

Execute HTTP requests with advanced features:

from tools.rest_api_caller import RestApiCallerTool
from schemas.tools.rest_api_caller import RestApiCallerRequest

# Initialize caller
caller = RestApiCallerTool()

# Make API call
request = RestApiCallerRequest(
    method="GET",
    url="https://api.example.com/users",
    headers={"Authorization": "Bearer token"},
    params={"page": 1, "limit": 10}
)
result = await caller.execute(request)

print(f"Status: {result.status_code}")
print(f"Response: {result.response_data}")

CLI Usage:

python src/rest_api_caller_tool.py

3. Constraint Mining

Extract constraints from API specifications using AI:

from tools.static_constraint_miner import StaticConstraintMinerTool
from schemas.tools.constraint_miner import ConstraintMinerRequest

# Initialize constraint miner
miner = StaticConstraintMinerTool()

# Mine constraints
request = ConstraintMinerRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file",
    enable_llm=True
)
result = await miner.execute(request)

print(f"Found {len(result.constraints)} constraints")
for constraint in result.constraints:
    print(f"- {constraint.type}: {constraint.description}")

CLI Usage:

python src/constraint_miner_tool.py --spec data/toolshop/openapi.json

4. Test Collection Generation

Generate comprehensive test collections:

from tools.test_collection_generator import TestCollectionGeneratorTool
from schemas.tools.test_collection_generator import TestCollectionGeneratorRequest

# Initialize generator
generator = TestCollectionGeneratorTool()

# Generate test collection
request = TestCollectionGeneratorRequest(
    spec_source="data/toolshop/openapi.json",
    source_type="file",
    collection_name="Toolshop API Tests",
    include_positive_tests=True,
    include_negative_tests=True,
    include_edge_cases=True
)
result = await generator.execute(request)

print(f"Generated collection with {len(result.test_collection.test_cases)} test cases")

5. Code Execution

Execute Python code safely with context:

from tools.code_executor import CodeExecutorTool
from schemas.tools.code_executor import CodeExecutorRequest

# Initialize executor
executor = CodeExecutorTool()

# Execute validation code
request = CodeExecutorRequest(
    code="""
# Validate API response
assert response.status_code == 200
assert 'users' in response.json()
assert len(response.json()['users']) > 0
result = {'validation': 'passed', 'user_count': len(response.json()['users'])}
""",
    context={"response": api_response},
    timeout=10
)
result = await executor.execute(request)

print(f"Execution result: {result.result}")

6. Caching System

Utilize the advanced caching system:

from common.cache import CacheFactory, CacheType, cache_result

# Get cache instance
cache = CacheFactory.get_cache("api-cache", CacheType.MEMORY)

# Basic cache operations
cache.set("user:123", {"name": "John", "email": "john@example.com"}, ttl=300)
user_data = cache.get("user:123")

# Use caching decorators
@cache_result(ttl=300, cache_type=CacheType.MEMORY)
def expensive_api_call(endpoint: str):
    # Expensive operation
    return make_api_call(endpoint)

# Function result will be cached
result = expensive_api_call("/api/v1/users")

Cache Demo:

python src/cache_demo.py

7. Logging System

Implement sophisticated logging:

from common.logger import LoggerFactory, LoggerType, LogLevel

# Get logger instance
logger = LoggerFactory.get_logger(
    name="api-testing",
    logger_type=LoggerType.STANDARD,
    console_level=LogLevel.INFO,
    file_level=LogLevel.DEBUG,
    log_file="logs/api_tests.log"
)

# Add context
logger.add_context(test_suite="user_management", endpoint="/api/v1/users")

# Log with context
logger.info("Starting API test execution")
logger.debug("Request headers prepared")
logger.warning("Rate limit approaching")
logger.error("API call failed with 500 status")

Logging Demo:

python src/logger_demo.py

8. Streamlit GUI Application

Launch the interactive web interface:

streamlit run src/api_test_gui.py

Features:

  • API Explorer: Browse and analyze OpenAPI specifications
  • Test Builder: Create and configure test cases
  • Test Execution: Run tests and view real-time results
  • Collection Management: Organize and manage test collections
  • Reporting Dashboard: View detailed test reports and metrics

9. Multi-Agent Architecture

Coordinate complex testing workflows:

from agents.rest_api_agent import RestApiAgent
from agents.spec_loader.agent import SpecLoaderAgent

# Initialize agents
spec_agent = SpecLoaderAgent()
api_agent = RestApiAgent()

# Load specification
spec_result = await spec_agent.load_specification("data/toolshop/openapi.json")

# Coordinate API testing
test_result = await api_agent.execute_test_suite(
    specification=spec_result.specification,
    test_configuration=test_config
)

10. Advanced Test Scenarios

Execute complex testing scenarios:

from core.services.test_execution_service import TestExecutionService
from schemas.test_collection import TestCollection

# Initialize service
test_service = TestExecutionService()

# Load test collection
collection = TestCollection.load_from_file("collections/user_management.json")

# Execute with advanced options
execution_result = await test_service.execute_collection(
    collection=collection,
    parallel_execution=True,
    max_workers=5,
    retry_failed_tests=True,
    generate_report=True
)

print("Execution Summary:")
print(f"- Total Tests: {execution_result.total_tests}")
print(f"- Passed: {execution_result.passed_tests}")
print(f"- Failed: {execution_result.failed_tests}")
print(f"- Execution Time: {execution_result.execution_time}s")

🎯 Demo Scripts

The framework includes comprehensive demonstration scripts:

API Testing Demos

# OpenAPI parser demonstration
python src/openapi_parser_tool.py

# API caller with authentication
python src/rest_api_caller_tool.py

# Constraint mining with AI
python src/constraint_miner_tool.py

# Test case generation
python src/test_case_generator_tool.py

# Complete test runner workflow
python src/api_test_runner.py

Infrastructure Demos

# Caching system capabilities
python src/cache_demo.py

# Logging system features
python src/logger_demo.py

🔧 Configuration

Application Settings

# config/settings.py
class Settings:
    # API Configuration
    DEFAULT_TIMEOUT = 30
    MAX_RETRIES = 3

    # Cache Configuration
    CACHE_TYPE = "memory"
    CACHE_TTL = 300

    # Logging Configuration
    LOG_LEVEL = "INFO"
    LOG_FILE = "logs/app.log"

    # AI Configuration
    OPENAI_API_KEY = "your-api-key"
    ENABLE_LLM_FEATURES = True

Environment Variables

# .env file
OPENAI_API_KEY=your-openai-api-key
GOOGLE_AI_API_KEY=your-google-ai-key
REDIS_URL=redis://localhost:6379/0
LOG_LEVEL=DEBUG
CACHE_TYPE=redis
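
These variables are read from the process environment at startup. The snippet below is a minimal, hypothetical sketch of how they could map onto a settings object using only the standard library; it is illustrative and not the framework's actual loader (see config/settings.py for that).

# Hypothetical loader for the .env-style variables shown above.
# Uses only the standard library; names and defaults are illustrative.
import os


class EnvSettings:
    """Illustrative settings object backed by environment variables."""

    def __init__(self):
        # Fall back to safe defaults when a variable is not set.
        self.openai_api_key = os.getenv("OPENAI_API_KEY", "")
        self.google_ai_api_key = os.getenv("GOOGLE_AI_API_KEY", "")
        self.redis_url = os.getenv("REDIS_URL", "redis://localhost:6379/0")
        self.log_level = os.getenv("LOG_LEVEL", "INFO")
        self.cache_type = os.getenv("CACHE_TYPE", "memory")


settings = EnvSettings()
print(settings.cache_type)  # "redis" when CACHE_TYPE=redis is exported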

🧪 Testing

Run the test suite:

# Install test dependencies
pip install pytest pytest-asyncio httpx

# Run all tests
pytest tests/

# Run specific test categories
pytest tests/test_tools.py
pytest tests/test_caching.py
pytest tests/test_logging.py

# Run with coverage
pytest --cov=src tests/

📊 Performance Considerations

Caching Strategy

  • Memory Cache: Ultra-fast for frequently accessed data
  • File Cache: Persistent storage for large datasets
  • Redis Cache: Distributed caching for multi-instance deployments
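
Choosing between these tiers can be driven by configuration rather than code changes. Below is a minimal sketch that reuses the CacheFactory API from the caching example above; the FILE and REDIS members of CacheType are assumptions based on the file_cache.py and redis_cache.py modules listed in the project structure.

# Hypothetical tier selection from configuration.
# CacheType.FILE and CacheType.REDIS are assumed to mirror the
# file_cache.py and redis_cache.py implementations in src/common/cache/.
import os

from common.cache import CacheFactory, CacheType

TIERS = {
    "memory": CacheType.MEMORY,  # fastest, per-process only
    "file": CacheType.FILE,      # persistent across runs
    "redis": CacheType.REDIS,    # shared across instances
}

cache_type = TIERS.get(os.getenv("CACHE_TYPE", "memory"), CacheType.MEMORY)
cache = CacheFactory.get_cache("api-cache", cache_type)
cache.set("spec:toolshop", {"endpoint_count": 42}, ttl=600)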

Asynchronous Processing

  • Non-blocking API calls with asyncio
  • Concurrent test execution with worker pools
  • Streaming responses for large datasets
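
As an illustration of the non-blocking pattern (not the framework's internal scheduler), the sketch below fans several calls out concurrently with asyncio and the RestApiCallerTool API shown in the usage section; the URLs are placeholders.

# Illustrative only: run several API calls concurrently with asyncio.gather.
import asyncio

from schemas.tools.rest_api_caller import RestApiCallerRequest
from tools.rest_api_caller import RestApiCallerTool


async def call_concurrently(urls):
    caller = RestApiCallerTool()
    requests = [RestApiCallerRequest(method="GET", url=url) for url in urls]
    # gather() awaits all calls concurrently instead of one after another.
    return await asyncio.gather(*(caller.execute(req) for req in requests))


results = asyncio.run(call_concurrently([
    "https://api.example.com/users",
    "https://api.example.com/products",
]))
print([result.status_code for result in results])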

Resource Management

  • Automatic cleanup of temporary files
  • Memory usage monitoring and optimization
  • Connection pooling for HTTP requests
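
For connection pooling specifically, the httpx dependency already provides it; the sketch below is illustrative (independent of the framework's own HTTP layer) and shows a single shared AsyncClient with a capped pool size.

# Illustrative connection pooling with httpx: one shared AsyncClient
# reuses connections instead of opening a new one per request.
import asyncio

import httpx

LIMITS = httpx.Limits(max_connections=10, max_keepalive_connections=5)


async def fetch():
    async with httpx.AsyncClient(limits=LIMITS, timeout=10.0) as client:
        # Both requests go through the same capped connection pool.
        users = await client.get("https://api.example.com/users")
        products = await client.get("https://api.example.com/products")
        print(users.status_code, products.status_code)


asyncio.run(fetch())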

🀝 Contributing

We welcome contributions! Please see our contributing guidelines:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

Development Setup

# Install development dependencies
pip install -r requirements-dev.txt

# Run pre-commit hooks
pre-commit install

# Run linting
flake8 src/
black src/
mypy src/

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built with modern Python best practices
  • Inspired by enterprise testing frameworks
  • Powered by OpenAI and Google AI technologies
  • Streamlit for beautiful web interfaces

📞 Support

For questions, issues, or contributions, please open an issue or pull request on GitHub.


Happy API Testing! 🚀
