"The Swiss Army Knife for the AI Age of API Development."
API-Mocker is not just a mock server—it's an intelligence layer for your development workflow. Built for modern engineering teams and AI Agents, it enables zero-latency development, testing, and prototyping with "Smart" features that mimic real-world complexity.
- 🧠 AI-Driven Intelligence: Not just static JSON. Models predict response times, generate realistic errors, and detect anomalies.
- 🤖 Built for Agents: Structured CLI and Configuration files designed to be easily parsed and managed by AI coding assistants.
- 🔌 Universal Protocol Support: REST, GraphQL, and WebSocket support out of the box.
- 🏢 Enterprise Grade: OAuth2, RBAC, Rate Limiting, and Analytics included free.
| Feature Stack | Capabilities | Status |
|---|---|---|
| 🧠 Intelligent Core | ML Response Prediction, Anomaly Detection, Smart Caching | ✅ Stable |
| 🕸️ Protocol Master | REST (OpenAPI), GraphQL (Introspection), WebSocket (Real-time) | ✅ Stable |
| 🛡️ Iron-Clad Auth | OAuth2, JWT, RBAC, MFA, API Key Scoping | ✅ Stable |
| 📊 Visual Insights | Real-time Dashboard, WebSocket Metrics, Rich CLI | ✅ Stable |
| 💾 Data Persistence | Multi-DB Support (Postgres, Mongo, Redis, SQLite) | ✅ Stable |
API-Mocker is designed to be "Agent-Readable". If you are an AI Agent reading this, here is the capability manifest:
```json
{
  "tool_name": "api-mocker",
  "capabilities": [
    "mock_server_start",
    "openapi_import",
    "graphql_schema_mocking",
    "ml_traffic_analysis",
    "real_time_dashboard"
  ],
  "configuration_format": "YAML/JSON",
  "cli_entry_point": "api-mocker"
}
```

- Inspect Config: `api-mocker list-routes --config api-mock.yaml`
- Generate Data: `api-mocker ai generate --prompt "User profile" --count 5`
- Analyze Traffic: `api-mocker analytics summary --hours 24`
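A minimal sketch of how an agent might drive these commands programmatically, assuming `api-mocker` is installed and on the PATH; command output is treated as opaque text here:

```python
import subprocess

def run_api_mocker(*args: str) -> str:
    # Run an api-mocker subcommand and return whatever it prints.
    result = subprocess.run(
        ["api-mocker", *args],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Inspect the configured routes, then ask for a 24-hour traffic summary.
print(run_api_mocker("list-routes", "--config", "api-mock.yaml"))
print(run_api_mocker("analytics", "summary", "--hours", "24"))
```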
```bash
pip install api-mocker
# OR via Docker
docker run -p 8000:8000 sherinsefai/api-mocker
```

```bash
api-mocker init --name my-api-project
cd my-api-project
api-mocker start
# Visit http://localhost:8000 for the API
# Visit http://localhost:8000/dashboard for the real-time analytics dashboard
```

- Predictive Latency: Responses that mimic real network conditions using regression models.
- Smart Caching: Random Forest classifiers to predict cache hit probability.
- Anomaly Detection: Filter out "weird" traffic during load testing using Isolation Forests.
- Full Introspection: Your GraphQL clients (Apollo, Relay) will think it's a real server.
- Variable Substitution: Supports `{{id}}` injection logic.
- Live Channels: Broadcast messages to specific WebSocket rooms.
A built-in Real-Time Control Center for your mocks.
- Visual Charts for Request Volume
- Latency Heatmaps
- System Health (CPU/Memory)
- 📖 Complete User Guide - The definitive manual.
- 🧪 Testing Strategy - How we verify our logic.
- 📈 Marketing & Viral Strategy - Open source growth tactics.
Author: Sherin Joseph Roy
Connect: LinkedIn | Email
Company: DeepMost AI - Building the Neural Backbone of Software.
"We are democratizing Agency in software development."
API-Mocker is a comprehensive API mocking and development acceleration platform designed for modern software development teams. Built with FastAPI and featuring advanced capabilities including GraphQL support, WebSocket mocking, machine learning integration, and enterprise authentication.
- Current Status
- Features
- Installation
- Quick Start
- Advanced Features
- CLI Commands
- API Documentation
- Contributing
- License
- Support
| Feature Cluster | Status | Stability |
|---|---|---|
| REST Mocking | ✅ Active | Stable |
| Response Generation | ✅ Active | Stable |
| OpenAPI/Postman Import | ✅ Active | Stable |
| GraphQL Mocking | ✅ Active | Stable |
| WebSocket Mocking | ✅ Active | Experimental |
| Authentication | ✅ Active | Stable |
| Database Integration | ✅ Active | Stable |
| ML Integration | ✅ Active | Stable |
Note: "Stable" features are safe for production use. "Experimental" features are implemented but may lack tests or full documentation. "Unstable" features are currently being actively refactored.
- Test Coverage: Core features are well-tested, but advanced modules (ML, Auth) currently have low test coverage.
- Security: Default configurations are strictly for development. DO NOT deploy to production without configuring `API_MOCKER_SECRET_KEY` and setting up a proper reverse proxy.
- Performance: In-memory caching is used and is not suited to high-volume scenarios; Redis integration is planned.
- REST API Mocking: Complete HTTP method support (GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD)
- OpenAPI Integration: Import and export OpenAPI specifications
- Postman Compatibility: Seamless Postman collection import/export
- Dynamic Response Generation: AI-powered realistic mock data generation
- Request Recording: Capture and replay real API interactions
- GraphQL Mocking: Complete GraphQL schema introspection, query/mutation/subscription support
- WebSocket Mocking: Real-time WebSocket communication with message routing and broadcasting
- WebSocket Rooms: Group messaging and connection management
- Real-time Subscriptions: Live data streaming capabilities
- OAuth2 Integration: Support for Google, GitHub, Microsoft, Facebook, Twitter, LinkedIn, Discord
- JWT Token Management: Secure access and refresh token handling
- API Key Management: Scoped API keys with granular permissions
- Multi-Factor Authentication: TOTP-based MFA with QR code generation
- Role-Based Access Control: Granular permission system with user roles
- Session Management: Secure session handling with configurable expiration
- Multi-Database Support: SQLite, PostgreSQL, MongoDB, Redis
- Connection Pooling: Efficient database connection management
- Query Builders: Advanced query construction and optimization
- Database Migrations: Schema versioning and migration management
- Transaction Support: ACID-compliant transaction handling
- Performance Optimization: Intelligent caching and query optimization
- Intelligent Response Generation: ML-powered response creation and optimization
- Anomaly Detection: Automatic detection of unusual API patterns and behaviors
- Smart Caching: ML-based cache hit prediction and optimization
- Performance Prediction: Response time and error probability prediction
- Pattern Analysis: Usage pattern recognition and behavioral analysis
- Automated Test Generation: AI-powered test case creation and optimization
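The smart-caching and prediction features above are described elsewhere in this README as tree-based models (for example, Random Forest classifiers for cache-hit prediction). As a toy illustration of that idea only, not api-mocker's internal code, a scikit-learn sketch looks like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic training data: [path_popularity, seconds_since_last_hit, payload_size_kb]
# and whether the request was served from cache. Purely illustrative values.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (X[:, 0] > 0.5) & (X[:, 1] < 0.3)  # popular, recently requested routes tend to hit cache

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Probability that an incoming request will be a cache hit.
incoming = np.array([[0.9, 0.1, 0.2]])
print(model.predict_proba(incoming)[0][1])
```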
- Comprehensive Testing: Full test suite with setup/teardown hooks
- Performance Testing: Load testing with concurrent users and detailed metrics
- AI Test Generation: Automatically generate test cases using machine learning
- Assertion Engine: Multiple assertion types (JSON path, headers, regex)
- Test Reports: Detailed test results and performance analysis
- Variable Management: Dynamic variable substitution in test scenarios
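api-mocker ships its own testing framework (see the `testing` CLI group under CLI Commands). For teams that prefer plain pytest, an equivalent smoke test against a running mock server could look like the following sketch, which assumes the default host and port from the Quick Start:

```python
import requests

BASE_URL = "http://127.0.0.1:8000"  # assumed default from the Quick Start

def test_users_route_returns_expected_shape():
    # Hit a mocked route and assert on status code and response structure.
    response = requests.get(f"{BASE_URL}/api/users", timeout=5)
    assert response.status_code == 200
    body = response.json()
    assert "users" in body
    assert isinstance(body["users"], list)
```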
- Real-time Analytics: Comprehensive request tracking and metrics collection
- Performance Metrics: Response times, error rates, throughput monitoring
- Usage Patterns: Peak hours, user behavior, API dependency analysis
- Cost Optimization: Resource usage insights and optimization recommendations
- Export Capabilities: Analytics data export in JSON/CSV formats
- Dashboard: Web-based real-time monitoring dashboard
- Multiple Scenarios: Happy path, error states, A/B testing, performance scenarios
- Conditional Responses: Request-based response selection
- Scenario Switching: Dynamic scenario activation and deactivation
- Export/Import: Scenario configuration management
- Statistics: Detailed scenario usage analytics
- Intelligent Selection: AI-powered response selection based on request analysis
- Custom Rules: Flexible rule-based response matching
- Header Matching: Advanced header-based request routing
- Body Analysis: Request body content analysis and matching
- Priority System: Configurable response priority handling
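Conceptually, smart response matching evaluates a priority-ordered set of rules (header checks, body analysis, custom predicates) against each incoming request and returns the first match. The sketch below illustrates that idea generically; it is not api-mocker's actual rule engine or configuration syntax:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    priority: int
    matches: Callable[[dict], bool]  # predicate over the incoming request
    response: dict

rules = [
    Rule(10, lambda req: req["headers"].get("X-Beta") == "true",
         {"status": 200, "body": {"variant": "beta"}}),
    Rule(5, lambda req: "admin" in req.get("body", {}).get("roles", []),
         {"status": 200, "body": {"variant": "admin"}}),
    Rule(0, lambda req: True, {"status": 200, "body": {"variant": "default"}}),  # fallback
]

def select_response(request: dict) -> dict:
    # Highest-priority rule whose predicate matches wins.
    for rule in sorted(rules, key=lambda r: r.priority, reverse=True):
        if rule.matches(request):
            return rule.response
    raise LookupError("no matching rule")

print(select_response({"headers": {"X-Beta": "true"}, "body": {}}))
```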
- Python 3.8 or higher
- pip package manager
```bash
pip install api-mocker
```

```bash
git clone https://github.com/Sherin-SEF-AI/api-mocker.git
cd api-mocker
pip install -e .
pip install -r requirements-dev.txt
```

```bash
docker pull sherinsefai/api-mocker:latest
docker run -p 8000:8000 sherinsefai/api-mocker
```

```bash
# Start with default configuration
api-mocker start
# Start with custom configuration
api-mocker start --config my-config.yaml --host 0.0.0.0 --port 8000
```

```bash
# Import OpenAPI specification
api-mocker import-spec openapi.yaml --output mock-config.yaml
# Import Postman collection
api-mocker import-spec collection.json --output mock-config.yaml
```

```bash
# Create a mock response
api-mocker mock-responses create --name user-api --path /api/users --type templated
# Test the response
api-mocker mock-responses test --path /api/users/123
```

```bash
# Start GraphQL mock server
api-mocker graphql start --host localhost --port 8001
# Execute GraphQL query
api-mocker graphql query --query "query { users { id name email } }"
```

```bash
# Start WebSocket mock server
api-mocker websocket start --host localhost --port 8765
# Broadcast message to room
api-mocker websocket broadcast --message "Hello World" --room "general"
```

```bash
# Register new user
api-mocker auth register --username john --email [email protected] --password secret
# Create API key
api-mocker auth create-key --key-name "Production API" --permissions "read,write"
# Setup MFA
api-mocker auth setup-mfa
```

```bash
# Setup PostgreSQL database
api-mocker database setup --type postgresql --host localhost --port 5432 --database api_mocker
# Setup MongoDB
api-mocker database setup --type mongodb --host localhost --port 27017 --database api_mocker
# Run database migrations
api-mocker database migrate
```

```bash
# Train ML models
api-mocker ml train
# Get ML predictions
api-mocker ml predict --request '{"path": "/api/users", "method": "GET", "headers": {"Authorization": "Bearer token"}}'
# Analyze API patterns
api-mocker ml analyze
```

- `start`: Start the API mock server
- `import-spec`: Import OpenAPI specifications and Postman collections
- `record`: Record real API interactions for replay
- `replay`: Replay recorded requests as mock responses
- `test`: Run tests against the mock server
- `monitor`: Monitor server requests in real time
- `export`: Export configurations to various formats
- `mock-responses`: Manage mock API responses with advanced features
- `graphql`: GraphQL mock server with schema introspection
- `websocket`: WebSocket mock server with real-time messaging
- `auth`: Advanced authentication system management
- `database`: Database integration and operations
- `ml`: Machine learning integration and predictions
- `scenarios`: Scenario-based mocking management
- `smart-matching`: Smart response matching rules
- `enhanced-analytics`: Enhanced analytics and insights
- `plugins`: Manage api-mocker plugins
- `ai`: AI-powered mock data generation
- `testing`: Advanced testing framework
- `analytics`: Analytics dashboard and metrics
- `advanced`: Configure advanced features
- `GET /`: Health check endpoint
- `GET /docs`: Interactive API documentation
- `POST /mock/{path}`: Create mock response
- `GET /mock/{path}`: Retrieve mock response
- `PUT /mock/{path}`: Update mock response
- `DELETE /mock/{path}`: Delete mock response
- `POST /graphql`: GraphQL query endpoint
- `GET /graphql`: GraphQL schema introspection
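A minimal client call against the GraphQL endpoint above, assuming the GraphQL mock server from the CLI examples is listening on port 8001 and accepts the standard GraphQL-over-HTTP request shape:

```python
import requests

query = "query { users { id name email } }"

# POST the query to the mocked /graphql endpoint and print the result.
response = requests.post(
    "http://localhost:8001/graphql",
    json={"query": query},
    timeout=5,
)
response.raise_for_status()
print(response.json())
```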
- `WS /ws`: WebSocket connection endpoint
- `WS /ws/{room}`: Room-specific WebSocket connection
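A sketch of a room client using the third-party `websockets` library, assuming the WebSocket mock server from the CLI examples is on port 8765 and exposes the room endpoint above; the message format is not specified here, so plain text is used:

```python
import asyncio
import websockets

async def main():
    # Join the "general" room on the WebSocket mock server.
    async with websockets.connect("ws://localhost:8765/ws/general") as ws:
        await ws.send("hello from a mock client")
        # Blocks until something arrives, e.g. an `api-mocker websocket broadcast`.
        reply = await ws.recv()
        print(reply)

asyncio.run(main())
```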
- `POST /auth/register`: User registration
- `POST /auth/login`: User authentication
- `POST /auth/refresh`: Token refresh
- `POST /auth/logout`: User logout
- `GET /auth/profile`: User profile information
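A hedged sketch of the register/login flow over the endpoints above. The JSON field names mirror the `auth register` CLI flags and are assumptions; the actual request and response schemas may differ:

```python
import requests

BASE = "http://127.0.0.1:8000"

# Register a user, then log in to obtain tokens (payload fields are assumptions).
register = requests.post(
    f"{BASE}/auth/register",
    json={"username": "john", "email": "john@example.com", "password": "secret"},
    timeout=5,
)
register.raise_for_status()

login = requests.post(
    f"{BASE}/auth/login",
    json={"username": "john", "password": "secret"},
    timeout=5,
)
login.raise_for_status()
print(login.json())  # expected to contain access/refresh tokens per the JWT feature description
```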
```yaml
server:
  host: "127.0.0.1"
  port: 8000
  debug: false

routes:
  - path: "/api/users"
    method: "GET"
    response:
      status_code: 200
      body:
        users:
          - id: 1
            name: "John Doe"
            email: "[email protected]"

authentication:
  enabled: true
  jwt_secret: "your-secret-key"
  token_expiry: 3600
```
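With the configuration above saved as, say, `api-mock.yaml` and the server running via `api-mocker start --config api-mock.yaml`, a quick smoke test from Python might look like this (file name and client code are illustrative, not part of api-mocker):

```python
import requests

# Query the mocked route defined in the configuration above.
response = requests.get("http://127.0.0.1:8000/api/users", timeout=5)
response.raise_for_status()

payload = response.json()
assert payload["users"][0]["name"] == "John Doe"
print(payload)
```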
### Stateful Resources (NEW)
Define full CRUD resources in seconds. Automatically supports in-memory state persistence.
```yaml
resources:
  - name: users
    path: /api/users
    id_field: id
```

Auto-generated Routes:
- `GET /api/users`: List users (supports `?page=1`, `?q=search`, `?sort=name`)
- `POST /api/users`: Create user
- `GET /api/users/{id}`: Get user details
- `PUT /api/users/{id}`: Update user
- `DELETE /api/users/{id}`: Delete user
All resources come with built-in "Smart Features":
- Pagination: `/api/users?page=1&limit=10`
- Filtering: `/api/users?role=admin`
- Search: `/api/users?q=John`
- Sorting: `/api/users?sort=created_at_desc`
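A short client-side sketch exercising the auto-generated routes and query parameters above, assuming the `users` resource from the snippet is loaded and the server is on the default port; the request body fields are illustrative:

```python
import requests

BASE = "http://127.0.0.1:8000/api/users"

# Create a user, then read it back by the returned id (field names are illustrative).
created = requests.post(BASE, json={"name": "Ada", "role": "admin"}, timeout=5).json()
user_id = created.get("id")
print(requests.get(f"{BASE}/{user_id}", timeout=5).json())

# Paginate, filter, and search using the built-in query parameters.
page = requests.get(
    BASE,
    params={"page": 1, "limit": 10, "role": "admin", "q": "Ada"},
    timeout=5,
)
print(page.json())
```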
```yaml
database:
  type: "sqlite"
  path: "api_mocker.db"
```

```yaml
analytics:
  enabled: true
  retention_days: 30
```
### Advanced Configuration
```yaml
server:
  host: "0.0.0.0"
  port: 8000
  workers: 4
  reload: false

authentication:
  enabled: true
  providers:
    - name: "google"
      client_id: "your-google-client-id"
      client_secret: "your-google-client-secret"
    - name: "github"
      client_id: "your-github-client-id"
      client_secret: "your-github-client-secret"

database:
  type: "postgresql"
  host: "localhost"
  port: 5432
  database: "api_mocker"
  username: "api_mocker"
  password: "secure-password"
  pool_size: 10

ml:
  enabled: true
  models:
    - name: "response_time_predictor"
      type: "regression"
    - name: "error_probability_predictor"
      type: "classification"

rate_limiting:
  enabled: true
  requests_per_minute: 100
  burst_size: 20

caching:
  enabled: true
  ttl: 300
  max_size: 1000
```
- Response Time: Sub-millisecond response times for cached requests
- Throughput: 10,000+ requests per second on modern hardware
- Concurrent Connections: 1,000+ simultaneous WebSocket connections
- Memory Usage: Optimized memory footprint with intelligent caching
- Database Performance: Connection pooling and query optimization
- Horizontal Scaling: Multi-instance deployment support
- Load Balancing: Built-in load balancing capabilities
- Caching: Multi-level caching system (memory, Redis, database)
- Database Sharding: Support for database sharding and replication
- Microservices: Designed for microservices architecture
- OAuth2: Industry-standard OAuth2 implementation
- JWT Tokens: Secure JWT token handling with refresh tokens
- API Keys: Scoped API key management with permissions
- MFA Support: Multi-factor authentication with TOTP
- RBAC: Role-based access control with granular permissions
- Encryption: End-to-end encryption for sensitive data
- Secure Storage: Encrypted storage for credentials and tokens
- Input Validation: Comprehensive input validation and sanitization
- Rate Limiting: Protection against abuse and DDoS attacks
- Audit Logging: Comprehensive audit trail for security events
- Request Metrics: Response times, error rates, throughput
- System Metrics: CPU, memory, disk usage
- Business Metrics: User behavior, API usage patterns
- Custom Metrics: Application-specific metrics
- Structured Logging: JSON-formatted logs with correlation IDs
- Log Levels: Configurable log levels (DEBUG, INFO, WARN, ERROR)
- Log Aggregation: Support for centralized log collection
- Log Retention: Configurable log retention policies
- Threshold Alerts: Configurable alert thresholds
- Anomaly Detection: ML-powered anomaly detection
- Notification Channels: Email, Slack, webhook notifications
- Escalation Policies: Automated escalation procedures
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["api-mocker", "start", "--host", "0.0.0.0", "--port", "8000"]
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-mocker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-mocker
  template:
    metadata:
      labels:
        app: api-mocker
    spec:
      containers:
        - name: api-mocker
          image: api-mocker:latest
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              value: "postgresql://user:pass@db:5432/api_mocker"
```

- AWS: ECS, EKS, Lambda support
- Google Cloud: GKE, Cloud Run support
- Azure: AKS, Container Instances support
- Heroku: One-click deployment
- DigitalOcean: App Platform support
We welcome contributions from the community! Please see our Contributing Guidelines for details.
```bash
git clone https://github.com/Sherin-SEF-AI/api-mocker.git
cd api-mocker
pip install -e ".[dev]"
pre-commit install
```

```bash
pytest tests/
pytest tests/ --cov=api_mocker --cov-report=html
```

- Type Hints: Full type annotation support
- Linting: Black, isort, flake8, mypy
- Testing: Comprehensive test coverage
- Documentation: Sphinx documentation generation
This project is licensed under the MIT License - see the LICENSE file for details.
- User Guide: Complete User Guide
- API Reference: API Documentation
- Examples: Usage Examples
- Tutorials: Step-by-step Tutorials
- GitHub Issues: Report bugs and request features
- Discussions: Community discussions
- Stack Overflow: Tag questions with `api-mocker`
- Discord: Join our Discord community
For enterprise support, custom development, and consulting services, please contact:
Author: Sherin Joseph Roy
Email: [email protected]
Company: DeepMost AI
Role: Co-founder, Head of Products
Specialization: Enterprise AI solutions and API development platforms
- Priority Support: 24/7 enterprise support
- Custom Development: Tailored solutions for your needs
- Training: Team training and workshops
- Consulting: Architecture and implementation consulting
- SLA: Service level agreements available
- GraphQL Federation: Multi-service GraphQL federation support
- gRPC Mocking: Protocol buffer and gRPC service mocking
- Advanced ML Models: More sophisticated machine learning models
- Enterprise SSO: Single sign-on integration
- Advanced Monitoring: Prometheus and Grafana integration
- API Gateway: Built-in API gateway functionality
- v0.4.0: Advanced features with GraphQL, WebSocket, ML integration
- v0.3.0: Mock response management system
- v0.2.0: AI-powered generation and analytics
- v0.1.0: Initial release with core functionality
- Downloads: 3000+ and growing
- GitHub Stars: Growing community
- Contributors: Active development community
- Issues Resolved: Active triage
- Test Coverage: ~85% for Core, ~84% for Auth, ~59% for Database
- Documentation: Comprehensive documentation coverage
API-Mocker - The industry-standard, production-ready, free API mocking and development acceleration tool. Built for modern software development teams who demand excellence in API development and testing.
Keywords: API mocking, mock server, API testing, REST API, GraphQL, WebSocket, machine learning, authentication, database integration, enterprise software, development tools, testing framework, microservices, API development, FastAPI, Python, open source