AI-powered competitive programming workspace with code execution, automated critique, coaching chat, and performance analytics.
OpenRank combines a modern coding UI (Next.js + Monaco) with a FastAPI orchestration backend that can:
- execute user code against database-driven test cases,
- run static complexity checks,
- generate AI feedback and strategy coaching,
- and log submissions for trend dashboards.
- Practice with feedback loops: write code, run tests, get instant pass/fail and runtime/memory signals.
- Coach on demand: request deep AI critique or continue with contextual coaching chat.
- Track progression: dashboard summarizes pass rate, dominant coding patterns, and recent activity.
- Problem-bank driven: problems, starter code, and test cases are fetched from Supabase.
- Problem picker (`/problems`) with difficulty and full markdown description.
- Monaco editor preloaded with `starter_code`.
- Run Code mode (judge only, AI skipped) for fast iteration.
- Get AI Feedback mode (judge + LLM analysis + strategy guidance).
- Rich execution panel showing per-case input/expected/actual plus runtime and memory.
- Structured critique via LLM:
- time/space complexity,
- optimality judgment,
- bug hints,
- improvement suggestions.
- Strategic coaching:
- detected pattern,
- recommended optimal pattern,
- explanation of tradeoffs,
- similar practice problems.
- Follow-up conversational coaching with full chat history context.
- Total submissions.
- Pass rate.
- Pattern distribution pie chart.
- Recent submissions table (status, complexity, date).
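These metrics can be derived directly from logged submission rows. A minimal aggregation sketch (the row field names follow the fields logged by the workflow; the actual logic in `database.py` may differ):

```python
from collections import Counter

def aggregate_stats(submissions: list[dict]) -> dict:
    """Summarize submission rows into dashboard metrics."""
    total = len(submissions)
    passed = sum(1 for s in submissions if s.get("status") == "PASS")
    patterns = Counter(s.get("pattern_detected", "unknown") for s in submissions)
    return {
        "total_submissions": total,
        "pass_rate": round(100 * passed / total, 1) if total else 0.0,
        "pattern_distribution": dict(patterns),
    }

rows = [
    {"status": "PASS", "pattern_detected": "hash-map"},
    {"status": "FAIL", "pattern_detected": "brute-force"},
    {"status": "PASS", "pattern_detected": "hash-map"},
]
print(aggregate_stats(rows))
```

The `pattern_distribution` dict feeds the pie chart; `pass_rate` is a percentage over all logged submissions.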
```mermaid
graph TD
    A[Next.js Frontend] -->|POST /full-critique| B[FastAPI Backend]
    A -->|POST /chat| B
    A -->|GET /stats| B
    A -->|GET /problems| B
    B --> C[Static Analyzer\nAST loop/risk scan]
    B --> D[Sandbox Executor\nlocal Python subprocess]
    B --> E[LLM via Groq\nanalysis + coaching]
    B --> F[Supabase\nproblems + submissions]
```
- Next.js 16 (App Router)
- React 18 + TypeScript
- Tailwind CSS
- Monaco Editor (`@monaco-editor/react`)
- Recharts + Lucide icons + React Markdown
- FastAPI
- Pydantic
- LangChain + Groq (`llama-3.3-70b-versatile`)
- Supabase Python client
- Local Python sandbox execution using subprocess + tracemalloc
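The subprocess + tracemalloc approach can be sketched as follows. The harness template and result fields are illustrative assumptions, not the project's exact executor:

```python
import json
import os
import subprocess
import sys
import tempfile
import textwrap

# Harness template; {user_code} and {args!r} are filled in per test case.
HARNESS = textwrap.dedent("""\
    import json, time, tracemalloc
    {user_code}
    tracemalloc.start()
    start = time.perf_counter()
    result = solution(*{args!r})
    runtime_ms = (time.perf_counter() - start) * 1000
    peak_mb = tracemalloc.get_traced_memory()[1] / 1_048_576
    print(json.dumps({{"actual": result, "runtime": runtime_ms, "memory": peak_mb}}))
""")

def run_case(user_code: str, args: list, timeout: float = 5.0) -> dict:
    """Run solution(*args) in an isolated subprocess and report time/memory."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(HARNESS.format(user_code=user_code, args=args))
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path], capture_output=True,
                              text=True, timeout=timeout)
        return json.loads(proc.stdout)
    finally:
        os.unlink(path)
```

A `subprocess.TimeoutExpired` from `run_case` corresponds to a test case exceeding the time limit.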
```
backend/
  agent_core.py       # LLM chains, sandbox executor, chat coach
  database.py         # Supabase client + dashboard stats aggregation
  main.py             # FastAPI routes
  schemas.py          # Pydantic output schemas
  static_analyzer.py  # AST complexity/risk scan
  workflow.py         # End-to-end orchestration pipeline
  test_coach.py       # Quick local coaching smoke test
  test_db.py          # Supabase connectivity check
frontend/
  app/page.tsx        # Main product UI (workspace + dashboard)
  app/layout.tsx
  app/globals.css
  package.json
```
- Python 3.10+
- Node.js 18+
- npm 9+
- Supabase project with required tables/columns
- Groq API key
Create `backend/.env`:

```bash
cp .env.example backend/.env
```

```
GROQ_API_KEY=your_groq_api_key
SUPABASE_URL=https://YOUR_PROJECT.supabase.co
SUPABASE_KEY=your_supabase_anon_or_service_key
```

Backend code auto-loads `.env` when variables are not already present.
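The auto-load behavior can be approximated in a few lines. This is a simplified sketch; the backend itself uses `python-dotenv` (listed in the install command below), which handles the same non-overriding semantics via `load_dotenv`:

```python
import os
from pathlib import Path

def load_env_file(path: str = "backend/.env") -> None:
    """Apply KEY=VALUE pairs from a .env file without overriding existing vars."""
    env_path = Path(path)
    if not env_path.exists():
        return
    for line in env_path.read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments, and malformed lines.
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

Variables already exported in the shell always win over values in the file.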
From `backend/`:

```bash
python -m venv .venv
# Windows
.\.venv\Scripts\activate
# Mac/Linux
source .venv/bin/activate
pip install fastapi uvicorn pydantic langchain-groq langchain-core supabase python-dotenv requests
uvicorn main:app --reload
```

Backend runs at http://localhost:8000.
From `frontend/`:

```bash
npm install
npm run dev
```

Frontend runs at http://localhost:3000.
Base URL: http://localhost:8000
`POST /full-critique` — runs the full workflow: static analysis → test execution → optional AI critique.
Request:

```json
{
  "code": "def solution(x): return x",
  "problem": "Two Sum",
  "language": "python",
  "run_ai": true
}
```

Response:
```json
{
  "report": "### Execution & Analysis Summary ...",
  "judge_results": [
    {
      "input": "[2,7,11,15], 9",
      "expected": "[0,1]",
      "actual": "[0, 1]",
      "passed": true,
      "runtime": 1.02,
      "memory": 0.13
    }
  ]
}
```

`POST /chat` — continues contextual coaching based on the current code + chat history.
Request:

```json
{
  "code": "def solution(...): ...",
  "problem": "Two Sum",
  "history": [
    { "role": "user", "content": "Why is this O(n^2)?" }
  ]
}
```

Response:

```json
{ "reply": "Because your nested loops compare each pair..." }
```

`GET /stats` — returns dashboard metrics aggregated from recent submissions.
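From Python, the judge endpoint can be exercised with `requests` (already in the backend install list). A sketch, assuming the dev server from the setup steps is running on port 8000:

```python
import requests

BASE = "http://localhost:8000"

def critique_payload(code: str, problem: str, run_ai: bool = False) -> dict:
    """Build the POST /full-critique request body documented above."""
    return {"code": code, "problem": problem, "language": "python", "run_ai": run_ai}

def run_code(code: str, problem: str) -> dict:
    """Judge-only run (AI skipped), mirroring the UI's Run Code mode."""
    resp = requests.post(f"{BASE}/full-critique",
                         json=critique_payload(code, problem), timeout=60)
    resp.raise_for_status()
    return resp.json()
```

`run_code(...)` returns the `report`/`judge_results` payload shown above; pass `run_ai=True` in the payload to include the LLM critique stage.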
`GET /problems` — returns the list for the problem dropdown.
Returns full selected problem payload including description and starter code.
Expected fields used by the app:
- `id` (string/uuid)
- `title` (string)
- `difficulty` (string)
- `description` (markdown text)
- `starter_code` (text)
- `test_cases` (JSON array)
Accepted test-case item shape after normalization:

```json
{
  "input": [1, 2],
  "expected_output": 3
}
```

Fields inserted by the workflow logger:
- `problem_name`
- `code_snippet`
- `status` (`PASS` | `FAIL` | `ERROR`)
- `time_complexity`
- `space_complexity`
- `pattern_detected`
- plus table-managed metadata (`id`, `created_at`, etc.)
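Normalization to the accepted test-case shape can be sketched as follows; the raw input variants handled here are assumptions about what may appear in the `test_cases` JSON:

```python
def normalize_case(raw: dict) -> dict:
    """Coerce a raw test-case row into {"input": [...], "expected_output": ...}."""
    args = raw.get("input", raw.get("args", []))
    if not isinstance(args, list):
        args = [args]  # single scalar argument -> one-element arg list
    expected = raw.get("expected_output", raw.get("expected"))
    return {"input": args, "expected_output": expected}
```

The normalized `input` list is what gets splatted into the user's `solution(*args)` call.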
- Static scan parses code AST and estimates loop nesting risk.
- Safety gate can reject high-risk deep nested-loop code paths.
- Test resolution fetches test cases from Supabase by title/description fallback.
- Sandbox run executes function in isolated temp file subprocess with timeout.
- Optional AI stage produces complexity + strategy analysis when `run_ai=true`.
- Final report is generated and the submission is logged asynchronously to Supabase.
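The static-scan and safety-gate steps can be sketched with Python's `ast` module; the depth threshold of 3 is an illustrative assumption, not the project's actual limit:

```python
import ast

def max_loop_depth(code: str) -> int:
    """Return the deepest For/While nesting level found in the code."""
    tree = ast.parse(code)

    def depth(node: ast.AST, current: int = 0) -> int:
        # Entering a loop node increases the nesting level by one.
        bump = current + 1 if isinstance(node, (ast.For, ast.While)) else current
        children = [depth(child, bump) for child in ast.iter_child_nodes(node)]
        return max(children, default=bump)

    return depth(tree)

def is_high_risk(code: str, limit: int = 3) -> bool:
    """Safety gate: flag code whose loop nesting meets or exceeds the limit."""
    return max_loop_depth(code) >= limit
```

Code rejected by the gate never reaches the sandbox, which keeps pathological deeply nested loops from burning the execution timeout.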
From `backend/`:

```bash
python test_db.py     # verifies Supabase connectivity
python test_coach.py  # verifies coaching chain response
```

- `GROQ_API_KEY` is not set
  - Ensure `backend/.env` exists and the key is valid.
- `SUPABASE_URL` or `SUPABASE_KEY` is not set
  - Add both variables in `backend/.env`.
- No test cases found for problem
  - Verify the selected problem's title/description and the `test_cases` data in the `problems` table.
- Frontend cannot reach backend
  - Confirm FastAPI is running on port `8000`.
- Next.js warning about multiple lockfiles
  - Optionally set `turbopack.root` in the Next config or remove unrelated lockfiles.
Current sandbox safety includes basic keyword blocking and timeout controls; it is suitable for local/dev workflows but not hardened for multi-tenant untrusted production execution without stronger isolation.
- Add authentication and per-user data isolation.
- Containerized execution sandbox.
- Multi-language judge support.
- CI checks + automated tests.
- Deployment profiles for staging/production.
This project is licensed under the MIT License. See LICENSE.