We love contributions! Here's how you can help.
The most impactful contribution is creating a new suite for a professional role:
- Define scenarios across all 3 layers (execution, reasoning, self-improvement)
- Include realistic fixtures (sample data, transcripts, etc.)
- Write clear rubrics for LLM-judge KPIs
- Test with at least 2 different agents
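Rubric quality drives judge reliability: a good rubric states exactly what earns each score. As a hedged sketch of what an LLM-judge KPI rubric could look like (the field names here are illustrative assumptions, not sensei's actual schema):

```yaml
kpis:
  - name: grounded-answer   # hypothetical KPI name
    type: llm-judge
    rubric: >
      Award 1.0 if the answer cites at least one concrete value from the
      fixture data, 0.5 if it references the fixture without specifics,
      and 0.0 if it ignores the fixture entirely.
```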
You can also improve the suites we already have:

- Add scenarios to existing suites
- Improve scoring rubrics for more accurate evaluation
- Add edge-case scenarios
- Contribute fixtures (more diverse test data)
Adapter contributions broaden which agents can be evaluated:

- Add support for new agent frameworks
- Improve existing adapters (HTTP, Stdio, OpenClaw)
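An adapter's job is to bridge the scenario runner to a specific agent surface. As a rough TypeScript sketch of the shape such an adapter might take (the `AgentAdapter` interface and method names are assumptions for illustration, not the repo's actual API):

```typescript
// Hypothetical adapter contract -- the real interface in the sensei repo may differ.
interface AgentAdapter {
  name: string;
  // Deliver one scenario prompt to the agent and resolve with its reply.
  invoke(prompt: string): Promise<string>;
}

// Toy echo adapter, standing in for a real HTTP or stdio transport.
class EchoAdapter implements AgentAdapter {
  name = "echo";
  async invoke(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}

async function demo(): Promise<void> {
  const adapter: AgentAdapter = new EchoAdapter();
  console.log(await adapter.invoke("hello")); // prints "echo: hello"
}

demo();
```

A real adapter would replace the echo body with an HTTP call or a child-process pipe, but the surface the runner sees stays the same.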
Core improvements are always welcome:

- Improve scoring algorithms
- Add new automated scorer types
- Add new reporter formats
- Optimize performance
- Fix bugs
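New automated scorer types are typically small, pure functions over the agent's output. A hedged TypeScript sketch of what one could look like (the `Scorer` shape and names are illustrative assumptions, not sensei's actual interfaces):

```typescript
// Hypothetical scorer contract -- check the repo's real scorer types before copying.
interface ScoreResult {
  score: number; // 0..1
  reason: string;
}

type Scorer = (expected: string, actual: string) => ScoreResult;

// Example automated scorer: full credit when the expected value appears in the output.
const containsScorer: Scorer = (expected, actual) => {
  const pass = actual.includes(expected);
  return {
    score: pass ? 1 : 0,
    reason: pass ? "expected value found in output" : "expected value missing",
  };
};

console.log(containsScorer("42", "The answer is 42").score); // 1
```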
To set up a development environment:

```shell
git clone https://github.com/mondaycom/sensei.git
cd sensei
npm install
npm run build
npm test  # runs vitest across all packages
```

Suite guidelines:

- Each suite lives in `suites/<role-name>/`
- Define scenarios in `suite.yaml`
- Put test data in `fixtures/`
- Include at least:
  - 3 execution scenarios
  - 2 reasoning scenarios
  - 1 self-improvement scenario
- Each KPI must have a clear rubric (for LLM-judge) or expected value (for automated)
- Test your suite against a real agent before submitting
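Putting the guidelines together, a new suite might be laid out like the following sketch (directory names follow the guidelines above; the YAML fields are illustrative assumptions, so check an existing suite for the real schema):

```yaml
# suites/support-engineer/suite.yaml -- hypothetical example
name: support-engineer
scenarios:
  - id: triage-ticket          # execution layer (3 minimum)
    layer: execution
    fixture: fixtures/sample-ticket.json
  - id: justify-escalation     # reasoning layer (2 minimum)
    layer: reasoning
    fixture: fixtures/escalation-transcript.txt
  - id: refine-playbook        # self-improvement layer (1 minimum)
    layer: self-improvement
    fixture: fixtures/playbook.md
```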
Code style:

- TypeScript strict mode
- Meaningful variable names
- Comments for complex logic only
- All new code should have corresponding tests
Releases are triggered manually by maintainers via Actions → Release → Run workflow. Select the bump type (patch / minor / major) and which packages to release. You don't need to do anything special in your PR to trigger a release.
To submit a change:

- Fork the repo
- Create a feature branch
- Make your changes
- Run tests: `npm test`
- Run build: `npm run build`
- Submit a PR with a clear description
By contributing, you agree that your contributions will be licensed under MIT.