CancerVision is an AI project that uses computer vision and deep learning to segment brain tumors from MRI scans. It identifies tumor regions such as the core, edema, and enhancing areas to support medical analysis and treatment planning. The aim is to build accurate models that reduce manual work and improve consistency in diagnosis.
- Git: ensure that Git is installed on your machine.
- Python 3.13: required for the project.
- UV: used for managing Python environments.
- Docker (optional): for DevContainer development.
- Clone the repository:

git clone https://github.com/CogitoNTNU/CancerVision
cd CancerVision
- Install dependencies:
uv sync
- Set up pre-commit hooks (development only):
uv run pre-commit install
To run the project, run the following command from the root directory of the project:

uv run python -m src.training.train --experiment-id dynunet_brats_baseline

Preview a resolved training setup without running it:

uv run python -m src.training.train --experiment-id dynunet_brats_baseline --dry-run

Run single-case inference with the registry-backed CLI:
uv run python -m src.inference.inference \
    --model-id dynunet_latest \
    --case-dir /path/to/BraTS20_Training_001

Run inference for all case folders in a directory:
uv run python -m src.inference.inference \
    --model-id dynunet_latest \
    --input-root /path/to/MICCAI_BraTS2020_TrainingData \
    --output-root res/predictions/dynunet_latest

Deployable model definitions live in res/models/model_registry.json. Each entry maps a model id to architecture and checkpoint metadata, so inference does not depend on hardcoded paths.
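The registry lookup can be sketched in plain Python. The entry fields below (`architecture`, `checkpoint`) and the checkpoint path are illustrative assumptions: the source only states that each entry in res/models/model_registry.json maps a model id to architecture and checkpoint metadata.

```python
import json

# Hypothetical registry content; the real schema lives in
# res/models/model_registry.json and may differ.
registry_text = json.dumps({
    "dynunet_latest": {
        "architecture": "DynUNet",                          # assumed field name
        "checkpoint": "res/checkpoints/dynunet_latest.pt",  # assumed field name
    }
})

def resolve_model(registry: dict, model_id: str) -> dict:
    """Return the registry entry for a model id, failing loudly if it is absent."""
    try:
        return registry[model_id]
    except KeyError:
        known = ", ".join(sorted(registry))
        raise KeyError(f"Unknown model id {model_id!r}; known ids: {known}")

registry = json.loads(registry_text)
entry = resolve_model(registry, "dynunet_latest")
print(entry["checkpoint"])  # the weights path recorded for this model id
```

Resolving ids through a registry like this is what lets the CLI accept `--model-id dynunet_latest` instead of a hardcoded checkpoint path.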
Serve a browser-based inference UI that accepts arbitrary model weights and the four BraTS MRI modalities via drag-and-drop:
uv run python -m src.web --host 127.0.0.1 --port 8080

See docs/manuals/web-interface.md for details.
Classify predicted segmentations into case-level tumor categories:
uv run python -m src.classification.classify \
    --classifier-id brats_rule_based_v1 \
    --input-root res/predictions/dynunet_latest

Classifier definitions live in res/classification/classifier_registry.json.
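The actual rules behind brats_rule_based_v1 are not documented here. As an illustration only, a case-level rule over a predicted label volume might look like the toy sketch below; the BraTS label convention (1 = necrotic core, 2 = edema, 4 = enhancing tumor) and the threshold are assumptions, not the project's classifier.

```python
import numpy as np

# Assumed BraTS-style labels: 0 background, 1 necrotic core,
# 2 peritumoral edema, 4 enhancing tumor.
def classify_case(segmentation: np.ndarray) -> str:
    """Toy case-level rule; illustrative only, not brats_rule_based_v1."""
    voxels = segmentation.size
    tumor_fraction = np.count_nonzero(segmentation > 0) / voxels
    enhancing_fraction = np.count_nonzero(segmentation == 4) / voxels
    if tumor_fraction == 0:
        return "no_tumor"
    if enhancing_fraction > 0.001:  # made-up threshold for illustration
        return "enhancing_present"
    return "non_enhancing"

# Tiny synthetic volume with a couple of enhancing voxels.
seg = np.zeros((4, 4, 4), dtype=np.int8)
seg[0, 0, :2] = 4
print(classify_case(seg))
```

The real classifier is resolved by id from res/classification/classifier_registry.json, mirroring how inference resolves models.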
To build and preview the documentation site locally:
uv run mkdocs build
uv run mkdocs serve

This will build the documentation and start a local server at http://127.0.0.1:8000/ where you can browse the docs and API reference. To view the documentation for the latest commit on main, see the gh-pages branch on GitHub: https://cogitontnu.github.io/PROJECT-TEMPLATE/.
To run the test suite, run the following command from the root directory of the project:
uv run pytest --doctest-modules --cov=src --cov-report=html

This project would not have been possible without the hard work and dedication of all of the contributors. Thank you for the time and effort you have put into making this project a reality.
Distributed under the MIT License. See LICENSE for more information.

