Research codebase for ultrasound FMC-style reconstruction experiments: MobileViT-style backbones, FPN / UNet-style decoders, pixel shuffle heads, optional Mixture-of-Experts (MoE) and DANN variants. Documentation and primary code comments are in English.
This repository supports work presented in connection with the Acoustical Society of America — Honolulu 2025:
Enabling low-cost full matrix capture acquisition using MobileViT with binary ultrasonic data.
Authors: Rafael Niddam (École de technologie supérieure, 1100 Notre-Dame St. W., A-1340, Montréal, QC H3C 1K3, Canada), Guillaume Painchaud-April (Evident Sci., QC, Canada), Alain Le Duff (Evident Sci., QC, Canada), and Pierre Belanger (École de technologie supérieure, Montréal, QC, Canada).
Contact: Rafael Niddam on LinkedIn
- Models: MobileViT v3 family (FPN, UNet, SegFormer, SPT, LSA, pixel variants), Lite MobileViT blocks, MobileNet v2 FPN/UNet, and MoE and DANN wrappers (see the registry in `models/__init__.py`). The final FMC pixel-shuffle segmentation model is `MobileViTv3_v1_dynamicFPNpixel2`, implemented in `models/segmentation/mobilevit_v3_pixel2.py`.
- Training: `training/train.py` (argparse, `--list-models`), plus `training/train_moe.py` and `training/train_dann.py` for specialized setups.
- Evaluation & analysis: use `evaluation/evaluate_loop2.py` as the main FMC validation script and `evaluation/evaluate_moe2.py` for MoE. A trailing `2` in a script name marks the final version of that script (see `evaluation/__init__.py`).
- Utilities: losses, NCC, and plotting helpers in `utils/`.
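Training scripts look models up by name in the registry exposed by `models/__init__.py` (this is what `--list-models` enumerates). The sketch below shows one common way such a name-based registry is built; the actual API, decorator names, and constructor signatures in this repository may differ.

```python
# Illustrative sketch of a name-keyed model registry; names and signatures
# here are hypothetical, not taken from models/__init__.py.
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator mapping a string name to a model constructor."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("MobileViTv3_v1_dynamicFPNpixel2")
class MobileViTv3FPNPixel2:
    def __init__(self, num_classes=1):
        self.num_classes = num_classes

def build_model(name, **kwargs):
    """Look a model up by registered name and construct it."""
    if name not in MODEL_REGISTRY:
        raise KeyError(f"Unknown model '{name}'; choices: {sorted(MODEL_REGISTRY)}")
    return MODEL_REGISTRY[name](**kwargs)

# --list-models amounts to printing the registered names
print(sorted(MODEL_REGISTRY))
```

With this pattern, adding a new backbone only requires decorating its class; the training CLI picks it up without further changes.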
```text
MobileViT-FPN-PixelShuffle/
├── configs/          # Hyperparameters and training defaults
├── evaluation/       # Evaluation and post-hoc analysis
├── models/           # Backbones, decoders, segmentation heads, MoE
├── training/         # Training entry points
├── utils/            # Metrics, losses, FMC visualization helpers
├── requirements.txt
├── LICENSE
└── README.md
```
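The segmentation heads end in a pixel-shuffle upsampling step, which trades channel depth for spatial resolution. A minimal pure-Python illustration of the rearrangement (same semantics as `torch.nn.PixelShuffle`, kept dependency-free here; the repository's heads operate on real tensors):

```python
# Pixel shuffle: rearrange (C*r*r, H, W) -> (C, H*r, W*r).
# Nested lists stand in for tensors for illustration only.
def pixel_shuffle(x, r):
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    c = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(c):
        for i in range(r):
            for j in range(r):
                # channel ch*r*r + i*r + j fills the (i, j) sub-grid position
                src = x[ch * r * r + i * r + j]
                for y in range(h):
                    for z in range(w):
                        out[ch][y * r + i][z * r + j] = src[y][z]
    return out

# 4 channels of 1x1 become 1 channel of 2x2 with upscale factor r=2
x = [[[1.0]], [[2.0]], [[3.0]], [[4.0]]]
print(pixel_shuffle(x, 2))  # [[[1.0, 2.0], [3.0, 4.0]]]
```

Because the rearrangement is learned through the preceding convolution, pixel shuffle avoids the checkerboard artifacts that transposed convolutions can introduce.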
## Requirements
- Python 3.10+
- CUDA GPU recommended for training (CPU may work for tiny tests)
## Install

```bash
python -m venv .venv
```

Activate the environment:

- Windows (PowerShell): `.\.venv\Scripts\Activate.ps1`
- Linux / macOS: `source .venv/bin/activate`

Then install the dependencies:

```bash
pip install -r requirements.txt
```

List the registered model names:

```bash
python training/train.py --list-models
```

Train (example; adjust `--model` and `--batch_size` as needed):

```bash
python training/train.py --model MobileViTv3_v1_dynamicFPNpixel2 --batch_size 16
```

Evaluate (final workflow: loop presets plus project-relative paths):

```bash
python evaluation/evaluate_loop2.py --preset 0
# Optional: checkpoint and a folder under data/ (e.g. Test_dataset) or an absolute path
python evaluation/evaluate_loop2.py --preset 0 --checkpoint path/to/Model.pth --test-dir Test_dataset
```

Preset 0 reads `.mat` files from `data/Test_dataset/`; other presets use `data/<subdir>/` (see `test_subdir` in `evaluate_loop2.py`). Weights are loaded from `weights/` or via `--checkpoint`. Older scripts: `evaluation/evaluate.py`, `evaluation/evaluate_loop.py`.
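Evaluation compares reconstructions against references, and `utils/` includes NCC helpers among its metrics. As a hedged illustration of the underlying measure (the actual helper names and signatures in `utils/` may differ), zero-mean normalized cross-correlation between two signals can be computed as:

```python
import math

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length
    signals; returns a value in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

signal = [0.0, 1.0, 0.0, -1.0]
print(ncc(signal, signal))                 # identical signals -> 1.0
print(ncc(signal, [-x for x in signal]))   # inverted signal  -> -1.0
```

NCC is amplitude-invariant, which makes it a natural fit for comparing reconstructed A-scans whose absolute scale depends on acquisition gain.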
Additional trainers (older, script-style imports; these may require aligning `sys.path` or the project root with your layout):

```bash
python training/train_moe.py
python training/train_dann.py
```
- Defaults live in `configs/` (e.g. `config_mobileunet.py`).
- Training data: `data/train_dataset/` and `data/valid_dataset/` (see `training/train.py`).
- Evaluation: `evaluate_loop2.py`; preset 0 uses `data/Test_dataset/`, other presets use `data/<subdir>/` plus `weights/` (or `--checkpoint` / `--test-dir`).
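As a rough sketch of the shape a `configs/` module might take (field names and defaults below are illustrative assumptions, not taken from the repository), a dataclass keeps the defaults in one place while letting CLI flags override them:

```python
from dataclasses import dataclass

# Hypothetical config layout; the real configs/ files may use plain
# module-level variables or different field names.
@dataclass
class TrainConfig:
    model: str = "MobileViTv3_v1_dynamicFPNpixel2"
    batch_size: int = 16
    lr: float = 1e-4
    epochs: int = 100
    train_dir: str = "data/train_dataset"
    valid_dir: str = "data/valid_dataset"

# CLI flags such as --batch_size can override the stored defaults
cfg = TrainConfig(batch_size=8)
print(cfg.batch_size, cfg.model)
```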
This project is released under the MIT License; see LICENSE.