This project trains deep learning models to classify cinnamon quill images into four quality grades:
- Alba
- C4
- C5
- C5 Special
The workflow includes preprocessing, augmentation, two CNN architectures, transfer learning experiments, TensorBoard monitoring, evaluation, and saving trained models in .pth format.
```
project/
├── datasets/
│   └── cinnamon/
│       ├── Alba/
│       ├── C4/
│       ├── C5/
│       └── C5 Special/
├── checkpoints/
│   ├── saved_model_pretrained.pth
│   └── saved_model_scratch.pth
├── utils.py
├── train.py
├── test.py
├── predict.py
└── README.md
```

Place cinnamon images in the labeled class folders.
All images are resized and normalized:
- Resize to 224×224
- Normalize using ImageNet mean/std
- Convert to tensors
Used during training:
- Random rotations
- Horizontal & vertical flips
- Random cropping
- Optional color jitter
These augmentations help the models generalize better; class imbalance is additionally addressed with weighted sampling during training.
The dataset is split using PyTorch's `random_split`:
- 70% Training
- 15% Validation
- 15% Test
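The split, together with the `WeightedRandomSampler` used during training, might be wired up as follows (the helper `make_loaders` is hypothetical; with an `ImageFolder` dataset, `dataset.targets` would supply the labels):

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler, random_split

def make_loaders(dataset, labels, batch_size=8, seed=42):
    """70/15/15 split plus a class-balancing sampler for the training portion."""
    n = len(dataset)
    n_train, n_val = int(0.70 * n), int(0.15 * n)
    n_test = n - n_train - n_val
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n_test],
        generator=torch.Generator().manual_seed(seed))

    # Weight each training sample inversely to its class frequency.
    train_labels = torch.tensor([labels[i] for i in train_set.indices])
    class_counts = torch.bincount(train_labels)
    weights = 1.0 / class_counts[train_labels].float()
    sampler = WeightedRandomSampler(weights, num_samples=len(weights),
                                    replacement=True)

    train_loader = DataLoader(train_set, batch_size=batch_size, sampler=sampler)
    val_loader = DataLoader(val_set, batch_size=batch_size)
    test_loader = DataLoader(test_set, batch_size=batch_size)
    return train_loader, val_loader, test_loader
```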
Two network architectures were implemented, ResNet18 and VGG16, each trained in two modes:
- With transfer learning (ImageNet-pretrained weights)
- Without transfer learning (trained from scratch)
Toggle transfer learning:

```python
use_pretrained = True   # Transfer learning
use_pretrained = False  # Train from scratch
```

The final classifier layers are adapted for 4 output classes: Alba, C4, C5, and C5 Special.
Both ResNet18 and VGG16 were trained with the following configuration:
- Optimizers: SGD and Adam (run as separate experiments)
- Loss: CrossEntropyLoss
- WeightedRandomSampler to reduce class imbalance
- Batch size = 8
- Learning rate scheduler
- Dropout added to the VGG16 classifier
Training and validation logs are tracked in TensorBoard:

```bash
tensorboard --logdir=runs
```

Models are evaluated on the test set using:
- Test accuracy
- Test loss
- F1-score
- Class-wise accuracy
- Confusion matrix
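The metrics above can be computed from the model's test-set predictions, for example with scikit-learn (an assumption; the `evaluate` helper and its return keys are illustrative):

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

def evaluate(y_true, y_pred, class_names=("Alba", "C4", "C5", "C5 Special")):
    """Overall accuracy, macro F1, class-wise accuracy, and confusion matrix."""
    cm = confusion_matrix(y_true, y_pred, labels=range(len(class_names)))
    # Class-wise accuracy = correct predictions per class / samples per class.
    per_class = cm.diagonal() / cm.sum(axis=1).clip(min=1)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_macro": f1_score(y_true, y_pred, average="macro"),
        "per_class_accuracy": dict(zip(class_names, per_class)),
        "confusion_matrix": cm,
    }
```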
A final summary table is generated:
| Model | Optimizer | Loss | Accuracy | F1 | Classwise Accuracy |
|---|---|---|---|---|---|
| ResNet18 | SGD | ... | ... | ... | ... |
| ResNet18 | Adam | ... | ... | ... | ... |
| VGG16 | SGD | ... | ... | ... | ... |
| VGG16 | Adam | ... | ... | ... | ... |
Each experiment saves a `.pth` model checkpoint:
- `saved_model_pretrained.pth`
- `saved_model_scratch.pth`
Saved using:

```python
torch.save({
    "state_dict": model.state_dict(),
    "optimizer": optimizer.state_dict()
}, "checkpoints/model_name.pth")
```

A separate predict.py script loads a trained .pth model and predicts the class of a single input image.
Run it with:

```bash
python predict.py
```

Example output:

```
Image: datasets/cinnamon/C5/C5 01.JPG
Predicted Grade: C5
```

Project checklist:
- Download and preprocess dataset
- Data augmentation
- Train/Val/Test split
- ResNet18 + VGG16 implemented
- Transfer learning ON/OFF
- Train using SGD & Adam
- TensorBoard logging
- Test accuracy and metrics
- Save model in `.pth` format
- Prediction script

Notes:
- Imbalanced data may require `WeightedRandomSampler`.
- Transfer learning typically improves performance.
- Include TensorBoard accuracy / loss plots in your report.
Build a reliable cinnamon grading model using deep learning and analyze how:
- architecture choice (ResNet vs. VGG), and
- training strategy (transfer learning vs. training from scratch)
affect final performance.