A plugin for re-identification of wildlife individuals using learned embeddings.
Install the dependencies with:

pip install -r requirements.txt
Optionally, the following environment variables can be set to enable Weights and Biases logging:
WANDB_API_KEY={your_wandb_api_key}
WANDB_MODE={'online'/'offline'}
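For example, in a POSIX shell the variables can be exported before launching training (the values below are placeholders, not real credentials):

```shell
# Placeholder values -- substitute your own W&B API key.
export WANDB_API_KEY=your_wandb_api_key
export WANDB_MODE=offline
```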
To start training, run:
cd wbia_tbd
python train.py
The data is expected to be in the COCO JSON format. Paths to the data files and the image directory are defined in the config YAML file.
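A minimal illustration of the expected annotation layout is shown below. The standard COCO keys (images, annotations, categories) are well defined; any extra per-annotation fields such as name and viewpoint are assumptions about what this plugin's loader reads, not something this README confirms:

```json
{
  "images": [
    {"id": 1, "file_name": "zebra_001.jpg", "height": 600, "width": 800}
  ],
  "annotations": [
    {"id": 1, "image_id": 1, "category_id": 1,
     "bbox": [120, 80, 300, 240],
     "name": "zebra_A", "viewpoint": "left"}
  ],
  "categories": [
    {"id": 1, "name": "zebra"}
  ]
}
```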
The config file path can be specified with:
python train.py --config {path_to_config}
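Internally, such a flag is typically wired up with argparse; the sketch below is an illustration only, not the actual train.py code, and the parse_config_path helper and default path are hypothetical:

```python
import argparse


def parse_config_path(argv=None):
    """Return the config path from the command line (sketch only)."""
    parser = argparse.ArgumentParser(description="Train a re-identification model")
    parser.add_argument(
        "--config",
        default="configs/default.yaml",  # hypothetical default location
        help="Path to the config YAML file",
    )
    return parser.parse_args(argv).config
```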
- exp_name: Name of the experiment
- project_name: Name of the project
- checkpoint_dir: Directory for storing training checkpoints
- comment: Comment text for the experiment
- data: Subfields for data-related settings
  - images_dir: Directory containing all of the dataset images
  - train_anno_path: Path to the JSON file containing training annotations
  - val_anno_path: Path to the JSON file containing validation annotations
  - viewpoint_list: List of viewpoints to use
  - train_n_filter_min: Minimum number of samples per name (individual) to keep in the training set; names below the threshold are discarded
  - val_n_filter_min: Minimum number of samples per name (individual) to keep in the validation set; names below the threshold are discarded
  - train_n_subsample_max: Maximum number of samples per name to keep in the training set; annotations of names above the threshold are randomly subsampled during loading
  - val_n_subsample_max: Maximum number of samples per name to keep in the validation set; annotations of names above the threshold are randomly subsampled during loading
  - name_keys: List of keys used for defining a unique name (individual). Fields from multiple keys are combined to form the final representation of a name. A common use case is name_keys: ['name', 'viewpoint'] for treating each name + viewpoint combination as unique
  - image_size:
    - Image height to resize to
    - Image width to resize to
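The filtering and subsampling behaviour described by the n_filter_min / n_subsample_max settings can be sketched as follows. This is a simplified illustration of the stated semantics, not the plugin's actual loader; the function name is hypothetical:

```python
import random
from collections import defaultdict


def filter_and_subsample(annotations, n_filter_min, n_subsample_max, seed=0):
    """Keep only names with at least n_filter_min annotations, then randomly
    subsample names that exceed n_subsample_max (sketch of the behaviour
    described by the *_n_filter_min / *_n_subsample_max settings)."""
    by_name = defaultdict(list)
    for anno in annotations:
        by_name[anno["name"]].append(anno)
    rng = random.Random(seed)
    kept = []
    for name, annos in by_name.items():
        if len(annos) < n_filter_min:
            continue  # discard names below the threshold
        if len(annos) > n_subsample_max:
            annos = rng.sample(annos, n_subsample_max)  # random subsample
        kept.extend(annos)
    return kept
```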
- engine: Subfields for engine-related settings
  - num_workers: Number of workers for data loading (default: 0)
  - train_batch_size: Batch size for training
  - valid_batch_size: Batch size for validation
  - epochs: Number of training epochs
  - seed: Random seed for reproducibility
  - device: Device to be used for training
  - loss_module: Loss function module
  - use_wandb: Whether to use Weights and Biases for logging
- scheduler_params: Subfields for learning rate scheduler parameters
  - lr_start: Initial learning rate
  - lr_max: Maximum learning rate
  - lr_min: Minimum learning rate
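The three values suggest a warmup-then-decay shape: a ramp from lr_start up to lr_max, followed by annealing down toward lr_min. Whether the plugin uses exactly this schedule is an assumption; the sketch below shows one common variant (linear warmup plus cosine decay) with a hypothetical helper name:

```python
import math


def lr_at_epoch(epoch, epochs, warmup_epochs, lr_start, lr_max, lr_min):
    """One plausible schedule: linear warmup followed by cosine decay."""
    if epoch < warmup_epochs:
        # linear ramp from lr_start to lr_max
        return lr_start + (lr_max - lr_start) * epoch / warmup_epochs
    # cosine anneal from lr_max down to lr_min
    progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))
```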
- model_params: Dictionary containing model-related settings
  - model_name: Name of the model backbone architecture
  - use_fc: Whether to use a fully connected layer after backbone extraction
  - fc_dim: Dimension of the fully connected layer
  - dropout: Dropout rate
  - loss_module: Loss function module
  - s: Scaling factor for the loss function
  - margin: Margin for the loss function
  - pretrained: Whether to use a pretrained model backbone
  - n_classes: Number of classes in the training dataset, used for loading the checkpoint
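The s (scale) and margin fields are characteristic of margin-based softmax losses such as ArcFace; that this plugin uses ArcFace specifically is an assumption, not something this README states. A minimal numpy sketch of how such logits are typically formed:

```python
import numpy as np


def margin_logits(embeddings, class_weights, labels, s=30.0, margin=0.3):
    """Cosine logits with an additive angular margin on the target class
    (ArcFace-style; a sketch, not the plugin's actual loss_module)."""
    # L2-normalize embeddings and class centers so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = class_weights / np.linalg.norm(class_weights, axis=1, keepdims=True)
    cosine = np.clip(e @ w.T, -1.0 + 1e-7, 1.0 - 1e-7)
    theta = np.arccos(cosine)
    # add the margin only to each sample's true class, then scale by s
    one_hot = np.eye(class_weights.shape[0])[labels]
    return s * np.cos(theta + margin * one_hot)
```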
- test: Subfields for plugin-related settings
  - fliplr: Whether to perform horizontal flipping during testing
  - fliplr_view: List of viewpoints to apply horizontal flipping to
  - batch_size: Batch size for plugin inference
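The fliplr behaviour can be sketched with numpy as below. This is a simplified illustration of horizontal-flip test-time augmentation under the assumption that the embeddings of the original and mirrored images are averaged; the plugin's actual inference code is not shown in this README, and the helper names are hypothetical:

```python
import numpy as np


def embed_with_fliplr(image, embed_fn, viewpoint, fliplr_views):
    """Average embeddings over the original and horizontally flipped image
    when the annotation's viewpoint is listed in fliplr_views (sketch only)."""
    emb = embed_fn(image)
    if viewpoint in fliplr_views:
        flipped = image[:, ::-1]  # mirror along the width axis
        emb = (emb + embed_fn(flipped)) / 2.0
    return emb
```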
This initial commit includes training, inference, and WBIA integration capabilities; additional features are underway.