[Feature] Add RandAugment_T to pipelines #2154
sttaseen wants to merge 1 commit into open-mmlab:0.x from
Conversation
Thank you very much for contributing to MMAction2. Could you please sign the CLA so we can accept your contribution? There also seem to be lint issues in the code. Would you mind fixing them with the pre-commit hooks, following our documentation?
Hi @sttaseen! We are grateful for your efforts in helping improve this open-source project during your personal time. Welcome to join the OpenMMLab Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator OpenMMLab on Discord, or briefly share your open-source contributions in the #introductions channel and we will assist you. Look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA Thank you again for your contribution ❤
Motivation
While `torchvision.transforms.RandAugment` works effectively for spatial transformations, it does not cover the temporally varying transformations needed for video clips. T. Kim et al. proposed `RandAug_T` in their paper, Learning Temporally Invariant and Localizable Features via Data Augmentations, as an extension of `torchvision.transforms.RandAugment` that linearly interpolates a random transformation between two magnitudes from the first frame to the last frame of a video clip.

Modification
- Added `randaugment_utils.py` under `mmaction/datasets/pipelines`.
- Modified `__init__.py` and `augmentations.py` under `mmaction/datasets/pipelines` to add the new data augmentation, `RandAugment_T`.

Use cases (Optional)
Sample Use:
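As a rough illustration of the idea behind `RandAugment_T` (not the actual interface added in this PR), the sketch below shows how per-frame magnitudes could be produced: two endpoint magnitudes are sampled, then linearly interpolated from the first frame to the last so a transformation's strength varies smoothly across the clip. The helper name `interpolated_magnitudes` is hypothetical.

```python
import random


def interpolated_magnitudes(num_frames, max_magnitude, rng=None):
    """Sample two endpoint magnitudes in [0, max_magnitude] and linearly
    interpolate between them, yielding one magnitude per frame.

    This mirrors the temporal extension described in the Motivation:
    rather than applying one fixed magnitude to every frame (as in
    torchvision's RandAugment), the strength varies from the first
    frame to the last.  Hypothetical sketch, not the PR's actual API.
    """
    rng = rng or random.Random()
    m_start = rng.uniform(0, max_magnitude)
    m_end = rng.uniform(0, max_magnitude)
    if num_frames == 1:
        return [m_start]
    step = (m_end - m_start) / (num_frames - 1)
    return [m_start + step * t for t in range(num_frames)]


# Each frame t would then be transformed with magnitudes[t], e.g. a
# rotation whose angle grows or shrinks steadily over the clip.
magnitudes = interpolated_magnitudes(num_frames=8, max_magnitude=9)
```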
Checklist