This is a collection of research papers on multi-modal reinforcement learning (MMRL). The repository is continuously updated to track the frontier of MMRL. Some papers are not strictly RL papers, but we include them because they may be useful for MMRL research.
Welcome to follow and star!
Multi-modal RL agents learn from video (images), language (text), or both, much as humans do. We believe it is important for intelligent agents to learn directly from images and text, since such data is easily obtained from the Internet.
Format:
- [title](paper link) [links]
- authors.
- key words.
- experiment environment.
-
- Ruiqi Wang, Dezhong Zhao, Ziqin Yuan, Tianyu Shao, Guohua Chen, Dominic Kao, Sungeun Hong, Byung-Cheol Min
- Keywords: Preference-based Reinforcement Learning, Foundation Models for Robotics, Neuro-Symbolic Fusion, Multimodal Feedback, Causal Inference, Trajectory Synthesis, Robot Manipulation
- ExpEnv: 2 locomotion and 6 manipulation tasks
-
Co-Reinforcement Learning for Unified Multimodal Understanding and Generation
- Jingjing Jiang, Chongjie Si, Jun Luo, Hanwang Zhang, Chao Ma
- Keywords: Reinforcement Learning, GRPO, Unified Multimodal Understanding and Generation
- ExpEnv: text-to-image generation and multimodal understanding benchmarks
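Many entries in this list fine-tune VLMs with GRPO. As a rough aid to readers, here is a minimal sketch of the group-relative advantage at GRPO's core (simplified: real implementations add clipped policy ratios and KL regularization; the function name is ours):

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages (simplified GRPO): normalize each
    sampled response's reward by the mean and std of its group,
    so no learned value baseline is needed."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Example: four sampled answers to one prompt, binary correctness rewards.
adv = grpo_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct answers get positive advantages and incorrect ones negative, relative to the group average.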
-
VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning
- Haozhe Wang, Chao Qu, Zuming Huang, Wei Chu, Fangzhen Lin, Wenhu Chen
- Keywords: Vision-Language Models, Reasoning, Reinforcement Learning
- ExpEnv: MathVista, MathVerse, MathVision, MMMU-Pro, EMMA, MEGA-Bench
-
SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement
- Xiyao Wang, Zhengyuan Yang, Chao Feng, Hongjin Lu, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang
- Keywords: vision language model; reinforcement finetuning; vlm reasoning; data selection
- ExpEnv: MathVista and other visual reasoning benchmarks
-
VisualQuality-R1: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank
- Tianhe Wu, Jian Zou, Jie Liang, Lei Zhang, Kede Ma
- Keywords: Image Quality Assessment, Reinforcement Learning, Reasoning-induced no-reference IQA model
- ExpEnv: Image quality assessment benchmarks
-
Q-Insight: Understanding Image Quality via Visual Reinforcement Learning
- Weiqi Li, Xuanyu Zhang, Shijie Zhao, Yabin ZHANG, Junlin Li, Li zhang, Jian Zhang
- Keywords: image quality understanding, multi-modal large language model, reinforcement learning
- ExpEnv: Image quality understanding benchmarks
-
To Think or Not To Think: A Study of Thinking in Rule-Based Visual Reinforcement Fine-Tuning
- Ming Li, Jike Zhong, Shitian Zhao, Yuxiang Lai, Haoquan Zhang, Wang Bill Zhu, Kaipeng Zhang
- Keywords: Visual Reinforcement Fine-Tuning, explicit thinking, overthinking
- ExpEnv: six diverse visual reasoning tasks
-
Fast-Slow Thinking GRPO for Large Vision-Language Model Reasoning
- Wenyi Xiao, Leilei Gan
- Keywords: Large Vision-Language Model, Fast-Slow Thinking, Reasoning
- ExpEnv: seven reasoning benchmarks
-
SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning
- Borong Zhang, Yuhao Zhang, Jiaming Ji, Yingshan Lei, Josef Dai, Yuanpei Chen, Yaodong Yang
- Keywords: Vision-Language-Action Models, Safety Alignment, Large-Scale Constrained Learning
- ExpEnv: long-horizon mobile manipulation tasks
-
Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models
- Jiaqi WANG, Kevin Qinghong Lin, James Cheng, Mike Zheng Shou
- Keywords: Vision-Language Models, Reinforcement Learning
- ExpEnv: CLEVR, Super-CLEVR, GeoQA, AITZ
-
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
- Senqiao Yang, Junyi Li, Xin Lai, Jinming Wu, Wei Li, Zejun MA, Bei Yu, Hengshuang Zhao, Jiaya Jia
- Keywords: Vision Language Models, Reinforcement Learning
- ExpEnv: general VQA tasks and OCR-related tasks
-
Systematic Reward Gap Optimization for Mitigating VLM Hallucinations
- Lehan He, Zeren Chen, Zhelun Shi, Tianyu Yu, Lu Sheng, Jing Shao
- Keywords: Vision Language Models (VLMs), Preference learning, Hallucination mitigation, Reinforcement Learning from AI Feedback (RLAIF)
- ExpEnv: ObjectHal-Bench and other hallucination benchmarks
-
VAGEN: Reinforcing World Model Reasoning for Multi-Turn VLM Agents
- Kangrui Wang, Pingyue Zhang, Zihan Wang, Yaning Gao, Linjie Li, Qineng Wang, Hanyang Chen, Yiping Lu, Zhengyuan Yang, Lijuan Wang, Ranjay Krishna, Jiajun Wu, Li Fei-Fei, Yejin Choi, Manling Li
- Keywords: Visual States, World Modeling, Multi-turn RL, VLM Agents
- ExpEnv: five diverse agent tasks
-
Point-RFT: Improving Multimodal Reasoning with Visually Grounded Reinforcement Finetuning
- Minheng Ni, Zhengyuan Yang, Linjie Li, Chung-Ching Lin, Kevin Lin, Wangmeng Zuo, Lijuan Wang
- Keywords: Large multimodal model, grounded reasoning, reinforcement learning
- ExpEnv: ChartQA, CharXiv, PlotQA, IconQA, TabMWP
-
MiCo: Multi-image Contrast for Reinforcement Visual Reasoning
- Xi Chen, Mingkang Zhu, Shaoteng Liu, Xiaoyang Wu, Xiaogang Xu, Yu Liu, Xiang Bai, Hengshuang Zhao
- Keywords: Visual Reasoning, Chain-of-Thought, LLM, VLM, MLLM
- ExpEnv: multi-image reasoning benchmarks
-
DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO
- Jinyoung Park, Jeehye Na, Jinyoung Kim, Hyunwoo J. Kim
- Keywords: Video Large Language Model, Post-training, GRPO
- ExpEnv: video reasoning benchmarks
-
SAM-R1: Leveraging SAM for Reward Feedback in Multimodal Segmentation via Reinforcement Learning
- Jiaqi Huang, Zunnan Xu, Jun Zhou, Ting Liu, Yicheng Xiao, Mingwen Ou, Bowen Ji, Xiu Li, Kehong Yuan
- Keywords: Reinforcement Learning, Multimodal Large Models, Image Segmentation
- ExpEnv: image segmentation benchmarks
-
SE-GUI: Enhancing Visual Grounding for GUI Agents via Self-Evolutionary Reinforcement Learning
- Xinbin Yuan, Jian Zhang, Kaixin Li, Zhuoxuan Cai, Lujian Yao, Jie Chen, Enguang Wang, Qibin Hou, Jinwei Chen, Peng-Tao Jiang, Bo Li
- Keywords: gui agent; reinforcement learning; visual grounding
- ExpEnv: ScreenSpot-Pro and other grounding benchmarks
-
GUI Exploration Lab: Enhancing Screen Navigation in Agents via Multi-Turn Reinforcement Learning
- Haolong Yan, Yeqing Shen, Xin Huang, Jia Wang, Kaijun Tan, Zhixuan Liang, Hongxin Li, Zheng Ge, Osamu Yoshie, Si Li, Xiangyu Zhang, Daxin Jiang
- Keywords: GUI Environment, Large Vision Language Model, Multi-Turn Reinforcement Learning, Agent
- ExpEnv: simulated PC software and mobile apps
-
Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models
- Huajie Tan, Yuheng Ji, Xiaoshuai Hao, Xiansheng Chen, Pengwei Wang, Zhongyuan Wang, Shanghang Zhang
- Keywords: Multimodal, Reinforcement Fine-Tuning, Visual Reasoning
- ExpEnv: visual counting, structural perception, spatial transformation
-
TempSamp-R1: Effective Temporal Sampling with Reinforcement Fine-Tuning for Video LLMs
- Yunheng Li, JingCheng, Shaoyong Jia, Hangyi Kuang, Shaohui Jiao, Qibin Hou, Ming-Ming Cheng
- Keywords: Temporal Grounding; Multimodal Large Language Model; Reinforcement Fine-Tuning
- ExpEnv: Charades-STA, ActivityNet Captions, QVHighlights
-
Grounded Reinforcement Learning for Visual Reasoning
- Gabriel Herbert Sarch, Snigdha Saha, Naitik Khandelwal, Ayush Jain, Michael J. Tarr, Aviral Kumar, Katerina Fragkiadaki
- Keywords: visual reasoning, vision-language models, reinforcement learning, visual grounding
- ExpEnv: SAT-2, BLINK, V*bench, ScreenSpot, VisualWebArena
-
SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning
- Zhongwei Wan, Zhihao Dou, Che Liu, Yu Zhang, Dongfei Cui, Qinjian Zhao, Hui Shen, Jing Xiong, Yi Xin, Yifan Jiang, Chaofan Tao, Yangfan He, Mi Zhang, Shen Yan
- Keywords: MLLMs, Reasoning
- ExpEnv: MathVista, MathVision, MathVerse, MMMU-Pro
-
Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration
- Hao Zhong, Muzhi Zhu, Zongze Du, Zheng Huang, Canyu Zhao, Mingyu Liu, Wen Wang, Hao Chen, Chunhua Shen
- Keywords: RL, Omni
- ExpEnv: Referring Audio-Visual Segmentation (RefAVS), Reasoning Video Object Segmentation (REVOS)
-
Janus-Pro-R1: Advancing Collaborative Visual Comprehension and Generation via Reinforcement Learning
- Kaihang Pan, Yang Wu, Wendong Bu, Kai Shen, Juncheng Li, Yingting Wang, liyunfei, Siliang Tang, Jun Xiao, Fei Wu, ZhaoHang, Yueting Zhuang
- Keywords: Image generation, Image understanding
- ExpEnv: text-to-image generation and image editing benchmarks
-
Semi-off-Policy Reinforcement Learning for Vision-Language Slow-Thinking Reasoning
- Junhao Shen, Haiteng Zhao, Yuzhe Gu, Songyang Gao, Kuikun Liu, Haian Huang, Jianfei Gao, Dahua Lin, Wenwei Zhang, Kai Chen
- Keywords: Large vision-language model, Slow-thinking reasoning
- ExpEnv: MathVision, OlympiadBench
-
ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs
- Xiyao Wang, Zhengyuan Yang, Chao Feng, Yuhang Zhou, Xiaoyu Liu, Yongyuan Liang, Ming Li, Ziyi Zang, Linjie Li, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang
- Keywords: Visual reasoning; Vision-Language Model; Visual captioning; Reward Model; Visual Hallucination
- ExpEnv: visual perception benchmarks
-
GuardReasoner-VL: Safeguarding VLMs via Reinforced Reasoning
- Yue Liu, Shengfang Zhai, Mingzhe Du, Yulin Chen, Tri Cao, Hongcheng Gao, Cheng Wang, Xinfeng Li, Kun Wang, Junfeng Fang, Jiaheng Zhang, Bryan Hooi
- Keywords: Vision Language Model, Guard Model, Reinforcement Learning, Large Reasoning Model
- ExpEnv: safety benchmarks for VLMs
-
Fact-R1: Towards Explainable Video Misinformation Detection with Deep Reasoning
- Fanrui Zhang, Dian Li, Qiang Zhang, Chenjun, sinbadliu, Junxiong Lin, Jiahong Yan, Jiawei Liu, Zheng-Jun Zha
- Keywords: Video Misinformation Detection, Deep Reasoning
- ExpEnv: FakeVV benchmark
-
Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning
- Yana Wei, Liang Zhao, Jianjian Sun, Kangheng Lin, jisheng yin, Jingcheng Hu, Yinmin Zhang, En Yu, Haoran Lv, Zejia Weng, Jia Wang, Qi Han, Zheng Ge, Xiangyu Zhang, Daxin Jiang, Vishal M. Patel
- Keywords: Multimodal LLM, Visual Reasoning, Cognitive Behavior Transfer
- ExpEnv: MATH500, MathVision, MathVerse
-
VideoRFT: Incentivizing Video Reasoning Capability in MLLMs via Reinforced Fine-Tuning
- Qi Wang, Yanrui Yu, Ye Yuan, Rui Mao, Tianfei Zhou
- Keywords: Multimodal Large Language Models, Video Reasoning, Reinforced fine-tuning
- ExpEnv: six video reasoning benchmarks
-
Safe RLHF-V: Safe Reinforcement Learning from Multi-modal Human Feedback
- Jiaming Ji, Xinyu Chen, Rui Pan, Conghui Zhang, Han Zhu, Jiahao Li, Donghai Hong, Boyuan Chen, Jiayi Zhou, Kaile Wang, Juntao Dai, Chi-Min Chan, Yida Tang, Sirui Han, Yike Guo, Yaodong Yang
- Keywords: AI Safety, AI Alignment
- ExpEnv: BeaverTails-V benchmark
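Several entries here (RLHF, RLAIF, preference learning) fit a reward model from pairwise human or AI comparisons. A minimal sketch of the standard pairwise (Bradley-Terry) loss, assuming the reward model has already produced scalar scores for the chosen and rejected responses (names and values are ours for illustration):

```python
import numpy as np

def bradley_terry_loss(r_chosen, r_rejected):
    """Pairwise preference loss for reward-model fitting:
    mean of -log sigmoid(r_chosen - r_rejected), written via
    log1p(exp(-margin)) for numerical clarity."""
    margin = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    return float(np.mean(np.log1p(np.exp(-margin))))

# Two preference pairs: chosen responses scored higher than rejected ones.
loss = bradley_terry_loss([2.0, 1.5], [0.5, 1.0])
```

The loss shrinks as the reward model separates chosen from rejected responses by a wider margin.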
-
Unveiling Chain of Step Reasoning for Vision-Language Models with Fine-grained Rewards
- Honghao Chen, Xingzhou Lou, Xiaokun Feng, Kaiqi Huang, Xinlong Wang
- Keywords: VLM, Reasoning, PRM
- ExpEnv: vision-language reasoning benchmarks
-
GRIT: Teaching MLLMs to Think with Images
- Yue Fan, Xuehai He, Diji Yang, Kaizhi Zheng, Ching-Chen Kuo, Yuting Zheng, Sravana Jyothi Narayanaraju, Xinze Guan, Xin Eric Wang
- Keywords: Multimodal Reasoning model, Reinforcement learning
- ExpEnv: multimodal reasoning benchmarks
-
NoisyGRPO: Incentivizing Multimodal CoT Reasoning via Noise Injection and Bayesian Estimation
- Longtian Qiu, Shan Ning, Jiaxuan Sun, Xuming He
- Keywords: Multimodal Large Language Model, Reinforcement learning
- ExpEnv: CoT quality and hallucination benchmarks
-
Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding
- Ye Wang, Ziheng Wang, Boshen Xu, Yang Du, Kejun Lin, Zihan Xiao, Zihao Yue, Jianzhong Ju, Liang Zhang, Dingyi Yang, Xiangnan Fang, Zewen He, Zhenbo Luo, Wenxuan Wang, Junqi Lin, Jian Luan, Qin Jin
- Keywords: large vision language model, temporal video grounding, reinforcement learning, post-training
- ExpEnv: Charades-STA, ActivityNet Captions, QVHighlights, TVGBench
-
Generative RLHF-V: Learning Principles from Multi-modal Human Preference
- Jiayi Zhou, Jiaming Ji, Boyuan Chen, Jiapeng Sun, Wenqi Chen, Donghai Hong, Sirui Han, Yike Guo, Yaodong Yang
- Keywords: Alignment, Safety, RLHF, Preference Learning, Multi-modal LLMs
- ExpEnv: seven multimodal benchmarks
-
OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles
- Yihe Deng, Hritik Bansal, Fan Yin, Nanyun Peng, Wei Wang, Kai-Wei Chang
- Keywords: Vision-language reasoning, iterative improvement, distillation, reinforcement learning
- ExpEnv: MathVista, EMMA, HallusionBench
-
Video-R1: Reinforcing Video Reasoning in MLLMs
- Kaituo Feng, Kaixiong Gong, Bohao Li, Zonghao Guo, Yibing Wang, Tianshuo Peng, Junfei Wu, Xiaoying Zhang, Benyou Wang, Xiangyu Yue
- Keywords: Multimodal Large Language Models, Video Reasoning
- ExpEnv: VideoMMMU, VSI-Bench, MVBench, TempCompass
-
- Jun Ling, Yao Qi, Tao Huang, Shibo Zhou, Yanqin Huang, Yang Jiang, Ziqi Song, Ying Zhou, Yang Yang, Heng Tao Shen, Peng Wang
- Keywords: table recognition, latex generation
- ExpEnv: table-to-LaTeX benchmarks
-
ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding
- Yiyang Zhou, Yangfan He, Yaofeng Su, Siwei Han, Joel Jang, Gedas Bertasius, Mohit Bansal, Huaxiu Yao
- Keywords: Video understanding; Multi-agent framework; Reflective reasoning; VLA alignment; Video reasoning
- ExpEnv: 12 datasets across video understanding, video reasoning, and VLA tasks
-
Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models
- Xiaoyu Zhan, Wenxuan Huang, Hao Sun, Xinyu Fu, Changfeng Ma, Shaosheng Cao, Bohan Jia, Shaohui Lin, Zhenfei Yin, LEI BAI, Wanli Ouyang, Yuanqi Li, Jie Guo, Yanwen Guo
- Keywords: Visual-Language Model, 3D Reasoning, GRPO
- ExpEnv: 3D spatial reasoning tasks
-
EvolvedGRPO: Unlocking Reasoning in LVLMs via Progressive Instruction Evolution
- Zhebei Shen, Qifan Yu, Juncheng Li, Wei Ji, Qizhi Chen, Siliang Tang, Yueting Zhuang
- Keywords: multi-modal reasoning, reinforcement learning, self-improvement
- ExpEnv: multi-modal reasoning tasks
-
URSA: Unlocking Multimodal Mathematical Reasoning via Process Reward Model
- Ruilin Luo, Zhuofan Zheng, Lei Wang, Yifan Wang, Xinzhe Ni, Zicheng Lin, Songtao Jiang, Yiyao Yu, Chufan Shi, Ruihang Chu, Jin zeng, Yujiu Yang
- Keywords: Multimodal Reasoning, Data Synthesis, Process Reward Model, Reinforcement Learning
- ExpEnv: ChartQA and other multimodal math benchmarks
-
Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing
- Junfei Wu, Jian Guan, Kaituo Feng, Qiang Liu, Shu Wu, Liang Wang, Wei Wu, Tieniu Tan
- Keywords: Large Vision-Language Models, Spatial Reasoning
- ExpEnv: spatial reasoning benchmarks
-
ABNet: Adaptive explicit-Barrier Net for Safe and Scalable Robot Learning
- Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, Daniela Rus
- Key: Safe learning, Robot learning, Scalable learning, Barrier Net, Provable safety, Reinforcement Learning, Multi-modal control.
- ExpEnv: 2D robot obstacle avoidance, Safe robot manipulation, Vision-based end-to-end autonomous driving
-
DexScale: Automating Data Scaling for Sim2Real Generalizable Robot Control
- Guiliang Liu, Yueci Deng, Runyi Zhao, Huayi Zhou, Jian Chen, Jietao Chen, Ruiyan Xu, Yunxin Tai, Kui Jia
- Key: Data Engine, Embodied AI, Robot Control, Manipulation, Policy Learning, Sim2Real, Domain Randomization, Domain Adaptation, Reinforcement Learning, Multi-modal control.
- ExpEnv: Robot manipulation tasks (e.g., pick-and-place), diverse tasks, multiple robot embodiments.
-
DynaMind: Reasoning over Abstract Video Dynamics for Embodied Decision-Making
- Ziru Wang, Mengmeng Wang, Jade Dai, Teli Ma, Guo-Jun Qi, Yong Liu, Guang Dai, Jingdong Wang
- Key: Embodied Decision-Making, Multi-modal Learning, Video Dynamics Abstraction, Robot Learning.
- ExpEnv: LOReL Sawyer, Franka Kitchen, BabyAI, Real-world scenarios.
-
Craftium: Bridging Flexibility and Efficiency for Rich 3D Single- and Multi-Agent Environments
- Mikel Malagón, Josu Ceberio, Jose A. Lozano
- Key: 3D Environments, Reinforcement Learning, Multi-Agent Systems, Embodied AI.
- ExpEnv: One-vs-one multi-agent combat environment (Craftium-built), Open-world environment (Luanti/VoxeLibre in Craftium), Procedural 3D Dungeons (Craftium-built).
-
- Saketh Bachu, Erfan Shayegani, Rohit Lal, Trishna Chakraborty, Arindam Dutta, Chengyu Song, Yue Dong, Nael B. Abu-Ghazaleh, Amit Roy-Chowdhury
- Key: Vision Language Models, Safety Alignment, Reinforcement Learning from Human Feedback (RLHF), Multi-modal RL.
- ExpEnv: Jailbreak-V28K, AdvBench-COCO (derived from AdvBench and MS-COCO), HH-RLHF, VQA-v2, Custom Prompts.
-
Vision Language Models are In-Context Value Learners
- Yecheng Jason Ma, Joey Hejna, Chuyuan Fu, Dhruv Shah, Jacky Liang, Zhuo Xu, Sean Kirmani, Peng Xu, Danny Driess, Ted Xiao, Osbert Bastani, Dinesh Jayaraman, Wenhao Yu, Tingnan Zhang, Dorsa Sadigh, Fei Xia
- Key: robot learning, vision-language model, value estimation, manipulation
- ExpEnv: more than 300 distinct real-world tasks across diverse robot platforms, including bimanual manipulation tasks
-
TopoNets: High performing vision and language models with brain-like topography
- Mayukh Deb, Mainak Deb, Apurva Ratan Murty
- Key: topography, neuro-inspired, convolutional neural networks, Transformers, visual cortex, neuroscience
- ExpEnv: ResNet-18, ResNet-50, ViT, GPT-Neo-125M, NanoGPT
-
LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models
- Junyan Ye, Baichuan Zhou, Zilong Huang, Junan Zhang, Tianyi Bai, Hengrui Kang, Jun He, Honglin Lin, Zihao Wang, Tong Wu, Zhizheng Wu, Yiping Chen, Dahua Lin, Conghui He, Weijia Li
- Key: LMMs, Deepfake, Multimodality
- ExpEnv: Video, Image, 3D, Text, Audio
-
- Simon Schrodi, David T. Hoffmann, Max Argus, Volker Fischer, Thomas Brox
- Key: CLIP, modality gap, object bias, contrastive loss, data-centric, vision language models, VLM
- ExpEnv: Contrastive Vision-Language Models (VLMs) Analysis
-
Multi-Robot Motion Planning with Diffusion Models
- Yorai Shaoul, Itamar Mishani, Shivam Vats, Jiaoyang Li, Maxim Likhachev
- Key: Multi-Agent Planning, Robotics, Generative Models
- ExpEnv: Simulated logistics environments
-
DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization
- Guowei Xu, Ruijie Zheng, Yongyuan Liang, Xiyao Wang, Zhecheng Yuan, Tianying Ji, Yu Luo, Xiaoyu Liu, Jiaxin Yuan, Pu Hua, Shuzhen Li, Yanjie Ze, Hal Daumé III, Furong Huang, Huazhe Xu
- Keyword: Visual RL; Dormant Ratio
- ExpEnv: DeepMind Control Suite, Meta-World, Adroit
-
Revisiting Data Augmentation in Deep Reinforcement Learning
- Jianshu Hu, Yunpeng Jiang, Paul Weng
- Keyword: Reinforcement Learning, Data Augmentation
- ExpEnv: DeepMind Control Suite
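As a concrete illustration of the image augmentations these visual-RL papers study, here is a minimal random-shift (pad-and-crop) sketch, a common choice in this literature; it is a generic simplification, not the exact augmentation of any one paper above:

```python
import numpy as np

def random_shift(img, pad=4, rng=None):
    """Pad an HxWxC observation by `pad` pixels on each spatial side
    (replicating edge pixels), then crop back to HxW at a random
    offset -- the random-shift augmentation common in visual RL."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, _ = img.shape
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

# Example: an 84x84 RGB observation, the usual DMC/Atari frame size.
obs = np.zeros((84, 84, 3), dtype=np.uint8)
aug = random_shift(obs)
```

The crop offset is resampled per frame, so repeated calls yield slightly shifted views of the same observation.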
-
Revisiting Plasticity in Visual Reinforcement Learning: Data, Modules and Training Stages
- Guozheng Ma, Lu Li, Sen Zhang, Zixuan Liu, Zhen Wang, Yixin Chen, Li Shen, Xueqian Wang, Dacheng Tao
- Keyword: Plasticity, Visual Reinforcement Learning, Deep Reinforcement Learning, Sample Efficiency
- ExpEnv: DeepMind Control Suite, Atari
-
Entity-Centric Reinforcement Learning for Object Manipulation from Pixels
- Dan Haramati, Tal Daniel, Aviv Tamar
- Keyword: deep reinforcement learning, visual reinforcement learning, object-centric, robotic object manipulation, compositional generalization
- ExpEnv: IsaacGym
-
PaLI: A Jointly-Scaled Multilingual Language-Image Model (notable top 5%)
- Xi Chen, Xiao Wang, Soravit Changpinyo, AJ Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish Thapliyal, James Bradbury, Weicheng Kuo, Mojtaba Seyedhosseini, Chao Jia, Burcu Karagol Ayan, Carlos Riquelme, Andreas Steiner, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
- Keyword: amazing zero-shot, language component and visual component
- ExpEnv: None
-
VIMA: General Robot Manipulation with Multimodal Prompts
- Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, Linxi Fan. NeurIPS Workshop 2022
- Key Words: multimodal prompts, transformer-based generalist agent model, large-scale benchmark
- ExpEnv: VIMA-Bench, VIMA-Data
-
Mind's Eye: Grounded Language Model Reasoning through Simulation
- Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, Andrew M. Dai
- Keyword: language2physical-world, reasoning ability
- ExpEnv: MuJoCo
-
How Much Can CLIP Benefit Vision-and-Language Tasks?
- Sheng Shen, Liunian Harold Li, Hao Tan, etc. ICLR 2022
- Key Words: Vision-and-Language, CLIP
- ExpEnv: None
-
Grounding Language to Entities and Dynamics for Generalization in Reinforcement Learning
- Austin W. Hanjie, Victor Zhong, Karthik Narasimhan. ICML 2021
- Key Words: Multi-modal Attention
- ExpEnv: Messenger
-
Mastering Atari with Discrete World Models
- Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, etc.
- Key Words: World models
- ExpEnv: Atari
-
Decoupling Representation Learning from Reinforcement Learning
- Adam Stooke, Kimin Lee, Pieter Abbeel, etc.
- Key Words: representation learning, unsupervised learning
- ExpEnv: DeepMind Control, Atari, DMLab
-
Learning Actionable Representations with Goal-Conditioned Policies
- Dibya Ghosh, Abhishek Gupta, Sergey Levine.
- Key Words: Actionable Representations Learning
- ExpEnv: 2D navigation(2D Wall, 2D Rooms, Wheeled, Wheeled Rooms, Ant, Pushing)
-
- Moritz Schneider, Robert Krug, Narunas Vaskevicius, Luigi Palmieri, Joschka Boedecker
- Key Words: reinforcement learning, rl, model-based reinforcement learning, representation learning, pvr, visual representations
- ExpEnv: DeepMind Control Suite, ManiSkill2, Miniworld
-
Learning Multimodal Behaviors from Scratch with Diffusion Policy Gradient
- Zechu Li, Rickmer Krohn, Tao Chen, Anurag Ajay, Pulkit Agrawal, Georgia Chalvatzaki
- Keyword: Reinforcement Learning, Multimodal Behaviors, Diffusion Models
- ExpEnv: AntMaze (navigation), Robotic Manipulation (Franka tasks)
-
Seek Commonality but Preserve Differences: Dissected Dynamics Modeling for Multi-modal Visual RL
- Yangru Huang, Peixi Peng, Yifan Zhao, Guangyao Chen, Yonghong Tian
- Key: multi-modal reinforcement learning, visual RL, dynamics modeling, modality consistency, modality inconsistency, DDM
- ExpEnv: CARLA, DMControl
-
- Ruizhe Zhong, Xingbo Du, Shixiong Kai, Zhentao Tang, Siyuan Xu, Jianye Hao, Mingxuan Yuan, Junchi Yan
- Keywords: 3D Floorplanning, Deep Reinforcement Learning, Hybrid Action Space, Multi-Modality Representation
- ExpEnv: MCNC Benchmark, GSRC Benchmark
-
Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation
- David Brandfonbrener, Ofir Nachum, Joan Bruna
- Key Words: representation learning, imitation learning
- ExpEnv: Sawyer Door Open, MetaWorld, Franka Kitchen, Adroit
-
Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
- Keji He, Chenyang Si, Zhihe Lu, Yan Huang, Liang Wang, Xinchao Wang
- Key Words: Vision-and-Language Navigation, High-Frequency, Data Augmentation
- ExpEnv: Matterport3d
-
Language Is Not All You Need: Aligning Perception with Language Models
- Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, etc.
- Key Words: Multimodal Perception, World Modeling
- ExpEnv: IQ50
-
MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
- Linxi Fan, Guanzhi Wang, Yunfan Jiang, etc.
- Key Words: multimodal dataset, MineClip
- ExpEnv: Minecraft
-
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
- Bowen Baker, Ilge Akkaya, Peter Zhokhov, etc.
- Key Words: Inverse Dynamics Model
- ExpEnv: minerl
-
SOAT: A Scene-and Object-Aware Transformer for Vision-and-Language Navigation
- Abhinav Moudgil, Arjun Majumdar, Harsh Agrawal, etc.
- Key Words: Vision-and-Language Navigation
- ExpEnv: Room-to-Room, Room-Across-Room
-
Pretraining Representations for Data-Efficient Reinforcement Learning
- Max Schwarzer, Nitarshan Rajkumar, Michael Noukhovitch, etc.
- Key Words: latent dynamics modelling, unsupervised RL
- ExpEnv: Atari
-
Investigating Pre-Training Objectives for Generalization in Vision-Based Reinforcement Learning
- Donghu Kim, Hojoon Lee, Kyungmin Lee, Dongyoon Hwang, Jaegul Choo
- Key Words: vision-based RL
- ExpEnv: Atari
-
RL-VLM-F: Reinforcement Learning from Vision Language Foundation Model Feedback
-
Reward Shaping for Reinforcement Learning with An Assistant Reward Agent
- Haozhe Ma, Kuankuan Sima, Thanh Vinh Vo, Di Fu, Tze-Yun Leong
- Key Words: dual-agent reward shaping framework
- ExpEnv: Mujoco
-
FuRL: Visual-Language Models as Fuzzy Rewards for Reinforcement Learning
- Yuwei Fu, Haichao Zhang, Di Wu, Wei Xu, Benoit Boulet
- Key Words: high-dimensional observations, representation learning for RL
- ExpEnv: MetaWorld
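FuRL and several nearby entries turn vision-language embeddings into reward signals. A minimal sketch of the underlying idea — cosine similarity between an image embedding and a goal-text embedding — where the vectors below are stand-ins for the outputs of a vision-language encoder such as CLIP, not real model outputs:

```python
import numpy as np

def embedding_reward(image_emb, goal_text_emb):
    """Cosine similarity between an image embedding and a goal-text
    embedding, usable as a shaped reward: closer to the goal
    description yields a higher score in [-1, 1]."""
    a = image_emb / (np.linalg.norm(image_emb) + 1e-8)
    b = goal_text_emb / (np.linalg.norm(goal_text_emb) + 1e-8)
    return float(a @ b)

# Toy 2-d "embeddings" standing in for encoder outputs.
r = embedding_reward(np.array([1.0, 0.0]), np.array([1.0, 1.0]))
```

In practice such similarity rewards are noisy ("fuzzy"), which is exactly the failure mode papers like FuRL address.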
-
Rich-Observation Reinforcement Learning with Continuous Latent Dynamics
- Yuda Song, Lili Wu, Dylan J Foster, Akshay Krishnamurthy
- Key Words: VLM as reward function
- ExpEnv: maze
-
LLM-Empowered State Representation for Reinforcement Learning
- Boyuan Wang, Yun Qu, Yuhang Jiang, Jianzhun Shao, Chang Liu, Wenming Yang, Xiangyang Ji
- Key Words: LLM-based state representation
- ExpEnv: Mujoco
-
Code as Reward: Empowering Reinforcement Learning with VLMs
- David Venuto, Mohammad Sami Nur Islam, Martin Klissarov, etc.
- Key Words: Vision-Language Models, reward functions
- ExpEnv: MiniGrid
-
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
- Wenlong Huang, Pieter Abbeel, Deepak Pathak, etc.
- Key Words: large language models, Embodied Agents
- ExpEnv: VirtualHome
-
Reinforcement Learning with Action-Free Pre-Training from Videos
- Younggyo Seo, Kimin Lee, Stephen L James, etc.
- Key Words: action-free pretraining, videos
- ExpEnv: Meta-world, DeepMind Control Suite
-
History Compression via Language Models in Reinforcement Learning
-
Learning Latent Dynamics for Planning from Pixels
- Danijar Hafner, Timothy Lillicrap, Ian Fischer, etc.
- Key Words: latent dynamics model, pixel observations
- ExpEnv: DeepMind Control Suite
-
Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
- Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli
- Key Words: unseen instruction, sequential instruction
- ExpEnv: Minecraft
-
- Haoran Xu, Peixi Peng, Guang Tan, Yuan Li, Xinhai Xu, Yonghong Tian
- Key Words: Visual Reinforcement Learning, Multi-Modality Representation, Dynamic Vision Sensor
- ExpEnv: Carla
-
Vision-and-Language Navigation via Causal Learning
- Liuyi Wang, Zongtao He, Ronghao Dang, Mengjiao Shen, Chengju Liu, Qijun Chen
- Key Words: vision-and-language navigation, cross-modal causal transformer
- ExpEnv: R2R, REVERIE, RxR-English, SOON
-
End-to-end Generative Pretraining for Multimodal Video Captioning
- Paul Hongsuck Seo, Arsha Nagrani, Anurag Arnab, Cordelia Schmid
- Key Words: Multimodal video captioning, Pretraining using a future utterance, Multimodal Video Generative Pretraining
- ExpEnv: HowTo100M
-
Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
-
Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation
- Shizhe Chen, Pierre-Louis Guhur, Makarand Tapaswi, Cordelia Schmid, Ivan Laptev
- Keyword: dual-scale graph transformer, affordance detection
- ExpEnv: None
-
Masked Visual Pre-training for Motor Control
- Tete Xiao, Ilija Radosavovic, Trevor Darrell, etc. ArXiv 2022
- Key Words: self-supervised learning, motor control
- ExpEnv: Isaac Gym
-
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
- Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine
- Key Words: robotic navigation, goal-conditioned, unannotated large dataset, CLIP, ViNG, GPT-3
- ExpEnv: None
-
[Real-World Robot Learning with Masked Visual Pre-training](https://arxiv.org/abs/2210.03109)
- Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, Trevor Darrell
- Key Words: real-world robotic tasks
- ExpEnv: None
-
R3M: A Universal Visual Representation for Robot Manipulation
- Suraj Nair, Aravind Rajeswaran, Vikash Kumar, etc.
- Key Words: Ego4D human video dataset, pre-train visual representation
- ExpEnv: MetaWorld, Franka Kitchen, Adroit
-
RL-EMO: A Reinforcement Learning Framework for Multimodal Emotion Recognition ICASSP 2024
- Chengwen Zhang, Yuhao Zhang, Bo Cheng
- Keyword: Multimodal Emotion Recognition, Reinforcement Learning, Graph Convolution Network
- ExpEnv: None
-
Language Conditioned Imitation Learning over Unstructured Data RSS 2021
- Corey Lynch, Pierre Sermanet
- Keyword: open-world environments
- ExpEnv: None
-
Learning Generalizable Robotic Reward Functions from “In-The-Wild” Human Videos RSS 2021
- Annie S. Chen, Suraj Nair, Chelsea Finn.
- Key Words: Reward Functions, “In-The-Wild” Human Videos
- ExpEnv: None
-
Offline Reinforcement Learning from Images with Latent Space Models L4DC 2021
- Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, etc.
- Key Words: Latent Space Models
- ExpEnv: DeepMind Control, Adroit Pen, Sawyer Door Open, Robel D’Claw Screw
-
Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition? ICASSP 2022
- Vandana Rajan, Alessio Brutti, Andrea Cavallaro.
- Key Words: Multi-Modal Emotion Recognition, Cross-Attention
- ExpEnv: None
-
SpatialVLM: Endowing Vision-Language Models with Spatial Reasoning Capabilities
- Boyuan Chen, Zhuo Xu, Sean Kirmani, Brian Ichter, Danny Driess, Pete Florence, Dorsa Sadigh, Leonidas Guibas, Fei Xia
- Key Words: Visual Question Answering, 3D Spatial Reasoning
- ExpEnv: spatial VQA dataset
-
- Fotios Lygerakis, Vedant Dave, Elmar Rueckert
- Key Words: Robotic Manipulation, Self-supervised representation
- ExpEnv: Gym
-
On Time-Indexing as Inductive Bias in Deep RL for Sequential Manipulation Tasks
- M. Nomaan Qureshi, Ben Eisner, David Held
- Key Words: Multimodality of policy output, Action head switching
- ExpEnv: MetaWorld
-
Parameterized Decision-making with Multi-modal Perception for Autonomous Driving
- Yuyang Xia, Shuncheng Liu, Quanlin Yu, Liwei Deng, You Zhang, Han Su, Kai Zheng
- Key Words: Autonomous driving, GNN in RL
- ExpEnv: CARLA
-
- Fathima Abdul Rahman, Guang Lu
- Key Words: Emotion Recognition, GNN in RL
- ExpEnv: IEMOCAP
-
Reinforced UI Instruction Grounding: Towards a Generic UI Task Automation API
- Zhizheng Zhang, Wenxuan Xie, Xiaoyi Zhang, Yan Lu
- Key Words: LLM, generic UI task automation API
- ExpEnv: RicoSCA, MoTIF
-
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving
- Long Chen, Oleg Sinavski, Jan Hünermann, Alice Karnsund, Andrew James Willmott, Danny Birch, Daniel Maund, Jamie Shotton
- Key Words: LLM in Autonomous Driving, object-level multimodal LLM
- ExpEnv: RicoSCA, MoTIF
-
- Juan Del Aguila Ferrandis, João Moura, Sethu Vijayakumar
- Key Words: multimodal exploration approach
- ExpEnv: KUKA iiwa robot arm
-
End-to-End Streaming Video Temporal Action Segmentation with Reinforce Learning
- Wujun Wen, Jinrong Zhang, Shenglan Liu, Yunheng Li, Qifeng Li, Lin Feng
- Key Words: Temporal Action Segmentation, RL in Video Analysis
- ExpEnv: EGTEA
-
Do as I can, not as I get: Topology-aware multi-hop reasoning on multi-modal knowledge graphs
- Shangfei Zheng, Hongzhi Yin, Tong Chen, Quoc Viet Hung Nguyen, Wei Chen, Lei Zhao
- Key Words: Multi-hop reasoning, multi-modal knowledge graphs, inductive setting, adaptive reinforcement learning
- ExpEnv: None
-
Multimodal Reinforcement Learning for Robots Collaborating with Humans
- Afagh Mehri Shervedani, Siyu Li, Natawut Monaikul, Bahareh Abbasi, Barbara Di Eugenio, Milos Zefran
- Key Words: robust and deliberate decisions, end-to-end training, importance enhancement, similarity, improving the IRL training process in multimodal RL domains
- ExpEnv: None
-
See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction
- Maria Attarian, Advaya Gupta, Ziyi Zhou, Wei Yu, Igor Gilitschenski, Animesh Garg
- Keyword: cognitive planning, language-guided video prediction
- ExpEnv: None
-
Open-vocabulary Queryable Scene Representations for Real World Planning
- Boyuan Chen, Fei Xia, Brian Ichter, Kanishka Rao, Keerthana Gopalakrishnan, Michael S. Ryoo, Austin Stone, Daniel Kappler
- Key Words: Target Detection, Real World, Robotic Tasks
- ExpEnv: Say Can
-
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances
- Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan, Andy Zeng
- Key Words: real world, natural language
- ExpEnv: Say Can
Our goal is to make this repo even better. If you are interested in contributing, please refer to HERE for contribution instructions.
Awesome Multi-Modal Reinforcement Learning is released under the Apache 2.0 license.
