Hi, I'm Shreyas Chandra Sekhar!
Robotics & AI researcher, LinkedIn
- Graduate Research – Automating Endoscopic Movement Using RL
- Automated endoscopic control of the da Vinci surgical system using reinforcement learning driven by hand-gesture and eye-movement tracking, reducing surgeons’ cognitive load by 30% during laparoscopic procedures.
- Robot Motion Planning
- Investigated HER and DDPG algorithms on Fetch robots in ROS, demonstrating 90% joint-space pick-and-place accuracy and contributing to reproducible benchmarks in deep reinforcement learning.
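As a hedged illustration of the HER idea above, here is a minimal sketch of the "final" relabeling strategy; the function names, transition layout, and sparse reward are illustrative assumptions, not code from the project:

```python
# Minimal sketch of Hindsight Experience Replay (HER) relabeling with the
# "final" strategy: failed episodes are reused by pretending the goal was
# the state the agent actually reached. All names are illustrative.

def her_relabel(episode, reward_fn):
    """episode: list of (state, action, achieved_goal, desired_goal) tuples.
    Returns transitions relabeled so their goal is the final achieved goal."""
    final_goal = episode[-1][2]  # goal actually achieved at episode end
    relabeled = []
    for state, action, achieved, _ in episode:
        # Recompute reward as if final_goal had been the target all along.
        reward = reward_fn(achieved, final_goal)
        relabeled.append((state, action, achieved, final_goal, reward))
    return relabeled

def sparse_reward(achieved, goal, tol=0.05):
    # Sparse goal-reaching reward: 0 on success, -1 otherwise.
    return 0.0 if abs(achieved - goal) <= tol else -1.0
```

The relabeled transitions are then pushed into the replay buffer alongside the originals, so DDPG sees successful goal-reaching examples even in early training.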
- Torque Research on a Human–Exoskeleton Model
- Simulated a Blender-based human–exoskeleton system with PID-controlled torques and external forces, achieving stable gaited walking with 85% trajectory accuracy and enabling torque optimization for biomedical robotics.
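A minimal sketch of the PID torque loop described above, driving a simplified 1-DOF joint model toward a target angle; the gains, timestep, and inertia value are illustrative assumptions, not values from the Blender simulation:

```python
class PID:
    """Discrete PID controller producing a joint torque command.
    Gains and timestep are illustrative, not tuned project values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple 1-DOF joint (pure inertia) toward a 1.0 rad target.
pid = PID(kp=20.0, ki=0.5, kd=2.0, dt=0.01)
angle, velocity, inertia = 0.0, 0.0, 1.0
for _ in range(2000):
    torque = pid.step(1.0, angle)            # commanded joint torque
    velocity += (torque / inertia) * 0.01    # Euler-integrate dynamics
    angle += velocity * 0.01
```

In the full model, one such loop per joint tracks the desired gait trajectory while external forces act as disturbances the integral term rejects.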
- Domain-Randomized Pick and Place
- Advanced domain-randomized perception in ROS to achieve 95% object-identification accuracy, contributing to robust robotic pick-and-place automation under real-world variability.
- Drone Navigation Through Waypoints
- Advanced quadrotor control research by deriving and simulating forward and inverse dynamics in MATLAB and ROS, yielding insights into joint-space versus task-space controller performance.
- Snake Robots for Rescue Operations
- Simulated snake robot locomotion in ROS/MATLAB, deriving forward/inverse kinematics for variable joints and link lengths to optimize motion control.
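The forward-kinematics derivation for variable joint counts and link lengths can be sketched for the planar case; the function and variable names are illustrative, not code from the MATLAB/ROS simulation:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics for a serial chain with an arbitrary
    number of joints and link lengths (a simplified snake-robot model).
    Returns the (x, y) position of each joint frame, base at the origin."""
    x = y = theta = 0.0
    points = [(x, y)]
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                  # accumulate joint rotations
        x += length * math.cos(theta)   # advance along the current link
        y += length * math.sin(theta)
        points.append((x, y))
    return points
```

Because angles and lengths are passed as lists, the same routine handles any segment count, which is what makes the variable-joint analysis tractable.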
- GenAI
- Built an end-to-end Retrieval-Augmented Generation (RAG) pipeline that transforms real-estate listing data into searchable vector embeddings, retrieves relevant properties via semantic similarity, and generates grounded recommendations with an LLM.
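The retrieval step of such a RAG pipeline can be sketched with toy vectors; in the real pipeline the embeddings would come from an embedding model and live in a vector store, so the names and data here are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, listings, k=2):
    """listings: list of (listing_text, embedding_vector) pairs.
    Returns the k listing texts most similar to the query embedding."""
    ranked = sorted(listings,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved listings are then placed into the LLM prompt as context, which is what keeps the generated recommendations grounded in actual inventory.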
- Pick & Place
- Simulated a 6-DOF KUKA KR210 arm in ROS with MoveIt to perform pick-and-place tasks, achieving 90% task accuracy.
- Collaboration & Competition
- Trained two MADDPG agents to collaborate and compete in a racket-ball environment with an 8-variable observation space, producing coordinated behaviors.
- Continuous Control
- Applied PPO, A3C, and D4PG to train a double-arm Reacher agent over a 33-dimensional observation space and 4-dimensional action space, achieving an average reward above 30.
- Value-Based Learning
- Implemented double DQN, dueling DQN, and prioritized experience replay to train a Unity agent in reward-driven navigation scenarios, reaching an average reward above 13.
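The double-DQN target computation mentioned above fits in a few lines; the signatures and toy Q-values are illustrative assumptions:

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN bootstrap target: the online network selects the next
    action, the target network evaluates it, reducing the overestimation
    bias of vanilla DQN. next_q_online / next_q_target: lists of Q-values
    for the next state from the two networks."""
    if done:
        return reward  # no bootstrap past a terminal state
    best_action = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[best_action]
```

Prioritized replay then samples transitions whose TD error against this target is largest, focusing updates where the network is most wrong.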
- Arm Manipulation RL
- Trained a robotic arm with DQN to hit targets, optimizing its policy to reach 94% arm and 92% gripper accuracy.
- Home Service Robot
- Designed a ROS-based robot that autonomously maps and navigates its environment using the mapping and navigation stacks, successfully picking up and delivering objects.
- Map My World – SLAM
- Implemented RTAB-Map-based SLAM on a ROS robot, navigating until loop closures were detected and an occupancy grid was built, completing 3 loop closures.
- Where Am I – Perception
- Modeled robots in ROS with AMCL and the navigation stack, tuning stack parameters to achieve reliable position and orientation during navigation.
- Robotics Inference System
- Built a ROS-based inference system integrating AMCL and the navigation stack, tuning parameters to achieve robust multi-robot navigation.
- Face & Emotion Recognition
- Implemented and benchmarked multiple CNN architectures to classify faces and emotions, achieving high recognition accuracy.
- Follow Me – Deep Learning
- Architected a fully convolutional network, trained on segmentation data, to enable quadcopter person-following, reaching 95% accuracy.
- 3D Perception
- Used ROS and MoveIt with confusion-matrix-based evaluation to identify and manipulate objects, achieving 100% identification accuracy.

