
Sandip Sharan Senthil Kumar
Research Engineer
University of Florida - REEF
I'm a Research Engineer at the Autonomous Vehicle Laboratory (REEF), University of Florida, where I develop machine learning-based sensor calibration and fusion systems to enhance navigation in GPS-denied environments. My work bridges robotics, perception, and learning-based navigation, focusing on radar-only localization, multi-sensor fusion, and robust autonomous systems for both ground and aerial platforms. Previously, I contributed to research at the GAMMA Lab and the Bio-Imaging & Machine Vision Lab at the University of Maryland, working on VR-based driving simulation, trajectory prediction, and computer vision for underwater systems. I'm passionate about building reliable, data-driven autonomy by combining classical guidance, navigation, and control (GNC) principles with modern AI techniques.
Publications

AIAA SciTech 2026
Enabling Autonomous Navigation with Radar-Only Perception in GPS-Denied Environments
Sandip Sharan Senthil Kumar, et al.
Developed a radar-only perception framework for robust autonomous navigation in GPS-denied environments, leveraging machine learning-based calibration and sensor fusion techniques.

IEEE ICRA 2025
DISC: Dataset for Analyzing Driving Styles in Simulated Crashes for Mixed Autonomy
Sandip Sharan Senthil Kumar, et al.
Introduced a large-scale dataset for understanding human and autonomous vehicle driving behaviors under crash scenarios using simulation-based experiments.

IEEE IROS 2024
TRAVERSE: Traffic-Responsive Autonomous Vehicle Experience & Rare-event Simulation for Enhanced Safety
Sandip Sharan Senthil Kumar, et al.
Proposed a simulation-based framework to generate rare traffic scenarios for evaluating and improving autonomous vehicle safety and responsiveness.

IEEE Access 2024
ShellCollect: A Framework for Smart Precision Shellfish Harvesting Using Data Collection Path Planning
Sandip Sharan Senthil Kumar, et al.
Designed an intelligent robotic framework for precision shellfish harvesting using path-planning algorithms and sensor-based environmental data collection.
Experience
Research Engineer — Autonomous Vehicle Laboratory, REEF, University of Florida
Manager: Dr. Humberto Ramos
• Engineered a machine learning-based calibration system to model and correct sensor inaccuracies, significantly enhancing navigation precision and reliability for autonomous ground and aerial robots (a minimal sketch of the idea appears after this list).
• Spearheading the development of a stereo radar navigation solution for GPS-denied environments, leveraging sensor fusion techniques to enable robust autonomous localization.
• Refactored legacy ROS codebases to ROS2, aligning with current middleware standards for robotic hardware deployment and maintainability.
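A minimal sketch of the learned-calibration idea, not the lab's actual pipeline: fit a small regressor that maps raw sensor readings to their residual error against a reference sensor, then apply the predicted correction at runtime. The data files, shapes, and the choice of scikit-learn are assumptions made for illustration.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical data: raw sensor readings and time-aligned reference
    # measurements (e.g., from a motion-capture rig), same dimensionality.
    raw = np.load("raw_readings.npy")        # shape (N, d), placeholder file
    reference = np.load("reference.npy")     # shape (N, d), placeholder file

    # Learn the residual (error) rather than the measurement itself, so the
    # model only has to capture systematic bias and scale effects.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    model.fit(raw, reference - raw)

    def calibrate(reading: np.ndarray) -> np.ndarray:
        """Apply the learned correction to a single raw reading."""
        return reading + model.predict(reading.reshape(1, -1))[0]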
Computer Vision Research Associate — GAMMA Lab, UMD
Advisor: Dr. Ming C. Lin
• Engineered a high-fidelity VR driving simulator by integrating Unity3D with SUMO traffic models and NHTSA pre-crash scenarios; conducted a user study to analyze behavioral data for realistic autonomous vehicle simulation.
• Designed and trained ML models to classify driving styles and predict vehicle trajectories using MTR, enhancing AV safety and bridging the sim-to-real gap through data-driven behavioral realism.
Research Assistant — Bio-Imaging & Machine Vision Lab, UMD
Advisor: Dr. Tao Yang
• Developed a computer vision framework for real-time analysis of GoPro footage, using YOLOv8 for object detection and multi-object tracking, with post-processing to extract and display object trajectories (a rough sketch of this step appears after this list).
• Implemented a machine learning model for underwater pose estimation of a dredge with 86% accuracy using sonar data.
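As a rough sketch of the detection-and-tracking step, assuming the ultralytics YOLOv8 package; the weights file, video path, and the way trajectories are stored are placeholders rather than details from the project:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                 # pretrained weights, placeholder choice
    trajectories = {}                          # track id -> list of box centers

    # stream=True yields per-frame results; persist=True keeps track IDs across frames.
    for result in model.track(source="gopro_clip.mp4", stream=True, persist=True):
        if result.boxes.id is None:
            continue
        for box, track_id in zip(result.boxes.xywh, result.boxes.id.int().tolist()):
            cx, cy = float(box[0]), float(box[1])
            trajectories.setdefault(track_id, []).append((cx, cy))

The accumulated centers per track ID can then be drawn onto each frame to display the trajectories.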
Robotics Software Engineer Intern — Void Robotics
• Re-engineered the project repository through Docker containerization for seamless operation on the NVIDIA Jetson Nano, leveraging GPU acceleration through CUDA for a 50% improvement in processing speed.
• Integrated ROS2 on the NVIDIA Jetson Nano via Docker for streamlined communication with ZED2 4.0 stereo cameras, reducing data transfer latency by 60% and improving overall system performance.
Education

University of Maryland, College Park
M.Eng. in Robotics

Anna University, Chennai, India
B.E. in Mechanical Engineering
Projects

3D Surface Inspection
Formulated a 3D inspection framework that integrates a modified HF-NeuS model for surface rendering and semantic understanding with DeepCrack for neural crack segmentation on the reconstructed 3D surface.

Underwater Image Restoration
Developed a deep learning model that predicts depth maps from underwater images and uses those predictions to remove light attenuation and haze, significantly improving visual clarity in underwater imagery.
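The inversion step can be illustrated with the standard underwater image-formation model: given a predicted depth map, attenuation and backscatter are removed per pixel. The attenuation coefficient and backscatter color below are assumed constants for the sketch, not values from this project.

    import numpy as np

    def restore(image: np.ndarray, depth: np.ndarray,
                beta: float = 0.08, backscatter=(0.05, 0.12, 0.15)) -> np.ndarray:
        """Invert I = J * t + B * (1 - t) with transmission t = exp(-beta * depth).

        image: float RGB in [0, 1]; depth: per-pixel distance in metres.
        beta and backscatter are illustrative constants only.
        """
        t = np.exp(-beta * depth)[..., None]                  # transmission map
        B = np.asarray(backscatter, dtype=image.dtype)
        J = (image - B * (1.0 - t)) / np.clip(t, 1e-3, None)
        return np.clip(J, 0.0, 1.0)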

Gesture-Based Virtual Driving System
Built a gesture-controlled virtual driving interface using Google MediaPipe for real-time hand tracking and machine learning-based gesture classification, validated on a simulated TurtleBot in Gazebo.
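A minimal sketch of the gesture front end, assuming the MediaPipe Hands solution; the landmark-to-command mapping here is a made-up heuristic standing in for the trained classifier used in the project:

    import cv2
    import mediapipe as mp

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.6)
    cap = cv2.VideoCapture(0)                 # webcam index assumed

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            wrist = results.multi_hand_landmarks[0].landmark[0]
            # Toy heuristic: horizontal wrist position picks the steering command.
            steer = "left" if wrist.x < 0.4 else "right" if wrist.x > 0.6 else "straight"
            print(steer)                      # would be published as a robot command
    cap.release()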

Brain Cancer Image Synthesis
Developed a generative AI model to synthesize brain cancer MRI images for dataset augmentation, enhancing training diversity and improving tumor detection model accuracy.

Autonomous Vehicles at Intersections (Deep Q-Learning)
Leveraged reinforcement learning techniques, specifically Deep Q-Learning, to optimize autonomous vehicle behavior at intersections, improving traffic flow and safety in mixed autonomy environments.
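For context, the core of Deep Q-Learning is a small network mapping an intersection state to Q-values over discrete actions (e.g., yield, proceed, accelerate), trained on the Bellman target. A bare-bones PyTorch sketch; state size, action count, and hyperparameters are placeholders:

    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        def __init__(self, state_dim: int = 8, n_actions: int = 3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, x):
            return self.net(x)

    q, q_target = QNet(), QNet()
    q_target.load_state_dict(q.state_dict())
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)

    def td_update(s, a, r, s_next, done, gamma: float = 0.99):
        """One temporal-difference step on a batch of transitions."""
        with torch.no_grad():
            target = r + gamma * (1 - done) * q_target(s_next).max(dim=1).values
        pred = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()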

Leader-Follower Bot
Implemented a leader-follower system using ArUco markers for leader identification, LiDAR for obstacle detection, and A* path planning for dynamic and efficient navigation control in multi-robot systems.
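A compressed illustration of the planning half: grid-based A* from the follower's cell toward the cell behind the detected leader marker. The occupancy grid, start, and goal are placeholders; marker detection and LiDAR costmap construction are omitted.

    import heapq

    def astar(grid, start, goal):
        """A* on a 2-D occupancy grid (0 = free, 1 = obstacle), 4-connected."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
        frontier = [(h(start), 0, start, None)]
        came_from, cost = {}, {start: 0}
        while frontier:
            _, g, node, parent = heapq.heappop(frontier)
            if node in came_from:
                continue
            came_from[node] = parent
            if node == goal:
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (node[0] + dx, node[1] + dy)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0 and g + 1 < cost.get(nxt, float("inf"))):
                    cost[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, node))
        return None

Example call: astar(grid, start=(0, 0), goal=(5, 7)) returns a list of grid cells or None if the goal is unreachable.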