About

I'm Sandip - crazy about building reliable AI for real-world autonomy. 🔬

My work focuses on delivering perception and navigation systems that remain dependable when sensing is sparse, noisy, or degraded, especially in GPS-denied environments.

My research spans machine learning-based sensor calibration, radar-centric perception, and uncertainty-aware multi-sensor fusion for GPS-denied navigation. I have experience developing simulation-heavy autonomy pipelines and scalable dataset workflows, and translating those efforts into deployment-ready modules and clean, maintainable codebases.

A central theme of my work is the integration of classical estimation and control with modern learning-based methods. By pairing structured GNC principles with neural models, I aim to improve robustness, interpretability, and real-world performance under challenging operational conditions.

I am currently a Research Engineer at the Autonomous Vehicle Laboratory (REEF) at the University of Florida, where I develop learning-based calibration and fusion systems to enhance navigation reliability in GPS-denied environments. My work bridges robotics, perception, and learning-based navigation, with a particular focus on radar-only localization, multi-sensor fusion, and resilient autonomy for both aerial and ground platforms.

Previously, I contributed to research at the GAMMA Lab and the Bio-Imaging & Machine Vision Lab at the University of Maryland, working on VR-based driving simulation, trajectory prediction, and computer vision for underwater robotic systems. These experiences shaped my interest in autonomy that must operate beyond controlled settings.

Ultimately, I care about building autonomous systems that work outside ideal conditions: low visibility, harsh environments, and tight computational budgets, where reliability is not a feature but the product itself.

Work Experience

Role impact, technical contributions, and collaboration context. 🤝

Autonomous Vehicle Laboratory (REEF), University of Florida

Research Engineer

Manager: Dr. Humberto Ramos

  • Engineered a machine learning-based calibration system to model and correct sensor inaccuracies, improving navigation precision and reliability for autonomous ground and aerial robots.
  • Leading development of a stereo radar navigation stack that uses fusion-based localization for robust autonomy in GPS-denied environments.
  • Refactored legacy ROS codebases to ROS2 for long-term maintainability and deployment on modern robotic platforms.

GAMMA Lab, UMD

Computer Vision Research Associate

Aug 2023 - Nov 2024

Advisor: Dr. Ming C. Lin

  • Engineered a high-fidelity VR driving simulator integrating Unity3D, SUMO traffic models, and NHTSA pre-crash scenarios to study behavioral realism.
  • Built ML models for driving style classification and trajectory prediction using MTR to improve AV safety analysis and sim-to-real relevance.

Bio-Imaging & Machine Vision Lab, UMD

Sep 2023 - May 2024

Advisor: Dr. Tao Yang

  • Developed a computer vision pipeline for real-time GoPro analysis using YOLOv8 for object detection and trajectory tracking.
  • Implemented a sonar-based pose estimation model for underwater dredge localization with 86% accuracy.

Void Robotics

Robotics Software Engineer Intern

May 2023 - Aug 2023

  • Re-engineered core systems with Docker on NVIDIA Jetson Nano, using CUDA acceleration to improve processing throughput by 50%.
  • Integrated ROS2 communication with ZED2 stereo cameras in containerized workflows, reducing transfer latency by 60%.