I develop intelligent systems that perceive multi-modal data, operate in complex real-world scenarios, and run with high efficiency. My research lies at the intersection of machine learning (ML) systems and data science, and focuses on understanding the computational challenges of end-to-end ML systems from both the algorithmic and the systems perspective.
Earlier in my academic career, I developed a variety of localisation, communication, and coordination protocols for mobile sensing platforms, including sensor networks/IoT, mobile/wearable devices, and robots.
Below are snapshots of the research projects I have worked on, with pointers to some relevant papers. For more detailed information about my work, please see my publications.
Training-free Proxies for AutoML
Lightweight Deep Learning for On-device Deployments
Blind Video Super Resolution
Event-based Motion Analysis
Spatio-temporal Analytics for Urban Mobility
3D Segmentation & Reconstruction
Security & Privacy on Mobile/Wearables/IoT
Deep Visual Tracking & Motion Estimation
Cross-modality Association and Learning
Localisation, Navigation & Mapping
Probabilistic Data Management