Intelligent Driver Monitoring meets Gesture Control

CASE STUDY

Business Overview

A well-known automotive company approached us to build an intelligent cockpit experience that could do two things: keep drivers safe by monitoring signs of drowsiness or distraction, and reduce touch-based interactions with the infotainment system by using hand gestures. The goal was to create a seamless, intuitive, and safety-focused user experience fit for the vehicles of tomorrow.

Challenge

The client needed a solution that could actively observe the driver’s behaviour and issue timely warnings without being intrusive. They also wanted to eliminate the need for physical buttons or screens for common infotainment tasks, all while ensuring the system worked in real time.

Some of the hurdles were:

  • Recognizing natural hand gestures without false triggers
  • Detecting subtle signs of driver fatigue or distraction like eye closure or yawning
  • Ensuring smooth integration with the infotainment system to execute commands and alert the driver when needed
  • Achieving high performance on a compact, in-vehicle computing platform

Solution

The team at NeST Digital designed a camera-based smart cockpit system that brings together gesture control and driver monitoring using AI.

A custom AI model was built and trained on a dataset created in-house. Our team developed a camera-based hand-tracking module that recognizes gestures to control the infotainment system, covering custom static and dynamic gestures such as wave, swipe, zoom in, zoom out, and more.
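As a rough illustration of how a dynamic gesture like a swipe can be classified from a tracked hand trajectory, here is a minimal sketch. The landmark source (the in-house hand tracker) is not shown; the function names, normalized coordinates, and travel threshold are all illustrative assumptions, and the minimum-travel check is one simple way to suppress false triggers from small incidental movements.

```python
def classify_swipe(x_positions, min_travel=0.25):
    """Classify a horizontal swipe from a sequence of normalized wrist
    x-coordinates (0..1), one per frame.

    Returns 'swipe_right', 'swipe_left', or None when the hand did not
    travel far enough to count as a deliberate gesture (this guards
    against false triggers from incidental motion).
    """
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel > min_travel:
        return "swipe_right"
    if travel < -min_travel:
        return "swipe_left"
    return None
```

In practice the threshold would be tuned on the in-house dataset, and the same windowed-trajectory idea extends to other dynamic gestures such as zoom (distance between two tracked points over time).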

At the same time, a driver alertness module was built that closely monitors signs of drowsiness or distraction, such as eye closure, yawning, head tilt, or even sudden emotional shifts.
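Eye closure, one of the fatigue cues listed above, is commonly measured with the Eye Aspect Ratio (EAR) over facial landmarks. The sketch below is a standard EAR formulation, not the client's actual model; the threshold and frame-count values are illustrative assumptions.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks ordered p1..p6 around the eye,
    as in the common 68-point facial-landmark layout. The ratio of
    vertical to horizontal eye openings drops toward 0 as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def is_drowsy(ear_history, threshold=0.2, min_frames=15):
    """Flag drowsiness when EAR stays below the threshold for min_frames
    consecutive frames, so a normal blink does not raise an alert."""
    recent = ear_history[-min_frames:]
    return len(recent) == min_frames and all(e < threshold for e in recent)
```

Yawning and head tilt can be scored the same way: a mouth aspect ratio and a head-pose angle, each sustained over a window of frames before an alert fires.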

To make it all work together, a communication layer was built that interacts with the vehicle’s infotainment unit, instantly translating gestures into commands and sending alerts whenever driver fatigue was detected.
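The gesture-to-command translation in that layer can be sketched as a simple dispatch table. The command names and the callback transport here are assumptions for illustration; a production system would publish over the vehicle's actual IPC or bus interface.

```python
# Hypothetical mapping from recognized gestures to infotainment commands.
GESTURE_COMMANDS = {
    "wave":        "assistant.wake",
    "swipe_left":  "media.previous_track",
    "swipe_right": "media.next_track",
    "zoom_in":     "map.zoom_in",
    "zoom_out":    "map.zoom_out",
}

def dispatch(gesture, send):
    """Translate a recognized gesture into an infotainment command and
    hand it to the transport callback. Unknown gestures are ignored, so
    spurious detections never reach the head unit."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is not None:
        send(command)
        return command
    return None
```

Keeping the mapping in data rather than code also makes it easy to let OEMs rebind gestures per vehicle line.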

The solution was ported to run efficiently on a compact NVIDIA Jetson-based embedded platform, while the software stack was powered by TensorFlow, OpenCV, and Python. The firmware and modules were optimized to ensure fast, real-time processing without compromising accuracy.
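One common real-time tactic on constrained embedded platforms is to drop frames rather than queue them when inference falls behind, so gesture commands and fatigue alerts stay low-latency. A minimal sketch (target frame rate and class name are illustrative, not from the actual firmware):

```python
import time

class FrameBudget:
    """Decide whether to process the current camera frame, skipping
    frames when processing is still within the target frame interval.
    Dropping stale frames keeps end-to-end latency bounded instead of
    letting a queue build up behind a slow inference step."""

    def __init__(self, target_fps=30):
        self.interval = 1.0 / target_fps
        self.last = None

    def should_process(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is None or now - self.last >= self.interval:
            self.last = now
            return True
        return False
```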

Impact

The solution transformed the cockpit experience into an intelligent, driver-friendly one. Drivers could now interact with the system using simple, intuitive gestures while keeping their eyes on the road and hands on the wheel.


At the same time, the system actively monitored the driver’s face to detect early signs of fatigue or inattentiveness. In real-world tests, it was able to do the following:

    • Detect drowsiness with markedly improved accuracy, reducing the risk of fatigue-related accidents
    • Recognize hand gestures with 90%+ reliability, even under varying lighting conditions
    • Reduce manual interactions with the infotainment module by 70%, leading to a safer and more intuitive driving experience
