How does Real-time Fall Detection using Pose Estimation work?

Real-time Fall Detection using Pose Estimation

Source: uni-augsburg

When a person wins a gold medal in an Olympic high jump competition, we all praise them with cheers and applause. But do you know how the high jump is analyzed across many athletes, and how the judges decide the medals? This is done by analyzing each athlete’s performance with pose estimation technology. In this blog, we will discuss how real-time fall detection using pose estimation works.

What is fall detection?

A device’s ability to identify when a person has fallen is known as fall detection. When people fall, their muscle movements are affected, and even if there are no injuries they may lack the energy to stand up. A fall detection gadget can quickly call a predefined mobile number if it detects that we have fallen.

For instance, the Apple Watch detects hard falls and helps connect the wearer to emergency services.

Pose Estimation of Olympic Figure Skating

Pose estimation is the task of locating human joints (such as elbows and wrists), commonly known as key points, in video and image frames. Each person is represented by a set of key points, and lines are drawn between key-point pairs to sketch the person’s rough shape. There are many pose estimation methods, which differ in the type of input and in how detection is performed.
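As a small illustration of connecting key-point pairs into a rough figure, here is a minimal sketch using OpenCV; the coordinates and the SKELETON pair list are made up for the example, and real models define their own key points and skeleton.

```python
import cv2
import numpy as np

# Hypothetical key points: (x, y) pixel coordinates for a few joints.
keypoints = {
    "neck": (200, 120),
    "right_shoulder": (170, 140), "right_elbow": (160, 190), "right_wrist": (155, 240),
    "left_shoulder": (230, 140), "left_elbow": (240, 190), "left_wrist": (245, 240),
}

# Pairs of key points to connect (an example skeleton definition).
SKELETON = [
    ("neck", "right_shoulder"), ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("neck", "left_shoulder"), ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
]

canvas = np.zeros((400, 400, 3), dtype=np.uint8)
for x, y in keypoints.values():
    cv2.circle(canvas, (x, y), 4, (0, 255, 0), -1)                    # draw each key point
for a, b in SKELETON:
    cv2.line(canvas, keypoints[a], keypoints[b], (255, 0, 0), 2)      # connect key-point pairs

cv2.imwrite("skeleton.png", canvas)
```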

Human pose estimation has many applications, such as gaming, animation, and action recognition. For instance, the HomeCourt deep learning app uses pose estimation to evaluate basketball players’ movements.

Fall detection is paramount in action recognition research. Artificial intelligence is trained to categorize general actions like walking, sitting down, and jumping. However, it is a challenging task because of strong articulations, small and barely visible joints, occlusions, clothing, and illumination variations.

Pre-trained model

This block talks about pose estimation using OpenPifPaf. The AI analyzes the entire image, recognizes the key points, and combines them to find the people in that image. This is a bottom-up method, in contrast to top-down methods in which the AI first uses a person detector to locate each person before identifying every key point.
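As a rough sketch of running a single frame through OpenPifPaf’s Python API (the checkpoint name is just an example, and the exact API may differ between library versions):

```python
import PIL.Image
import openpifpaf

# Load a pre-trained OpenPifPaf model (example checkpoint).
predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16')

# Bottom-up inference: key points are detected across the whole image
# and then grouped into individual people.
image = PIL.Image.open('frame.jpg').convert('RGB')
predictions, gt_anns, image_meta = predictor.pil_image(image)

for person in predictions:
    # person.data holds (x, y, confidence) rows, one per key point.
    print(person.data)
```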

Multi-stream input

pose estimation pre trained model

Source: towards data science

Many open-source models process a single input at a time. To handle several streams, we can use multiprocessing in Python to process the various streams jointly through subprocesses. This gives us the option to take advantage of multiple processor cores on a machine.
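A minimal sketch of that idea, assuming a hypothetical process_stream() function that would run pose estimation on one video source:

```python
import multiprocessing as mp

def process_stream(source):
    """Placeholder worker: in practice this would open the stream with OpenCV
    and feed each frame to the pose estimation model."""
    print(f"processing {source}")

if __name__ == "__main__":
    # One subprocess per camera/video stream, so the streams are handled in
    # parallel and can use multiple CPU cores.
    sources = ["camera0.mp4", "camera1.mp4", "rtsp://example/stream2"]
    workers = [mp.Process(target=process_stream, args=(s,)) for s in sources]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```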

Person tracking

pose estimation person tracking

Source: towards data science

In the video frames above, it is challenging to tell which person will fall. This is because the algorithm must associate the same person across consecutive frames. But how will the algorithm know how to keep track of a constantly moving person?

To solve this, we have to use a multi-person tracker. It does not need to be a complicated tracker; a simple centroid-based object tracker is enough. The steps below explain how tracking is done (a simplified sketch of such a tracker follows the list):

1. Compute the centroid of each person’s bounding box.

2. Assign an ID to every centroid.

3. Compute new centroids for the next frame.

4. Calculate the Euclidean distance between the previous and current centroids and match the pairs with the minimum distance.

5. When a match is found, update the new centroid with the ID of the previous centroid.

6. If no match is found, which happens when a new person enters the frame, assign the new centroid a new ID.

7. When a person goes out of frame, delete the ID and centroid. For reference, check this centroid tracking tutorial by Adrian Rosebrock.
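A minimal sketch of such a centroid tracker, assuming the person detector supplies (x1, y1, x2, y2) bounding boxes; it omits the disappearance counter used in Adrian Rosebrock’s tutorial:

```python
import numpy as np

class CentroidTracker:
    """Assign stable IDs to people by matching bounding-box centroids
    between consecutive frames using Euclidean distance."""

    def __init__(self, max_distance=75.0):
        self.next_id = 0
        self.objects = {}              # ID -> centroid (x, y)
        self.max_distance = max_distance

    def update(self, boxes):
        # Step 1: compute centroids of the current frame's bounding boxes.
        centroids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]

        updated = {}
        unmatched = list(range(len(centroids)))

        # Steps 4-5: match previous centroids to current ones by minimum distance.
        for obj_id, prev in self.objects.items():
            if not unmatched:
                break
            dists = [np.hypot(prev[0] - centroids[i][0], prev[1] - centroids[i][1])
                     for i in unmatched]
            best = int(np.argmin(dists))
            if dists[best] <= self.max_distance:
                updated[obj_id] = centroids[unmatched.pop(best)]

        # Steps 2 and 6: new people entering the frame get new IDs.
        for i in unmatched:
            updated[self.next_id] = centroids[i]
            self.next_id += 1

        # Step 7: IDs of people who left the frame are dropped.
        self.objects = updated
        return self.objects
```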

What is a fall detection algorithm?

fall detection algorithm

Source: arxiv

A fall detection algorithm analyses a person’s motion in one dimension (the y-axis) and in two dimensions (both the x- and y-axes) so that it covers different camera angles.

When a bounding box is added, the algorithm checks whether the person’s width is bigger than their height. If it is, the algorithm assumes the person is on the ground rather than upright. False positives caused by cyclists or fast-moving people can then be removed algorithmically.

Adding a two-point check, which only registers a fall when both ankle and neck key points are detected, avoids miscalculating a person’s height when the person cannot be fully identified because of occlusions.
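Putting both ideas together, here is a minimal sketch of the check on one tracked person; the aspect-ratio rule, key-point names, and confidence threshold are illustrative assumptions rather than the article’s exact values.

```python
def is_fallen(keypoints, bbox, min_confidence=0.3):
    """Very simplified fall check for one tracked person.

    keypoints: dict mapping joint names (e.g. 'neck', 'left_ankle') to
               (x, y, confidence) tuples.
    bbox: (x1, y1, x2, y2) bounding box around the person.
    """
    x1, y1, x2, y2 = bbox
    width, height = x2 - x1, y2 - y1

    # A box wider than it is tall suggests the person is on the ground
    # rather than upright.
    lying_down = width > height

    # Two-point check: only trust the result when the neck and at least one
    # ankle are confidently detected, so occlusions don't distort the
    # height estimate.
    neck = keypoints.get("neck")
    neck_ok = neck is not None and neck[2] >= min_confidence
    ankles = [keypoints.get("left_ankle"), keypoints.get("right_ankle")]
    ankle_ok = any(a is not None and a[2] >= min_confidence for a in ankles)

    return lying_down and neck_ok and ankle_ok
```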

Testing Results

We use the UR Fall Detection dataset for testing because it contains many different fall scenarios. From the 30 videos, the model identified a total of 25 falls and missed the other 5 because they were out of frame. This corresponds to a recall of 83.33% and an F1 score of 90.90%.
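Those numbers can be reproduced with a few lines, assuming the 25 detections were all true falls (i.e., no false alarms were reported):

```python
true_positives = 25    # falls detected
false_negatives = 5    # falls missed (out of frame)
false_positives = 0    # assumption: no false alarms

recall = true_positives / (true_positives + false_negatives)        # 0.8333
precision = true_positives / (true_positives + false_positives)     # 1.0
f1 = 2 * precision * recall / (precision + recall)                  # 0.9090...

print(f"recall = {recall:.2%}, F1 = {f1:.2%}")
```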

The test was also executed on an NVIDIA Quadro GV100, which gave an average of 6 FPS, sufficient for real-time processing.

pose estimation testing results

Source: towards data science

What are the applications of fall detection?

Fall detection can be applied to various scenarios.

1. People who are careless and fall backward.

2. People who are suffering from health issues such as heart attacks or strokes.

3. Kids playing on the playground.

4. Elderly people.

5. Drunk people.

Conclusion

We must first understand and master the complexities of identifying a single motion before moving on to the more difficult challenge of general action recognition, which covers many activities. Suppose we create a model that can recognize a fall as quickly as possible; we can then extract the specific patterns that enable the model to detect other kinds of actions more easily.

Visionify provides computer vision solutions that help manufacturing companies solve their problems in real time, saving time and increasing productivity. Call us to get a live demo.