An approach to accurately recognize various Yoga asanas using deep learning algorithms is presented in this work. A dataset of six Yoga asanas (Bhujangasana, Padmasana, Shavasana, Tadasana, Trikonasana, and Vrikshasana) has been created using 15 individuals (ten males and five females) with a normal RGB webcam and is made publicly available. A hybrid deep learning model is proposed using a convolutional neural network (CNN) and long short-term memory (LSTM) for Yoga recognition on real-time videos, where a CNN layer extracts features from the keypoints of each frame obtained from OpenPose, followed by an LSTM that gives temporal predictions. To the best of our knowledge, this is the first study using an end-to-end deep learning pipeline to detect Yoga from videos. The system achieves a test accuracy of 99.04% on single frames and 99.38% after polling predictions over 45 frames of the videos. Using a model with temporal data leverages information from previous frames to give an accurate and robust result. We have also tested the system in real time on a different set of 12 persons (five males and seven females) and achieved 98.92% accuracy. Experimental results provide a qualitative assessment of the method as well as a comparison to the state of the art.
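The abstract does not spell out the exact polling rule used to combine per-frame predictions over 45 frames. A common choice is simple majority voting over the per-frame class labels; the sketch below, a hypothetical illustration rather than the authors' implementation, shows how such polling could smooth out occasional misclassified frames (the function name `poll_predictions` and the 45-frame window are assumptions drawn from the text).

```python
import numpy as np

def poll_predictions(frame_preds, window=45):
    """Majority-vote polling over a window of per-frame class predictions.

    frame_preds: sequence of predicted class indices, one per video frame.
    Returns the class occurring most often in the first `window` frames
    (ties resolve toward the lower class index via np.bincount + argmax).
    """
    window_preds = np.asarray(frame_preds[:window])
    counts = np.bincount(window_preds)
    return int(np.argmax(counts))

# Example: 45 noisy per-frame predictions for a 6-class (asana) problem.
preds = [3] * 40 + [1, 5, 2, 3, 0]  # mostly class 3, a few frame-level errors
print(poll_predictions(preds))  # -> 3
```

Polling this way means a handful of erroneous single-frame predictions cannot flip the final label, which is consistent with the reported jump from 99.04% (single-frame) to 99.38% (45-frame) accuracy.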