Myo:Hauptseite
Description
Here, Master's students develop software that uses a special armband (MYO) to realize gesture recognition for sign language.

Gesture recognition with the help of the armband's sensors.
Targets
1. Use machine learning algorithms to identify hand gestures.
2. Map the identified gestures onto a set of predefined language constructs (illustrated below).
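As a minimal illustration of the second target, a recognized gesture could simply be looked up in a predefined mapping. The digit-to-construct table below is a hypothetical placeholder, not the project's actual vocabulary.

```python
# Hypothetical sketch: map a classified digit gesture onto a predefined
# language construct. The construct names are placeholders only.
GESTURE_TO_CONSTRUCT = {
    0: "zero", 1: "one", 2: "two", 3: "three", 4: "four",
    5: "five", 6: "six", 7: "seven", 8: "eight", 9: "nine",
}

def interpret(predicted_digit: int) -> str:
    """Return the language construct for a classified digit gesture."""
    return GESTURE_TO_CONSTRUCT.get(predicted_digit, "<unknown gesture>")
```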
Project-Team

- Rajveer Shringi
- Ayushman Dash
- Amit Sahu
Project-Status
- We broke the general problem of gesture recognition down to a controlled real-world task: recognizing digit gestures.
- We started with digit recognition: the band is worn by the subject, who then performs a specific digit gesture. Our assumption is that, once this task is solved, the same model can be extended to letters and other, more complex gestures.
- We have conducted experiments and collected data from 16 people so far. This data is used for model generation and for analysing which features of each gesture are suitable for recognition (see the feature-extraction sketch below).
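A minimal sketch of how per-gesture features could be extracted from the recorded multi-channel signals. The window length, step size and the concrete statistics are placeholder choices, not necessarily the feature set used in the project.

```python
import numpy as np

def windowed_features(signal, window=50, step=25):
    """Slide a fixed-size window over a multi-channel recording and compute
    simple per-channel statistics (mean, std, RMS) for each window.

    signal : ndarray of shape (n_samples, n_channels), e.g. the 8 EMG
             channels of the Myo armband.
    Returns an ndarray of shape (n_windows, n_channels * 3).
    """
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        win = signal[start:start + window]
        rms = np.sqrt(np.mean(win ** 2, axis=0))
        feats.append(np.concatenate([win.mean(axis=0), win.std(axis=0), rms]))
    return np.asarray(feats)

# Example: one synthetic 2-second, 8-channel recording at 200 Hz.
recording = np.random.randn(400, 8)
print(windowed_features(recording).shape)  # (15, 24)
```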
- For this we have applied and analysed four classical machine-learning approaches (Hidden Markov Models, HMM; Support Vector Machines, SVM; Naive Bayes, NB; k-Nearest Neighbours, KNN) as well as an artificial-neural-network approach (Long Short-Term Memory, LSTM). A sketch of the HMM approach follows below.
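A hedged sketch of the HMM approach: one Gaussian HMM per gesture class, with classification by maximum log-likelihood. The use of the hmmlearn library, the number of hidden states and the covariance type are assumptions, not the project's exact configuration.

```python
# Train one Gaussian HMM per gesture class; classify a new sequence by the
# model that assigns it the highest log-likelihood.
import numpy as np
from hmmlearn import hmm

def train_class_hmms(sequences_by_class, n_states=5):
    """sequences_by_class: {label: [seq, ...]}, each seq of shape (t_i, n_features)."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                      # concatenated observations
        lengths = [len(s) for s in seqs]         # per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Pick the class whose HMM scores the sequence highest."""
    return max(models, key=lambda label: models[label].score(seq))
```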
- Created two sets of training instances:
  - one with 10 instances per class
  - one with 20 instances per class
- Evaluated models using the following algorithms:
  - HMM on raw data
  - HMM on windowed features
  - Naive Bayes
  - KNN (1 neighbour)
  - SVM (parameters chosen via grid search)
- Analysed accuracy, precision and F-score for all models in all folds (see the evaluation sketch after this list).
- Analysed the features and tried to decide which ones are insignificant and can be eliminated, using:
  - parallel coordinates
  - Andrews curves
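A hedged sketch of the evaluation described above: stratified k-fold cross-validation of Naive Bayes, 1-NN and a grid-searched SVM, reporting accuracy, precision and F-score per fold. The parameter grid and fold count are placeholders, not the project's exact setup.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, f1_score

def evaluate(X, y, n_splits=5):
    # SVM hyperparameters found by grid search; grid values are placeholders.
    svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]})
    models = {"NB": GaussianNB(),
              "KNN-1": KNeighborsClassifier(n_neighbors=1),
              "SVM": svm}
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for name, model in models.items():
        for fold, (tr, te) in enumerate(skf.split(X, y)):
            model.fit(X[tr], y[tr])
            pred = model.predict(X[te])
            print(name, fold,
                  accuracy_score(y[te], pred),
                  precision_score(y[te], pred, average="macro"),
                  f1_score(y[te], pred, average="macro"))
```

For the feature visualizations mentioned above, pandas.plotting provides ready-made parallel_coordinates and andrews_curves functions.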
- We have also converted the 3-D changes of the armband's spatial coordinates during a gesture into 2-D screen coordinates, which can then be plotted as images of the 2-D trajectory (a sketch follows below).
- These images can in turn be fed to a variety of neural networks, such as convolutional neural networks, for identification and classification tasks.
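A minimal sketch of the trajectory-to-image idea: an orthographic projection (simply dropping the depth axis) followed by rasterization onto a fixed-size grid. The project's actual screen-space mapping is not specified here, so this is only an illustration.

```python
import numpy as np

def trajectory_to_image(points_3d, size=32):
    """points_3d: ndarray (n, 3) of armband positions over one gesture.
    Returns a size x size binary image of the projected 2-D path."""
    xy = points_3d[:, :2]                  # orthographic projection to 2-D
    xy = xy - xy.min(axis=0)               # shift into positive range
    span = xy.max(axis=0)
    span[span == 0] = 1.0                  # avoid division by zero
    pix = np.round(xy / span * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.uint8)
    img[pix[:, 1], pix[:, 0]] = 1          # mark visited grid cells
    return img
```

Such images can be stacked into a dataset and used as input to a CNN, as described in the next point.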
- We have also developed and open-sourced an application for data capture and visualization for the Myo armband: https://github.com/sigvoiced/pewter
- We now aim to wrap these results up in an application that captures a fixed set of gestures in real time, analyses them and classifies them (see the sketch below). Once this is done, we can give a live demo of our results so far and continue working towards more complex, higher-order gestures.
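A hedged sketch of the planned real-time pipeline: a sliding buffer over an incoming sample stream, with feature extraction and classification once a full gesture window has been collected. read_sample, extract_features and model are hypothetical stand-ins (e.g. for the Myo data stream and a pre-trained classifier), not existing APIs.

```python
from collections import deque
import numpy as np

WINDOW = 400  # samples per gesture window (placeholder value)

def run_live(read_sample, extract_features, model):
    """Collect samples into a sliding buffer and classify each full window."""
    buffer = deque(maxlen=WINDOW)
    while True:
        buffer.append(read_sample())            # one multi-channel sample
        if len(buffer) == WINDOW:
            feats = extract_features(np.asarray(buffer))
            label = model.predict(feats.reshape(1, -1))[0]
            print("recognized gesture:", label)
            buffer.clear()                       # start the next window
```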
Internal Documents
The additional project pages linked here are readable only by logged-in SWLab participants.