Myo:Hauptseite
Revision as of 7 November 2016, 13:23
Description
Here, Master's students develop software to realise gesture recognition for sign language with a special armband (MYO).
Gesture recognition with the help of an armband sensor.
Targets
- ..
- ...
Project-Team
Project-Status
- We started with the task of digit recognition: the subject wears the band and makes a specific digit gesture, which the system then classifies. Once this task is completed, we assume the same model can be extended to letters and other, more complex gestures. ...
- Created two sets of training instances
- One with 10 instances per class
- One with 20 instances per class
- Evaluated models using the following algorithms (sketches of the feature windowing and of the classifier comparison follow after this list):
- HMM - Raw Data
- HMM - Windowed Features
- Naive Bayes
- KNN (1 neighbour)
- SVM (parameters tuned via grid search)
- Analysed the accuracy, precision, and F-score of all models across all folds
- Analysed the features to decide which are not significant and could be eliminated, using (visualisation sketch after this list):
- Parallel Coordinates
- Andrews Curves
- We now aim to wrap up these results in an application that captures, analyses, and classifies a fixed set of gestures in real time (loop sketched below). Once this works, we can give a live demo/presentation of our results so far and continue towards more complex, higher-order gestures.
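The "windowed features" used for the second HMM variant are not specified further above; the following is a minimal Python sketch of one common choice, mean absolute value and standard deviation per EMG channel over overlapping windows. Window length, step, and the feature set are our assumptions; only the Myo's 8 EMG channels are a given.

import numpy as np

def windowed_features(emg, win=50, step=25):
    # Mean absolute value and standard deviation per channel,
    # computed over overlapping windows of the raw recording.
    # win and step are assumptions, not values from the project.
    feats = []
    for start in range(0, len(emg) - win + 1, step):
        w = emg[start:start + win]
        feats.append(np.concatenate([np.abs(w).mean(axis=0),
                                     w.std(axis=0)]))
    return np.array(feats)  # shape: (n_windows, 2 * n_channels)

# Stand-in for one recorded gesture instance: 400 samples, 8 EMG channels.
emg = np.random.default_rng(1).normal(size=(400, 8))
print(windowed_features(emg).shape)  # -> (15, 16)

One sequence of such window vectors per recording is what a per-class HMM could be trained on (e.g., a GaussianHMM from hmmlearn); the status does not name a library, so that choice is an assumption.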
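For the three non-sequence classifiers and the metrics listed above, a minimal scikit-learn sketch follows. The synthetic data, the feature count, and the SVM parameter grid are placeholders rather than the project's actual setup; the HMM pipelines need the sequence models sketched above and are omitted here.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_validate

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # 200 instances, 16 features (stand-in)
y = rng.integers(0, 10, size=200)   # 10 digit classes

# SVM hyperparameters tuned via grid search, as in the status report;
# the grid itself is an assumption.
svm = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]})

models = {
    "Naive Bayes": GaussianNB(),
    "KNN (1 neighbour)": KNeighborsClassifier(n_neighbors=1),
    "SVM (grid search)": svm,
}

for name, model in models.items():
    scores = cross_validate(
        model, X, y, cv=5,
        scoring=["accuracy", "precision_macro", "f1_macro"],
    )
    print(name,
          "acc=%.2f" % scores["test_accuracy"].mean(),
          "prec=%.2f" % scores["test_precision_macro"].mean(),
          "f1=%.2f" % scores["test_f1_macro"].mean())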
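Both feature-screening plots named above exist in pandas.plotting. A small sketch, assuming a DataFrame with one feature vector per gesture instance plus a class column (the column names and data here are made up):

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import andrews_curves, parallel_coordinates

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(50, 4)),
                  columns=["f1", "f2", "f3", "f4"])
df["digit"] = rng.integers(0, 3, size=50)

# Features whose lines overlap heavily across classes carry little signal.
parallel_coordinates(df, "digit")
plt.figure()
# Similar curves across classes likewise mark weak feature sets.
andrews_curves(df, "digit")
plt.show()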
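The planned real-time application is only outlined above. The loop below is a generic sketch under two explicitly labelled assumptions: a callable read_emg_sample() delivering one 8-channel sample from the Myo (the actual SDK call is not shown here), and a pre-trained scikit-learn model over the same kind of windowed features as above.

from collections import deque
import numpy as np

BUFFER = 200  # samples per decision window (assumption)

def classify_stream(read_emg_sample, model):
    # Fill a sliding buffer from the EMG stream; once a full window is
    # collected, featurise it and classify, then start the next window.
    buf = deque(maxlen=BUFFER)
    while True:
        buf.append(read_emg_sample())  # one 8-channel sample (hypothetical source)
        if len(buf) == BUFFER:
            window = np.array(buf)
            feats = np.concatenate([np.abs(window).mean(axis=0),
                                    window.std(axis=0)])
            print("predicted digit:", model.predict([feats])[0])
            buf.clear()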
Internal Documents
The additional pages linked here for this project are readable only by logged-in SWLab participants.