



Speech control of prosthetic hands with lightweight convolutional neural networks

  Speech recognition is a key topic in artificial intelligence, as speech is one of the most common forms of human communication. Over the past decades, researchers have developed many speech-controlled prosthetic hands using conventional speech recognition systems that combine neural networks with hidden Markov models. Recent advancements in general-purpose graphics processing units (GPGPUs) enable intelligent devices to run deep neural networks in real time. As a result, state-of-the-art speech recognition systems have rapidly shifted from the paradigm of optimizing composite subsystems to the paradigm of end-to-end optimization. However, a low-power embedded GPGPU cannot run these speech recognition systems in real time. In this project, we developed deep convolutional neural networks (CNNs) for speech control of prosthetic hands that run in real time on an NVIDIA Jetson TX2 developer kit. First, the device captures speech and converts it into 2D features (such as a spectrogram). The CNN receives the 2D features and classifies the hand gestures. Finally, the hand gesture classes are sent to the prosthetic hand's motion control system. The whole system is written in Python with Keras, a deep learning library with a TensorFlow backend. Our experiments show that the CNN classifies hand gestures (as text output) from speech commands with 91% accuracy and a 2 ms running time, which is fast enough to control prosthetic hands in real time.
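  As a rough illustration of the spectrogram-to-gesture pipeline, a lightweight Keras CNN might look like the sketch below. The input shape, layer sizes, and number of gesture classes are illustrative assumptions, not the exact architecture used in the project:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_GESTURES = 5          # assumed number of hand-gesture classes
SPEC_SHAPE = (98, 40, 1)  # assumed spectrogram shape: time frames x mel bands x 1

def build_speech_cnn():
    """Small CNN mapping a 2D speech feature (e.g., a spectrogram) to a gesture class."""
    model = models.Sequential([
        layers.Input(shape=SPEC_SHAPE),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_GESTURES, activation="softmax"),  # one probability per gesture
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

  Keeping the network this small is what makes sub-millisecond-scale inference plausible on an embedded GPGPU such as the Jetson TX2.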

Control of prosthetic hands with speech

Control of prosthetic hands with EMG signals (deep learning)

  Natural muscles provide mobility in response to nerve impulses. Electromyography (EMG) measures the electrical activity of muscles in response to nerve stimulation. Over the past few decades, EMG signals have been used extensively to identify user intention and thereby control assistive devices such as smart wheelchairs, exoskeletons, and prosthetic devices. In the design of conventional assistive devices, developers optimize multiple subsystems independently; feature extraction and feature description are essential subsystems of this approach, and researchers have proposed various hand-crafted features to interpret EMG signals. However, the performance of conventional assistive devices is still unsatisfactory. In this project, we propose a deep learning approach to control prosthetic hands with raw EMG signals. We use a novel deep convolutional neural network to eschew the feature-engineering step. Removing feature extraction and feature description is an important step toward the paradigm of end-to-end optimization; fine-tuning and personalization are additional advantages of our approach. The proposed approach is implemented in Python with the TensorFlow deep learning library, and it runs in real time on the general-purpose graphics processing unit of an NVIDIA Jetson TX2 developer kit. Our results demonstrate the ability of our system to predict finger positions from raw EMG signals. We anticipate that our EMG-based control system will be a starting point for designing more sophisticated prosthetic hands; for example, a pressure measurement unit can be added to transfer perception of the environment to the user. Furthermore, our system can be adapted to other prosthetic devices.
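  A minimal sketch of the idea in TensorFlow/Keras follows: a 1D CNN consumes windows of raw EMG and regresses finger positions directly, with no hand-crafted features in between. The channel count, window length, and layer sizes are assumptions for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

EMG_CHANNELS = 8   # assumed number of EMG electrodes
WINDOW = 200       # assumed samples per window of raw EMG
NUM_FINGERS = 5    # one position output per finger

def build_emg_regressor():
    """1D CNN mapping a window of raw EMG to finger positions,
    skipping the conventional feature-extraction stage entirely."""
    model = models.Sequential([
        layers.Input(shape=(WINDOW, EMG_CHANNELS)),
        layers.Conv1D(32, 7, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 5, activation="relu", padding="same"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_FINGERS),  # regression: one value per finger
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```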

Control of prosthetic hands with EMG signals


Design of an IoT wearable vest for human-robot interaction (IoRT)

  In recent years, Internet of Things (IoT) technologies have developed to the point that they are already part of our daily lives, and the IoT idea has a strong impact on modern smart devices. Many published papers have proposed hardware and software architectures for IoT devices. However, these architectures are designed for specific applications and cannot be transferred to other IoT devices; an architecture for a humanoid robot has not yet been addressed and remains an open problem. In this project, we designed an integrated hardware and software architecture for a social humanoid robot (HBS2). The HBS2 uses an NVIDIA Jetson TX2 developer kit as its main processor, which runs the Robot Operating System (ROS) on top of Ubuntu. The proposed architecture is modular, adaptable, and easily expandable.
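  In a modular ROS architecture like this, each sensor or actuator is wrapped in its own node that communicates over topics. A minimal sketch of such a publisher node is below; the node name, topic name, and message type are assumptions for illustration, not part of the HBS2 design:

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String

def sensor_node():
    """Minimal ROS node: publishes readings on a topic so other modules can subscribe."""
    rospy.init_node("vest_sensor_node")                                # assumed node name
    pub = rospy.Publisher("vest/sensor_data", String, queue_size=10)   # assumed topic
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish("sensor reading placeholder")
        rate.sleep()

if __name__ == "__main__":
    try:
        sensor_node()
    except rospy.ROSInterruptException:
        pass
```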

Details coming soon!



Stochastic modeling and control of artificial muscles

Control of artificial muscles

  In this project, black-box system identification methods are used to examine the behavior of silver-plated twisted and coiled polymer (TCP) muscles under different input conditions. The prediction error method (PEM) was used for parameter estimation of discrete-time state-space models and for finding the model order. The results show that a first-order model fits best in most cases; in some cases, second- or third-order models fit slightly better, but the difference is negligible. The average fitting accuracy is more than 90% in self-evaluation and more than 85% in cross-evaluation. In addition, we suggest a fast rule-of-thumb method to model a TCP muscle, whose accuracy is sufficient for controller design.
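  To make the first-order model concrete: the project used PEM, but a simple least-squares fit of the same model structure, y[k+1] = a·y[k] + b·u[k], illustrates the idea. The sketch below is a stand-in under that assumption, not the PEM procedure itself:

```python
import numpy as np

def fit_first_order(u, y):
    """Least-squares fit of the first-order discrete model y[k+1] = a*y[k] + b*u[k].
    u: input voltage sequence; y: measured muscle force sequence."""
    # Each regressor row is [y[k], u[k]]; the target is y[k+1]
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a, b = theta
    return a, b

def simulate(a, b, u, y0=0.0):
    """Simulate the identified model to compare against measured force data."""
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        y[k + 1] = a * y[k] + b * u[k]
    return y
```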

  A proportional–integral (PI) controller was then used to regulate the force of the muscles. The PI controller is robust to the model parameters, especially the DC gain of the model, which is crucial because the models always contain some uncertainty. To increase the speed of actuation, the Takagi–Sugeno–Kang (TSK) view of a general fuzzy inference system (FIS) was used to design a fuzzy controller. In the fuzzy controller, the muscle receives the maximum actuation voltage when the error is large and positive, the PI controller output when the error is small, and zero voltage when the error is large and negative, with smooth transitions between these three rules. Gaussian membership functions were used for fuzzification, and the weighted-average method was used for defuzzification. Our experimental results demonstrate how the muscle can be controlled in practical settings and show the superiority of the TSK controller over the PI controller.
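  A minimal sketch of this three-rule TSK controller is shown below. The PI gains, voltage limit, and the membership centers and widths are illustrative assumptions, not the tuned values from the experiments:

```python
import numpy as np

V_MAX = 5.0          # assumed maximum actuation voltage
KP, KI = 2.0, 0.5    # assumed PI gains

def gauss(x, c, sigma):
    """Gaussian membership function centered at c with width sigma."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def tsk_control(error, integral, dt):
    """One step of the three-rule TSK controller; returns (voltage, new integral)."""
    integral += error * dt
    u_pi = KP * error + KI * integral   # consequent of the "error is small" rule

    # Rule firing strengths (centers/widths are illustrative assumptions)
    w_pos = gauss(error, 1.0, 0.3)      # error large and positive -> full voltage
    w_small = gauss(error, 0.0, 0.3)    # error small -> PI output
    w_neg = gauss(error, -1.0, 0.3)     # error large and negative -> zero voltage

    # Weighted-average defuzzification over the three rule outputs
    u = (w_pos * V_MAX + w_small * u_pi + w_neg * 0.0) / (w_pos + w_small + w_neg)
    return float(np.clip(u, 0.0, V_MAX)), integral
```

  Because the Gaussian memberships overlap, the controller blends smoothly between full voltage, PI action, and zero voltage rather than switching abruptly.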

  The proposed controllers can be used in many mechatronic and robotic systems that use TCP muscles as the actuating mechanism. For example, a humanoid robot must apply a certain amount of force to objects during picking and placing, and that amount depends on the object's mechanical properties, such as stiffness and coefficient of friction. The proposed controllers can be used in humanoid robots actuated by TCP muscles.



Vision-based path planning for a humanoid robot in unknown environments

Humanoid robot NAO H25 V4
Humanoid robot path planning with fuzzy Markov decision processes

  In this project, we present novel methods for real-time path planning of a humanoid robot in unknown environments. The project has two parts, and the methods in both were developed and successfully tested on an experimental humanoid robot (NAO H25 V4).

  In the first part, we combine the artificial potential field path planning method with two different fuzzy inference systems. In the first approach, the direction of the moving robot is derived from a fuzzified artificial potential field, whereas in the second, the direction is extracted from linguistic rules inspired by the artificial potential field.
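  Both approaches start from the direction suggested by a standard artificial potential field, where the goal attracts the robot and obstacles repel it. A minimal sketch of that direction computation is below; the gains and repulsion radius are illustrative assumptions, and the fuzzification layer on top is omitted:

```python
import numpy as np

def apf_direction(robot, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """Heading from an artificial potential field: the goal attracts, nearby
    obstacles repel, and the robot walks along the resulting net force."""
    robot, goal = np.asarray(robot, float), np.asarray(goal, float)
    force = k_att * (goal - robot)                      # attractive force toward goal
    for obs in obstacles:
        diff = robot - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                               # repel only within radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return np.arctan2(force[1], force[0])               # walking direction (radians)
```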

  In the second part, we use fuzzy Markov decision processes (FMDPs) to find a locally optimal path. The reward function is calculated without an exact estimate of the distance to, or shape of, the obstacles. We use value iteration to solve the Bellman equation in real time, and the fuzzy inference system leads to a smoother optimal path. The method can work with noisy data, requires only one camera, and needs no range computation.
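  A minimal sketch of the value-iteration step used to solve the Bellman equation is shown below. The transition model P and the per-state reward R are placeholders; in the project, the reward comes from the fuzzy inference system applied to camera data:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-4):
    """Solve the Bellman equation by value iteration.
    P: list of transition matrices, one per action (P[a][s, s']);
    R: reward for each state (here, a placeholder for the fuzzy reward)."""
    V = np.zeros(len(R))
    while True:
        # Q[a, s]: expected return of taking action a in state s
        Q = np.array([R + gamma * P_a @ V for P_a in P])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # state values and the greedy policy
        V = V_new
```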