A silent speech interface with machine learning recognition model using microneedle array electrodes and polymer-based strain sensors
Journal
Sensors and Actuators Reports
Journal Volume
11
Start Page
100407
ISSN
2666-0539
Date Issued
2026-06
Author(s)
Abstract
Silent speech interfaces (SSIs) recognize verbal expressions when speech signals are not accessible and serve as promising translation tools for people with voice disorders. This work presents a wearable electromyogram (EMG)-based SSI device that uses five microneedle array (MNA) electrodes and a conductive polymer-based strain sensor. An AI speech recognition model that processes the EMG and strain signals was implemented to enable assisted speaking without relying on the vocal folds. The proposed MNA electrodes bypass the electrical barrier of the stratum corneum of human skin and significantly enhance signal quality without requiring skin abrasion or conductive gel during electrode application. To improve recognition accuracy, the conductive polymer-based strain sensor measures the strain variation induced by movement of the mandible during silent speech. The AI speech recognition model achieved a word error rate (WER) of 8.5% on a dataset of 1,396 words, and recognition accuracy above 90% on various datasets covering commonly used words and easily confusable word pairs. The proposed wearable SSI could help people with vocal cord injuries regain the ability to speak, and could enable human interaction in special situations and environments.
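The word error rate cited in the abstract is the standard speech-recognition metric: the word-level edit distance (substitutions, insertions, deletions) between the recognized and reference transcripts, divided by the reference length. The paper does not give its evaluation code; the following is a minimal, generic sketch of how WER is conventionally computed.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, one substitution in a three-word reference ("open the door" vs. "open a door") yields a WER of 1/3; the paper's reported 8.5% corresponds to roughly one word error per twelve reference words.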
Subjects
AI speech recognition model
Electromyography
Microneedle array
Polymer-based sensor
Silent speech interface
Publisher
Elsevier B.V.
Type
journal article
