Title: A silent speech interface with machine learning recognition model using microneedle array electrodes and polymer-based strain sensors
Authors: Lin, Sheng-Kai; Lee, Jui-Hua; Tsai, Hao-Sin; Chen, Yen-Chun; Zhang, Ming-Xiang; Kuo, Wen-Cheng; Yang, Yao-Joe
Type: journal article
Issue date: 2026-06
Record date: 2026-01-15
DOI: 10.1016/j.snr.2025.100407
Scopus EID: 2-s2.0-105023664930
Scopus URL: https://www.scopus.com/record/display.uri?eid=2-s2.0-105023664930&origin=resultslist
Handle: https://scholars.lib.ntu.edu.tw/handle/123456789/735340
Keywords: AI speech recognition model; Electromyography; Microneedle array; Polymer-based sensor; Silent speech interface

Abstract: Silent speech interfaces (SSIs) recognize verbal expressions when speech signals are not accessible, and serve as promising translation tools for people with voice disorders. This work presents a wearable electromyogram (EMG)-based SSI device that uses five microneedle array (MNA) electrodes and a conductive polymer-based strain sensor. An AI speech recognition model that processes the EMG and strain signals was implemented to enable assisted speaking without relying on the vocal folds. The proposed MNA electrodes bypass the electrical barrier of the stratum corneum layer of human skin and significantly enhance signal quality without requiring skin abrasion or conductive gel during electrode application. To improve recognition accuracy, the conductive polymer-based strain sensor measures the strain variation induced by movement of the mandible during silent speech. The AI speech recognition model achieved a word error rate (WER) of 8.5% on a dataset of 1,396 words, and high recognition accuracy (>90%) was achieved on various datasets covering commonly used words and easily confusable word pairs. The proposed wearable SSI could help people with vocal cord injuries regain the ability to speak, and could enable human interaction in special situations and environments.