Our new study is out today in the New England Journal of Medicine! We demonstrate a speech neuroprosthesis that decodes the attempted speech of a man with ALS into text with 97.5% accuracy, enabling him to communicate with his family, friends, and colleagues in his own home. 1/9
Our speech neuroprosthesis works by deciphering intracortical neural activity during attempted speech into the phonemes being spoken, and then assembling those phonemes into words that are shown on-screen in real time and read aloud in his own voice. 2/9
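For readers curious about the assembly step: here is a toy sketch of mapping a decoded phoneme sequence to words. The small pronunciation dictionary and greedy segmentation are illustrative assumptions only; the real system uses a large vocabulary and a language model.

```python
# Toy phoneme-to-word lookup using a tiny pronunciation dictionary
# (an illustrative stand-in for the large-vocabulary language model).
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def assemble_words(phonemes):
    """Greedily segment a phoneme stream into dictionary words."""
    words, start = [], 0
    while start < len(phonemes):
        # Try the longest candidate first, shrinking until a match is found.
        for end in range(len(phonemes), start, -1):
            candidate = tuple(phonemes[start:end])
            if candidate in PRONUNCIATIONS:
                words.append(PRONUNCIATIONS[candidate])
                start = end
                break
        else:
            start += 1  # skip an unrecognized phoneme

    return words

decoded = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(assemble_words(decoded))  # → ['hello', 'world']
```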
If you're attending SfN 2023 and you're interested in speech decoding, come check out my poster on Wednesday morning! We demonstrate a very high accuracy and rapidly calibrating brain-to-text BCI for restoring communication.
PSTR488.12 / JJ23
The speech neuroprosthesis worked on the very first day of use, achieving over 99% word decoding accuracy with a 50-word vocabulary. On the second day, we expanded the vocabulary to over 125,000 words and still achieved over 90% decoding accuracy. 3/9
By the 15th session, the speech neuroprosthesis could decode neural activity into words with an error rate of just 2.5%, about ten times better than previous speech BCIs. This decoding accuracy was sustained for over 8 months after device implantation. 4/9
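For context on the 2.5% figure: word error rate is conventionally the word-level edit distance between the decoded and reference sentences, divided by the reference length. A minimal sketch (not the paper's evaluation code):

```python
def word_error_rate(reference, hypothesis):
    """Levenshtein distance between word sequences, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```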
We achieved and sustained this decoding accuracy by implanting four 64-channel Utah arrays into speech motor cortex, optimizing the decoding pipeline, and continuously finetuning the neural network that predicts phonemes from neural activity, enabling long-term stability. 5/9
High decoding accuracy enabled the participant to use the speech neuroprosthesis for day-to-day communication. The BCI sits idle until he tries to speak, and then decodes his attempted speech. He controls the system using gaze tracking (but it can decode a hand squeeze too). 6/9
Throughout over 248 hours of use, he’s used the system to say more than 21,000 sentences to his family, friends, and colleagues. The speech neuroprosthesis is now his preferred method of communication. 7/9
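The continuous finetuning mentioned in this thread can be sketched as periodic gradient updates on each new session's labeled data. Below is a tiny numpy stand-in: a linear softmax classifier, synthetic data, and made-up session drift, not the real recurrent phoneme decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TinyPhonemeDecoder:
    """Linear softmax classifier as a stand-in for the phoneme-decoding network."""
    def __init__(self, n_channels, n_phonemes, lr=0.1):
        self.W = np.zeros((n_channels, n_phonemes))
        self.lr = lr

    def predict_proba(self, X):
        return softmax(X @ self.W)

    def finetune(self, X, y, steps=50):
        """Continue training on a new block of labeled (features, phoneme) data."""
        onehot = np.eye(self.W.shape[1])[y]
        for _ in range(steps):
            grad = X.T @ (self.predict_proba(X) - onehot) / len(X)
            self.W -= self.lr * grad

# Simulate two sessions whose neural features drift slightly between days.
n_ch, n_ph = 16, 4
true_W = rng.normal(size=(n_ch, n_ph))

def make_session(shift):
    X = rng.normal(size=(200, n_ch)) + shift
    y = (X @ true_W).argmax(axis=1)
    return X, y

decoder = TinyPhonemeDecoder(n_ch, n_ph)
for session_shift in (0.0, 0.3):   # day 1, then day 2 with drift
    X, y = make_session(session_shift)
    decoder.finetune(X, y)          # recalibrate on each day's data
    acc = (decoder.predict_proba(X).argmax(axis=1) == y).mean()
    print(f"shift={session_shift}: accuracy={acc:.2f}")
```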
I'm thrilled to have been selected as a recipient of the A. P. Giannini Postdoctoral Research Fellowship and Leadership Award! I'm excited to continue my research with @sergeydoestweet and @DrDavidBrandman at the UC Davis Neuroprosthetics Lab.
If you’re at SfN, come see my poster about conversational speech decoding with an intracortical speech neuroprosthesis!
Today from 1pm-5pm at poster H29.
At #SfN24, our lab will be presenting updates on building multi-functional intracortical speech neuroprostheses and understanding the cortical basis of speech and movement. The whole lab, including @DrDavidBrandman and me, will be there!
#tweeprint
I am excited to share BRAND, our software platform for building closed-loop neuroscience experiments with support for
✅ deep neural network inference with minimal code changes
✅ fast high-bandwidth communication
✅ 54 programming languages
1/8
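BRAND's closed-loop graphs of independently running nodes can be sketched with in-process queues standing in for its actual transport layer; the node names and packet fields below are made up for illustration:

```python
import queue
import threading

def acquisition_node(out_stream, n_samples=5):
    """Stand-in for a neural data source: pushes feature packets downstream."""
    for t in range(n_samples):
        out_stream.put({"t": t, "features": [t * 0.1] * 4})
    out_stream.put(None)  # end-of-stream sentinel

def decoder_node(in_stream, out_stream):
    """Stand-in for a decoder node: consumes features, emits decoded outputs."""
    while (packet := in_stream.get()) is not None:
        decoded_value = sum(packet["features"])
        out_stream.put({"t": packet["t"], "decoded": decoded_value})
    out_stream.put(None)

# Wire the two nodes together with streams and run them concurrently.
features, decoded = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=acquisition_node, args=(features,)),
    threading.Thread(target=decoder_node, args=(features, decoded)),
]
for th in threads:
    th.start()
for th in threads:
    th.join()

results = []
while (item := decoded.get()) is not None:
    results.append(item)
print(results)
```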
I’ve got a poster at #SfN22 tomorrow morning (11/15, 8am-12pm): [GG9] Optical resting state reveals network architecture of sensorimotor cortex in monkeys. Come see it!
@CoganLab
We quantified speech detection accuracy during the copy task because we knew exactly when the participant was speaking or not during that task. Qualitatively, I can say that it was also reliable during conversation mode. We rarely got false positives when he was not speaking.
@CoganLab
Yes! Compared to Willett, Kunz, Fan et al. 2023, we have doubled the recording channel count and also record from two additional areas (4 and 55b). That, plus several data pipeline tweaks, an upgraded online language model, and early hyperparameter optimization all helped.
@CoganLab
Similar to Willett, Kunz, Fan et al. 2023 (see extended figures), we found that spike-band power drives phoneme decoding more than threshold crossings, but using both still provides slightly better performance than using SBP alone.
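For concreteness, the two feature types being compared can be computed from a spike-band-filtered voltage trace roughly as below; the -4.5×RMS threshold and bin size are common conventions in the intracortical BCI literature, not values taken from this thread.

```python
import numpy as np

def bin_features(v, bin_size, rms_mult=-4.5):
    """Per-bin spike-band power and threshold-crossing counts from a voltage trace.

    Assumes v is already band-pass filtered to the spike band; the threshold is
    set at rms_mult times the RMS of the whole trace (a common convention).
    """
    threshold = rms_mult * np.sqrt(np.mean(v ** 2))
    n_bins = len(v) // bin_size
    sbp, xings = [], []
    for b in range(n_bins):
        seg = v[b * bin_size:(b + 1) * bin_size]
        sbp.append(np.mean(seg ** 2))  # spike-band power: mean squared amplitude
        below = seg < threshold
        # count downward crossings (transitions into the sub-threshold region)
        xings.append(int(np.sum(below[1:] & ~below[:-1])))
    return np.array(sbp), np.array(xings)

# Synthetic trace: unit noise with two large negative spikes in the second bin.
rng = np.random.default_rng(1)
v = rng.normal(0, 1, 200)
v[120] -= 30
v[150] -= 30
sbp, xings = bin_features(v, bin_size=100)
print(xings)  # more crossings in the bin containing the spikes
```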