Researchers at the University of California, San Francisco (UCSF) have successfully used a deep learning algorithm to translate the brain signals of four test subjects into sentences. The system uses brain implants to record the subjects' neural activity, which one algorithm converts into a string of numbers and a second algorithm then translates into a sequence of words. The AI system was more accurate than professional human transcribers: it made errors in just 3% of cases, compared with a human error rate of around 5%. The breakthrough offers a ray of hope to people who cannot communicate through speech or text.
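Accuracy figures of this kind are typically reported as word error rate: the edit distance (insertions, deletions, substitutions) between the decoded sentence and what was actually said, divided by the length of the reference. A minimal sketch of the standard metric (the example sentences are invented for illustration):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i ref words and j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word in an eight-word sentence -> 12.5% error rate
print(word_error_rate("the ladder was used to rescue the cat",
                      "the ladder was used to rescue the rat"))  # 0.125
```

By this measure, a 3% error rate means roughly one wrong word in every 33 decoded.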
Joseph Makin of UCSF and his team developed the tool by working with four epilepsy patients who already had brain electrodes implanted to monitor their seizures. The test subjects repeatedly read a set of sentences aloud while the electrodes fed this data into the algorithm, teaching it to identify recurring patterns that may be associated with repeated aspects of speech, such as particular combinations of sounds and vowels. These patterns were then fed into a second neural network, which tried to turn them into words and assemble those words into sentences. “Memorising a person’s brain activity while reading sentences will not help, so the algorithm should instead understand what is similar in the patterns and summarise these data,” the researchers explain in the study, published in the journal Nature Neuroscience.
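The two-stage pipeline described above can be sketched in code: a first stage compresses an electrode recording into a string of numbers, and a second stage expands that representation into a word sequence. This is only an illustrative toy, not the team's model: the dimensions, vocabulary, and random weights are all invented, and the real system uses trained recurrent encoder-decoder networks rather than the fixed matrices used here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented, toy-sized dimensions; the real study recorded from many more
# electrodes and used trained recurrent networks.
N_ELECTRODES, N_TIMESTEPS, STATE_DIM = 16, 50, 8
VOCAB = ["the", "cat", "sat", "down", "<eos>"]

# Stage 1 ("encoder"): compress the neural recording of one spoken
# sentence into a fixed-length string of numbers.
W_enc = rng.standard_normal((N_ELECTRODES, STATE_DIM)) * 0.1

def encode(neural_activity: np.ndarray) -> np.ndarray:
    # neural_activity: (timesteps, electrodes) -> (STATE_DIM,) summary vector
    return np.tanh(neural_activity @ W_enc).mean(axis=0)

# Stage 2 ("decoder"): expand that vector into words, one at a time,
# stopping when an end-of-sentence token is produced.
W_dec = rng.standard_normal((STATE_DIM, len(VOCAB))) * 0.1

def decode(state: np.ndarray, max_words: int = 10) -> str:
    words = []
    for _ in range(max_words):
        scores = state @ W_dec            # one score per vocabulary word
        word = VOCAB[int(np.argmax(scores))]
        if word == "<eos>":
            break
        words.append(word)
        state = np.roll(state, 1)         # stand-in for a recurrent update
    return " ".join(words)

recording = rng.standard_normal((N_TIMESTEPS, N_ELECTRODES))
print(decode(encode(recording)))          # untrained, so output is arbitrary
```

Training, which the toy above omits, would adjust the encoder and decoder weights so that recordings of the same sentence map to the same word sequence, which is exactly the pattern-matching across repetitions described in the study.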
However, the algorithm currently has clear limitations: it can decode only 30-35 sentences. The researchers are hopeful that it will improve with further training, as more sentences and data are added. “Although we should like the decoder to learn and exploit the regularities of the language, it remains to show how many data would be required to expand from our tiny languages to a more general form of English,” they wrote in their Nature Neuroscience paper. For now, the technology works only when someone is speaking aloud.
However, the team is working on upgrading it to translate the thoughts of people who can’t communicate verbally, such as those with locked-in syndrome, a neurological disorder that causes paralysis.