“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity,” said researcher Professor Edward Chang.
According to the study, the cranial plug-in works in two stages. First, an electrode array implanted in the brain reads the electrical signals that control movements of the lips, jaw, tongue, and voice box. It then translates those movements into the corresponding sounds, allowing synthesized speech to emerge from a “virtual vocal tract.”
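The two-stage pipeline described above can be sketched very loosely in code. This is an illustration only: the study's actual system uses recurrent neural networks trained on intracranial recordings, and every function, weight, and number below is a hypothetical stand-in chosen to show the structure, not the real method.

```python
# Toy sketch of a two-stage speech decoder: neural activity ->
# articulator movements -> acoustics. All values are invented.

def decode_articulation(neural_frame):
    """Stage 1: map one frame of neural activity to articulator
    positions (lips, jaw, tongue, larynx). Stand-in: a fixed
    linear map instead of a trained network."""
    weights = [0.5, -0.2, 0.8, 0.1]  # hypothetical, one per articulator
    total = sum(neural_frame)
    return [w * total for w in weights]

def synthesize_acoustics(articulator_frame):
    """Stage 2: map articulator positions to an acoustic value via a
    'virtual vocal tract'. Stand-in: a toy sum."""
    return sum(articulator_frame)

def decode_sentence(neural_frames):
    """Run both stages over a sequence of neural frames."""
    return [synthesize_acoustics(decode_articulation(f)) for f in neural_frames]

# Toy input: three frames of simulated neural activity.
frames = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.4]]
acoustics = decode_sentence(frames)
```

The key design point the study emphasizes is the intermediate articulatory stage: decoding movements, rather than going straight from brain activity to sound, is what made the problem tractable.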
“We and others actually have tried to look at whether it's actually possible to decode just thoughts alone. And it turns out, it's a very, very difficult and challenging problem.

“That's only one reason why, of many, we really focus on what people are actually trying to say,” the researcher said.
To test the tool, the scientists had five subjects read sentences aloud; their brain activity was then decoded into synthesized recordings for a group of listeners to interpret.
Across hundreds of test recordings, listeners could identify up to 70 percent of the material when provided with a list of word choices.
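The closed-set listening test above amounts to a simple accuracy measure: listeners pick from a fixed list of choices, and a trial counts as correct when the pick matches what was spoken. A minimal sketch, with entirely invented trial data:

```python
# Hypothetical illustration of a closed-set identification test.
# Each trial pairs the word actually spoken with the listener's choice.

def identification_rate(trials):
    """Fraction of trials where the listener's choice matched the
    spoken word."""
    correct = sum(1 for spoken, chosen in trials if spoken == chosen)
    return correct / len(trials)

trials = [
    ("ship", "ship"),
    ("tide", "tide"),
    ("music", "mosaic"),   # a miss: a similar-sounding distractor was chosen
    ("rabbit", "rabbit"),
    ("spade", "spade"),
]
rate = identification_rate(trials)  # 4 of 5 trials correct
```

Providing a list of choices makes the task easier than open transcription, which is why the reported figure is specific to that condition.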
“This is an exhilarating proof of principle that, with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss,” Chang said, although whether an implant will actually work in patients is still unknown.
“The study that we did was involving people having neurosurgery. We are really not aware of currently available noninvasive technology that could allow you to do this from outside the head,” he said.
“Believe me, if it did exist it would have profound medical applications.”
A note from the authors that accompanied the report said: “We can hope that individuals with speech impairments will regain the ability to freely speak their minds and reconnect with the world around them.”