Joe Francis sat at his desk with his dual-monitor computer, a half-eaten banana lying near the keyboard and his shoes kicked off for added comfort. The eye drops on his desk and the extra clothes hanging in the corner of the room made it clear he had spent many long nights in his office. On his computer screens were the brain scans that led to his team's discovery.
UH professor of biomedical engineering Joe Francis is working on advancing the technology that helps artificial intelligence and the human brain work together. He and his team on June 6 published findings that explore brain function and could eventually give robotic prosthetics the ability to self-correct, which would aid in treating movement disorders.
The study documented brain activity in primates as they completed tasks, such as touching colored dots on a table, and then received a reward. The data allowed the team to consistently predict whether an individual was expecting a reward after completing a task, offering a window into how the brain functions.
“We’re showing that we can basically determine what the animal is expecting, up to 97 percent accuracy,” Francis said. “I think this work is certainly very important because of the level of certainty that we’re finding; it’s clear as day.”
The neural data from nonhuman primates recorded in this study is rare, said Junmo An, a UH postdoctoral fellow supervised by Francis and an author of the paper.
While this study focuses on data from animals, it still allows the researchers to understand how the brain functions and could eventually aid in the treatment of many human disorders.
“I believe it can be used for decoding movement-related information for the control of a robotic arm or a computer cursor for quadriplegic and paralytic patients,” An said. “Also, it can be used for stimulating brain regions for the treatment of movement disorders, including Parkinson’s disease, essential tremor, and dystonia.”
The next phase of research will be human studies. The team has already conducted noninvasive human experiments using tasks similar to those in this study. However, Francis said the signal they receive from the surface of the body is generally too “noisy,” allowing them to successfully predict a subject’s expectation only 70 to 80 percent of the time.
“We’re moving towards getting funding to do implants in humans,” Francis said, which he believes may begin as early as the end of this year.
Several companies, including Facebook, Google and SpaceX, are investing in research like that conducted by Francis’ team. Beyond simply creating an autonomous prosthetic, Francis wants the interface to update and correct its understanding of the user’s brain activity without needing to be reconfigured or reprogrammed.
“The interfaces work really well but over time still need to be totally updated and retrained,” Francis said. “This doesn’t take long, but the idea is for these devices to work seamlessly. You don’t want the person in the middle of the road driving and suddenly things aren’t working so well.”
Autonomously updating brain-machine interface technology could revolutionize prosthetics, but Francis doubts the technology will be readily available to the public any time soon.
“There’s this disconnect between when something is in existence versus when it can be out, and purchased, and easily available, and well-supported and paid for by insurance,” Francis said. “That’s just a huge gap between those two worlds.”