In some visions of the future, you’ll drive your car with little more than your mind. With electrodes on your head, you could climb into your car, think about how much you’d like a Big Mac, and let the car take you automatically to the nearest McDonald’s.
That’s one vision, at least — that of Navy Cdr. Joseph Cohn, program officer for the Office of Naval Research’s Neurocognitive Patterns project. This two-year joint effort between ONR, Brandeis University and Aptima aims to identify and capture neural patterns with enough accuracy to enable a machine to divine a user’s intentions from his brain activity. This could be useful for everything from driving a car to controlling a squadron of UAVs — and change how soldiers of the future train.
Brain-computer interface (BCI) technology has made great strides in recent years. Gamers can play video games with a BCI headset. Scientists at the University of Pittsburgh have trained a monkey to move a robotic arm by mental commands. DARPA’s Cognitive Technology Threat Warning System links cameras, neural sensors and a human operator so that an image of an object that triggers a threat neural pattern in the brain will be detected by the system.
But the high frontier of neuroscience still confronts the problem of mapping. Since the 1920s, scientists have been able to detect non-specific neural patterns. Yet among the mass of impulses coursing through the human brain, exactly which neural pattern corresponds to a desire to walk in the park versus a desire for a hamburger?
The ONR team is approaching this question on several levels. One is to identify neural patterns and, perhaps more importantly, to identify them in context. These patterns vary depending on external influences, from nighttime rhythms to the stress of being under fire.
There is also the computational and modeling level, where sensor data on a user’s neural pattern is combined with models so that a machine can determine what a user wants. Cohn likens it to a sports team.
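The idea of matching a sensed neural pattern against stored models can be illustrated with a toy sketch. Everything here is a hypothetical assumption for illustration, not the ONR project's actual method: the feature vectors, the intent labels, and the nearest-template comparison are all invented.

```python
# Hypothetical sketch: inferring a user's intent by comparing a sensed
# neural-pattern feature vector against stored templates (models).
# Features, intent names and the distance metric are illustrative only.
import math

# Toy "models": average feature vectors previously recorded for each intent.
INTENT_TEMPLATES = {
    "drive_forward": [0.9, 0.1, 0.2],
    "turn_left":     [0.2, 0.8, 0.1],
    "stop":          [0.1, 0.1, 0.9],
}

def classify_intent(features):
    """Return the intent whose template is nearest (Euclidean) to the input."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(INTENT_TEMPLATES, key=lambda k: dist(features, INTENT_TEMPLATES[k]))

# A noisy reading that most resembles the "turn_left" template:
print(classify_intent([0.25, 0.7, 0.15]))  # → turn_left
```

Real BCI decoding works on far richer signals and models, but the shape of the problem is the same: a machine holds a model of what each intention "looks like" and matches incoming brain data against it.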
“The way that folks become an expert team is not just to have a model of themselves, but models that represent the other people on their team,” he said. “They know how each member will behave.”
At the base of neuroscience is an attempt to decode the brain, Cohn said. This requires developing a grammar or lexicon that can represent how the brain attends to information, processes it, and acts upon it. “Part of doing this requires us to have the right tools in place for structuring and making sense of this data, and that’s really what we are trying to do in neurocognitive patterns.”
Cohn doesn’t expect the Neurocognitive Patterns project to accomplish all of this, but he hopes it will begin laying the foundation by identifying which neural patterns correspond to desired goals, such as driving to a specific destination. In Phase I, pre-recorded neural patterns controlled an unmanned ground vehicle; Phase II focuses on moving a prosthetic arm.
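The Phase I setup, replaying pre-recorded neural patterns as vehicle commands, can be sketched in miniature. The command names and the simple grid vehicle below are illustrative assumptions, not the project's actual interface.

```python
# Hypothetical sketch: replaying a pre-recorded sequence of decoded neural
# "commands" to drive a toy unmanned ground vehicle on a 2-D grid.
# Command vocabulary and vehicle model are invented for illustration.

MOVES = {"forward": (0, 1), "back": (0, -1), "left": (-1, 0), "right": (1, 0)}

def replay(commands, start=(0, 0)):
    """Apply each decoded command in order; return the vehicle's final position."""
    x, y = start
    for cmd in commands:
        dx, dy = MOVES[cmd]
        x, y = x + dx, y + dy
    return (x, y)

# A short pre-recorded sequence: two steps forward, then one to the right.
print(replay(["forward", "forward", "right"]))  # → (1, 2)
```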
However, Cohn’s goal is more than just having a human think a series of instructions to be executed by a machine. If that were the case, he said, then “we have probably added to the workload; you might as well just use voice recognition software and have the person speak them.”
The true goal is to make a vehicle or a robot arm just another extension of the human body and brain. And in many ways, this is already how the brain works.
“If you use a sledgehammer long enough, you’re no longer clumsy with it,” said Cohn. “Your brain has internalized the dynamics of that sledgehammer, and it treats it like it’s just a new part of your arm. We’re taking advantage of the plasticity of the human brain to find another way of communicating with our machines.”