Jacob Robinson, Ph.D., works in his lab at Rice University. Photo by Cody Duty/Texas Medical Center.

Why scientists are working with the military to develop headsets that can read minds

Researchers hope to develop tools that allow brain-to-brain communication without surgery


If all goes precisely as planned, something astonishing will occur at the Texas Medical Center in 2023. In a scenario that seems borrowed from science fiction, two patients, linked by their minds, will be able to transmit information back and forth without speaking, typing or writing.

And, as if that’s not ambitious enough, they’ll be able to do it without having undergone any surgery.

Brain-to-brain and brain-to-computer communication are part of a major research effort by the Pentagon, which views these types of links as critical to supporting the soldier of the future.

The work happening in the medical center, led by researchers at Rice University, is one of several projects occurring throughout the country to support a government-funded initiative called Next-Generation Non-Surgical Neurotechnology, or N3. The research and development wing of the U.S. Department of Defense will sink $18 million into the Rice-led project alone.

This research aims to solve a key problem facing those who seek to improve these types of communication. On one hand, technology already exists that allows researchers and clinicians to establish connections between groups of neurons in the brain and machines. But that technology typically requires surgery, and it’s considered too invasive to use on those who haven’t suffered injuries or illness—such as able-bodied soldiers. On the other hand, noninvasive neurotechnology exists, but it lacks the precision and sophistication for application in the real world. Researchers, then, are working to help the military have the best of both worlds: a high-quality connection between brains and computers—or brains and other brains—without the need for surgery.

Military interest

The Defense Advanced Research Projects Agency (DARPA), which develops emerging technologies for the military, is funding six different research teams across the country that are trying to make advances on this front. If successful, the techniques could have diverse, seemingly inconceivable applications. Soldiers might gain the ability to control unmanned aerial vehicles—or, theoretically, an entire swarm of them—using only their minds.

“Just as service members put on protective and tactical gear in preparation for a mission, in the future, they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete,” said Al Emondi, Ph.D., the N3 program manager, in a statement.

For patients, treatments previously seen as unimaginable without surgery—restoring sight to the blind or movement to the seriously injured—may be within reach. For example, if a patient loses the ability to see or hear due to disorders of the eye or ear, but the underlying part of the brain that receives those signals remains intact, the technology could be applicable.

“You can imagine there’d be people who might benefit from a visual prosthetic but are still uncomfortable with the idea of brain surgery,” said Jacob Robinson, Ph.D., an associate professor in Rice’s Brown School of Engineering who leads the Rice research team.

Theoretically, the technology could not only support military operations, but also could open up treatments to a broader pool of patients who may not be interested in having brain surgery but would still benefit from neurotechnology.

Rapid transmission

The Rice-led effort, dubbed MOANA, includes 15 co-investigators from Rice University, Baylor College of Medicine, the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital, Duke University, Columbia University and the John B. Pierce Laboratory affiliated with Yale University. Under the initiative, researchers aim to establish the mind-machine link via a special cap worn by patients and outfitted with lasers, optical detectors and magnetic field generators.

Robinson’s team is charged with proving it’s possible to use non-surgical technology to both detect and control brain signals—specifically, that light can be used to measure the activity of cells in the brain and that magnetic fields can control activity in brain cells. The team will also have to show that the process can occur quickly—at the speed of thought.

“Our goal is to access information from the individual cells that might be communicated 100 times per second,” Robinson said. Any slower, he added, and the information gets “washed out” and difficult to interpret.

In order for the cap to function, though, the brain must be prepped. Viral vectors that edit genes will be delivered to precise locations in the brain. These vectors change the way neurons respond to light when they’re active, taking advantage of the fact that certain light wavelengths can penetrate the skull. That would allow the cap to “read” brain activity. Meanwhile, neurons would be reprogrammed to fire in response to magnetic fields, which would enable researchers to “write” to the brain.

Initially, researchers will test the technology on rodents and non-human primates. And that’s where the science fiction comes in.

“What we’re aiming to do … is to be able to transmit one animal’s sensory perception to another animal,” Robinson said. For example, researchers could present one mouse with a stimulus—a certain tone or a specific image—and the “connected” mouse would behave as if it had heard or seen it.

By the end of four years, the team hopes to be able to sustain that same process with humans.

First, the team would develop an image—say, a car or a house—and try to transmit it to the mind of a blind person through the cap. The subject would then describe exactly what he or she “saw.” Next, the team would test whether a blind person can imagine an image that can be transferred back to a computer for researchers to see. The final test would determine whether images can be transferred back and forth between the minds of blind patients. Researchers are working with blind patients because the program requires them to work with patients who could benefit from the technology—eventually, those patients could be connected to cameras that help them “see” without brain surgery.

Visual creatures

Working with blind patients provides an important opportunity to study brain-computer interfaces, said Michael Beauchamp, Ph.D., professor and vice chair of basic research at Baylor’s department of neurosurgery.

“Humans are primarily visual creatures,” he said. “A big chunk of the brain is dedicated to vision. If you want that interface, the visual cortex [of the brain] is a natural target.”

In a separate project, Beauchamp and his colleagues at Baylor, along with a team at the University of California, Los Angeles, are working with a company called Second Sight that has developed a pair of glasses outfitted with a video camera that transmits images to a tiny computer chip implanted in the brains of the blind. The resolution isn’t great—only about 60 pixels—but it’s enough to allow for basic functionality.

Paul Phillips, who lost his sight more than 13 years ago and is using the device, says the technology hasn’t restored his vision. But it does allow him to detect the difference between dark and light. He can identify the location of his white sofa, for example, and he can tell the difference between the sidewalk and the grass when he’s outside his home. Although the device doesn’t allow him to perceive color, he was recently able to detect the light and motion of fireworks.

Over time, researchers hope to improve the resolution of those images. And in theory, the technology being developed as part of the DARPA project may help patients like Phillips one day “see” without the need for brain surgery.

Twice a week at Baylor, researchers work with Phillips to determine how well he “sees” patterns of light on a monitor, using the glasses-mounted camera that connects to his brain. They can also turn the camera off and prompt him to “see” moving patterns of light by triggering electrodes implanted in his brain. “Essentially, we’re trying to draw on Paul’s visual cortex,” said William Bosking, Ph.D., assistant professor of neurosurgery at Baylor. Bosking compares the technique to tracing the shape of a letter on someone’s palm. Phillips, for his part, says the experience is “pretty cool”—especially after seeing nothing but darkness for so long.

Through that process, they’re mapping the brain’s visual cortex and learning more about how triggering those electrodes prompts the perception of light and lines. Though that study is separate from MOANA, some of the lessons the researchers learn about the brain’s visual cortex could be applied to the MOANA project.

Limitless possibilities

As for the potential wireless brain-to-computer technology, the possibilities are seemingly limitless. “We don’t have to wait for someone to move the muscles in their mouth to say what they’re seeing; we don’t have to wait for them to move the muscles in their finger to type what they’re seeing,” Robinson said. In other words, those patients could share information through their minds faster than any other way currently possible. If the technology works, it may mean that, one day, people could communicate with devices or vehicles faster than speaking, typing or controlling a steering wheel or joystick.

“If I want to tell another soldier there’s a bad guy around the corner, I’d have to pick up a walkie talkie,” Beauchamp said. “If I could flash an image of what I’m seeing, that’s more effective.”

Robinson acknowledges that the whole idea of tapping into people’s brains wirelessly may make some people uncomfortable. But he is quick to point out that his team includes neuroethicists, who consider how the techniques might be misused and offer potential safeguards. He also emphasizes that he’s not developing devices that can read patients’ private thoughts.

“An important thing to realize is that the [images and sounds] we are seeking to decode are processed in ways that are very different from, say, your stream of consciousness or private thoughts,” he said. “The idea is that, throughout the process, we are making sure the user is in control of how their device is being used.”
