By Lily Toomey
Reading minds seems to be a common part of the science-fiction canon—a genre much loved by actual scientists. But even as someone who turned their love of Kurt Vonnegut, John Wyndham and H.G. Wells into a career as a neuroscientist, I hadn’t considered telepathy a serious avenue for research—until recently.
Lately, there’s been a lot of hype in the neuroscience world about a technology called “brain-computer interfaces”: systems of electrodes that pick up a person’s brain signals and send them to a computer. The computer can then be taught to read these signals and use them to perform a variety of tasks. For example, just last year this sort of device was used to record movement signals in the brains of stroke patients and send an electrical current to an upper-body exoskeleton controlling the person’s limbs, allowing these patients to regain control over their hands and arms.
But another promising kind of interface that so far has received less attention is the brain-to-brain interface, or BBI. A brain-to-brain interface records the signals in one person’s brain, and then sends these signals through a computer in order to transmit them into the brain of another person. This process allows the second person to “read” the mind of the first or, in other words, have their brain fire in a similar pattern to the original person.
Back in 2013, the first study in which two brains were successfully joined to collaborate on a task was published in Scientific Reports. First, Miguel Pais-Vieira and his colleagues trained rats to perform a basic task: pressing one of two levers, with the correct lever signalled by a light. The correct choice gave them access to water. Once the rats could complete this task four out of five times, they were assigned as either the encoder—the one sending signals—or the decoder, the one receiving them. Encoder rats were surgically implanted with recording wires that measured activity in the motor areas of their brain, while decoder rats were implanted with stimulating wires in the same area. Each rat was kept in a separate container, and only the encoder rats were shown the light signal on the levers. As an encoder rat chose a lever, neurons in its brain started firing.
The BBI recorded this activity, transformed it, and used it to stimulate an equivalent pattern in the brain of the decoder rat, which then had to press a lever based on this stimulation alone. (Water was only given if both animals pushed the correct lever.) The researchers found that both rats pushed the correct lever 62 percent of the time, better than the 50 percent expected by chance with two levers.
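To get a feel for what “better than chance” means here, you can check how unlikely a hit rate like 62 percent would be if the decoder rat were simply guessing between the two levers. Below is a minimal sketch in Python; the 100-trial session is hypothetical, chosen only for illustration (the original paper reports its own trial counts):

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """Probability of seeing at least `successes` hits in `trials`
    attempts if each choice were an independent 50/50 guess (one-sided)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical session of 100 trials at the reported 62 percent hit rate:
p = binomial_p_value(62, 100)
print(f"P(at least 62/100 correct by guessing) = {p:.4f}")
```

With 100 hypothetical trials, 62 or more hits would occur by pure guessing only about 1 percent of the time, which is why a 62 percent hit rate can still be meaningful evidence that information was transmitted.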
Within a year, applications for this kind of device ballooned. In November of 2014, the first real-time BBI for humans was unveiled by Rajesh Rao and colleagues at the University of Washington. Unlike the rat setup, the human device was non-invasive, meaning no surgery was required. It transferred movement signals from the encoder straight to the motor area of the decoder’s brain, so the decoder never had to consciously interpret any computer output. In the study, Rao and his team used electroencephalography (EEG), placing recording electrodes on the scalp of the encoding person. The scientists then used transcranial magnetic stimulation (TMS) on the decoding person’s brain, sending small magnetic pulses through their skull to activate a specific region. This caused the second person to take the action the first person intended—for example, pressing a button.
But, cool as this sounds, there was a major limitation. The decoder wasn’t consciously aware of the signal they received and couldn’t actively process the incoming neural information—meaning only movement was transferred, not thoughts. Their hand simply moved when stimulated, as though a puppeteer were controlling their limbs.
Fortunately, a study using BBIs to transfer conscious information between people swiftly followed. The same researchers at the University of Washington designed a game for pairs of participants, similar to 20 Questions. In the game, the encoder was given an object that the decoder didn’t know. The goal was for the decoder to guess the object through a series of yes-or-no questions. But unlike in 20 Questions, the encoder answered by looking at one of two flashing LED lights, one signifying yes and the other no. The response generated in the encoder’s visual brain areas was then transmitted to the visual areas of the decoder’s brain.
To do so, the encoders wore an electroencephalography (EEG) cap, which uses electrodes on the scalp to detect brain activity. Meanwhile, the decoders had a transcranial magnetic stimulation (TMS) coil positioned above the corresponding area of their brain. The TMS coil created small changes in the magnetic field, inducing neural firing similar to that in the encoder. In other words, if the encoder said yes, the decoder simply saw a flash of light. The decoders successfully guessed the object in 72 percent of the games, compared to an 18 percent success rate without the BBI—a promising result for accurately transmitting information between two people.
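One common way flashing-light responses like these are read out of EEG is frequency tagging: each LED flickers at its own rate, and software checks which flicker frequency dominates the signal from the encoder’s visual cortex. The sketch below is purely illustrative: the 250 Hz sampling rate, the 12 Hz and 13 Hz flicker frequencies, and the simulated signal are my assumptions, not details from the study.

```python
import numpy as np

FS = 250                 # assumed EEG sampling rate in Hz
YES_HZ, NO_HZ = 13, 12   # assumed flicker rates for the yes/no LEDs

def classify_answer(eeg_window):
    """Decide 'yes' or 'no' by which flicker frequency dominates
    the window's spectrum (a frequency-tagging style readout)."""
    spectrum = np.abs(np.fft.rfft(eeg_window))
    freqs = np.fft.rfftfreq(len(eeg_window), d=1 / FS)
    power_yes = spectrum[np.argmin(np.abs(freqs - YES_HZ))]
    power_no = spectrum[np.argmin(np.abs(freqs - NO_HZ))]
    return "yes" if power_yes > power_no else "no"

# Simulated one-second window dominated by the "yes" flicker,
# plus a little noise (seeded so the example is reproducible):
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
signal = np.sin(2 * np.pi * YES_HZ * t) + 0.3 * rng.standard_normal(FS)
print(classify_answer(signal))   # prints "yes"
```

In a real system a classification like this would run on short windows of live EEG, and a “yes” decision would be what triggers the TMS pulse on the decoder’s side.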
The brilliant aspect of this study was that by generating the transmitted signal in the visual areas of the brain, the decoding person was consciously aware of the information given to them. This also meant that the decoder had to actively participate, by clicking either a yes or no button. Furthermore, this was the largest BBI study, and also the first to include female participants.
There is obviously still a long way to go before we’ll know what BBIs may be capable of. So far, we can’t transmit complex ideas between people, mainly because we don’t know how the brain encodes them. Weird as it may sound, science still can’t explain consciousness, or which brain cells and firing patterns make up each individual thought. This is what limits how far we can push this technology.
However, this area of research is already raising ethical questions. We should start having conversations now about the implications of these devices—before they reach the point where they can alter complex thoughts. We need to start thinking, for example, about how to design this technology to prevent unwanted thoughts from being sent directly into our heads.
That said, these devices clearly have the potential to revolutionize the way we communicate and learn. There’s a mind-boggling number of possible applications—just imagine projecting ideas in an educational environment, directly sharing memories with others, replacing the need for phones or the Internet altogether, or even, in the more near-term, using it to teach people new motor skills during rehabilitation.
So far, BBIs are an exciting but extremely rudimentary development in neurotechnology. But with Elon Musk launching a new company, Neuralink, just last year to investigate and develop these types of devices, who knows what the future might hold?
This article was originally published on Massive Science and was republished with permission.
Lily Toomey is a Doctoral Candidate in Neuroscience at Curtin University.
Disclaimer: The views expressed in this article reflect the author’s opinion and not necessarily the views of The Big Q.