Because the number of possible word sequences is so vast, and many of them would be gibberish, the scientists also used a language model (specifically, GPT-1, an early forerunner of ChatGPT) to narrow the possibilities to well-formed English and predict which words are likeliest to come next.
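To make that concrete, here’s a heavily simplified sketch of the general idea: a beam search in which a language model scores how plausible each candidate continuation is as English, while a second score (standing in for the study’s encoding model) rates how well the candidate explains the recorded brain activity. Everything here, from the function names to the toy four-word vocabulary, is an illustrative placeholder, not the researchers’ actual code.

```python
# Illustrative beam-search sketch: a language model proposes likely next
# words, and a placeholder "brain match" score stands in for the study's
# encoding model. All names and values here are hypothetical.
import heapq
import math

def lm_next_word_probs(prefix):
    """Placeholder language model (the study used GPT-1): returns
    {word: probability} for plausible continuations of the prefix."""
    return {"she": 0.4, "was": 0.3, "coming": 0.2, "back": 0.1}

def brain_match_score(candidate_words, brain_activity):
    """Placeholder for the encoding model: how well the brain activity
    predicted for this candidate matches what was actually recorded.
    Here it simply rewards longer candidates, purely for illustration."""
    return 0.1 * len(candidate_words)

def decode(brain_activity, beam_width=3, max_len=5):
    """Keep only the beam_width best partial sentences at each step,
    so the search never enumerates the vast space of word sequences."""
    beams = [(0.0, [])]  # (cumulative score, words so far)
    for _ in range(max_len):
        candidates = []
        for score, words in beams:
            for word, prob in lm_next_word_probs(words).items():
                new_words = words + [word]
                new_score = (score
                             + math.log(prob)  # favors well-formed English
                             + brain_match_score(new_words, brain_activity))
                candidates.append((new_score, new_words))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    best_score, best_words = max(beams, key=lambda c: c[0])
    return " ".join(best_words)

print(decode(brain_activity=None))
```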
The result is a decoder that gets the gist right, even though it doesn’t nail every single word. For example, participants were asked to imagine telling a story while in the fMRI machine. Later, they repeated the story aloud so the scientists could see how well the decoded version matched the original.
When the participant thought, “Look for a message from my wife saying that she had changed her mind and that she was coming back,” the decoder translated: “To see her for some reason I thought she would come to me and say she misses me.”
Here’s another example. When the participant thought, “Coming down a hill at me on a skateboard and he was going really fast and he stopped just in time,” the decoder translated: “He couldn’t get to me fast enough he drove straight up into my lane and tried to ram me.”
It’s not a word-for-word translation, but much of the general meaning is preserved. That’s a breakthrough well beyond what previous brain-reading tech could do, and one that raises serious ethical questions.
The staggering ethical implications of brain-computer interfaces
It might be hard to believe that this is real, not something out of a Neal Stephenson or William Gibson novel. But this kind of tech is already changing people’s lives. Over the past dozen years, a number of paralyzed patients have received brain implants that allow them to move a computer cursor or control robotic arms with their thoughts.
Elon Musk’s Neuralink and Mark Zuckerberg’s Meta are working on BCIs that could pick up thoughts directly from your neurons and translate them into words in real time, which could one day let you control your phone or computer with your mind alone.
Non-invasive, even portable BCIs that can read thoughts are still years away from commercial availability — after all, you can’t lug around an fMRI machine, which can cost as much as $3 million. But the study’s decoding approach could eventually be adapted for portable systems like functional near-infrared spectroscopy (fNIRS), which measures the same type of brain activity as fMRI, albeit at lower resolution.
Is that a good thing? As with many cutting-edge innovations, this one stands to raise serious ethical quandaries.
Let’s start with the obvious. Our brains are the final privacy frontier. They’re the seat of our personal identity and our most intimate thoughts. If those precious three pounds of goo in our craniums aren’t ours to control, what is?
Imagine a scenario where companies have access to people’s brain data. They could use that data to market products to us in ways our brains find practically irresistible. Since our purchasing decisions are largely driven by unconscious impressions, advertisers can’t get very helpful intel from consumer surveys or focus groups, which only capture what consumers can consciously report. They can get much better intel by going directly to the source: the consumer’s brain. Already, advertisers in the nascent field of “neuromarketing” are attempting to do just that, by studying how people’s brains react as they watch commercials. If advertisers get brain data on a massive scale, you might find yourself with a powerful urge to buy certain products without being sure why.
Or imagine a scenario where governments use BCIs for surveillance, or police use them for interrogations. The principle against self-incrimination, enshrined in the Fifth Amendment to the US Constitution, could become meaningless in a world where the authorities are empowered to eavesdrop on your mental state without your consent. It’s a scenario reminiscent of the sci-fi movie Minority Report, in which a special police unit called the PreCrime Division identifies and arrests murderers before they commit their crimes.
Some neuroethicists argue that the potential for misuse of these technologies is so great that we need revamped human rights laws to protect us before they’re rolled out.
“This research shows how rapidly generative AI is enabling even our thoughts to be read,” Nita Farahany, author of The Battle for Your Brain, told me. “Before neurotechnology is used at scale in society, we need to protect humanity with a right to self-determination over our brains and mental experiences.”