In a future with brain-computer interfaces like Elon Musk’s Neuralink, we may need to rethink freedom of thought

Our minds are buffeted by all kinds of influences, though some seem more menacing than others.

Socrates, the ancient Greek philosopher, never wrote things down. He warned that writing undermines memory – that it is nothing but a reminder of some previous thought. Compared to people who discuss and debate, readers “will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing.”

These views may seem peculiar, but his central fear is a timeless one: that technology threatens thought. In the 1950s, Americans panicked about the possibility that advertisers would use subliminal messages hidden in movies to trick consumers into buying things they didn’t really want. Today, the U.S. is in the middle of a similar panic over TikTok, with critics worried about its impact on viewers’ freedom of thought.

To many people, neurotechnologies seem especially threatening, although they are still in their infancy. In January 2024, Elon Musk announced that his company Neuralink had implanted a brain chip in its first human subject – though competitors had achieved similar implants earlier. By March, that person could already play chess with just his thoughts.

Brain-computer interfaces, called BCIs, have rightfully prompted debate about the appropriate limits of technologies that interact with the nervous system. Looking ahead to the day when wearable and implantable devices may be more widespread, the United Nations has discussed regulations and restrictions on BCIs and related neurotech. Chile has even enshrined neurorights – special protections for brain activity – in its constitution, while other countries are considering doing so.

A cornerstone of neurorights is the idea that all people have a fundamental right to determine what state their brain is in and who is allowed to access that information, the way that people ordinarily have a right to determine what is done with their bodies and property. It’s commonly equated with “freedom of thought.”

Many ethicists and policymakers think this right to mental self-determination is so fundamental that it is never OK to undermine it, and that institutions should impose strict limits on neurotech.

But as my research on neurorights argues, protecting the mind isn’t nearly as easy as protecting bodies and property.

Thoughts vs. things

Creating rules that protect a person’s ability to determine what is done to their body is relatively straightforward. The body has clear boundaries, and things that cross it without permission are not allowed. It is normally obvious when a person violates laws prohibiting assault or battery, for example.

The same is true about regulations that protect a person’s property. Protecting body and property are some of the central reasons people come together to form governments.

Generally, people can enjoy these protections without dramatically limiting how others want to live their lives.

The difficulty with establishing neurorights, on the other hand, is that, unlike bodies and property, brains and minds are under constant influence from outside forces. It’s not possible to fence off a person’s mind such that nothing gets in.

Instead, a person’s thoughts are largely the product of other people’s thoughts and actions. Everything from how a person perceives colors and shapes to their most basic beliefs is influenced by what others say and do. The human mind is like a sponge, soaking up whatever it happens to be immersed in. Regulations might be able to control the types of liquid in the bucket, but they can’t protect the sponge from getting wet.

Even if that were possible – if there were a way to regulate people’s actions so that they don’t influence others’ thoughts at all – the regulations would be so burdensome that no one would be able to do much of anything.

If I’m not allowed to influence others’ thoughts, then I can never leave my house, because just by doing so I’m causing people to think and act in certain ways. And as the internet further expands a person’s reach, not only would I be unable to leave the house, I also couldn’t “like” a post on Facebook, leave a product review or comment on an article.

In other words, protecting one aspect of freedom of thought – someone’s ability to shield themselves from outside influences – can conflict with another aspect of freedom of thought: freedom of speech, or someone’s ability to express ideas.

Neurotech and control

But there’s another concern at play: privacy. People may not be able to completely control what gets into their heads, but they should have significant control over what goes out – and some people believe societies need “neurorights” regulations to ensure that. Neurotech represents a new threat to our ability to control what thoughts people reveal to others.

There are ongoing efforts, for example, to develop wearable neurotech that would read and adjust the customer’s brainwaves to help them improve their mood or get better sleep. Even though such devices can only be used with the consent of the user, they still take information out of the brain, interpret it, store it and use it for other purposes.

In experiments, it is also becoming easier to use technology to gauge someone’s thoughts. Functional magnetic resonance imaging, or fMRI, can be used to measure changes in blood flow in the brain and produce images of that activity. Artificial intelligence can then analyze those images to interpret what a person is thinking.

Neurotechnology critics fear that as the field develops, it will be possible to extract information about brain activity regardless of whether or not someone wants to disclose it. Hypothetically, that information could one day be used in a range of contexts, from research for new devices to courts of law.

Regulation may be necessary to protect people from neurotech taking information out. For example, nations could prohibit companies that make commercial neurotech devices, like those meant to improve the wearer’s sleep, from storing the brainwave data those devices collect.

Yet I would argue that it may not be necessary, or even feasible, to protect against neurotech putting information into our brains – though it is hard to predict what capabilities neurotech will have even a few years from now.

In part, this is because I believe people tend to overestimate the difference between neurotech and other types of external influence. Think about books. Horror novelist Stephen King has said that writing is telepathy: When an author writes a sentence – say, describing a shotgun over the fireplace – they spark a specific thought in the reader.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
