On Friday, Neuralink gave us a progress update, and there has been a TON of progress since its last update. Not only has the company made the device smaller and simpler, but it is already testing the device on animals. To say that the device has great potential to improve lives and advance civilization would be a vast understatement. If you’re not familiar with what Neuralink is, and the amazing potential it has, I’d recommend Wait But Why’s lengthy article on the subject, or at the very least, watching this past week’s update:
Like many other writers (especially in Musk fandom), I’m excited about the technology. Most great inventions have proven to bring net improvement to our lives. Sadly, most powerful technologies also have a dark side that we need to deal with. For example, explosives can be used to clear the way for roads through the wilderness, but they can also be used to kill people and destroy property. We’re still better off with explosives in the world, despite the bad things people can do with them.
Ultimately, technologies are neither good nor bad. What matters is who uses them and what their intentions are. We will likely find the same to be true of Neuralink.
Homeland Security and Emergency Management personnel, private consultants, and insurance professionals all over the world are constantly working to protect us from potential risks brought by the bad people looking to do bad things (along with many other ways bad things can come to us). While it’s a complex and demanding job, it really comes down to four things:
1. Identifying potential risks and finding ways to mitigate them
2. Preparing for the risks that we can’t or won’t mitigate
3. Responding to disasters that result from unmitigated or unmitigatable risks
4. Recovering from these disasters, and applying hard-earned lessons back to step 1
Risk management professionals can’t do all of this alone, but they try to supply the rest of us with the best knowledge they can. By letting policymakers, first responders, private entities, and citizens know what the risks are and what some of the mitigation methods might be, they help all of us contribute to doing better in steps 2-4.
In this article, I’m going to use these methods (which I learned in graduate school) to cover some of the potential risks Neuralink poses to individual safety, freedom, and free societies when misused by bad actors. I’ll organize the threat vectors by the type of malicious actors that might benefit from misuse of Neuralink technology, and then propose some possible ways to mitigate these threats.
Other non-malicious threats (outages, defects, failures, infections, etc.) will not be covered here. Neuralink has already discussed these risks in its livestreams, and its employees appear to be designing the technology as part of a resilient system (one that can fail gracefully), which is the key to mitigating those sorts of risks.
Threats From Private Actors
Let’s start by looking at the risks that can originate from private, non-governmental actors. For the sake of simplicity, and because Neuralink has not done anything to indicate malicious intent, I’m going to assume that Neuralink itself is not a potentially malicious actor. I know not all readers will agree with this assessment, so feel free to let us know in the comments and on social media what your thoughts are on this.
The most obvious threat comes from hackers, whether individuals or groups working for various organizations. To be fair, Neuralink’s team did field a popular question about security at the most recent event. They told us that by controlling all of the hardware and software, they can build security in from the beginning, even going as far as physically segregating sensitive systems in hardware to prevent hacking. No system can be perfectly secure, though. For example, Apple controls both hardware and software. While once much safer than Windows computers (whose hardware comes from a variety of vendors), Apple computers by some measures now see more malware that exploits system vulnerabilities than Windows machines do.
The 2010 film Inception explores many of the reasons one might want to hack another’s brain through a Neuralink interface. Government secrets, personal information for blackmail, trade secrets, and many other things could prove valuable for a hacker to get their hands on. Manipulating someone into doing something they wouldn’t otherwise do could also prove valuable. This could all potentially happen without the user’s knowledge, or by breaking encryption and monitoring the implant’s outputs from nearby.
Malware & Pranks
“We’re no strangers to love. You know the rules and so do I. A full commitment’s what I’m thinking of. You wouldn’t get this from any other guy. I just wanna tell you how I’m feeling. Gotta make you understand. Never gonna give you up. Never gonna let you down. Never gonna run around and desert you. Never gonna make you cry. Never gonna say goodbye. Never gonna tell a lie and hurt you.” – Rick Astley
Yes, you just got “Rickrolled,” a harmless prank in which you unexpectedly encounter Rick Astley’s Never Gonna Give You Up. While infected computers can cause great harm, we usually have the option of turning them off or closing the window if all else fails. A device embedded in your skull probably won’t be as easy to turn off if it gets hit with adware or a virus that continuously plays a once-popular pop song now used to annoy people. We’ve all had random songs stuck in our heads from time to time, but if you got Rickrolled through a brain-computer interface, you’d better hope Neuralink comes up with an off switch of some kind that you can use at 3 AM, so a seemingly harmless prank doesn’t ruin your sleep until the battery dies.
Abuse/Exploitation of Vulnerable Populations
Another threat from misused Neuralink technology (and other future competing brain-computer interfaces) could come from well-meaning but misguided people. While Neuralink’s experts want to improve lives by finding a cure for everything from paralysis to depression, they’re already venturing into questions of which human conditions we should even try to cure. For example, autism spectrum disorders are considered by many to be a problem to be solved, but others see them as part of neurodiversity: a different kind of healthy mind that society should adapt to accommodate.
We already see Christian fundamentalists try to “cure” homosexuality and “fix” transgender people with fraudulent or misguided “treatment” programs meant to turn them into straight and/or cisgender people, with disastrous results. While a growing number of states outlaw this practice for minors, closeted adults can often still legally be conned into participating, or pressured by family and friends to try to be straight. It’s not unimaginable that the churches and con artists running these programs may eventually try to use a brain-computer interface to “fix” LGBT people. While this might still be impossible (and could greatly harm people in the attempt), even successful attempts to change sexual orientation and gender identity would raise real ethical issues.
Human traffickers, abusive spouses, pimps, blackmailers, con artists, abusive parents, elder abusers, cult leaders, and many other exploiters could all find Neuralink technology extremely useful. We need to think about all of the ways these various abusive people might abuse brain-computer interfaces and memory manipulation technology.
Even exploitative employers could probably find creative ways to misuse brain manipulation to deprive workers of their rights and pay for work.
Threats From Governments
In theory, democratic governments have legal safeguards to prevent abuse of the population by government officials. Even with these restrictions on power, democratic governments have a history of abusing the populations they are supposed to serve. Most of the examples I’ll give here are from the United States, but if you’re in another country that generally respects rights, you’ll probably find at least some similar abuses if you look for them.
In the United States, we’ve seen Japanese-American internment, the Tuskegee Syphilis Experiments, Project MKUltra, the release of biological weapons in US cities, the massacre of Native Americans, the MOVE bombing, the Waco Massacre, warrantless surveillance of US citizens, and extraordinary rendition, among many other abuses. Some of these things were later found to be illegal, and some practices continue. The United States and state and local governments still struggle with extrajudicial killings of minorities under questionable circumstances, abuse of peaceful protesters, and abuse of every kind of political power by politicians.
We would be fools to think that even the freest governments wouldn’t find ways to abuse brain-computer interfaces when faced with dark times or when under the leadership of unscrupulous politicians. With an illegally obtained warrant (or at the very least, a deeply immoral and unconstitutional use of warrants), through NSA hacking, forced installation, the PATRIOT Act, or many other supposedly legal methods, the government could conceivably steal information from your mind, spy on your every move, or manipulate your behavior.
While we can always challenge these abuses in court, challenging the government is both expensive and takes years to do, by which time the harm (which may be irreparable) will have already been done. Even so, many shockingly unethical programs have been found to be legal by courts, leaving citizens with no lawful recourse.
We also shouldn’t forget that some rights we take for granted today haven’t been the norm for terribly long. Interracial marriage has only been legal in all 50 states since 1967, and even most Democratic politicians didn’t support gay marriage until the polls favored it, with 50-state legalization only coming in 2015. Women were often not allowed to get a credit card without a husband to cosign for it until 1974.
Most recently, we’ve seen how easily political campaigns can misuse data from social media alone to manipulate elections. With improperly obtained data (something that could potentially happen with Neuralink), voters in a number of elections were targeted in ways that helped elect right-wing authoritarian figures to office. The temptation to misuse the even deeper data a brain-computer interface could provide is too great to ignore.
If the situation looks bleak for theoretically democratic governments, it’s downright terrifying under governments with no real legal protections for their citizens. There’s nothing stopping authoritarian governments from doing whatever they want with brain-computer interfaces in places under their tyrannical control.
China’s use of technology to control populations should be particularly instructive.
Chinese police interrogators grill a man for making a joke about them on social media. It was a private chat room.
— Ian Miles Cheong (@stillgray) December 2, 2019
If you think a government that would interrogate a citizen for making disagreeable comments online wouldn’t heavily abuse brain-computer interfaces to exert control over their people, I have some oceanfront property in New Mexico for sale at low prices!
While there are numerous science fiction dystopian thrillers that explore the possible outcomes, my favorite is probably The Alliance, a 1988 novel by Gerald N. Lund. In a post-apocalyptic North America, a former military general uses brain implants to control the population of four cities under his control. With a few electrodes implanted in the pain centers and a few others in the emotional parts of the brain, a computer chip automatically inflicts pain on any citizen experiencing anger, guilt, or other emotions deemed undesirable by the book’s antagonist, and allows basic monitoring and tracking. This gives “The Major” complete control of the population through the resulting Pavlovian conditioning.
That outcome was possible with just a few electrodes. It’s scarcely imaginable what evils could be put upon a population with something as advanced as a Neuralink or a reverse-engineered version produced by tyrannical governments without safeguards. Combine this with modern machine learning, mass surveillance, and full control of information, and there are probably no limits to the amount of control a government with unlimited power might exert.
More Exotic Threats
Given what Elon Musk tells us about the threat of artificial general intelligence, and the need to “merge” with AI by augmenting human intelligence, there are additional threats we must consider, albeit briefly.
One possible threat is the very AI superintelligence Musk wants us to use Neuralink technology to compete or cooperate with. A superintelligent AI might just as easily use the interfaces to control and manipulate us, possibly without our conscious knowledge. We might lose our freedom without having any opportunity to resist, or having any clue that we had even lost anything. A benevolent superintelligence might only use this power over us for our own good, but there’s no guarantee that emerging AI superintelligence would be benevolent, or that its values would align with ours. It’s possible that horrible things could be done to us “for our own good” that we would never do to ourselves as a species.
Possible Mitigation Strategies
While this article can only sketch a few ideas for handling these potentially malicious actors, the topic deserves serious attention going forward if Neuralink technology is to be kept from becoming a disaster for humanity.
There are three main ways we can protect ourselves from abuse of this technology by malicious non-governmental actors: regulation, education, and self-defense.
On the regulatory front, we need to encourage lawmakers to place ethical limits on the use of brain-computer interfaces while also being careful to not strangle the technology with excessive controls. The development of such technologies is already regulated by a number of government agencies in most jurisdictions due to their medical nature. Strict protocols for approval on testing of humans and animals are already in place and likely don’t need to be beefed up.
What we do need to encourage is the criminalization of any non-consensual use of the technology. It should be a felony, punishable like murder or rape, to implant a brain-computer interface in anyone unwilling. It should also be a felony, on par with rape, to manipulate a brain-computer interface or otherwise use data from one without the explicit consent of the person in whom it is installed. Theft of data obtained from brain-computer interfaces should be treated the same as armed burglary, with stiff prison sentences for all intentionally involved.
Non-discrimination laws should be extended to include one’s brain-computer status. Employers and other organizations should not be able to exert any form of financial or social pressure for or against such technology. Nobody should be treated better or worse for having or not having a brain-computer interface implant.
On the education front, we need to make sure all citizens are aware of the risks of misuse of this technology, especially if one is a candidate for installation. We teach our children how to be safe on the internet and avoid things like scams, predators, and cybercrime. Similar efforts are needed with brain-computer interfaces.
Finally, if we take the above recommendations and make non-consensual use of a brain-computer interface a felony, we enable citizens to use force (including deadly force when reasonable and necessary) to stop people misusing the technology in extreme circumstances. This may sound extreme, but the rape of one’s mind or the minds of others is nothing any citizen should tolerate. It is perfectly ethical and moral to kill to stop such a violation of the rights of the human soul if there are no other ways to stop it from happening.
There should be an outright ban on the misuse of this technology by government officials. No citizen should be required to have an interface installed, especially in prison or any type of detainment. Military personnel also need to be protected from any mandates, with jobs requiring the devices being strictly optional and voluntary. There also needs to be an absolute ban on the use of warrants or court orders to obtain information from anyone’s brain-computer interfaces, and the warrantless collection of such data should be strictly prohibited, no matter the excuse (national security, exigent circumstances, etc).
Those caught violating people’s rights via brain-computer interfaces need to face serious consequences and be removed from any position of authority. There should be no kind of immunity from this crime, and it should be made clear that citizens are perfectly justified in using force to stop such abuses by a government official.
Brain-computer interface technology should be treated like something worse than nuclear weapons in international relations. The development and possession of the technology by governments without strict safeguards for the rights of their citizens should not be tolerated, because the danger of its misuse by such governments is simply too great. Every effort should be made, including military force, to prevent crimes against humanity.
If democratic governments aren’t willing to stop the authoritarians from possessing this technology, we will not only see great atrocities committed on the citizens of such regimes, but will also see the technology used against the rest of us. For the sake of freedom everywhere, this threat must be taken seriously.
It would also probably be a good idea to research methods for remotely disabling brain-computer interfaces in a given area, possibly with the use of electromagnetic pulses, without damaging brain tissue. The ability to liberate a population being abused through such technology, without invading and killing the victims, would be a powerful way to prevent and deter such misuse.
Protecting brain-computer interfaces from threats like artificial superintelligence goes well beyond my education and experience, but it’s indisputable that the ability to disable a computing device prevents its continued misuse. I may sound a bit like a neo-Luddite saying this, but it may prove advisable to let the battery on one’s Neuralink die periodically and go without it running for a few days a month. If you’re being manipulated, having some time to think without the device might give you a chance to notice the difference.
Making sure there are no political mandates or economic pressure (discrimination) to force the unwilling to get one installed could also go a long way toward keeping us safe from such risks, along with many of the above. If there’s always a portion of the population without a brain-computer interface, that portion of the population will be immune from such interference in their mental functioning. The rest of us might think they’re crazy when they warn us of danger, but we’d at least be warned.
I don’t pretend to have all of the answers here, and hope this article can serve as a starting point for further discussion and research. To ensure the long-term success of the technology and prevent abuse, we need to mitigate the risks posed by malicious actors. The right time to have this discussion is now, while the technology is emerging, instead of later, when we are already facing massive problems that may have been preventable.