Again — Garbage In, Garbage Out
I feel like Marty McFly in Back to the Future when I see this stuff. “Wait, I’ve seen this one! This is a classic!”
Just like every other whizbang computer science invention, it suffers from the same weakness as every other "AI" system before it. If you put bad data into it, you'll get bad data back out. Only this time, it's worse, because the computer is deciding which bits of the data are important, and you can't always know what it's looking at or why it's getting things right. Something that works 99% of the time can fail spectacularly or comically when the neural net turns out to be looking at the wrong stuff.
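To make the "looking at the wrong stuff" failure concrete, here's a toy sketch (the scenario and numbers are invented for illustration, not from any real system). It mimics the well-known husky-vs-wolf shortcut problem: if every training photo of a husky happened to be taken on snow, a model can latch onto background brightness instead of the dog itself.

```python
# Hypothetical toy "model" that learned a shortcut from biased training data.
# Every husky photo in training was shot on snow, so the model keys on
# background brightness rather than anything about the animal.

def shortcut_classifier(background_brightness):
    # The only rule the biased training data supported:
    # bright background (snow) => husky, dark background => wolf.
    return "husky" if background_brightness > 0.7 else "wolf"

# Works on data that resembles the training set...
print(shortcut_classifier(0.9))  # husky on snow -> "husky" (correct)

# ...and fails as soon as the spurious correlation breaks.
print(shortcut_classifier(0.2))  # husky photographed indoors -> "wolf" (wrong)
```

The classifier was never "wrong" about its training data; the training data was wrong about the world. That's garbage in, garbage out.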
These problems can probably be solved, but let's explore this a little further first.
What Artificial Neural Networks Try To Do (& Often Get Right)
By now, I hope readers know three things:
- Artificial neural networks aren’t magic. They’re computer programs, using math under the hood just like any other program of the past.
- They can’t be trusted to produce better outputs than their inputs (garbage in, garbage out).
- Math and computers shouldn't be given too much or too little trust. Only a level of trust that fits the system's limitations is safe.
But, artificial neural networks are still amazing. One super cool thing they do is help computers chew on qualitative information (things that can't be directly counted or measured, which computers have always struggled with). Traditional programs can readily deal with hard facts like an object's size, position, and velocity (these are all expressible as numbers), but they can't tell us what the thing actually is. Autonomous vehicles will be impossible if a vehicle's computer can't identify objects, so this is vitally important to that mission.
They don’t identify things the way we do, though. Artificial neural networks exist to convert qualitative judgments (what is that thing?) into quantitative ones a program can deal with (this is “Thing #3481,” so these mathematical rules now apply). This enables a computer to do things it was previously not well suited to, and that’s amazing.
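The qualitative-to-quantitative handoff can be sketched in a few lines. Everything here is invented for illustration (the class IDs, labels, and distance thresholds are hypothetical, not Tesla's actual values): the network's only job is to emit a category ID, and ordinary numeric rules take over from there.

```python
# Hypothetical mapping from a network's class IDs to labels.
CLASS_NAMES = {3481: "pedestrian", 112: "traffic_cone", 907: "truck"}

# Illustrative numeric rules a driving program might attach to each category.
SAFE_DISTANCE_M = {"pedestrian": 10.0, "traffic_cone": 2.0, "truck": 30.0}

def handle_detection(class_id, distance_m):
    """Turn a qualitative judgment (the class ID the network emitted) into a
    quantitative decision (is the object too close?)."""
    label = CLASS_NAMES.get(class_id, "unknown")
    # Unknown categories get an infinite required distance, so the program
    # always treats them as too close (a conservative fallback).
    required = SAFE_DISTANCE_M.get(label, float("inf"))
    return label, distance_m < required  # True => too close, take action

print(handle_detection(3481, 5.0))  # ('pedestrian', True)
```

Once the network has said "Thing #3481," everything downstream is plain arithmetic, which is exactly the kind of work computers have always been good at.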
But, the ability to make limited qualitative judgments (categorizing objects) doesn’t mean a computer system is good at making all such judgments the way we are. Once their training ends, they have no ability to improvise or adapt beyond it.
Brains Are Not Meat Computers, & Computers Are Unlikely To Achieve Consciousness In The Short Term
It is the pinnacle of bad AI thinking to compare the human mind to a computer. We have popularly done this for decades, but it’s fundamentally wrong.
Now, notice I didn’t say brain. I said mind. We often use those terms interchangeably, but when we do that, we ignore the fact that we don’t know how the human mind and the human brain relate to each other. The mind may be in the brain and due to a physical process we don’t yet understand, or it could be something else. We don’t know that much yet.
We do know a lot about the brain, including how it’s wired up to our nerves, how different parts of the brain connect to different senses, and how diseases or problems in the brain lead to problems a person subjectively experiences. We know that when a person is happy, certain parts of the brain light up with activity in scans. We know that when a person smells pheromones, different parts of the brain light up depending on the person’s gender identity and/or sexual orientation (and not necessarily their “sex”).
We also know that we can get the brain to affect people’s consciousness through manipulation. Chemicals can make a person enter altered states of consciousness, lose consciousness, or see things that aren’t there. Electromagnetic stimulation, ultrasound, and even direct electrical stimulation can all have predictable effects. Neuralink isn’t lying to us when they say they could eventually do things like pipe audio or even image overlays into the brain that our consciousness would perceive.
Using the human brain for inspiration has led to the development of artificial neural networks, and those are doing amazing things, but they can’t reproduce the mind at this point, and may never be able to do so.
The biggest roadblock is that we haven’t solved the Hard Problem of Consciousness. Despite the many things we do know about the brain, we don’t know what mechanism drives a human being’s experience of consciousness. Somehow, the human brain is doing something that’s beyond the sum of its parts, and a mind somehow exists that the brain or body interacts with. How do we know that a mind and consciousness are happening? Only because the person tells us that they experience consciousness.
This idea of believing people without evidence may seem to fly in the face of science, but science was never meant to be a faith, nor was it meant to explain stuff like this. Again, see Goff’s book on the topic for a lot more details (or a video here where he goes over it).
As stated earlier, this goes all the way back to Galileo. We don’t know what consciousness is, or how it happens, because Galileo deliberately set that very issue aside for later so scientists could focus on that which could be measured and computed. Now, we’re trying to take a philosophical approach to inquiry that was specifically designed to exclude consciousness and use it to explain something we can’t even prove exists beyond taking each other’s word for it (our experience of consciousness). Will that approach work? We simply don’t know.
While physical science as started by Galileo has been hugely successful, there’s simply no guarantee that it will lead to an understanding of consciousness, and if it does, it might not be something we can reproduce with computers.
In the last part, I’ll finish explaining how these artificial neural networks aren’t alive or conscious, but also how that really doesn’t keep companies like Tesla from doing what they aim to do (build self-driving cars).
For ease of navigation for this long series of articles, links to all of them will be here once they are published:
Part 1: Why Computers Only Crunch Numbers
Part 2: Miscalibrated Trust In Mathematics
Part 3: Computers Only Run Programs
Part 4: How Neural Networks Really Work
Part 5 (you are here): What Artificial Neural Networks Can’t Do
Part 6: Self Driving Cars Are Still Very Much Possible, Despite Not Being Alive
Featured image: Screenshot from Tesla’s AI Day