Over the last year or so, I’ve been doing a lot of reading about artificial intelligence, neural networks, and related topics. There’s a lot of dumb stuff floating around out there, mixed in with the good information. I wanted to be sure I wasn’t being dragged along by the hype, because regurgitating bad information just because it gets repeated a lot seems like a bad idea.
After all that reading, I’ve figured out enough to confidently explain what we’re dealing with when it comes to neural networks and autonomous vehicles, what the limitations really are, and which popular claims are just quackery and superstition (there’s a lot of that out there).
I must warn you, though, that we have to go through a lot of background information to really see artificial neural networks in perspective.
Something Humans Do Easily, But Computers Don’t
First, let’s talk about qualitative and quantitative analysis. Some readers already know what I’m talking about, but for others, an explanation is needed.
Quantitative analysis uses the quantities (numbers) of things to make decisions. There are three chickens in this pen, and 12 in that pen. It’s undeniable that there are more chickens in the second pen. Math and computers have historically been very good at answering these kinds of questions, and very quickly when a computer is involved. They’re really good at counting things quickly, with a higher degree of accuracy than any human. That’s what amazes us about them: just how fast they are at this kind of thing.
It’s literally what they were built to do.
Where they tend to stumble is when dealing with qualitative analysis, or thinking about things that can’t be counted. Computers know how to digest a big number, but they have no direct way to deal with things like, “Is this a curb?” or “Is this a child?” Naming something, or describing it without numbers, isn’t something you can just throw into an equation and make the computer crunch. Things like taste, smell, feelings, opinions, and colors must be translated into the language of mathematics in some way before a computer can cope with them.
This is the central challenge of AI in many ways.
Qualitative analysis is something conscious beings do all the time, and easily, so the quest to get computers to do it better is really sort of a quest to build a conscious computer. But the limitations of math and science (they mostly deal in the quantitative) are what hold computers back from doing qualitative analysis as well or as easily as we can.
Why Are Computers Limited Like This?
The obvious solution someone might throw out there is that math and science need to expand to deal with the qualitative world, and then computers could chew on it, but there are important reasons that the qualitative has been excluded historically. For this concept, I’m drawing upon Philip Goff’s book Galileo’s Error. (Here’s a video where he goes over the basic concepts.)
It really starts with Galileo Galilei, a man widely regarded as the “father of modern science.” Before Galileo, humans tried to find truth through authority, or through imagination if one happened to be in authority. We explored the universe through things like faith and mysticism, and if someone in authority or tradition held something as fact, it was to be treated as fact. At best, you could be thought a fool for rejecting the crowd’s wisdom. At worst, you could be executed for defying the king or the pope. Those methods of finding “truth” were holding humanity back.
Instead of explaining the universe with things like gods and magic, Galileo said the universe “cannot be read until we have learnt the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word.”
He is also quoted as saying, “Measure what is measurable, and make measurable what is not so.”
At its core, Galileo’s philosophy said that we need to set aside the subjective (our conscious experience of objects in the universe) and instead focus on that which can be objectively analyzed and measured. For things in the universe, this basically ended up limited to analyzing a thing’s size, shape, location, and motion.
Instead of focusing on what a thing is, we focus on what it does. We don’t worry about whether an orange is pretty, what it smells like, whether it tastes good, or if it feels bumpy. Those are all things we experience in our own minds and can’t describe precisely to others. Instead, an object of study is just an assemblage of measurable quantities that we can do math on to seek a deeper understanding of what makes an orange tick.
Particles? Don’t worry about what they are. Just worry about how they interact with other particles, because that still gives you a wealth of useful information!
Not all things can be measured and then expressed in the language of math, but that doesn’t make Galileo’s idea stupid. The truth is, Galileo never meant for us to take this idea of a mathematical universe too far. Science and math were meant to analyze the physical world, while things like the subjective experience of objects in the universe (that orange is orange, and tastes sweet; that lemon is yellow, and tastes sour) were something he thought should be put aside so science could go forward without grappling with those much harder questions that exist only in our consciousness.
In other words, mathematical science was only meant to be a tool in our toolboxes, and not the whole toolbox of human learning and philosophy.
Enough things are measurable and computable that science and mathematics went on to transform the world, regardless of the things Galileo decided should be set aside for later. Nearly everything we take for granted came from this way of doing things. Computers, medicine, transportation faster than a person or animal can run, telecommunications, space exploration — the list is endless. This idea that we need to measure things, apply logical methods to their analysis, and use the scientific method all started with Galileo, and it has transformed our whole species.
In the next part, I’m going to cover humanity’s generally miscalibrated trust in mathematics. Sure, math has done a lot of good for us over the years, but sometimes we expect too much of it.
For ease of navigation for this long series of articles, links to all of them will be here once they are published:
Part 1 (You Are Here): Why Computers Only Crunch Numbers
Part 3: Computers Only Run Programs
Part 4: How Neural Networks Really Work
Featured Image: Screenshot from Tesla’s AI Day.