Demystifying Neural Networks: Teslas Are (Probably) Not Alive, But That’s OK! (Part 2)

“Count All The Things!” Our Number-Obsessed Society

Sadly, people are pretty good at putting the cart before the horse. We forget that numbers are a tool, not a complete philosophy for looking at the world. We even forget that the numbers aren’t the goal we’re working toward. Even when something is a poor fit for quantitative analysis, we quantify it anyway and pretend we’re getting it right.

We can’t just say, “Hey, that was a good movie!” We have to rate it from one to five when we’re done watching, so the video app can calculate how desirable the movie is and present that data to other potential viewers. Pollsters call and ask people questions that require a simple “yes” or “no,” or a choice from a limited set of options that can be turned into math, so that society can be handed a single number — an overly simplistic view of the world to base its opinions on.

The problem is, I’ve seen a number of movies with low Rotten Tomatoes scores that I really liked. I’ve seen products and TV shows that were obviously designed and written by focus groups. Sure, this kind of math can be useful for creating products and experiences that sell to a wide audience, but the results often aren’t good enough for any particular segment of that audience to enjoy deeply. We end up with really enjoyable TV shows like Firefly that only last a season or two, and brainless (or, if we want to be precise here, limbically oriented) reality TV shows that last for dozens of seasons because they appeal to the lowest common denominator.

Are students and schools judged on how well they prepare students to be productive adult members of society? No. Instead, we find a way to drag the school systems into the chamber of the Almighty Number and judge them there. Standardized testing gives regulators and politicians numbers they can crunch instead of useful, actionable intelligence. The schools, knowing they’ll be judged only on these narrow metrics, teach students how to pass the standardized tests, to the detriment of the things schools should actually be focusing on.

When we ditched subjective experience and decided to quantify everything, we lost a lot along the way and let “garbage in, garbage out” rob us of good things.

The Real Problem Here: Miscalibrated Trust

Credit scores are a great example of faith-based mathematics instead of real mathematics. To eliminate bias in lending decisions, private industry came up with math that would supposedly judge every loan applicant objectively and color-blindly.

Before this, banks would hide racism by “redlining,” or picking out the Black and Hispanic neighborhoods and treating them differently based on the “risk” they perceived in each neighborhood. When that was found to just be racism with extra steps (and banned by governments), lenders came up with even more complex arrangements that looked at a cross section of an applicant’s financial situation and credit history.

In theory, all people are scored alike now when it comes to credit decisions, but the undisclosed computer programs that make lending decisions based on credit scores are even better at discriminating against people of color than redlining was. Before, you’d have to live in “the hood” to get discriminated against and could escape by moving elsewhere (assuming someone would sell to you), but now the banks have a pseudoscientific way to pick you out from the crowd and determine that society doesn’t think you’re worth much, regardless of where you live.

Instead of making things better, we forgot that bad inputs lead to bad outputs (in other words: garbage in, garbage out), and assumed that math could save us from our darkest impulses and deepest systematic prejudices. Somehow.

Some would argue that this was done intentionally, and in some cases it probably is, but I generally subscribe to Hanlon’s Razor (“never attribute to malice that which is adequately explained by stupidity”). The deeper problem is that we think we can throw science and mathematics on top of something stinky in society and it will magically become fair and good. In reality, throwing dirt on poop doesn’t make the poop become dirt. It only hides the smell from people who don’t want to know.

Calibrating Our Trust In Math (And By Extension, Computers)

I don’t mean to say that mathematics and science are bad. They obviously have done a lot of good in the world, but we have to properly calibrate our trust in them.

When we trust math, science, and the machines too little, we miss out on the benefits of using them. At the core of the thinking of anti-vaxxers, flat-earthers, and climate denialists is the same thing: distrust. Sadly, that mistrust has been earned in some cases (look at things like the Tuskegee Syphilis Study), and intentionally sown into a certain portion of the population in others (oil companies would rather we distrust climate science).

But it’s just as stupid and dangerous to trust these things too much. Overtrust Autopilot, and you’ll eventually get into an accident when it does the wrong thing at the wrong time. Overtrust math’s ability to push out human bias, and you risk entrenching that bias even deeper through false legitimization.

We have to know the actual limits of something to give it an appropriate amount of trust. Treating science like it’s faith strips the science out of it, and leaves us with junk.

If we are going to understand what neural networks are, we need to understand what computer programs are.

What Computer Programs Are

Before we can discuss neural networks and the other latest and greatest in AI, we need to talk about computer programs in general.

Anyone taking a programming course figures out pretty quickly that it’s all “if,” “then,” and “else.” The code for a dumb little program to ask the user if he or she is hungry would go something like this:

If hungry=Yes
Then Order_Food
Else end

The “hungry” variable has to be put into the program somehow, perhaps through a prompt the program gives the user to check yes or no, and that response gets stored in the computer. Then, the logic of the simple program checks to see if the human answered “yes” or “no.” If they answered yes, then it calls up another program or section of a program to order food. If the answer is anything but “yes,” it ends the program.
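That flow can be written as a short, runnable Python sketch. To be clear, this is my illustration, not code from any real product — the check_hunger and order_food names are made up, and order_food here is just a stand-in for the separate food-ordering program the pseudocode calls:

```python
def order_food():
    # Stand-in for the separate program (or section of a program)
    # that would actually handle ordering food.
    return "food ordered"

def check_hunger(answer):
    # The if/then/else logic described above: a "yes" answer
    # orders food; anything else ends the program.
    if answer.strip().lower() == "yes":
        return order_food()
    else:
        return "program ended"

print(check_hunger("Yes"))  # food ordered
print(check_hunger("no"))   # program ended
```

In a real program, the answer would come from a prompt (for example, Python’s input() function) instead of being passed in directly, but the branching logic is the same.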

Yes, I know that’s an oversimplification, but the underlying point I’m going to make here is true. The things that have amazed and wowed generations of computer users are almost always simple logic (if, then, else) and its underlying math (which can be very complex), combined in different ways to build a program that the computer runs.

In the next part, I want to solidify the fact that computers only run programs. They (at least today) aren’t alive.

For ease of navigation through this long series of articles, links to all of them will be here once they are published:

Part 2 (you are here): Miscalibrated Trust In Mathematics

Part 3: Computers Only Run Programs

Featured image: Screenshot from Tesla’s AI Day.
