
Featured image: Screenshot from Tesla's AI Day.

Autonomous Vehicles

Demystifying Neural Networks: Teslas Are (Probably) Not Alive, But That’s OK! (Part 4)

The Man Behind The Neural Network Curtain: Yep, It’s Still Computer Programs

Again, this is going to be a simplification, but it will be an instructive one nonetheless.

An artificial neural network is a series of math operations (more specifically, statistics: mostly weighted sums run through simple decision rules), but the math in each little “cell” isn’t manually set by humans the way most programs are written. Instead, the main concern is getting the network to give you the proper outputs for a given set of inputs.

You “train” and “evolve” these networks by giving them sample data. Here’s some sample data, little fake brain. Shape yourself internally with weights and connections so that you provide outputs that match the sample outputs for this data. Some of you won’t make it, but it’s a chance that I’m willing to take because you’re just programs and can’t really die during training and evolution.
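That “shape yourself internally with weights” idea can be sketched in a few lines of code. This is a hypothetical classroom example, not how production networks are built: a single artificial neuron is trained to behave like an AND gate by nudging its weights whenever its output disagrees with the sample data.

```python
# A toy version of training: a single neuron learns the AND truth table.
# Show it sample inputs and desired outputs, nudge the weights until they match.

def step(x):
    """Fire (1) or don't (0), depending on the weighted sum."""
    return 1 if x >= 0 else 0

def predict(w, b, x1, x2):
    return step(w[0] * x1 + w[1] * x2 + b)

# Sample data: inputs and the outputs we want (the AND truth table).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w, b, lr = [0.0, 0.0], 0.0, 0.1  # weights, bias, learning rate
for _ in range(20):  # repeat the lesson 20 times
    for (x1, x2), target in samples:
        error = target - predict(w, b, x1, x2)
        # Nudge each weight in the direction that shrinks the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

for (x1, x2), target in samples:
    print((x1, x2), "->", predict(w, b, x1, x2))  # matches the AND table
```

Nobody typed in the rule “output 1 only when both inputs are 1.” The rule emerged from the weights, which is the whole trick, scaled up billions of times in a real network.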

Image by, CC-BY-SA 3.0

Then, once the training is complete, we test the survivors and see which one does the best job on a different data set (not the training data). If the training went well, the winning network should come up with right answers for inputs we never gave it answers to, which shows that it actually “learned” something instead of just memorizing the examples.
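The train-on-one-slice, test-on-another routine looks roughly like this. It’s a hedged sketch with made-up data (a one-number “classifier” that just learns a cutoff point), but the bookkeeping is the same one used for real networks.

```python
# Sketch of a train/test split: fit a simple model on one slice of the data,
# then grade it on examples it never saw. The data here is fabricated.
import random

random.seed(42)
points = [random.uniform(0, 10) for _ in range(100)]
data = [(x, 1 if x > 5 else 0) for x in points]  # true rule: label 1 above 5
train, test = data[:80], data[80:]

# "Training": pick the threshold halfway between the two classes seen so far.
zeros = [x for x, y in train if y == 0]
ones = [x for x, y in train if y == 1]
threshold = (max(zeros) + min(ones)) / 2

# "Testing": grade the model only on the 20 held-out examples.
correct = sum(1 for x, y in test if (1 if x > threshold else 0) == y)
print(f"held-out accuracy: {correct / len(test):.2f}")
```

The point of the held-out set is honesty: a model graded on its own training data can score perfectly by memorizing, which proves nothing.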

I don’t mean to piss in the Wheaties of the hardworking people making better and better artificial neural networks, because these methods are enabling computers to do genuinely cool things. It’s perfectly OK to appreciate the methods for what they actually are. They really are awesome!

You can show a trained artificial neural network a picture, and it can tell you useful things about the image, as long as you trained it to do so beforehand. This is a curb. This is a stop sign. This is a child. Ideally, we don’t let cars drive through any of these things, because that would be very bad. Put a few of these networks together and run them as part of a larger program (infinitely easier said than done), and you can start imitating human tasks like driving, recognizing a face, approving or denying loans, and even making initial hiring decisions. Awesome!

But don’t forget that the self-organizing nature of these networks and their complexity don’t make them more than a computer program. They don’t feel. They don’t have opinions. Just like all forms of computing that came before them, it’s “garbage in, garbage out.” They can’t give us anything they weren’t trained to give us, and they won’t improvise, adapt, or do new things without some prodding and more training.

The Weakness Of Neural Nets

Edge cases are the enemy of neural networks, but the term carries a different meaning here than it does in other fields. The usual idea is that humans design things to work well within a range of expected conditions, and outside of those conditions we have no idea whether the thing will keep working.

For example, a building in New Mexico generally only has minor adaptations (if any at all) to handle big earthquakes, and probably no real adaptations to make it withstand hurricanes, because we don’t get those things in New Mexico. The few earthquakes we get are weak, and the few hurricanes that make it this far inland just don’t have enough gusto by the time they get here to rip stuff up or flood us out like they do on the coast. Their remnants do show up sometimes, though.

Here, those things are edge cases. They fall pretty far outside of the statistical distribution of events here. People don’t build for those here. In coastal California? Yeah, you’d better build for earthquakes. In Houston? You’d better prepare for hurricanes. Thus, edge cases differ.

For neural networks, edge cases are defined by the training, not by how likely something is to happen in the real world. Something that happens frequently can still be an edge case if the network wasn’t prepared to handle it. If the training data didn’t prepare it for the task, it won’t perform the task properly, and it can’t improvise the way we can.
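A tiny, hypothetical illustration of a training-defined edge case: a “model” that only knows the examples it was trained on (here it has memorized doubling, but only for 0 through 9). Inside that range it looks smart; outside it, it falls back on the closest thing it ever saw, which is the best it can do.

```python
# A "model" that memorized its training data: doubling, for inputs 0..9 only.
training = {x: x * 2 for x in range(10)}

def model(x):
    if x in training:
        return training[x]
    # No improvising: reuse the answer for the nearest input it ever saw.
    nearest = min(training, key=lambda k: abs(k - x))
    return training[nearest]

print(model(4))    # 8: inside the training distribution, it looks like it "gets" doubling
print(model(100))  # 18: an edge case; the nearest memorized input is 9
```

An input of 100 isn’t rare or exotic in the real world. It’s an edge case purely because the training never covered it.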

This is where things get really bad. Neural networks aren’t completely unknowable black boxes, but we don’t always know why they make the decisions they do. There’s the famous example of a machine learning tool that seemed to get better than humans at telling photos of Husky dogs from photos of wolves, only for the researchers to discover that the network was looking at the trees and the snow (or lack thereof) in the background to sort dogs from wolves, not at the animal itself.

Really, that’s pretty smart, but it wasn’t what the researchers intended at all, and it could lead to false positives, like the occasional Husky photographed in the snow. Just because the neural network gets it right 90% of the time doesn’t mean it’s even remotely capable of improvising, or even guessing, the other 10% of the time. When it gets to the edge of its program, that’s as far as it can go, and it will start getting things wrong.
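Here’s a stripped-down, made-up version of the Husky/wolf story. Each “photo” is reduced to two features: whether the animal looks like a wolf, and whether there’s snow in the background. In this fabricated training set, snow lines up perfectly with wolves, so the laziest rule that fits the data is “snow means wolf.”

```python
# Fabricated toy data: (looks_like_wolf, snowy_background, label), label 1 = wolf.
train = [
    (1, 1, 1), (1, 1, 1),  # wolves, photographed in snow
    (0, 0, 0), (0, 0, 0),  # Huskies, photographed on grass
]

def snow_rule(looks_like_wolf, snowy_background):
    return snowy_background  # ignores the animal entirely

# Perfect score on the training data...
print(all(snow_rule(w, s) == y for w, s, y in train))  # True

# ...until a Husky gets photographed in the snow.
print(snow_rule(0, 1))  # 1: classified as a wolf, confidently wrong
```

Nothing forced the training process to pick the feature the researchers cared about. It picked whatever predicted the labels, and in the training set, snow did.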

A business using a stop sign in its logo can trick autonomous vehicles or driver-assist systems into thinking there’s a real stop sign. If the moon shows up in the wrong part of the sky, it can be mistaken for a yellow traffic light. Give a network a stack of resume PDFs and tell it which candidates got the job, and it might decide to never hire a woman, because the training data (past hiring decisions) had a bias against women.
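The resume problem is worth spelling out, because it’s “garbage in, garbage out” in its purest form. This is a hypothetical sketch with fabricated numbers, not real hiring data: when the training labels carry a bias, the rule that best predicts those labels is the biased one, not the one based on skill.

```python
# Fabricated toy data: (skill_score, gender, hired_in_the_past).
resumes = [
    (9, "F", 0), (8, "F", 0), (7, "F", 0),  # well qualified, not hired
    (6, "M", 1), (5, "M", 1), (4, "M", 1),  # less qualified, hired anyway
]

def accuracy(rule):
    """Fraction of past decisions a candidate rule reproduces."""
    return sum(rule(s, g) == h for s, g, h in resumes) / len(resumes)

def skill_rule(skill, gender):
    return 1 if skill >= 6.5 else 0  # hire on merit (arbitrary cutoff)

def gender_rule(skill, gender):
    return 1 if gender == "M" else 0  # the pattern actually in the labels

print(accuracy(skill_rule))   # 0.0: merit contradicts the biased labels
print(accuracy(gender_rule))  # 1.0: garbage in, garbage out
```

Any training process that rewards matching the labels will converge on something like the second rule, because that’s the rule the data actually contains.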

In the next part, we’ll explore the limitations of neural nets.

For ease of navigation for this long series of articles, links to all of them will be here once they are published:

Part 1: Why Computers Only Crunch Numbers

Part 2: Miscalibrated Trust In Mathematics

Part 3: Computers Only Run Programs

Part 4 (you are here): How Neural Networks Really Work

Part 5: What Artificial Neural Networks Can’t Do

Part 6: Self Driving Cars Are Still Very Much Possible, Despite Not Being Alive


Written By

Jennifer Sensiba is a long time efficient vehicle enthusiast, writer, and photographer. She grew up around a transmission shop, and has been experimenting with vehicle efficiency since she was 16 and drove a Pontiac Fiero. She likes to get off the beaten path in her "Bolt EAV" and any other EVs she can get behind the wheel or handlebars of with her wife and kids. You can find her on Twitter here, Facebook here, and YouTube here.


Copyright © 2023 CleanTechnica. The content produced by this site is for entertainment purposes only. Opinions and comments published on this site may not be sanctioned by and do not necessarily represent the views of CleanTechnica, its owners, sponsors, affiliates, or subsidiaries.