AI Day Makes People Feel Like Idiots, And That’s A Challenge For Tesla & Other AV Companies
I was busy with some other articles during the Tesla AI Day event, so I finally got a chance to start listening to the presentation the next day. I knew from social media that there was a dancer pretending to be a robot at the end, and that Elon Musk probably should have hired a street performer who could act more robotic, but that’s another story (look out for it). What really caught my attention toward the beginning was just how stupid I am. Some CleanTechnica readers already knew this, but now I’m in on just how chin-droolingly dumb I am.
I Feel So Dumb Watching This
Let’s look at this bit of Andrej Karpathy’s presentation:
“You initialize a raster of the size of the output space that you would like, and you tile it with positional encodings, with sines and cosines in the output space. And then, these get encoded with an MLP into a set of query vectors. And then all of the images and their features also emit their own keys and values….”
Let’s decode this using my knowledge: You make a drawing (using pixels) the size you want, and you make spots on it, with math stuff I can’t remember from high school for each spot. Then, the characters from My Little Pony help you ask the right questions about each of these spots. Then, the answers come out of each of these spots (presumably through Pony magic, or the help of Q from Star Trek, who shows up in that series as Discord).
I know I’m wrong about probably 99% of this, but don’t sit there and pretend I’m the only one who’s completely lost here. You probably have no idea what an MLP is, either. It could be ponies for all we know, which would actually be pretty cool as long as the car drives like it’s supposed to (which is what really matters here).
When Andrej speaks, I have basically no way to know whether he’s telling us something about the car or giving us a technobabble-laden speech that aims to sound smart and fool the rubes (like I did in this article to prove a point). Before anyone jumps me, I don’t actually think Tesla is trying to deceive us just because this stuff goes far, far over my head. I admitted in the beginning that I’m dumb, and I know that the main point of the AI Day presentation was to recruit people who know what an MLP is and can help Tesla solve these complex problems.
It still feels a lot like watching this video, though:
And I feel like the Ferengi (the alien with the big ears) in this one:
Of course I know what a bilateral kelilactiral is! I’m not stupid!
I know Elon Musk isn’t trying to fool us or play some sort of trick. I do think this shows us a real problem Tesla is going to increasingly face in the next few years, though: explaining all of this stuff to the public.
Public Understanding Is Getting Harder
The average person isn’t a machine learning developer or computer scientist. Most of us learned some math in high school, use maybe 10% of it in our daily lives, and promptly forgot the rest. Those of us who went to college but didn’t pursue a degree that uses math probably took a basic math and/or “math appreciation” class of some kind as part of the general education requirements, and then moved on to the rest of our degrees.
I probably know a little more than the average liberal arts major because I worked as a computer technician and I know the difference between raster and vector graphics from graphic design and photography work, so I had a very small advantage here. Many people would have been completely lost.
And, to be honest, I exaggerated the MLP=Ponies thing just for fun (in case that wasn’t obvious). Nobody would really think there are cartoon ponies involved in something like machine learning or autonomous vehicle research and development. Would they?
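(For the record, MLP stands for “multilayer perceptron,” not ponies. And for the handful of readers who do speak this language, here is my best guess at the general pattern Karpathy was describing, sketched in a few lines of toy Python. Every name and number in it is invented for illustration; it is emphatically not Tesla’s code, and I may well have some of the details wrong.)

```python
# A rough, invented sketch of the pattern I *think* Karpathy was describing:
# sinusoidal positional encodings on an output raster become "queries" (via an MLP),
# image features provide "keys" and "values," and attention mixes them together.
# None of this is Tesla's code; every name and number is made up for illustration.
import numpy as np

def sinusoidal_encoding(h, w, dim):
    """Tile an h x w output raster with sine/cosine positional encodings."""
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pos = np.stack([ys.ravel(), xs.ravel()], axis=-1).astype(float)   # (h*w, 2)
    freqs = 1.0 / (10000 ** (np.arange(dim // 4) / (dim // 4)))
    angles = pos[:, :, None] * freqs[None, None, :]                   # (h*w, 2, dim//4)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)   # (h*w, 2, dim//2)
    return enc.reshape(h * w, dim)

def mlp(x, w1, b1, w2, b2):
    """A multilayer perceptron: the 'MLP' in question. Layers of math, no ponies."""
    return np.maximum(x @ w1 + b1, 0) @ w2 + b2

rng = np.random.default_rng(0)
dim, h, w = 64, 8, 8                        # tiny, made-up sizes
img_feats = rng.normal(size=(100, dim))     # stand-in for camera image features

# Positional encodings -> MLP -> one query vector per cell of the output raster
queries = mlp(sinusoidal_encoding(h, w, dim),
              rng.normal(size=(dim, dim)), np.zeros(dim),
              rng.normal(size=(dim, dim)), np.zeros(dim))

# The image features "emit their own keys and values"
keys = img_feats @ rng.normal(size=(dim, dim))
values = img_feats @ rng.normal(size=(dim, dim))

# Attention: each output cell asks which image features matter to it
scores = queries @ keys.T / np.sqrt(dim)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
output = weights @ values                   # the "answers" for each spot
print(output.shape)                         # (64, 64)
```

The punchline, as far as I can tell: the car lays out a blank grid of the space it wants to describe, and attention lets each cell of that grid pull in whichever camera information is relevant to it. No magic, no Q, just a lot of multiplication.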
This Makes Communicating Things To The Public Difficult
I know from my experience dealing with Tesla that this isn’t a scam. We aren’t being fed a bunch of bullshit technobabble to make sure fools and their money are more quickly parted. If it were a bunch of Hollywood technobabble like you’d see in a sci-fi show, there are writers and commentators out there who know more about this math than I do, and they’d call BS. I think. And hope.
The real problem is that companies pursuing this kind of technology don’t really have any way to give the public a look under the proverbial hood, because even with a normal car’s hood, the average person (even most math majors) has no idea what’s going on under it. Most people don’t know how a combustion engine or an electric motor works, what a transmission even is, what a differential is, or how the brakes work. They certainly don’t know the importance of a Fetzer valve’s proper operation, either (or what movies that fictional component appeared in).
But we can at least simplify the ideas behind internal combustion and electric motors. For a piston combustion engine, it’s “suck, squeeze, bang, blow” (the four strokes) in most cases, even if that puts people’s minds in the gutter. For electric motors, we can remind people how one magnet can push another away and explain that an electric motor works on the same principle. In other words, with some work and language skill, those ideas can be presented to people who don’t need a deep understanding, giving them enough of a grasp to calibrate their trust and hopefully not get ripped off by a mechanic.
Things like machine learning are far more complex, though. Sure, machine learning is basically many layers of math operations that tune themselves to fit the situation, but that’s a far greater oversimplification than “suck, squeeze, bang, blow” is for internal combustion.
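If even that simplification sounds abstract, the “tune themselves” part can at least be shown in miniature. The toy sketch below is my own illustration and has nothing to do with Tesla’s actual networks: it makes a guess, measures how wrong it is, and nudges its own numbers until the guesses improve. Real systems just stack millions of those self-adjusting numbers into many layers.

```python
# A toy illustration of "math operations that tune themselves": the model guesses,
# measures how wrong it is, and adjusts its own numbers to do better next time.
# This is a teaching sketch of the general idea, not anything Tesla showed.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * x + 0.5                         # the "situation" the math should learn to fit

w, b = rng.normal(), rng.normal()         # the numbers that get tuned
for step in range(500):
    pred = w * x + b                      # the math operation
    grad_w = 2 * np.mean((pred - y) * x)  # how wrong, and in which direction
    grad_b = 2 * np.mean(pred - y)
    w -= 0.1 * grad_w                     # the "tuning itself" part
    b -= 0.1 * grad_b

print(round(w, 2), round(b, 2))           # ends up close to 3.0 and 0.5
```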
This gap between the technology and the public’s understanding of it leaves half of the crowd feeling like they’re being fooled somehow, and most of the other half thinking it’s magic, or that the cars are alive (Elon calling the cars “semi-sentient” didn’t help with that misunderstanding).
Getting the public to understand and trust these technologies is going to be a real challenge.
Editor’s addendum: Personally, I don’t think it really matters. If the technology works, it works. If it doesn’t, it doesn’t. The more it works, the more people will trust it. As pointed out above, most people don’t know how their cars work, but they still drive them. Whether a presentation like this scares and alienates some “influencers” (old-school and new-school ones), well, that’s another matter, and I don’t know the answer or where it takes us. However, I don’t think the average person cares that they don’t understand how technology works — much of our life is based on technology we don’t understand (phones, computers, etc.).