Published on November 9th, 2019 | by Alex Voigt
The Man & Machine Issue: Artificial Intelligence vs. Human Behavior
The world is currently debating whether artificial systems are good or bad, whether they will help us or destroy us, and whether they will ever function at all, and in doing so people make the mistake of trying to answer the wrong question. As of today, the biggest question about artificial intelligence is not the system itself. The biggest challenge is the interface between the human and the machine, or to be more precise, the system consisting of two elements: a carbon body and a silicon body.
We have all learned in our lives how difficult, dangerous, or even fatal the coordination, cooperation, and operation between these two, the human and the machine, can be, and some of us may have been hurt by it or worse. I certainly have been, many times, and if you read the news today you will find plenty of other examples.
This has been true since humans invented the first machines driven by any form of energy, be it animals, steam, or oil, and it is true for the new era we are entering: a time when software-driven artificial intelligence performs better in defined areas than a human ever will. Such systems already do, be it a train driving autonomously, a plane, or any kind of computer game winning against most of us, or at least against me. For those tasks, the new machine interface can keep you safe, keep you healthy, more relaxed, and better performing, simply making more out of your life. It's the new promise of the Holy Grail and the promised land, where anything goes.
Since the first tools were invented by humans, and every day after, we have learned that you had better be careful with that new thing: it promises to help you, but it may also hurt or even kill you. It is engraved into our DNA, and if you don't learn it as a child, you learn it the hard way later. Anyone can give plenty of examples, as we all know to be careful with anything new. Doubt about 'the new' is a part of us because many who lacked that doubt simply didn't survive, and their DNA was not passed on to the next generation. It's a perfect example of nature eliminating risks for humankind by selecting those who were careful.
One example of 'the new' is the artificial intelligence system used in autonomous vehicles. Our brain's selective recognition focuses only on the fact that a person sitting in such a vehicle was once killed, confirming our assumption that it is somehow dangerous, regardless of the fact that not using it exposes you to a much higher likelihood of being killed. Our brain does not work with relativity and probability when it comes to a 'kill threat,' but only with binary elimination of all other information. Even if that person died of a heart attack in a parked vehicle, our brain will declare the car to be somehow dangerous. It's an odd form of generalization and elimination that works together with fear: all logical thinking and information is ignored, our fight-or-flight instincts are activated, and thinking is, for good reasons, totally deactivated. If you try to start a calm discussion with someone in such a situation, you meet a lot of aggression, because the fight-or-flight state prohibits any sane conversation. It is a behavior that has saved lives in the past, but in the world of artificial systems this behavior is outdated and actually puts lives at risk. Allow me to explain why.
Since Tesla released 'Smart Summon,' a feature allowing your car to drive to you or to a place of your choice within a parking lot, the media and the public have been all over it. This includes excited owners as well as people who fear being killed by such a system. It's an autonomous driving feature, but the driver still has full control through a 'dead man's switch,' which serves only to stop the car in case something goes wrong.
After the release of Smart Summon, many videos were posted on social media, and if you watch them carefully you will realize the system works flawlessly in empty parking lots, but when a lot is crowded with humans driving or walking around, it stops and waits. In a very few cases, other drivers hit a Tesla because they simply did not see it, which is something that happens every day between human drivers in parking lots.
The human brain works in patterns, and if we sit in a car and another car approaches us in a parking lot, we do not differentiate whether a human or a system sits behind the wheel; we simply recall that situation and pattern and anticipate human-like behavior from whoever controls that vehicle. In fact, most people believe you should not differentiate because we need to expect those systems to work like a human, but I claim we should definitely differentiate, because those systems will not communicate with you like another human. It is just a software system that cannot look you in the eyes as we do, unless it is conscious, which we don't even want to consider an option today. It can't wave its hands or communicate with signs, gestures, or the other subtle ways we are used to and mostly unaware of ourselves. Just take the subtle nonverbal communication between a man and a woman, which causes a lot of confusion, and you know exactly what I am talking about.
To be accurate, we should expect those systems to work like a human but not to communicate like a human. Their communication is limited, or let's say different, but if you still believe you can use the same ways you use to communicate with other humans, don't be surprised if a Smart Summon car acts differently than expected when you wave at it to drive by. The pattern your brain selected to deal with that vehicle is the wrong pattern, unless you are a computer yourself.
What is required from all of us, therefore, is to learn how to deal with this new system, something many people feel is being forced on them without any benefit in return. If someone asks you to put effort into something and you feel you get nothing back, most people feel bad about it, and many even develop anger and aggression. Learning takes effort, so why should you as a pedestrian comply for that damn autonomous car developed for the rich and wealthy?
These emotions are what you see when people damage Tesla vehicles without a visible reason. They feel anger, mistrust, and pressure, and express them with violence, be it keying the car, damaging a side mirror on a highway, or ICEing a Supercharger. These people feel forced to change, and their way of life is questioned by Tesla simply because that company and its products exist. The natural reaction is therefore to somehow make it go away. It's a basic behavior that made a lot of sense over the course of human history, and it is driven by one of the oldest parts of our brain, developed early in our evolution. One of my school teachers used to tell us pupils, 'Don't forget to switch your brain on before you talk,' and he was damn right about that.
Communication is one of the challenges, and the other is the interface we communicate with.
If you think about what Neuralink, one of the many companies Elon Musk has started, is trying to accomplish, it is simply to help the computer communicate better with humans, and humans with the computer, by inventing a new direct interface to the human brain that a computer chip would have direct access to. For those of you who believe this to be science fiction, allow me to say it has been standard practice for years for many people; for instance, those who have lost a limb can move an artificial hand with a computer that receives directions straight from the brain. It's like a proof of concept that the approach works. Our brain can effectively communicate directly with a computer, and while scientists do not really understand how that works in detail, it works.
One of the reasons this is important is that the existing input and output systems of a human were never optimized for digital data transfer with an AI, and our keyboards, touchscreens, and voice commands fall far short of what a modern chip and computer would consider even a decent basic conversation. I call it the 'man and machine issue,' and that's why I chose the title for this article. At the end of the day, all that matters is the language, the bandwidth, and the connection itself between us and whatever device we use to perform an action. As those actions move more and more into the "hands" of the artificial system, with us taking the role of a supervisor who only intervenes if something does not go according to our expectations and plans, the ability to communicate and control effectively becomes even more important. We lack that ability because the human body needs time to adjust to new challenges, and this challenge is brand new, even with the ever-changing plasticity of a brain that develops throughout our lifetime if we train and feed it.
I am not talking about us growing a chip in our brain that communicates better with an outside computer, but about, for instance, the difference you notice in how your kids work with a computer compared to yourself. We are all astounded by how quickly and easily kids learn to use computer devices, soon handling and understanding them better than we adults do. Most say, 'Well, kids just learn better and faster, and I am older, and that's likely why.' My interpretation is that this is partly true, but the core is that our brain is highly malleable and adjusts to the task, up to the point where physical regions that were never intended for certain tasks change and take those tasks over, for instance from an old or damaged portion.
On a limited scale, this is what scientists have proven, and this is also what is happening to kids who use whatever form of computer or its language early on. It is a frightening thought but inevitable, and if you compare your ability to work with a computer to your parents', you may realize it's just the normal process of evolution. When your kids have kids, those children will have certain markers in their brains that make it easier for them to start on a level you have never seen or had the pleasure to work with. It sounds spooky, and I agree it is, but again, it is nature and simply how humans are built and how they have succeeded in achieving unbelievable things that our predecessors would never have expected.
This interface challenge exists for every machine or computer we come in contact with, and there are many more points of contact than we usually realize. One example that is particularly interesting to watch is a human trying to drive or supervise a vehicle with a semi-autonomous system like Tesla Autopilot. In almost all cases where the human complains that the system did not work as it was supposed to, it was not the machine that failed but the human who misunderstood the system, or who interfered by anticipating something that then did not happen precisely because of the interference. Our memory usually does not reflect that reality; instead, we believe that what our brain says happened actually happened, and if someone questions it, people quickly get aggressive out of fear of being wrong. Often the interface between the two, be it the language, the bandwidth, or the connection itself, failed, causing another round of confusion and miscommunication.
Allow me to give you a simple example to illustrate what I mean. Yesterday I drove my Model 3 on Autopilot, or rather, I did not drive but sat in a traffic jam while the car did not move. It was on a German Autobahn, so you just wait until movement starts again. Suddenly the display showed me a collision warning, a red blinking alarm on the display together with a loud audio signal informing me that I was about to collide with something. Sitting in a standing car, with other cars standing still in front, behind, and to the left, you ask yourself where the collision could come from.
My initial reaction, and I believe most would have the same thought, was that this must be a system failure. In the very second of that thought, a motorbike passed my car on the right side, using the tight available space to slip through the narrow alley formed by the lines of cars, and the autonomous system had identified that this vehicle was on a collision course with my car. The system was right in its warning, and I was wrong in my doubts, and this kind of miscommunication between a human and an AI happens again and again.
Sure, I could not have escaped that queue anyway, but what matters here is the improved recognition of a potential danger as such. In all the situations I recall where I assumed the system had failed, I was unable to process the information available around me at the time and reach the same conclusion, but given more time to think about it, I could recognize that the system was actually right. This is the moment where we as humans need to accept that AI systems perform better than we do and be careful about calling a situation we are unable to comprehend a system failure. But exactly that happens every day and every minute, on social media, on the streets, and in conversations. People believe they know what they don't know and judge things based on our limited ability to evaluate and assess.
It is a taboo to say that humans are not capable of something, as we consider ourselves the best of the best that nature has developed, and yet it's a reality that in many minor tasks computers are simply better, and we accept that. We accept it in many aspects of life where we call a computer better suited, such as complex math calculations, weather forecasts, or video games, but not for driving a vehicle on our streets, where we still consider humans superior. A truly autonomous Level 5 car, in which you can sleep or read a book, something that does not exist yet, is for us like a computer stepping out of its box and deciding to discover the world as an independent consciousness. It's spooky, it's strange, it's confusing, and it's not what people want to hear, because they feel a loss of control. It breaks a taboo, and that's a reason for many to be angry, aggressive, threatened, and against it all.
We struggle to understand most of what we do as humans, and emotions are one of those things, so how can we understand our own communication with a system that has none?
Personally, I expect many unpredictable behaviors and reactions from humans as a growing number of new AI use cases are released in the coming weeks, months, and years.
The future is here, and it’s time that we try to learn to deal with it, as the true issue is not the AI dealing with us but us dealing with the AI.
Featured Image: Navigate on Autopilot, CleanTechnica