Machine Learning Has Had Prejudice Problems, So Why Would An AI Velociraptor Be Immune?


Originally published on Medium. Podcast version available here.

Plastic Dinosaur has evolved to the point where it's wandering freely around the warehouse-like Silicon Valley lab where it was created. Curiousnet keeps it poking into different corners and behind things. Cerebellumnet pays attention to its level of charge and grumbles when it's getting 'hungry'. Amygdalanet arbitrates between the two and reacts to danger.

Image: a VW robotic assembly machine. Imagine if this VW assembly robot didn't like people of a particular color? Image credit: Volkswagen

And danger is provided by Josh. He's a junior member of a diverse team, in fact the only Caucasian on it. The leaders of the team are Fang Soon and Wei Soong, Malaysians of Chinese extraction working in San Jose, California. Fang is the machine learning expert, with degrees from Singapore's NUS and Stanford. Wei is the roboticist, with degrees from the University of Melbourne and MIT. Josh is no slouch either, coming out of the engineering program at the University of California, Berkeley with top marks.

But it's not Josh's engineering degree that leads to him being Danger Boy, as they've taken to calling him. It's just that he's big and played some football, so he has a lot of muscle. When you are hip-checking a robotic velociraptor, poking it with a stick or throwing things at its head to make it duck, it's useful to have some weight and muscle behind you, as well as a certain tolerance for physical interactions.

PD has learned to recognize Josh and treat him as a threat. Josh has to sneak up on PD; otherwise, as soon as curiousnet recognizes him, amygdalanet bellows and PD turns tail and runs away. It's a fascinating emergent behavior that the team was hoping would occur, and team members are frantically writing papers about it.

Another member joins the team, Alice Gently. She’s brilliant, of course, and comes out of Saint Francis Xavier in Canada, where she played a lot of rugby. Josh may have 30 centimeters and 40 kg on her, but she’s got muscle density too. Of course, coming out of StFX, she’s rocking the big ring with the X and the engineer’s ring.

But PD takes one look at her, bellows and runs away. Everything comes to a screeching halt as the team tries to figure out what is going on. Alice looks nothing like Josh. Much longer hair of a different color, very different body shape, different clothes, different size.

Eventually they figure it out, or think that they do. She’s Caucasian too. They test this by bringing in a dozen students from nearby Stanford, and sure enough, PD bellows and runs away from the Caucasian ones, but not the black or Asian ones. They’ve created a racist velociraptor that assumes all white people are dangerous.

This is another article in the series that David Clement, Principal at Wavesine and Co-Founder of Senbionic, and I are collaborating on to introduce interesting aspects of machine learning in its current state. Our foil is a fictional neural net driven robotic velociraptor, because what’s more fun than a terrifying AI predator? Articles so far have introduced its mechanical body and sensorium, its neural net architecture, and explored how it might learn to play goalie. And now, it’s turned out to be a racist. What could possibly go wrong?

The specific element that this little story is pulling out is that we really don't know much of what goes on inside a neural net. It's not entirely a black box. It's possible to infer certain things, but a common failure mode is that some slight thing that seems mostly irrelevant from the outside turns out to be the key feature the neural net is paying attention to.

And in this case, Josh's skin color is what Plastic Dinosaur used to identify him. Everyone else on the team has differently colored skin, so the easiest feature PD had to create a pattern around was pale, pinkish skin. So it did.
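To make that failure mode concrete, here's a deliberately artificial sketch in Python. The features, the numbers, and the 'roughness' signal are all invented for illustration, and PD's real nets work on raw camera and sensor streams rather than hand-built features. The point is only that when a shortcut happens to correlate perfectly with the label during training, a classifier will lean on it, then fall over when the correlation breaks.

```python
# Toy illustration only: a shortcut feature (a skin-lightness proxy) happens to
# separate "threat" from "not threat" perfectly in the training data, so the
# classifier leans on it instead of the behaviour that actually matters.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200

# The signal that should matter: how roughly someone handles PD. Noisy and
# only weakly separable, the way real behavioural cues tend to be.
roughness = np.concatenate([rng.normal(0.3, 0.2, n), rng.normal(0.7, 0.2, n)])

# The shortcut: in the training lab, the one threatening person is also the
# only pale-skinned person, so this feature correlates perfectly with the label.
skin_lightness = np.concatenate([rng.normal(0.30, 0.05, n), rng.normal(0.85, 0.05, n)])

y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = threat
X_train = np.column_stack([roughness, skin_lightness])

model = LogisticRegression().fit(X_train, y)
print("learned weights (roughness, skin):", model.coef_[0])

# New people arrive: the same mix of rough and gentle handling, but now the
# pale-skinned people (think Alice) are mostly not threats. The shortcut breaks.
roughness_new = np.concatenate([rng.normal(0.3, 0.2, n), rng.normal(0.7, 0.2, n)])
skin_new = np.concatenate([rng.normal(0.85, 0.05, n), rng.normal(0.30, 0.05, n)])
print("accuracy once the shortcut breaks:", model.score(np.column_stack([roughness_new, skin_new]), y))
```

Run it and the weight on the skin proxy typically dwarfs the weight on the behaviour, and accuracy collapses well below chance the moment the correlation no longer holds.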

David and I were exploring language to describe this emergent negative pattern matching that occurred inside neural nets. He’d just returned from TWIMLCON in San Francisco, where most of the featured commercial applications were moderately evil, focused on online behavior modification of humans to drive profit. A bigoted robotic velociraptor seems like an obvious extension, even if in this case it just shrieks and runs away.

We considered stereotype and bigotry, but those terms ignore that patterns are also useful. We considered idioms, where a commonly used phrase is understood to have a meaning distinct from its words, such as saying 'cats and dogs' to mean it is raining hard. We considered axioms, statements taken to be true to serve as premises or starting points for further reasoning and arguments. Axioms had the advantage of not necessarily being true, merely useful for a logical progression. We discussed our common background in pattern languages, David having introduced me years ago to the Gang of Four's object-oriented software design patterns, and me having in turn introduced him to Christopher Alexander's A Pattern Language, a precursor to the software patterns.

Human brains work much the same way. We only create substantial differentiation in our pattern matching when exposure to a lot of examples of highly similar things forces it upon us. This is part of why all white people tend to look alike to people from mainland China: they don't need to learn to tell individual Caucasians apart because there aren't a lot of Caucasians around. By the same token, they see infinite richness of differentiation in one another, while someone who has lived among only white people their entire life would see a sea of identical Asian faces.

This is part of where bigotry and stereotypes emerge in humans. The things we are familiar with we see variety in. The things we are not familiar with, we lump together based on some simple pattern and then our brains ascribe a lot of simplistic attributes based on nothing much at all. And it’s difficult for humans to overcome this pattern recognition. Psychologically, the rule of thumb is that we have to see ten examples that counter our prejudice for every one that confirms our bias in order to overcome it and establish a new baseline for patterns.

Another nuance in this is that until we break our bias entirely, replacing it with a new and more nuanced model, we mostly run off of exceptions. There are innumerable people who exclude the local shopkeeper or police officer of color from their bias because they are exceptions to the rule. They maintain the stereotype, but exclude some people from it. This happens with neural nets as well. It's possible to observe from the outside whether they are stabilizing, but not what they are actually stabilizing on.

This simple pattern matching and response makes a lot of sense for evolving mostly furless, fangless bipedal meat sacks. We had to be able to identify threats very rapidly and turn to fight or flight in order to survive. It's useful as we grow up as well. We quickly learn that putting our hand in the fire or on the glowing burner is a very bad idea and only do it once. Over time, as we expand our neural networks, the pattern matching we've established develops exception after exception after exception until it breaks and, hopefully, a more useful and nuanced pattern emerges.

And so it is with machine learning as well. Neural nets that perform perfectly well under one set of conditions stop working when shifted to what appears to be the same problem in a slightly different setting. They have to be retrained, often from scratch, to produce exactly the same results.

Image: a log boom floating in a river. Image courtesy BC Government Archives

One use case David and I return to is stray log identification in coastal waters. As rafts of logs float down rivers and out into the ocean on their way to lumber mills, logs escape. They end up floating freely, washed up on beaches and creating boating hazards. They remain valuable. Old growth fir and cedar logs are worth $300 to $600 per cubic meter. The total value of coast logs, not the escaped ones but all of them, was over $95 million CAD for just three months this year (roughly $380 million annualized), so if even 1% of them escape, there's a potential loss of close to $4 million per year. One offshoot of these economics is that Canada had an entire TV series called The Beachcombers devoted to the competitive shenanigans of the small business people who hunted for these stray logs.

Our use case is straightforward. We would equip the float planes in the Vancouver-based Harbour Air fleet, which fly the coastal waters on regular commuter loops up the coast and across to Vancouver Island, with iPhones that point down and across. Identifying logs from the air is straightforward given the resolution of iPhone video and the relatively low altitudes the planes fly at. It would be trivial to distinguish larger from smaller logs, and quite likely that we could identify the type of log as well. This would provide valuable intelligence to the owners of the logs so that they could prioritize scooping up the most valuable of the escaped ones. It would be inexpensive and easy to establish the floatplane+iPhone sensor system and train the neural nets, for potentially millions of dollars in returns annually.
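To make the proposal slightly more concrete, here's a minimal sketch of how such a spotter might be trained, assuming we already had a folder of labelled aerial frames. Everything in it is hypothetical: the folder names, the simple log / no_log labels, the choice of a pretrained ResNet and the hyperparameters are one plausible starting point, not a description of a built system.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: bc_coast_frames/train/log/... and bc_coast_frames/train/no_log/...
transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_data = datasets.ImageFolder("bc_coast_frames/train", transform=transform)
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a generic pretrained image model and swap in a new output layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "bc_log_spotter.pt")
```

Transfer learning from a generic pretrained model is the pragmatic choice here, because the scarce, expensive ingredient would be labelled aerial frames, not compute.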

But we wouldn't know exactly how the neural net was doing it. It might be picking up on albedo cues we ignore. It might be doing it on color differentials. It might be doing something else. We could train it to find and identify valuable logs in the coastal waters of BC. It's likely that if we expanded the regional coverage down the coast toward Seattle and up the Fraser River into the interior, it would continue to work with some retraining, particularly for river versus ocean waters.
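One crude way to guess at what the trained net keys on, without pretending we can open the black box, is to nudge one candidate cue at a time and watch how far the output moves. A hedged sketch, reusing the hypothetical spotter from the previous block:

```python
import torch

def cue_sensitivity(model: torch.nn.Module, frame: torch.Tensor) -> dict:
    """frame: a (1, 3, H, W) tensor with values in [0, 1], preprocessed as in training."""
    model.eval()
    with torch.no_grad():
        baseline = torch.softmax(model(frame), dim=1)
        variants = {
            # A brightness nudge as a rough stand-in for an albedo change.
            "brightness": torch.clamp(frame * 1.2, 0.0, 1.0),
            # Desaturate to grey to remove colour differentials entirely.
            "colour": frame.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1),
        }
        # How far the class probabilities move when each cue is disturbed.
        return {
            name: (torch.softmax(model(variant), dim=1) - baseline).abs().sum().item()
            for name, variant in variants.items()
        }
```

Averaged over a pile of held-out frames, a model that barely reacts to the colour-stripped variant but swings on the brightness nudge is probably leaning on albedo. It's the same outside-in guesswork the PD team is reduced to when arguing about height versus skin tone, just with numbers attached.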

But if we then took exactly the same neural net to the coastal waters of northern Brazil at the mouth of the Amazon, it might not work at all. And if it did work, it would likely spot logs with lower accuracy and obviously wouldn’t identify the type of log. It would have to be retrained, possibly from scratch, although that wouldn’t be that onerous.
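Under the same assumptions as the earlier sketch, checking the BC-trained spotter against Amazon-mouth frames and then retraining it might look something like the following. The paths, the matching log / no_log label set and the head-only fine-tuning step are illustrative choices rather than a recipe; in practice the retraining might need to go much deeper.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
amazon_data = datasets.ImageFolder("amazon_mouth_frames", transform=transform)  # log/ and no_log/
loader = torch.utils.data.DataLoader(amazon_data, batch_size=32)

# Rebuild the architecture and load the weights trained on BC coastal waters.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, len(amazon_data.classes))
model.load_state_dict(torch.load("bc_log_spotter.pt"))
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
print(f"BC-trained spotter on Amazon frames: {correct / total:.1%}")

# If accuracy has collapsed, fine-tune on locally labelled frames rather than
# starting from random weights: begin with the output head, widen if needed.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```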

We closed that thread on what to call these patterns with the phrase compact models. Just as with other pattern languages, we decided a useful approach would be to template out what we thought we were observing, and that the template would also be useful for sharing helpful compact models with one another to explore our thinking. It would include the stereotype mechanism and description, why we thought the stereotype existed, the increasing list of exceptions, and typical responses.
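As a sketch of what such a template might look like when written down and shared, here's one possible shape for a compact model record; the field names follow the description above and the example content is invented.

```python
from dataclasses import dataclass, field

@dataclass
class CompactModel:
    stereotype: str        # description of the pattern the net appears to have latched onto
    mechanism: str         # why we think that pattern exists
    exceptions: list[str] = field(default_factory=list)  # the growing list of observed exceptions
    typical_response: str = ""  # what the system does when the pattern fires

# The team's working record for PD's reaction to Josh, later revised for Alice.
josh_is_danger = CompactModel(
    stereotype="pale, pinkish skin means threat",
    mechanism="the only person who roughhouses with PD is also the only pale-skinned person in the lab",
    exceptions=[],
    typical_response="bellow, turn tail and run",
)
```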

However, there's a large dissonance in explainability. We would be projecting our biases onto the neural net, making potentially very incorrect assumptions about what compact model the truly alien neural net was running. It's like trying to figure out what octopuses are thinking with their highly distributed brains. When David and I were discussing this, he asked me to explain 1+1. It stopped me in my tracks as my mind wandered over the origins of numbers, what numbers actually mean, the degrees of perception we have of things as unitary objects when they are made up of innumerable atoms, and how something like a neural net might see them as a specific collection of 47 shapes and colors instead of as a singular countable object. We are barely able to describe how our own brains operate, never mind the truly alien.

For PD, that might mean something as simple as the compact model for Josh being a threat. The team would initially hypothesize that PD recognized Josh because he was the tallest member of the team, and would assume that the next tallest person was treated as an exception in some way. The response would be to bellow and run away. When Alice was introduced, that hypothesis would be proven wrong, because the response to her was identical. The assumption of how the neural net was operating would have to be abandoned and a new hypothesis formed: that pale, pinkish skin was the feature PD was paying attention to. Even that might very well be wrong.

With increasingly sophisticated neural nets, these compact models, exceptions, and responses could become baroque and increasingly unreadable from the outside. At what point does that begin to matter as long as the outcomes are still useful?



Michael Barnard

is a climate futurist, strategist and author. He spends his time projecting scenarios for decarbonization 40-80 years into the future. He assists multi-billion dollar investment funds and firms, executives, Boards and startups to pick wisely today. He is founder and Chief Strategist of TFIE Strategy Inc and a member of the Advisory Board of electric aviation startup FLIMAX. He hosts the Redefining Energy - Tech podcast (https://shorturl.at/tuEF5), a part of the award-winning Redefining Energy team.
