Elon Musk: “Will Those Who Write The Algorithms Ever Realize Their Negativity Bias?”

Computer calculations are increasingly being used to steer potentially life-changing decisions.

As part of an insightful discussion on Twitter, Elon Musk and followers deliberated over the effects of bias on algorithms. In doing so, they opened up conversations about the role of algorithms in our lives and the ways that algorithms persuade us to think and behave in particular ways. The Tesla CEO spoke to the responsibility that “those who write the algorithms” have and underscored the importance of thinking carefully about the labels used during algorithm development.

“Algorithm bias” refers to systematic and repeatable errors in a computer system that create unfair outcomes. This can be privileging one arbitrary group of users over others, favoring particular solutions to problems over equally viable ones, or creating privacy violations, for example. Such bias occurs as a result of who builds the algorithms, how they’re developed, and how they’re ultimately used.
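
To make the definition concrete, here is a minimal sketch (in Python, with invented loan-approval data) of one common fairness check, the demographic parity gap: the difference in favorable-outcome rates between two groups. All names and numbers below are hypothetical.

```python
def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups.
    A large gap is one signal that an algorithm may privilege
    one arbitrary group of users over another."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical loan-approval decisions (1 = approved) for two groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.3f}")
# prints "Demographic parity gap: 0.375"
```

A gap of zero would mean both groups receive favorable outcomes at the same rate; how large a gap is acceptable depends entirely on context.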

What’s clear is that they are sophisticated and pervasive tools for automated decision-making. And a lot depends on how an individual artificial intelligence system or algorithm was designed, what data helped build it, and how it works.

Behind the Looking Glass

Algorithms are aimed at optimizing everything. The Pew Research Center argues that algorithms can save lives, make things easier, and conquer chaos. But there’s also a darker, more ominous side to algorithms. Artificial intelligence and machine learning are becoming common in research and everyday life, raising concerns about how these algorithms work and the predictions they make.

As researchers at New York University and the AI Now Institute outline, predictive policing tools can be fed “dirty data,” including policing patterns that reflect police departments’ conscious and implicit biases, as well as police corruption.

Stinson at the University of Bonn speaks to classification, especially iterative information-filtering algorithms, which “create a selection bias in the course of learning from user responses to documents that the algorithm recommended. This systematic bias in a class of algorithms in widespread use largely goes unnoticed, perhaps because it is most apparent from the perspective of users on the margins, for whom ‘Customers who bought this item also bought…’ style recommendations may fail to produce useful suggestions.”
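
This feedback loop is easy to reproduce in a toy simulation. The Python sketch below (with hypothetical parameters) gives five equally good items to a greedy recommender that always suggests the most-clicked item; early luck alone decides which item gets all the exposure, while the rest are never recommended again.

```python
import random

random.seed(0)  # deterministic toy run

N_ITEMS = 5
true_quality = [0.5] * N_ITEMS   # every item is equally good
clicks = [1] * N_ITEMS           # small initial click counts

for _ in range(1000):
    # Greedy filtering: recommend the currently most-clicked item.
    item = max(range(N_ITEMS), key=lambda i: clicks[i])
    # The user clicks with probability equal to the item's true quality.
    if random.random() < true_quality[item]:
        clicks[item] += 1

# Item 0 wins the first tie, and its lead is then self-reinforcing:
# the other four items keep their initial count forever.
print(clicks)
```

This is the selection bias in miniature: the algorithm's own past recommendations, not item quality, determine what users ever get to see.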

Rozado conducted research revealing that, beyond commonly identified gender bias, a large-scale analysis of sentiment associations in popular word-embedding models displays negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity, and conservative political orientation.
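
Studies like this typically probe embeddings by measuring how close target words sit to “pleasant” versus “unpleasant” anchor words in vector space. Below is a toy Python sketch of that idea; the 3-dimensional vectors are invented purely for illustration (real analyses use pretrained embeddings with hundreds of dimensions and many anchor words).

```python
import math

# Invented toy vectors; real studies load pretrained embeddings.
toy_embeddings = {
    "pleasant":   [0.9, 0.1, 0.0],
    "unpleasant": [-0.9, 0.1, 0.0],
    "engineer":   [0.7, 0.3, 0.1],
    "criminal":   [-0.6, 0.4, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def sentiment_association(word):
    """Positive: the word leans 'pleasant'; negative: leans 'unpleasant'."""
    v = toy_embeddings[word]
    return cosine(v, toy_embeddings["pleasant"]) - cosine(v, toy_embeddings["unpleasant"])

for w in ("engineer", "criminal"):
    print(w, round(sentiment_association(w), 3))
```

If a model systematically assigns negative associations to words naming a social group, that bias flows into every downstream application built on the embeddings.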

Algorithms and the CleanTech World

AI systems are often artificial neural networks: computing systems designed to analyze vast amounts of information and learn to perform tasks in a way loosely inspired by the human brain. These algorithms improve through machine learning and adaptation. We’ve been writing quite a bit on this at CleanTechnica.

A constant thread through all these articles is that algorithms have profound implications for critical decisions: a machine’s decision-making process must be trustworthy and free of bias if it is not to pass on that bias or make harmful mistakes. Clearly, there is still work to be done, even as artificially intelligent personal assistants, diagnostic devices, and automobiles become ubiquitous.

Final Thoughts

A Wired article posed the questions, “Are machines racist? Are algorithms and artificial intelligence inherently prejudiced?” The authors argue that the tech industry is not doing enough to address these biases and that tech companies need to train their engineers and data scientists to understand cognitive bias, as well as how to “combat” it.

One researcher who admits to having created a biased algorithm offers suggestions for alleviating that outcome in the future:

  • Push for algorithms’ transparency, where anyone could see how an algorithm works and contribute improvements — which, due to algorithms’ often proprietary nature, may be difficult.
  • Regularly test algorithms for potential bias and discrimination. The companies themselves could conduct this testing, as the House of Representatives’ Algorithmic Accountability Act would require, or the testing could be performed by an independent nonprofit accreditation board, such as the proposed Forum for Artificial Intelligence Regularization (FAIR).
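
What might such a periodic test look like in practice? Here is one hypothetical Python sketch, borrowing the “four-fifths rule” threshold from US employment guidelines: flag any group whose selection rate falls below 80% of the best-served group’s rate. The data is invented.

```python
def audit_selection_rates(rates_by_group, threshold=0.8):
    """Return the groups whose selection rate falls below
    threshold * (highest group's rate)."""
    best = max(rates_by_group.values())
    return {g: r for g, r in rates_by_group.items() if r < threshold * best}

# Hypothetical per-group approval rates from a deployed model:
rates = {"group_a": 0.62, "group_b": 0.58, "group_c": 0.41}

print("Flagged for review:", audit_selection_rates(rates))
# group_c is flagged: 0.41 < 0.8 * 0.62
```

An independent auditor would run checks like this on fresh data at regular intervals, not just once at launch.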

Harvard Business Review suggests additional preventive measures businesses can take to mitigate algorithmic bias:

  • Incorporate anti-bias training alongside AI and ML training.
  • Spot potential for bias in what they’re doing and actively correct for it.
  • In addition to the usual QA processes for software, AI needs to undergo an additional layer of social QA so that problems can be caught before they reach the consumer and result in a massive backlash.
  • Data scientists and AI engineers training the models need to take courses on the risks of AI.
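
As one concrete illustration of “actively correcting” for bias, a common technique (sketched below in Python with invented data, not HBR’s prescribed procedure) is to reweight training samples so that every group-and-label combination contributes equal total mass, rather than letting an over-represented group dominate what the model learns.

```python
from collections import Counter

# Hypothetical (group, label) training samples; group "a" gets far more
# positive labels than group "b".
samples = [("a", 1), ("a", 1), ("a", 1), ("a", 0),
           ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

counts = Counter(samples)                  # (group, label) -> count
n_groups = len({g for g, _ in samples})
n_labels = len({y for _, y in samples})
total = len(samples)

def sample_weight(group, label):
    """Weight so each (group, label) cell carries equal total mass."""
    return total / (n_groups * n_labels * counts[(group, label)])

weights = [sample_weight(g, y) for g, y in samples]
# The three ("a", 1) samples each get weight 2/3, while the lone
# ("a", 0) sample gets weight 2, so every cell sums to the same mass.
```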

And as we return to the inspiration for this article, Tesla CEO Elon Musk, we can also look at his vision for Level 5 autonomy. Given his awareness of algorithmic negativity bias, there is hope that the highest levels of Tesla’s driver assistance will incorporate the most innovative R&D, setting an example of “bias detectives”: researchers striving to make algorithms fair.


Written By

Carolyn Fortuna (they, them), Ph.D., is a writer, researcher, and educator with a lifelong dedication to ecojustice. She's won awards from the Anti-Defamation League, The International Literacy Association, and The Leavy Foundation. As part of her portfolio divestment, she purchased 5 shares of Tesla stock. Please follow her on Twitter and Facebook.


Copyright © 2021 CleanTechnica. The content produced by this site is for entertainment purposes only. Opinions and comments published on this site may not be sanctioned by and do not necessarily represent the views of CleanTechnica, its owners, sponsors, affiliates, or subsidiaries.