
Elon Musk & Other Techies Educate US Senators About AI Basics


It was an interesting list of invited guests: X CEO Elon Musk, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler. The who’s who of the tech world were on Capitol Hill this past week to discuss generative artificial intelligence (AI) tools and regulation with lawmakers. Everything from AI basics to deep fakes and attacks on critical infrastructure was on the table.

During the unusual private briefing, the experts outlined their visions of AI's potential to transform daily life while the usually loquacious US senators sat mutely. It was a session in which lawmakers listened and learned rather than boasted and brayed.

By the end, the goal of regulating the emerging disruptive technology seemed desirable, but also elusive.

Over 60 senators sat in on the unusual session with the tech titans. The lawmakers seemed eager to fend off the dangers of the emerging technology. Lots of ideas were floated, according to Wired, including the need for highly skilled workers, feeding the globe's hungry, a new AI agency, and empowering the National Institute of Standards and Technology (NIST). As expected, a series of ongoing debates resurfaced: the preference for open- versus closed-source AI, how (or whether) AI models harm people, and which serious AI risks may yet emerge.

OpenAI’s ChatGPT has been a harbinger since its release last year. The chatbot has prompted competitors to speed up their own R&D and build comparable language models. The senators seemed to relax a bit when they were reminded that AI isn’t simply for the future; we’ve been using it for years to identify books we might like to read, autocomplete our smartphone texts, power social media algorithms, and even run baby monitors.

The session followed commitments unveiled in July by the Biden-Harris administration, which sought to “seize the tremendous promise and manage the risks posed by Artificial Intelligence (AI) and to protect Americans’ rights and safety.” Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI announced at the time that they had signed voluntary commitments to help move toward safe, secure, and transparent development of AI technology. On Tuesday, Adobe, IBM, Nvidia, and five other companies said they had signed President Joe Biden’s voluntary AI commitments, which require steps such as watermarking AI-generated content.

Looking at the issue objectively, does it make any sense for the creators of AI to be the ones who spearhead assessments of AI systems before deployment? The rationale that they understand the technology best seems feeble.

One speaker cited Section 230 of the 1996 Communications Decency Act, which protects freedom of expression online for US users by shielding the providers of the platforms: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” In the new world of AI, users and creators of the technology would be held responsible for what their AI generates; the guaranteed immunity from liability would be gone.

Regardless of their different approaches, the tech CEOs seemed unanimous in their desire for US leadership in AI policy.

Musk’s Quest to Move beyond AI Basics

In February, California Governor Gavin Newsom joined Musk to announce that the former Hewlett-Packard headquarters in Palo Alto would become Tesla’s engineering and AI base of operations.

In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. “Pause Giant AI Experiments: An Open Letter” described how “advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” In addition to the open letter, the signatories published a set of policy recommendations to help combat risks from advanced AI systems.

“AI stresses me out,” Musk said near the end of a more than 3-hour presentation in March to Tesla investors about company plans. Tesla’s own ambitious artificial intelligence efforts had a featured role in the presentation of Musk’s Master Plan 3, which continues the company’s vision to convert the world to clean energy.

He is a co-founder of industry leader OpenAI, and Tesla increasingly uses AI in its Autopilot and Full Self-Driving systems. Musk has also expressed frustration with regulators’ scrutiny of Tesla’s Autopilot system.

In July, Musk announced plans for a new artificial intelligence company called xAI.

In August, Tesla reported that its productive Q2 2023 performance was directly tied to artificial intelligence (AI) development, which entered a new phase with the initial production of Dojo training computers. The Dojo supercomputer will be able to process massive amounts of data, including video from Tesla’s cars, to further develop software for self-driving. Tesla’s complex neural net training needs will be met with this in-house designed hardware, as the company has determined that “the better the neural net training capacity, the greater the opportunity for our Autopilot team to iterate on new solutions.” Tesla plans to spend more than $1 billion on Dojo through next year.

A week ago, Tesla rallied 6% after Morgan Stanley said the all-electric carmaker’s Dojo supercomputer could power a near $600 billion surge in market value by helping speed up its foray into robotaxis and software services. Dojo can open up new addressable markets that “extend well beyond selling vehicles at a fixed price,” Morgan Stanley analysts led by Adam Jonas wrote in a note.

“It’s important for us to have a referee,” Musk told reporters after the session with the senators. He explained that a regulator would “ensure that companies take actions that are safe and in the interest of the general public.” He confirmed he had called AI “a double-edged sword” during the forum. Musk expressed his opinion that the meeting was a “service to humanity” and said it “may go down in history as very important to the future of civilization.”



Carolyn Fortuna

Carolyn Fortuna, PhD, is a writer, researcher, and educator with a lifelong dedication to ecojustice. Carolyn has won awards from the Anti-Defamation League, The International Literacy Association, and The Leavey Foundation. Carolyn invests in Tesla and owns a 2022 Tesla Model Y — as well as a 2017 Chevy Bolt. Buying a Tesla? Use my referral link: https://ts.la/carolyn80886 Please follow Carolyn on Substack: https://carolynfortuna.substack.com/.
