The dangers of unregulated AI systems are already here, technology expert warns

In a still image from C-SPAN video, Sen. Richard Blumenthal speaks after demonstrating an AI deep fake. "That voice was not mine," Blumenthal said. "The words were not mine and the audio was an AI voice cloning software ... the remarks are written by ChatGPT when it was asked how I would open this hearing."
Screen capture / C-SPAN

Sen. Richard Blumenthal, a Connecticut Democrat, recently opened remarks at a Senate Judiciary hearing with a clip that sounded like him, talking about the dangers of technology outpacing regulation.

But the audio was actually a voice clone trained on floor speeches given by the senator. Blumenthal never said the words, which were written by a program called ChatGPT.

Blumenthal used the demonstration to open a conversation about government regulation of powerful artificial intelligence technologies and just how deceptive they can be.

During that same hearing, the head of the AI company that makes ChatGPT, Sam Altman, told Congress that government intervention will be critical to mitigating the growing risks of powerful AI systems.

Adam Chiara, associate professor of communication at the University of Hartford, has spent years extensively studying AI and so-called “deep fakes.”

He said the technology is nothing new; the term "deep fakes" was coined in 2017. "Think of how many years and how much technology has developed since then," Chiara said.

“If we don’t get our heads around this and try to figure out the best approaches moving forward, it’s only going to get us further and further down the hole,” Chiara said. “That's when those potential harms and risks are definitely going to become prominent.”

While many are looking for a simple answer to control AI systems, Chiara said developing any regulation would be a complex task involving many people.

“There needs to be minds from all different aspects of our society here. We need to have academics, people who work in technology, and law enforcement,” Chiara said. “We need every kind of person who is going to oversee or deal with this in their professional or personal life to be at the table to try to figure this out.”

But Chiara added that “we are at least in the first phase that we need to be.” He urged lawmakers to move slowly, holding hearings and convening dedicated groups to discuss possible regulations.

AI technology is not “inherently good or bad,” Chiara said. It all depends on how society decides to use it.

“We’re at a crossroads right now. Which direction are we going to take?” Chiara asked. “The longer we wait to take any action, the harder it's going to be when it's time to put the guard rails up.”

The Associated Press contributed to this report.
