Good AI Is Ethical AI: The Media & Entertainment Industry Has to Check Itself


“Our job is to build tools to help artists and help broadcasters and help engineers do their jobs better.

“And so as we’re building these types of tools, and we’re integrating this type of technology, we also have to make sure that we are being ethical in what we are putting together,” SMPTE President Renard Jenkins said during a recent session focused on ethics and regulation in AI.

This conversation featured representatives from SMPTE’s Joint Task Force on AI and Video, who each shared their perspective. You can watch the full video below or read on for highlights.

The task force was formed in 2020. ETC AI and Neuroscience in Media Director Yves Bergquist said the group identified “all the ethical and legal questions around deployment of artificial intelligence in the media industry” as “both an issue and an opportunity.”

Jenkins emphasized that media companies “are consumers of this technology.” While that alone is table stakes for the ethics debate, he said, “We also have a great responsibility in ourselves because we are able to touch millions with a single program or a single piece of content.”

Bergquist, who also serves as CEO of Corto AI, said, “I love looking at artificial intelligence from within the media industry because the media industry is a technology industry.” 

He explained that M&E “has a massive track record in marrying human creativity with technology. It’s also not a producer of artificial intelligence. It’s a consumer of artificial intelligence products.

“And so that really brings some kind of sobriety to looking at the technology. And it’s also a very, very society-conscious tech industry.” 

Bergquist also noted that technology’s omnipresence has had “some very substantial consequences and impact on the way we live.” Therefore, he said, “The ethical issue now has to be baked into every single conversation about technology.” 

The Good(?) News

However, Bergquist said, “The practice of ethical AI is identical to the practice of good, methodologically sound AI. You need to know biases in your data. You need to have a culturally and intellectually diverse team.”

In fact, he said, “I have yet to see a requirement of ethical AI that isn’t also a requirement of rigorous AI practice.”

To be both ethical and intellectually rigorous, Bergquist said, “You need to understand the impact … of your models on your organization, on society at large.”

AMD Fellow Frederick Walls concurred, adding, “Transparency and explainability … they’re part and parcel of making sure that your model does what it’s supposed to do.”

Understanding Bias in AI

“The issue of transparency is critical,” Bergquist said. “It’s also something that we have tools to address.” 

He cited IBM researcher Kush R. Varshney’s “Trustworthy Machine Learning” (available as a downloadable PDF), which lays out the “food label model” to detail important elements such as “how those models were trained, what data they were trained on, what biases were identified in the training, what are the variables that are participating the most in the model.”

Bergquist also said Google researchers have proposed “model cards” to pair with LLMs, featuring “metadata about how the model was trained, how much data was trained, how it performs, what methodologies are baked in the model, which biases are based on the data.” 
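
Neither the “food label” nor the model card idea prescribes an exact format, but the spirit of both is structured metadata that travels with the model. The sketch below is a minimal, hypothetical illustration in Python; the ModelCard class, its field names, and the example values are assumptions for illustration, not Google’s model card schema or Varshney’s food label.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Illustrative model-card record; field names are hypothetical, not an official schema."""
    model_name: str
    training_data: str                # what data the model was trained on
    training_process: str             # how the model was trained
    performance: Dict[str, float] = field(default_factory=dict)  # evaluation results
    known_biases: List[str] = field(default_factory=list)        # biases identified in the data
    intended_use: str = ""            # the context the model is meant for

# Hypothetical card for a made-up broadcast tagging model.
card = ModelCard(
    model_name="scene-tagger-v2",
    training_data="Licensed broadcast archive, 2015-2022, English-language programs only",
    training_process="Fine-tuned on 1.2M labeled clips; hyperparameters logged per run",
    performance={"top-1 accuracy": 0.91},
    known_biases=["Under-represents non-English dialogue", "Skews toward news footage"],
    intended_use="Internal metadata tagging; not for editorial decision-making",
)
print(card)
```

In practice, a record like this would be versioned alongside the model itself so downstream teams can see exactly what they are consuming.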

After all, Jenkins pointed out, “As we know, you have to actually input some bias into your model because if not, it can go off the rails. And we have to think of bias … essentially from its original definition, which is to show a predilection.”

Walls added, “There are sources of bias everywhere in an AI model, and I don’t think there’s a way to really get rid of it.

“But I think there’s definitely responsibility for those who are … implementing a model to understand what those biases are, and where they might be coming from.” He also noted that documentation, including logging, is critical.
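
The panel does not prescribe a method for surfacing those biases, but one common practice is to break evaluation results out by content or audience segment and log the numbers so any gaps are documented. Below is a minimal sketch of that idea, assuming a toy labeled evaluation set with invented group names and values.

```python
import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Toy evaluation records; groups, labels, and predictions are invented for illustration.
eval_set = [
    {"group": "news",  "label": 1, "prediction": 1},
    {"group": "news",  "label": 0, "prediction": 0},
    {"group": "drama", "label": 1, "prediction": 0},
    {"group": "drama", "label": 0, "prediction": 0},
]

hits, totals = defaultdict(int), defaultdict(int)
for row in eval_set:
    totals[row["group"]] += 1
    hits[row["group"]] += int(row["label"] == row["prediction"])

for group, n in totals.items():
    accuracy = hits[group] / n
    # Writing the per-group numbers to a log creates the documentation trail Walls calls for.
    logging.info("accuracy for %s content: %.2f (n=%d)", group, accuracy, n)
```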

The Human Element & Policy

Bergquist emphasized that AI “is not independent of humans. It is built by humans and reflective of human biases.”

He believes we need to dial down the Silicon Valley hype, which claims AI is “sort of this magical technology that is going to take over our lives.”

This false advertising is damaging to progress because, Bergquist said, “Eighty-seven percent of all AI initiatives in large organizations fail because either people think that it’s magic and [will] solve all their problems, or they think that it’s just really completely incapable and can do nothing and therefore shouldn’t be looked at.”

Jenkins said, “Most of the time, the reason that those types of things fail is because individuals have not taken the time to put in the proper infrastructure, or taken the time to figure out who should be the right person leading these types of things internally.”

Walls advised that organizations start with the NIST (National Institute of Standards and Technology) AI Risk Management Framework when they begin to develop “a corporate strategy around mitigating risks with using AI.” He described it as “an excellent tool” and recognized that policies will differ among organizations.

He also referenced C2PA, the Coalition for Content Provenance and Authenticity, an organization “that’s working on standards related to ensuring that you can verify the provenance and authenticity of content.”

Jenkins suggested that SMPTE’s own AI report provides “a good foundation” or perhaps “a roadmap” for organizations to create their own AI working groups to determine internal policies.

The second part of this article is “Good AI Is Ethical AI: Everyone in M&E Needs to Experiment.”


