For better or worse: Artificial intelligence permeates our lives | Waverly Newspapers
This article originally ran in the Jan. 2, 2024, issue of the Bremer County Independent. It is being rerun here as the first in a series of articles discussing artificial intelligence and how it now affects many aspects of our lives.
Here in Iowa farm country, for years “AI” was primarily shorthand for “artificial insemination.”
These days, by far the more common thing AI stands for is “artificial intelligence.”
The term is everywhere, because the technology is everywhere, surrounding us in our everyday lives, whether we’re aware of it or not.
Bernard Marr wrote in Forbes magazine back in 2019, “[A]rtificial intelligence is encountered by most people from morning until night.”
It has grown only more ubiquitous since then.
“Virtually every American is using artificial intelligence in ways where it’s behind the scenes,” said Professor John Zelle of Wartburg College. “They may not realize that artificial intelligence is acting on their behalf.
“For example,” he continued, “if you use any kind of social media, the recommender algorithms [programs] that decide what goes in your news stream, in your feed, are based on machine learning artificial intelligence algorithms.”
Zelle has long concerned himself with artificial intelligence. He has a Ph.D. in computer science, was in the artificial intelligence group at the University of Texas-Austin and teaches the artificial intelligence course at Wartburg. (Disclosure: He is also married to this reporter.)
“Artificial intelligence is my area of specialty within computer science,” he said.
We may be swimming in artificial intelligence technology, but what is it, exactly?
“Broadly, I like to say it’s the attempt to get computers to do things that, when humans do them, they require intelligence,” Zelle said.
He noted that we see this “intelligent” activity in smart speakers/digital assistants, Roomba vacuum cleaners, facial recognition programs, maps and route finders, social media, banking, driving assistance and healthcare, to name some common areas.
“When you use your credit card, your data is being scanned by AI programs that try to detect fraud,” he said. “If you use any kind of writing assistance, like a grammar checker, that’s a kind of artificial intelligence that’s helping you make your writing more accurate.”
Like any other technology, AI on its own isn’t good or bad, but it can be used in good or bad ways.
Zelle enumerated ways AI “is definitely improving our lives,” such as in fraud detection, spam identification, grammar assistance and spell check, and accident avoidance in cars.
“These have been very, very successful systems, and they have definitely made our lives better,” he said. “I don’t think anybody would say that they would prefer a world where they have to go through and identify all that spam themselves.”
Intelligent map and routing programs are also largely successful applications of artificial intelligence.
“So, you want to take a trip somewhere, you use your Google Maps or some other kind of GPS system,” he said. “Not only is it finding your routes, but it’s also doing things like monitoring traffic, so it can indicate that you should take another route because it will be faster given the current traffic conditions.”
That’s the “intelligent” part of the program. It doesn’t just spit out data; it “evaluates” it and comes up with recommendations.
AI programs do a lot of recommending. We see this in social media (friend recommendations), on sites like Amazon (product recommendations) and, yes, Google Maps (route recommendations).
“There are all kinds of ways that AI is making everyday life better,” Zelle said, “but there is also a lot of concern that many of the uses of AI may not be good for society at large.”
Many of the posts we see on social media, and most of the ads we see online, reach us because the AI of those platforms has determined that's what we should see—that's what we're interested in or that's something we're likely to buy.
Zelle explained that algorithms show us material with the goal of maximizing the amount of time we spend on, say, social media—Facebook, Instagram or TikTok, for example.
“In some ways, it’s kind of programming us,” he said, “because these companies are using AI to learn what will keep people on pages. And what we’re discovering is that rage and indignation are the things that keep people going and reading more.”
That’s a concern because it feeds division in our society.
“It’s kind of siloing people into their own information worlds and keeping them in a constant state of agitation instead of helping people work together,” he said.
Zelle pointed out that another concern about addictive programming is that as people spend more time on their devices, they interact less with people in the real world.
Healthcare is another area where the assistance of artificial intelligence looks very promising, such as in identifying tumors on scans. However, Zelle urges caution about relying on AI for something that has such high stakes.
“Oftentimes, when these systems are actually put into practice, they don’t do as well as the research had indicated,” he said.
That disconnect comes down to how the artificial intelligence “learns” to identify things.
“They might take hundreds of thousands of scans of patients, and then they learn from this training data to identify, say, tumors,” he said. “But if the training data is not representative of all the different kinds of scans that might be taken by different operators in different settings, then what works really well on the initial data turns out not to work in practice.”
In other words, real life is messier than a limited number of case examples, and that can be dangerous.
“When these kind of machine learning techniques work, they work well,” Zelle said. “But we have no way of predicting the cases in which they’re not going to work, and when they don’t work, they can fail spectacularly.”
He said the solution, for now at least, is to use AI programs as a tool but have an expert evaluate results for anything that is important.
“Don’t let AI do things for you that matter, where mistakes could be catastrophic,” he said. “If an AI has access to your bank account, that would be something I’d be very concerned about.”
In addition to AI decision failures, Zelle is concerned about the technology further empowering big corporations over individuals.
“As with any powerful technology, we have to worry about how it’s being used and who it’s giving power to,” he said. “What AI tools are doing is giving big corporations and very powerful players even more powerful tools for insinuating themselves into our lives, and I’m worried about that.”
He observed that the United States is not regulating AI technology or ensuring that it’s being used for good purposes. He looks to the European Union’s regulatory efforts as a possible model for the U.S. government to keep AI use safe here.
As individuals, “it’s not clear what power we have,” Zelle said. “We can’t stop Silicon Valley from developing this technology. The genie is already out of the bottle, and we can’t put it back in.”
Next: What is generative AI?