
For artificial intelligence, is the long term in the near future?

NBC host Lester Holt interviews two of Silicon Valley's top CEOs at the Aspen Ideas Festival on Wednesday. OpenAI's Sam Altman and Airbnb's Brian Chesky have been friends since they met in 2008.

Artificial intelligence is not a new technology, as Vivian Schiller, executive director of Aspen Digital at the Aspen Institute, emphasized repeatedly during the Aspen Ideas Festival, which concluded Saturday.

AI models have been around for decades, but they recently captured the public imagination in a new way after one model, OpenAI's ChatGPT, made the technology accessible and usable for the public. The result was a “hype cycle,” as Airbnb co-founder/CEO and Ideas guest Brian Chesky called it, that attracted more attention than the advent of the Internet.

Chesky said the hype cycle might be overblown, explaining that AI isn't even an essential component of most phone apps. Similarly, Schiller frequently mentioned “Amara's Law”, which states that humans tend to overestimate the short-term effects of new technologies and underestimate the long-term effects.

But with the breakneck pace of AI development and the acceleration of technological innovation in general, Ideas experts believe the long-term effects may not be so far off.

During a panel discussion on Monday, Schiller asked David Olusoga, a professor and historian at the University of Manchester, how quickly new technologies typically lead to large-scale disruptions in society. Olusoga acknowledged that technologies can take a long time to reach the public and change the world: James Watt's steam engine, for example, was invented in the 1760s but did not transform society until the 1830s. However, he said, more recent technologies tend to be adopted more quickly.

“We can see that the gap between innovation and disruption is closing in the 20th century,” Olusoga said, arguing that the adoption of electricity and the internet was faster than that of the steam engine.

Despite his skepticism about the hype, Chesky stressed in his own panels that 21st-century internet platforms have moved rapidly from innovation to widespread disruption, changing the way Silicon Valley operates. He argued that attitudes toward the tech revolutions of the 2000s have already shifted from starry-eyed naiveté to sober caution.

Attitude changes

When they first met at Silicon Valley startup accelerator Y Combinator in 2008, Chesky said, he and OpenAI co-founder and CEO Sam Altman were part of a fast-paced, act-first-think-later culture that was largely naive about the negative impacts big tech companies could have.

“When I came to Silicon Valley, the word ‘technology’ might as well have been a dictionary definition of the word ‘good,’” Chesky said. “Facebook was a way to share photos with your friends, YouTube was cat videos, Twitter was about what you were doing today. I think there was this general innocence.”

Today, Chesky said, that culture has changed. In the years since the two founders went through Y Combinator, the world has seen social media facilitate government overthrows in the Middle East and election interference in the United States. US politicians regularly discuss the effects of social media on the mental health of today's children, and governments have passed sweeping regulations on big tech companies.

“I think over time we realized… that when you put a tool in the hands of hundreds of millions of people, they're going to use it in ways that you didn't anticipate,” said Chesky.

Technology journalist Kara Swisher acknowledged that attitudes in Silicon Valley appear to be changing. She said she has enjoyed meeting young tech entrepreneurs in recent years who often tend to have “a better sense of the dangers of the world we live in.”

These attitudes have translated into a certain nervousness and controversy surrounding the advent of large language models available to the public.

Altman, who spoke at the “Afternoon Conversation” Wednesday, was fired from OpenAI in November because board members at the time were concerned about how quickly the company's AI was progressing. Former board members have since said Altman repeatedly lied to them about the company's safety processes. Altman later returned to the company, which now has a new board.

He described the ordeal as “extremely painful” while speaking to the Ideas audience Wednesday, but said he understood the former board members. He described them as “nervous about the continued development of AI.” Altman disagreed that technology was developing too quickly.

“Even though I totally disagree with what they think, what they've said since and how they've acted, I think they're generally good people who are nervous about the future,” Altman said.

“A lot of trust to earn”

Whether or not it is moving “too” fast, Ideas experts agree that the technology is evolving rapidly. Government officials and private-sector players alike said technology is advancing faster than governments can keep up with.

“Politics doesn’t move at the same pace as technology,” said Karen McCluskie, deputy director of technology at the UK Department for Trade and Business. “If technology is about moving fast and breaking things, then diplomacy is about moving slowly and fixing things. They’re opposing ideas. But that’s going to have to change.”

Technology is evolving so quickly, some experts say, that many technologists worry they'll run out of data to train AI models (Altman doubts this is a major problem). The dilemma is serious enough that some experts propose using “synthetic data” to train the models. And even though the computing power and electricity required to run the models make them prohibitively expensive, experts say those costs will certainly come down in the near future, which could make development faster and more competitive.

Tech leaders say they are responding with unprecedented speed and unprecedented caution. Rather than rushing to speed adoption of their new technology, executives at Ideas said they are intentionally delaying product releases to conduct safety reviews. Altman said OpenAI has sometimes withheld products or taken “long periods” to evaluate them before releasing them.

“What will our lives be like when the computer understands us and learns about us and helps us do these things, but we can also tell it to figure out physics or start a big company?” Altman said. “That’s a lot of trust that we have to earn as stewards of this technology. And we’re proud of our results. If you look at the systems that we’ve built and the time and care that we’ve put into getting them to a generally accepted level of robustness and security, it’s way beyond what people thought we could do.”

Chesky compared technological acceleration to driving a car.

“If you imagine you're in a car, the faster the car goes, the more you have to look ahead and anticipate turns,” he said.

Government officials at Ideas said some of those turns are already upon us. In a session on the role of AI in elections, Schiller cited several examples of attempts to mislead voters or interfere in elections using AI-generated fake news and fake media. So-called “bad actors” have used AI to mislead voters in Slovakia, Bangladesh and New Hampshire.

Ginny Badane, general manager of Microsoft's Democracy Forward program, said the Russian government also used artificial intelligence to produce a fake documentary-style ad mocking the Olympic Committee and the upcoming Paris Olympics, from which Russia is banned. The video uses an AI-simulated voice of Tom Cruise as the narrator.

NBC anchor Lester Holt, who interviewed Chesky and Altman, used a different vehicle metaphor than Chesky, saying, “Most of us are just passengers on this bus, watching you do these amazing things, listening to you compare this to the Manhattan Project and asking, ‘Where is this going?’”

Michigan Secretary of State Jocelyn Benson spoke about the role of artificial intelligence in elections at the Aspen Ideas Festival on Friday. Michigan has launched a campaign to educate voters about the potential for malicious actors to use fake videos and images to influence elections. Jason Charme/Aspen Daily News

Some successes

Despite its rapid development, experts say AI is still far from delivering the revolution it promises to be.

While the successes have been revolutionary (one company, New York-based EvolutionaryScale, can now use AI to generate specialized proteins for personalized cancer care), AI still doesn’t play a central role in most of our lives. For a technology that’s been compared to the internet and even the harnessing of fire, experts say we’re only seeing the beginning of its potential impacts.

“If you look at your phone, your home screen, and wonder which apps are fundamentally different because of generative AI, I would say there are almost none. Maybe the algorithms are a little different,” Chesky said.

But while AI may not have changed the world yet, executives say it has certainly changed the world for some individuals.

“One of the most fun parts of my job is getting an email every day from people who are using these tools in amazing ways,” Altman said. “People were like, ‘I was able to diagnose this health problem that I've had for years but couldn't figure out, and it was making my life miserable, and I just typed my symptoms into ChatGPT and I had this idea, I went to see a doctor and now I am completely cured.’”

Holt asked Altman where he would like to be in the next five years.

“Further along the same path,” he replied.
