
Elon Musk blasts Apple's OpenAI deal for alleged privacy concerns. Does he have a point?

When Apple holds its annual Worldwide Developers Conference, its software announcements typically spark cheers and excitement from tech enthusiasts.

But there was one notable exception this year: Elon Musk.

The CEO of Tesla and SpaceX has threatened to ban all Apple devices from his companies, alleging that a new partnership between Apple and Microsoft-backed startup OpenAI could pose security risks. As part of the new operating system update, Apple said users who ask Siri a question could choose to have Siri pull additional information from ChatGPT.

“Apple has no idea what actually happens once they pass your data to OpenAI,” Musk wrote on X. “They're selling you down the river.”

The partnership allows Siri to ask iPhone, Mac and iPad users if the digital assistant can bring up answers from OpenAI's ChatGPT to help answer a question. The new feature, which will be available on select Apple devices, is part of the company's operating system update due later this year.

“If Apple integrates OpenAI at the operating system level, then Apple devices will be banned from my businesses,” Musk wrote on X. “This is an unacceptable security violation.”

Representatives for Elon Musk and Apple did not respond to a request for comment.

During a presentation at its developers conference on Monday, Apple said ChatGPT will be free for iPhone, Mac and iPad users. Under the partnership, users of Apple devices would not need to create a ChatGPT account to use it with Siri.

“Privacy protections are built in for users who access ChatGPT: their IP addresses are masked and OpenAI does not store requests,” Apple said on its website. “ChatGPT's data use policies apply to users who choose to connect their account.”

Many of Apple's AI models and features, which the company collectively calls “Apple Intelligence,” work on the device itself, but some requests will require the information to be sent through the cloud. Apple said the data is neither stored nor made accessible to Apple and that independent experts can inspect the code running on the servers to verify it.

Apple Intelligence will be available for select Apple device models, such as iPhone 15 Pro and iPhone 15 Pro Max as well as iPad and Mac with M1 and later.

So, is Musk right? Technology and security experts who spoke to the Times expressed mixed opinions.

Some rejected Musk's claim that Apple's OpenAI deal posed security risks, citing a lack of evidence.

“Like many things Elon Musk says, this is not based on any technical reality, but simply on his political beliefs,” said Alex Stamos, chief trust officer at Mountain View, Calif.-based cybersecurity company SentinelOne. “There is no real factual basis for what he said.”

Stamos, who is also a computer science professor at Stanford University and a former head of security at Facebook, said he was impressed by Apple's data protection efforts, adding: “They promise a level of transparency that no one has ever really provided.

“It's hard to fully prove at this point, but what they've presented is about the best you can do to provide this level of AI services running on people's private data while protecting their privacy,” Stamos said.

“To do the things that people are used to with ChatGPT, you can't do it on phones yet,” Stamos added. “It'll be years before we can use these kinds of designs on something that fits in your pocket and doesn't burn a hole in your jeans because of the amount of energy it uses.”

Musk has long criticized OpenAI. He sued the company in February for breach of contract and fiduciary duty, alleging it had abandoned its founding mission to develop artificial general intelligence “for the benefit of humanity, not for a for-profit enterprise seeking to maximize shareholder profits.” On Tuesday, Musk, a co-founder of and early investor in OpenAI, withdrew his lawsuit. Musk's San Francisco-based company xAI competes with OpenAI in the fast-growing field of artificial intelligence.

Musk has taken aim at Apple in the past, calling it a “Tesla graveyard” because, he said, Apple had hired people that Tesla had fired. “If you don’t succeed at Tesla, you go to work at Apple,” Musk said in an interview with German newspaper Handelsblatt in 2015. “I’m not kidding.”

However, Rayid Ghani, professor of machine learning and public policy at Carnegie Mellon University, said that, at a high level, he thinks the concerns Musk raised about the OpenAI-Apple partnership are worth raising.

While Apple said OpenAI doesn't store Siri queries, “I don't think we should just take that at face value,” Ghani said. “I think we need to ask for proof. How does Apple ensure those processes are in place? What is the recourse if that doesn't happen? Who is responsible, Apple or OpenAI, and how can the problems be fixed?”

Some industry observers have also raised questions about whether Apple users with a ChatGPT account can link it to their iPhone, and what information OpenAI collects in this case.

“We have to be careful with this: linking your account to your mobile phone is a big deal,” said Pam Dixon, executive director of the World Privacy Forum. “Personally, I wouldn't link until there's a lot more clarity about what's happening to the data.”

OpenAI highlighted a statement on its website that says: “Users can also choose to connect their ChatGPT account, meaning their data preferences will apply in accordance with ChatGPT's policies.” The company declined further comment.

As part of OpenAI's privacy policy, the company says it collects personal information included in entries, file uploads or comments when account holders use its service. ChatGPT allows users to opt out of having their requests used to train AI models.

As the use of AI becomes ever more intertwined with people's lives, industry observers say it will be crucial to ensure transparency for customers and to test the reliability of AI tools.

“We're going to have to understand something about AI. It will look a lot like plumbing. This is going to be integrated into our devices and our lives everywhere,” Dixon said. “AI will need to be reliable and we will need to be able to test that reliability.”

Night Filing Supervisor Valerie Hood contributed to this report.
