Shortly after the US military announced it had obliterated Iran’s nuclear facilities without sustaining any damage or casualties, a photo circulated on the internet appearing to refute those claims.
It showed a B-2 military plane – the type used to bomb the facilities – crashed into the dirt with its left wing busted, surrounded by emergency workers. It was enough to make people question whether the attack was as seamless as the President maintained.
However, the eagle-eyed could see something amiss: an emergency worker is unnaturally blended into the background in a manner that could never happen in real life.
Another picture showed purported Iranian soldiers beside a downed B-2, but they were far too large in comparison to the supposedly downed jet. Both pictures were AI-generated.
“Anything that can be used for good can also be used for bad,” Gary Rivlin, author of “AI Valley,” told The Post.
He says the cleverest AI – which tech companies are expected to pump $320 billion into this year alone – is now “getting to 95 percent undetectable [as fake].”
The Pulitzer Prize-winning expert admits, “Sometimes I can’t tell the difference.”
Another example, this time a video, also homed in on politically sensitive events.
Circulated during the recent protests against Immigration and Customs Enforcement on the streets of Los Angeles, it showed a National Guard soldier named ‘Bob’ eating a burrito and joking about it being “criminally underrated.”
The tell-tale signs it was fake were more subtle – “Bob” never removes his mask as he eats, and “police” isn’t spelled correctly on the car behind him – but it was enough to hit a nerve with the Latino community.
Set-ups like these fuel the plot of the recent HBO movie “Mountainhead,” in which a group of tech bros gathers against the backdrop of governments and world order collapsing under the weight of mobs misled by AI-generated deepfakes that one of the high-flying “brewsters” (as they call themselves) is responsible for.
“There will be important implications, and, as a society, we will have to deal with them.
“You can see something fake and believe that it’s real. I worry that we will let AI run things, and AI has no common sense,” Rivlin added.
Nightmare scenarios aside, AI has many positive applications and is already vastly enriching scientific study, upending entire industries, cutting down on the time people spend on repetitive tasks and helping them do their jobs better.
According to Wired.com, Microsoft claims it has developed an AI system which is four times more accurate at diagnosing diseases than doctors.
A recent poll cited by the New York Times found that 43% of people admitted to using AI to help them with their work.
And, for the most part, the casual user has been able to take advantage of the technology for free – at least for now. A report from Menlo Ventures claimed only three percent of an estimated 1.8 billion users pay for artificial intelligence.
Video capabilities may have only recently gotten good enough, in experts’ estimation, to fool the general public – the world’s first entirely AI-generated TV ad aired in June – but now the floodgates are open. Showing people partying in various US locations, the AI ad took just two days to create and is virtually indistinguishable to the naked eye from real footage.
And all the necessary tools are available to the public. DeepFaceLab swaps faces, HeyGen clones voices, Midjourney, OpenAI, Google’s Veo 3 and others can create video of real people in unreal situations.
In a world dominated by robocalls and texts and rogue states spreading propaganda, how will we know what we can trust? Already, the lines are blurring with deepfakes in everyday usage on the internet.
In minutes, The Post found a supposed “Oprah Winfrey” peddling diet products and “Mick Jagger” and “Clint Eastwood” apparently hawking T-shirts saying “Don’t mess with old people, we didn’t get this age by being stupid.”
More fantastically, you can AI chat with bots such as “Kurt Cobain,” the rocker who killed himself in 1994 — years before he would have been able to sign up for an email address.
“In a world where there are bad actors, there will be detectors,” Mike Belinsky assured The Post.
Belinsky — who works as a director in the AI Institute at Schmidt Sciences, the science philanthropy operated by former Google CEO Eric Schmidt and his wife, Wendy — would not reveal the exact nature of these detectors, but suggested the AI battlefield will resemble a high-tech game of Whack-A-Mole.
Likening it to computer viruses, he added: “This is not a static problem. Everybody will need to keep updating. Sometimes the bad actors are ahead and sometimes the defenders are ahead.”
Meta boss Mark Zuckerberg says AI chatbots, like video technology, can cross the line into seeming as real as the friend who lives down the block, and he’s betting big on them.
“The reality,” he said on a recent podcast, “is that a lot of people just don’t have the connections, and they feel more alone a lot of the time than they would like.”
Meta has reportedly plowed $14.3 billion into a start-up called Scale AI and hired its founder. The Zuckerberg-run company is said to have spent as much as $100 million to bring in top AI researchers.
Elsewhere in Silicon Valley, OpenAI kingpin Sam Altman and Jony Ive – Apple’s former design chief, who shaped devices including the iPhone – have joined forces.
Altman’s OpenAI purchased Ive’s one-year-old AI devices startup, io, for some $6.4 billion and they are working on launching a hardware device.
“I think it is the coolest piece of technology that the world will have ever seen,” Altman claims.
Clues as to what exactly the device will do are scant, but the Wall Street Journal wrote Ive and Altman are planning to build “companion devices,” which Mark Gurman’s Power On newsletter speculated was “a machine that develops a relationship with a human using AI.”
It sounds like a life co-pilot without the drawbacks and complications of an emotion-riddled human wingman.
But do we really want to replace our friends with computer chips?
Rivlin has his own thoughts: “Humans have imperfect memories. This could be like a court transcript of life. You can ask it a question about something that was discussed months [or years] ago and it would call it up.”
He’s excited about various new AI technologies, but also has concerns about both data collection and privacy.
“There is an expression that if you don’t pay for the product, you are the product. We search the web for free, but it gets sold to the highest bidder for advertising.
“I don’t trust big tech and AI is in the hands of big tech. They have not figured out how to make money on it yet, but they will,” he ominously added.