
Watch out for the “bad” actor when it comes to AI


During Google’s big developer I/O showcase event — a long, glittering flex of the company’s new AI muscles — one of the keynote speakers dwelt on the risks posed by “bad actors.”

The phrase, in the context of an otherwise self-consciously upbeat event, struck a balance between real and abstract menace. There was enough menace in the term "bad actor" to reassure the audience that the human brains at Google had duly considered the dangers of AI expanding very rapidly beyond any realistic point of control, but not enough specificity about the threats to dampen the celebratory mood.

The mainstreaming of generative AI could indeed put increasingly powerful weapons of mischief into the hands of con artists, disinformation mongers and other unmistakably bad actors. We are right to fear that, and Google was right to pause, as it did, to acknowledge the tension that now exists within a company of this importance between what it can and what it should bring to market.

But Google’s tone made it seem likely, for now at least, that the company will proceed on the basis that ordinary people can be trusted with quite a lot of generative AI. That may, however, understate the banal malignity of the “bad” actor: the person who does not actively seek out the dark potential of technology, but will certainly use it if it is just sitting there ready to be harnessed.

The problem was that as each of Google’s new AI offerings hit the screens, the risks seemed less abstract and more real. The fact that Google, Microsoft and other tech titans are turning AI into a consumer and business battleground means that commercial competition has now, in effect, been harnessed and set free to do what it does best: put as much of this as is legally possible into our hands as quickly as possible. That means the tools needed to be a casual (but still highly effective) “bad” actor will become ever more available.

Two moments stood out. In one, Google executives demonstrated AI-enhanced translation software the company is currently testing which, to its credit, it admitted looks a lot like an easy-to-use and very powerful deepfake video generator. The head of the relevant Google division acknowledged as much, describing the need for guardrails, watermarking and other safety measures that could, in reality, prove difficult to enforce.

A video of someone speaking in one language plays; their words are transcribed, translated and reproduced by the AI as audio in another language. The pitch and cadence of the translated voice are adjusted to mimic the speaker’s more closely, and the software then plays it over the original video. Spookily, though not yet perfectly, the AI manipulates the footage so that the new words are lip-synched to the speaker. Amazing stuff, but it is not terribly hard to imagine how the power to make people very quickly appear to be saying something they never said could be useful to both our bad and “bad” actors.

In another demo, Google executives showed off the company’s AI-powered Magic Editor: essentially a very fast, easy-to-use Photoshop-like tool that appears to let even a novice edit photos and, in doing so, rewrite the history of an event or encounter with a couple of swipes of the finger.

The company’s scenario was, inevitably, benign, and began with a photo of a tourist in front of a waterfall. Happy memories but, oops, a prominent handbag strap she would rather edit out. Swipe! It vanished instantly. She wished the weather had been better on that trip. Swipe! The sky was no longer granite but gloriously blue. If only she had been closer to the waterfall, with her arm at a different angle. Swipe! She was moved.

No one would begrudge this fictitious tourist the right to rewrite reality a little. But the uses to which a “bad” actor might put the same tool cast it all in a more dubious light. Not everyone will immediately see how they might exploit these instant powers of retrospective manipulation of the visual record, but simply having that capability in their pocket will make a great many people curious about airbrushing.

Since the launch of ChatGPT, Google and others have had no choice but to engage in this first experimental three-way tussle between humanity, AI and trillion-dollar companies. Google’s guiding principle in all this, its chief executive Sundar Pichai said last week, would be to remain “bold and responsible”. Fine, but it feels like a placeholder until the world has a proper sense of how many “bad” actors are out there.

leo.lewis@ft.com
