
The ‘Manhattan Project’ theory of generative AI


The pace of change in generative AI right now is dizzying. OpenAI released ChatGPT to the public just four months ago; it took only two months to reach 100 million users. (TikTok, the previous instant internet sensation, took nine.) Google, struggling to keep up, has launched Bard, its own AI chatbot, and there are already several ChatGPT clones, as well as new plug-ins that let the bot work with popular websites like Expedia and OpenTable. GPT-4, the new version of OpenAI's model released last month, is more accurate and "multimodal," handling images as well as text. Image generation is moving at a similarly frantic pace: Midjourney's latest release has given us the viral deepfake sensations of Donald Trump's "arrest" and the Pope in a silver puffer jacket, which make it clear that you will soon have to treat every image you see online with suspicion.

And the headlines! Oh, the headlines. AI is coming for schools! Science fiction writing! The law! Gaming! It's making video! Fighting security breaches! Fueling the culture wars! Creating black markets! Triggering a startup gold rush! Taking over search! DJing your music! Coming for your job!

In the midst of this frenzy, I have twice seen the birth of generative AI compared to the creation of the atomic bomb. What's striking is that the comparison was made by people with diametrically opposed views of what it means.

One of them is the person closest to being the chief architect of the generative AI revolution: Sam Altman, the CEO of OpenAI, who in a recent interview with The New York Times called the Manhattan Project "the level of ambition we aspire to." The others are Tristan Harris and Aza Raskin of the Center for Humane Technology, who became somewhat famous for pointing out that social media was destroying democracy. They are now circulating a warning that generative AI could destroy nothing less than civilization itself, by putting tools of awesome and unpredictable power into the hands of just about anyone.

Altman, to be clear, doesn't disagree with Harris and Raskin that AI could destroy civilization. He simply believes that OpenAI is better intentioned than others, so it can try to make sure the tools are developed with guardrails, and that in any case there is no choice but to push ahead because the technology is unstoppable anyway. It's a head-spinning mix of faith and fatalism.

For the record, I agree that the technology is unstoppable. But I think the guardrails being put in place right now, like filtering hate speech or criminal tips out of ChatGPT's responses, are ridiculously weak. It would be a fairly trivial matter, for example, for companies like OpenAI or Midjourney to embed hard-to-remove digital watermarks in all of their AI-generated images, making deepfakes like the Pope images easier to spot. A coalition called the Content Authenticity Initiative is doing a limited form of this; its protocol lets artists voluntarily attach metadata to AI-generated images. But I don't see any of the major generative AI companies joining such efforts.
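To make the distinction concrete: voluntary metadata of the kind described above is easy to add and just as easy to strip, which is why it's a weaker guardrail than a watermark embedded in the pixels themselves. Here is a minimal sketch in Python using the Pillow imaging library. This is not the Content Authenticity Initiative's actual protocol (C2PA uses cryptographically signed manifests); the function names and the `ai-generated` tag are illustrative assumptions.

```python
# Sketch of voluntary provenance metadata, in the spirit of (but much
# simpler than) the Content Authenticity Initiative's protocol. A plain
# PNG text chunk like this vanishes the moment someone re-saves or
# screenshots the image -- which is the column's point about weak guardrails.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, attaching 'ai-generated' text chunks to the PNG."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")   # hypothetical tag name
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)

def read_provenance(path: str) -> dict:
    """Return any text metadata stored in the PNG's text chunks."""
    return dict(Image.open(path).text)
```

A pixel-level watermark, by contrast, would survive re-encoding and cropping, which is what would actually make Pope-style deepfakes easier to spot at scale.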




