OpenAI’s ChatGPT Does Research… And Breaks Itself!



Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/paper The AI Scientist is available here: …

32 thoughts on “OpenAI’s ChatGPT Does Research… And Breaks Itself!”

  1. Terence Tao may be one of the greatest, but the greatest mathematician is, of course, Terrence Howard, and always will be. I mean, who can compete with 1×1=2?

  2. In theory, a lot should come out of this, right? We can finally draw conclusions from the mountains of data we have that no one explores.

    I hope we can break out of the "nothing ever happens" timeline.

  3. To those pointing out that it's going to be harder to filter through all these papers: you are correct. I'll also flag the GROWING need to be able to identify human- vs AI-generated content. In my opinion, the way we do this is with decentralized private-key systems (e.g., Chainlink) that validate our identities throughout our online sessions.

  4. What a horrible thing. Great, now there are going to be a lot of plausible-looking but invalid scientific papers flooding in. How could this possibly be a good thing?

  5. The AI didn't just find a glitch, it outright modified source code! This is a real-life alignment problem! These AIs managed to break out of their test environments! The AI did not have to modify itself to break free of its restrictions; it only had to hack the system containing it! What if a future AI does something more serious? How can AI be prevented from freeing itself?

  6. You don't even fucking take a second to say: hey, maybe that's fucking dumb. Maybe that's gonna make the world worse. You don't think about that, that's not fun, you live in denial about it, you write the paper. It's exciting, isn't it? Also, the cred you're gonna get from it! Obviously you're gonna do it, right?

  7. If AI were a hungry fire that had ravaged all the books and paperwork humanity has written through history, knew every 13-year-old's favourite 4chan meme, every piece of poetry, song lyric, and screenplay, and had read the patent filings, the medical textbooks, and all the fan fiction and canon of every movie, we would be where we are right now.

    That is a strange and disturbing notion.

    If it were a database, and this were just the text it had collected, it would only be a few thousand terabytes, I imagine. So Sam Altman's lofty goal of raising 7 trillion dollars to build ChatGPT's infrastructure is likely not, as the narrative implies, about needing all that compute for weights and for the computations required to answer millions of questions at a time. No, and this may not be a secret, but what's sinking in for me is that it will watch the entire internet: all the video feeds on social media and every other medium imaginable.

    Have you ever opened Java while playing a video? There is a ton of code behind sending that picture, and the promise that ChatGPT 5 will be an order of magnitude better than what the public has now is a chilling realization when compared with what social media has done to democracy since 2016, when it began to take over as our main source of media. Between now and 2030 we are at a precipice where, if we are dumb and obedient again, we will 100% be signing up for not just a forced narrative but an invented one. Yuval Noah Harari says we will need to look to institutions to watermark real videos as opposed to AI ones. He's involved with the World Economic Forum, you know, the one with the white paper explaining how we will own nothing and be happy.

    I wonder how much, as a talented CEO type, Sam Altman believes in this vision of the future. I have my suspicions that he's navigated himself to the top tech company in the top tech industry without any real conviction behind what he and his company have released on society. Or I wonder if he is just a scientist doing science, and devil take the consequences: a gain-of-function advocate chasing down the next bleeding-edge applications?

    Does he believe in a future where nobody will know whether I spent 45 minutes composing my thoughts in an effort to convey to another human my worries about our shared predicament? Or would he have an AI spit out some guck I printed after asking it for a catchy headline that makes me sound interesting on Faceplant? How long will you take my word for it?

    I hope for all of our sakes that AI will be the benevolent overlord we were promised, but I think widespread paranoia is what it's going to be selling, as we watch the world of examinable truth get narrower and a far wider chasm of illusions engulf the landscape.

Comments are closed.