
ChatGPT-o1 Created A Programming Language…



The latest ChatGPT model, o1, is pretty impressive. It claims to be able to "think" before answering questions, and it got a lot …

32 thoughts on “ChatGPT-o1 Created A Programming Language…”

  1. I expect these models to get much better once they're able to autonomously write, run, and test the code piece by piece.

    Writing an entire program and only beginning to test once every feature is implemented makes it nearly impossible to debug.

  2. It works a lot better if you ask it to create the code with debug logs and then pass those logs back in. You as a developer would also find it harder to fix a bug without seeing the state and what exactly is happening 🫡

  3. This is really interesting. Using a combination of Claude 3.5 Sonnet and GPT-4o via ChatGPT, I was able to write some rudimentary but very functional 3D rendering code in Rust. Even included a simple lighting/shading system.

    Now I'm tempted to recreate it using o1.
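    A minimal sketch of the kind of "simple lighting/shading system" the comment mentions: Lambertian (diffuse) shading, where brightness is the dot product of the surface normal and the light direction. The names and structure here are hypothetical, not the commenter's actual Rust code.

    ```rust
    // Lambertian (diffuse) shading: intensity = max(0, n · l), where n is the
    // surface normal and l is the direction toward the light, both normalized.

    #[derive(Clone, Copy)]
    struct Vec3 { x: f64, y: f64, z: f64 }

    impl Vec3 {
        fn dot(self, o: Vec3) -> f64 {
            self.x * o.x + self.y * o.y + self.z * o.z
        }
        fn normalized(self) -> Vec3 {
            let len = self.dot(self).sqrt();
            Vec3 { x: self.x / len, y: self.y / len, z: self.z / len }
        }
    }

    /// Diffuse intensity in [0, 1]: full brightness when the surface faces the
    /// light head-on, zero when it faces away.
    fn lambert(normal: Vec3, light_dir: Vec3) -> f64 {
        normal.normalized().dot(light_dir.normalized()).max(0.0)
    }

    fn main() {
        let n = Vec3 { x: 0.0, y: 1.0, z: 0.0 }; // surface facing straight up
        let l = Vec3 { x: 0.0, y: 1.0, z: 0.0 }; // light directly overhead
        println!("{:.2}", lambert(n, l)); // prints "1.00"
    }
    ```

    A full renderer would multiply this intensity by the surface color per pixel, but this is the core of a basic shading model.
    
    
    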

  4. As a physicist, I have played around with GPT a lot. I've found that GPT-4o is amazing when I'm writing a paper. I usually do all my work with a pencil on a piece of paper, and I used to have to sit and translate it all by hand into LaTeX code, which can be extra tedious when you're dealing with many tensor indices you need to keep track of. But now I can just snap a quick picture of my notebook, and GPT will turn it into a complete equation in less than 10 seconds; while it's working, I can spend that time actually writing, and then just copy-paste when I need it. It has saved me so much time and tedious work. And it's accurate 90% of the time, I'd say, if you make sure to give it the right prompts to begin with, which I find extremely surprising given how shit my handwriting is lol.

    However, I have also tried feeding it actual physics problems of various levels. I started out testing it on high school problems, and it usually did fairly well; the parts where it messed up were numerical computations, which I don't really care about. It used the right methods and showed some signs of "creativity", or whatever you want to call it, in some of the solutions. I also fed it standard college exercises, which it struggled with a bit more; I would estimate a 60% success rate, which isn't bad. But as we started entering graduate-level stuff, it began completely spewing nonsense. I also asked it regular questions about the topics, and it was usually somewhat correct, but just very vague. When I pressed on the stuff it was vague about, it started talking complete nonsense. And then I would correct it, and it would say something like "You're right! My bad" and then go on to say something even more nonsensical with even more confidence lol
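    The "many tensor indices" pain point is concrete: a typical handwritten expression that is tedious to transcribe index by index, but that a model can emit in one shot, looks like the geodesic equation below. This particular equation is an illustrative example, not one taken from the comment.

    ```latex
    % Geodesic equation: easy to mistype by hand because every index
    % (\lambda, \mu, \nu) must match across three terms.
    \frac{d^2 x^{\lambda}}{d\tau^2}
      + \Gamma^{\lambda}{}_{\mu\nu}
        \frac{d x^{\mu}}{d\tau} \frac{d x^{\nu}}{d\tau} = 0
    ```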

  5. Why do I feel like GPT o1 has been coded with ChatGPT? I've coded a lot with GPT, and o1 seems like the type of script ChatGPT would make, like "Humans normally think, so let's integrate that." And then there's the way it says "Thought for 12 seconds", hm

  6. The "Gen Z" programming language ChatGPT "came up with" is actually a copy of a YouTuber's programming language called "bussin". You can check it out. Turns out, ChatGPT is not that creative

  7. Why JavaScript? All the languages you use are implemented in assembly; some high-level ones are implemented in C or C++, but no language is implemented in JS, and assembly is the fastest

  8. I made pretty much the same observations as you did. For simple code snippets, AI is adequate most of the time. But once you demand something more complex, issues inevitably come up, and the AI almost always fails to solve them. It gets even worse when it starts hallucinating functions or properties that simply don't exist, and mentioning that to the AI usually just leads down a rabbit hole.

    At that point it's usually best to work with what you got initially and do the follow-up steps yourself. Maybe still keep asking the AI for help with smaller sections, but do not trust it with the entire task 🙂

Comments are closed.