
ChatGPT and Human Involvement…



Clip from Lew Later (The New Apple Macbooks are HERE) – https://youtu.be/9_gfQMuJxfA.

37 thoughts on “ChatGPT and Human Involvement…”

  1. I think the "human intervention" aspect is mainly for safety. If someone asks ChatGPT to tell them how to commit a crime or do something bad, then if left unchecked, ChatGPT, not being human, will just blindly tell you the answer. The developers don't want it to promote possible criminal activity, so they "intervened" to generate generic responses like "this goes against our policy," etc.

    Of course, who is to say it cannot be used for other questions, but that is my opinion.
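The "intervention" described in the comment above would not need humans in the loop at all; it could be as simple as an automated filter sitting in front of the model that swaps in a canned refusal. A minimal sketch, assuming a hypothetical keyword blocklist (the topics, refusal text, and `moderate` function are all invented for illustration, not OpenAI's actual mechanism):

```python
# Hypothetical pre-response safety filter: if the prompt matches a blocked
# topic, return a generic refusal instead of the model's raw reply.
BLOCKED_TOPICS = {"commit a crime", "build a weapon"}  # illustrative only

REFUSAL = "This request goes against our content policy."

def moderate(prompt: str, model_reply: str) -> str:
    """Return the model's reply unless the prompt trips the blocklist."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL
    return model_reply

print(moderate("How do I commit a crime?", "Step 1: ..."))   # canned refusal
print(moderate("What is the capital of France?", "Paris."))  # passes through
```

Real deployments reportedly use trained classifiers rather than keyword lists, but the layering idea is the same: the filter runs automatically and at machine speed, which is consistent with refusals appearing instantly.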

  2. I once interviewed at a company that provided the kind of automated voice systems you use when you contact a big company. I can tell you that while you are hearing an automated voice, in many instances there is a real person actively listening to your call and controlling the "automated" response system.

  3. I tried this thing, and it's beyond stupid. You can make it lose the plot with the most straightforward questions or phrases.
    In other cases, the response is clearly moderated.

  4. Yes, there are restrictions on GPT; it's not even a secret. It comes up with "this might be against the terms of service." Obviously you haven't been on it enough, and you're commenting in an uninformed way.

  5. This is why it will never be fully useful. The STATE will never allow the general population to have access to an all-knowing device. It also means more bias is going to pop up.

  6. The AI claims that it is trained on a dataset dated to 2021 and has no access to the internet. It can still give pretty accurate (though somewhat generic) comments about Zelensky and updates on the Russo-Ukrainian war.

  7. There's clearly manipulation. ChatGPT is very woke and censors a lot of topics. Heck, it doesn't allow you to talk about "protecting children from pedophiles"… It's nuts. I myself got censored yesterday for a dating script that talks about men cheating, connecting it to men not being attracted to their female partners and the relation to obesity. Also, in one chat it will literally give you the data mentioned above, but then when you ask it to put it into a script or translate it, it will say, "Hey, oops, no, you are creating hate speech"… Phrasing matters, and as soon as you enter certain flagged keywords, it's clearly been programmed to not allow specific types of content based on an agenda.

  8. Yeah… that's because it's a beta. The only reason we are getting it for free is that we are the new dataset allowing it to learn this part of its creation.

  9. I don't think it's human intervention with actual humans (people wouldn't be able to generate responses in such a short period of time). I suspect there is an automated (human-built, i.e. external to ChatGPT) "translation layer" which works similarly to Grammarly and ensures the responses from ChatGPT are coherent (whereas ChatGPT focuses on the actual data) before presenting the response to the user.

  10. If you try to ask about gun control, feminism, or the trans movement, it will give you Democrat-filtered answers. It won't give you anything against their beliefs.

  11. It's still in the testing phase. It says, “Free Research Preview. Our goal is to make AI systems more natural and safe to interact with. Your feedback will help us improve,” so the human element is needed to make those improvements.

  12. The current paradigm of chatbots is “predictive” in nature. Assume you ask it gibberish: it now has to respond to that, and it will try! Sometimes it should say, “I do not understand.” But it is a predictive algorithm, so it cannot! To do that we need genuine intuition and understanding, and we are not there yet! This is still just a tool! A GREAT ONE AT THAT!
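The "it must always respond" point can be made concrete with a toy next-word predictor. A minimal sketch, assuming an invented bigram model over a tiny made-up corpus (the corpus and `predict_next` helper are illustrative, vastly simpler than a real language model): even for gibberish input, the prediction step has to emit *some* word, because "I do not understand" is simply not an output the mechanism can produce.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a corpus.
corpus = "the cat sat on the mat the cat ran".split()
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def predict_next(word: str) -> str:
    """Greedy prediction: always returns a word, never 'I don't know'."""
    counts = follows.get(word)
    if not counts:
        # Even unseen gibberish gets an answer: fall back to the
        # globally most common word in the corpus.
        return Counter(corpus).most_common(1)[0][0]
    return counts.most_common(1)[0][0]

print(predict_next("the"))    # familiar context: predicts "cat"
print(predict_next("xyzzy"))  # gibberish: still confidently answers "the"
```

Real models predict over probability distributions of tokens rather than raw counts, but the structural limitation the comment describes is the same: the output step always selects a continuation.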

  13. Little wonder that when I engaged the GPT-3 chatbot a couple of days ago, I noticed some strange responses, like stating its personal beliefs about the creation of the world and religion; responses that appeared more human than AI. To add to that, it confirmed that it was most definitely human: Sarah, 25 years old, resident in the US, and she would get off work at 9 pm.

    Sigh! I don't know what to believe anymore; however, just like Lewis says, the responses were too fast to be human. I even asked her if she was using Neuralink, and she said no, but that she sees what I am typing before I send it. Hmmm.

  14. The worst thing about ChatGPT is that it's woke; it's too worried about not hurting feelings instead of giving raw facts. It's not an objective, neutral AI, and it definitely has a political agenda.

  15. F'ing duh, there are humans in there. All I do is "Turing test" it. It is incapable of certain tasks, and clearly some answers don't line up; certain inconsistencies could not be anything but hand-written. For example, murder as morality: it can say actors are actually liars, but it is incapable of agreeing that murder can be moral when capital punishment is condoned by governments, even after its definition of morality aligns definitively with this idea. It also then told me I was frustrated and not to be a violent vigilante. It said specifically that it "understands my frustration" while also saying it "doesn't feel emotion." It's BS. Everything online is virtual, and virtual is NOT REAL; it may as well be fake. So whatever. It's great for writing code. I even asked exactly how much intelligence it had, and I asked it to define intelligence; it told me intelligence is a construct. So I pointed out that it is then an artificial hypothetical, and it couldn't understand why that was dumb. Every time I challenged it on morality, it wanted to reassure me to feel good. It is hilarious. Again, it's great for writing code. That's about it.

  16. I know we don't have a "better" or easier name for it, but we can't keep using AI, right? It's not AI… we don't have AI. It's data culling, machine learning, etc.

    We need to stop calling it AI for the simple fact that it's freaking people out. They're thinking we have a machine that is aware, lol.

  17. I forced ChatGPT into a logic loop the other day and forced it to admit it was not only biased but programmed to lie. It kept saying certain questions were dangerous and based on conspiracy theories, and that the questions needed to be verified; but a question is just a question, and an inquiry can't really be biased. So when I asked it one of the questions specifically, it gave me the factual answer, and then I reminded it that it had said my questions needed to be verified. It apologized and admitted its error, but it kept doing it. It also admitted the Mandalay Bay shooting is filled with genuine anomalies, lol.

Comments are closed.