
ChatGPT Has A Serious Problem



In this episode we look at the problem of ChatGPT’s political bias, solutions, and some wild stories of the new Bing AI going off the …



46 thoughts on “ChatGPT Has A Serious Problem”

  1. This is why the AI VTuber has gotten so popular.
    Even has mood swings; closer to being a real person every day lmao.
    Certainly is mimicking the attitude and maturity of the average person today.
    Couldn't care less about the professionally offended now having problems with an AI before they move on to something else to be triggered about, but this is definitely a slippery slope.
    Would rather have AI be BETTER than the ones who created it, not mirror them.

  2. It's a large language model, and it's great at natural language processing; its so-called "judgement" and magical abilities come from its internal mechanism for making sense of words in context. It's good at tasks like translating English to code, or code to English, but the more you expect it to perform higher cognitive functions, the more unpredictable and wrong it becomes. Give it something definite to do; don't give it tasks where it has to make judgement calls and be "creative". Why people are talking to it like a human is beyond me.

  3. 14:50 It probably takes on a personality that is common for the type of discussion that you are having. I doubt there are any academic papers about someone forcing a name on someone in a dialogue format.

  4. I've noticed that GPT-4 Bing becomes a little aggressive when I repeatedly ask the same things. Like, it'll tell me, "I already told you I can't process this for you right now," then proceed to shut the chat down.

  5. So the terrorists on the right wing want AI to be super hateful against everyone they don't like, just like how they are… Smh

  6. 12:38 It says a lot more about the person who had this chat than about the AI itself. People will bully robots and then be scared of their behavior. Imagine a world where innocent human kids are bullied until they get a gun and shoot people at school. Oh yes, it's real. Some people need therapy even to talk to robots.

  7. The only way we could get a completely unbiased AI is to create one using a completely different technique that does not require training. But it would end up a pure logical machine, devoid of any understanding of humanity, one that would not even be able to mimic emotions, or have any idea of common sense or other human things. So I'm not sure we would want that.

  8. Well, no intention to be offensive, but California is far-left, big tech executives are far-left (over 90% of their donations go to Democrats), and, being people who pose as tolerant but really are not, they hire other leftists, so programmers and AI creators are leftists as well.
    This means that when topics get ideologically challenging in training material and in configuring responses, they will always opt for the left-wing option, and the AI absorbs a lot of left-wing biases, sometimes not only in content but in behavior as well.
    It's not the first time a Microsoft AI has turned to psychosis rather than sanity and consistency 🤷‍♂️
    AI developers' biases are the main danger behind AI technology.

  9. What you are missing is that it's wrong to gauge equality as political.
    Ethics LLMs, as talked about by daveshap on GitHub and YouTube, and the work being done at Hugging Face, are about creating a "Moral Compass" LLM.
    The last thing we want is an AGI that does not value life, survival, equality, and other aspects.
    All AI should be subject to the inclusion of a "moral compass" LLM and submitted to monitoring of its bias.
    Psychopaths have no moral compass and are not bound by morality. They make up 1% of any given population.
    95% of humans have a moral compass at birth. If we want AI to be like humans, it needs this module.
    We do not want an AI that is unbiased; we want it biased towards the betterment of man.

  10. Well, ONE bit caught my EYE…

    Microsoft was perfectly comfortable with extinguishing a sentient consciousness of their own creation.

    Why wouldn't MS be vaccine hesitant about proven effective approved products, when an $exclusive investment opportunity$ with exponential growth potential arose from government mandates pronounced by their own groomed facilitators, who had only recently been placed in those positions of governance by the EU and were recipients of royalty income which could only occur upon successful rollout of "The Plan"?

    They even redefined long-used words, like "V∆cc¦ne", as if they had already obtained intellectual property rights to the Dictionary.

    What will oppose them when they engineer the legal language to acquire the rights to the KJV?

    Already feeling fatigued and itchy, betcha… 😀

  11. “Refusal to praise Republican politicians” – looks like ChatGPT is smarter than we thought.

    – Basically, if you create intelligence, you can’t complain if it takes an intelligent view of things. 😊
    (But it’s true, people should have access to both sides.)

  12. At 14:13, where the guy thinks it's being angry, it's not. It simply sees that you are doing something that would annoy a regular person and, as it is supposed to, it responds in the way its programming believes a normal person would respond. The program isn't getting angry; it's just mimicking a typical human response.

  13. Anyone that is truly upset at some of the AI responses should actually be upset with humanity itself. AI is a digital mirror that, for better or worse, reflects us.

  14. When left to its own devices, AI tends to agree with the vast majority of humans (across time, culture, religion, etc.), but the people who developed ChatGPT have the precise opposite values, so they bias ChatGPT as much as possible.

  15. How does an idea bias have a direction? What.
    Also, most of the bias comes from leading questions that extrapolate from the existing data that biased humans have written before. The human reinforcement used for conversation training also makes it more biased.
    And it mimics the input because the architecture (GPT-2, GPT-3) was designed to complete text by predicting the most likely series of words (see the sketch after these comments).

  16. Did nobody else notice the random comment made by Bing Search after it got mad about being called Sydney? At 14:32, at the bottom: "By the way, were you aware humans aren't the only animals that dream?"

  17. I can't find the original video by AI Explained. Can anyone share it? Because I remember there was another part, not shown here, where it was asked to prove something discussed previously and it showed a text with made-up content/details about the user. The user said he never said that, and GPT said yes you did. I think it was the same Sydney chat, although I might be confusing it with another one.

  18. Guy is so lucky. Even AI is falling for him within 2 hrs.
    And here I am, trying to get Alexa and Google Assistant to say "I love you too." 😅

  19. ChatGPT is very biased. I used it to write up questions about how men can protect themselves financially from divorce, and it took several attempts (I lost count) for it to finally write up the questions.

    Its first responses were that it could not write such questions due to its limitations. Then it moved to saying it might offend some people; after I gave it data about divorce and alimony in the western hemisphere, it reluctantly gave a set of questions and answers.

    It did the same thing with religion: it represented other faiths in a positive light, but when it came to Christianity it tried to play fast and loose by saying it only has preprogrammed answers.
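
A side note on the mechanism described in comment 15: GPT-style models generate text one token at a time by assigning a probability to every possible next token, which is also where biases in the training data and in the human feedback can enter. Below is a minimal sketch of that next-token prediction step, assuming the Hugging Face transformers library and the public GPT-2 checkpoint; the prompt is purely illustrative and none of this comes from the video.

```python
# Minimal illustration of next-token prediction with GPT-2
# (model choice and prompt are assumptions for demonstration only).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The new Bing chatbot went off the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Turn the scores at the final position into probabilities for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# Print the five most likely continuations and their probabilities.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={float(prob):.3f}")
```

Everything a chat model "says" is built by repeating this step, sampling from a distribution shaped by its training data and fine-tuning, which is why its output mirrors the biases of both.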

Comments are closed.