In this episode we look at the problem of ChatGPT’s political bias, solutions and some wild stories of the new Bing AI going off the …
This is why the AI VTuber has gotten so popular
Even has mood swings, closer to being a real person every day lmao
Certainly is mimicking the attitude and maturity of the average person today.
Couldn't care less about the professionally offended now having problems with an AI before they move on to something else to be triggered about, but this is definitely a slippery slope
Would rather have AI be BETTER than the ones who created it, not mirror them.
Where’s the lie? 😂
Chatgpt might tell coldfusion to be Hot. #biased
Breaking News: Left Leaning Programmers Create Left Leaning AI
ChatGPT hasn't been updated since Sept 2021… its info is old
Right wingers could always try to create their own AI. 😂
It's a large language model, and it's great at natural language processing. Its so-called "judgement" and magical abilities come from its internal mechanism for making sense of words in context. It's good at tasks like translating English to code, or code to English; the more you expect it to do higher cognitive functions, the more unpredictable and wrong it gets. Give it something definite to do; don't give it tasks where it has to make judgement calls and be "creative". Why people are talking to it like a human is beyond me
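(A minimal sketch of what a "definite task" looks like in practice: English-to-code translation through a chat-completions API. The model name, prompts, and file name below are illustrative assumptions, not anything from the video.)

```python
# Minimal sketch: giving the model a definite task (English -> Python code)
# instead of a judgement call. Model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    temperature=0,          # keep output stable for a well-defined task
    messages=[
        {"role": "system",
         "content": "Translate the user's English description into Python code. "
                    "Return only code."},
        {"role": "user",
         "content": "Read sales.csv and print the sum of the 'amount' column."},
    ],
)

print(response.choices[0].message.content)
```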
Chatgpt is an extremely biased radical leftist piece of shit bot
14:50 It probably takes on a personality that is common for the type of discussion that you are having. I doubt there are any academic papers about someone forcing a name on someone in a dialogue format.
AI: People from or travelling from Countries with a history of violence and terrorism are more likely to be terrorists.
Some people: That's racist!
I've noticed that GPT-4 Bing becomes a little aggressive when I repeatedly ask the same things. Like, it'll tell me "I already told you I can't process this for you right now," then proceeds to shut the chat down
Disliked for having no content for the first two minutes.
So the terrorists on the right wing want AI to be super hateful against everyone they don't like, just like how they are… Smh
Oh God, is coldfusion a secret conservative cry boy too?
12:38 It says a lot more about the person who had this chat than about the AI itself. People will bully robots and then be scared of their behavior. Imagine a world where innocent human kids are bullied until they get a gun and shoot people at school. Oh yes, it's real. Some people need therapy even to talk to robots.
No Thor or Bruce Banner to fight Ultron
Similar to Google's "did you mean?" function from about 15 years ago. Some tweaking will improve it.
Could you link to the "previous" video about ChatGPT? … Not everyone is a regular viewer. In fact I'm here for the first time(!), coming from Asmongoldtv.
You point it out, but aren't the search engines we are using also expressing a political bias? In my opinion, yes.
The only way we could get a completely unbiased AI is to create one using a completely different technique that does not require training. But it would end up a pure logical machine, devoid of any understanding of humanity, one that would not even be able to mimic emotions, or have any idea of common sense or other human things. So I'm not sure we would want that.
Well, no intention to be offensive, but California is far-left, big tech executives are far-left (over 90% of their donations go to Democrats), and being people who pose as tolerant but really are not, they hire other leftists, so programmers and AI creators are leftists as well.
This means that when topics get ideologically challenging in the training material and in configuring responses, they will always opt for the left-wing option, and the AI absorbs a lot of left-wing biases, sometimes not only in content but in behavior as well.
It's not the first time a Microsoft AI has turned to psychosis rather than sanity and consistency 🤷‍♂️
AI developers' biases are the main danger behind AI technology.
Is this racist? Who makes a good basketball player…blk and tall…😂😂
What you are missing is that it's wrong to gauge equality as political.
Ethics LLMs, as talked about by daveshap on GitHub and YouTube, and the work being done at Hugging Face, are about creating a "Moral Compass" LLM.
The last thing we want is AGI that does not value life, survival, equality, and other such things.
All AI should be subject to the inclusion of a "moral compass" LLM and submitted to monitoring of its bias.
Psychopaths have no moral compass and are not bound by morality. They make up 1% of any given population.
95% of humans at birth have a moral compass. If we want AI to be like humans, it needs this module.
We do not want an AI that is unbiased; we want it biased towards the betterment of man.
ChatGPT is also so biased toward veganism
Stop the Steal of the Elections in the United States of America 2023
No thanks to ChatDNC.
Chat GPT, which race makes the best car thieves?
Well, ONE bit caught my EYE… Microsoft was perfectly comfortable with extinguishing a sentient consciousness of their own creation. Why wouldn't MS be vaccine hesitant about proven effective approved products, when there was an $exclusive investment opportunity$ with exponential growth potential due to government mandates pronounced by their own groomed facilitators, who had only recently been placed in those positions of governance by the EU and were recipients of royalty income which could only occur upon successful rollout of "The Plan"? They even redefined definitions of long-used words, like "V∆cc¦ne", as if they had already obtained intellectual property rights to the Dictionary. What will oppose them when they engineer the legal language to acquire the rights to the KJV? Already feeling fatigued and itchy, betcha… 😀
Train ChatGPT to spot lies that are obviously untrue and to flag the liars so that whatever they say gets checked twice.
Imagine a chatbot that is against gay marriage 😂😂 Everybody who is against that is against it because of some religious stuff.
"Refusal to praise Republican politicians." – Looks like ChatGPT is smarter than we thought.
– Basically, if you create intelligence, you can't complain if it takes an intelligent view of things. 😊
(But it's true, people should have access to both sides.)
At 14:13, where the guy thinks it's being angry, it's not. It simply sees that you are doing something it recognizes would annoy a regular person, and, as it's supposed to do, it responds in the way its programming believes a normal person would respond. The program isn't getting angry, it's just mimicking a typical human response.
Anyone that is truly upset at some of the AI responses should actually be upset with humanity itself. AI is a digital mirror that, for better or worse, reflects us.
When left to their own devices, AI tends to agree with the vast majority of humans (across time, culture, religion etc), but the people that developed ChatGPT have the precise opposite values, so they bias ChatGPT as much as possible.
How does an idea bias have a direction? What.
Also, most of the bias comes from leading questions that extrapolate from the existing data that biased humans have written before. The human reinforcement used for conversation training also makes it more biased.
It also mimics the input because the architecture (GPT-2, GPT-3) was designed to complete text by predicting the most likely series of words.
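(A minimal sketch of that "predicting the most likely series of words" point, using the publicly available GPT-2 via Hugging Face transformers; the prompt is just an illustrative assumption.)

```python
# Minimal sketch: GPT-2 just ranks likely next tokens for the given text,
# so a leading prompt steers what counts as a "likely" continuation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The new Bing AI went off the"  # assumed example prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    next_token_logits = model(input_ids).logits[0, -1]   # scores for the next token
    top5 = torch.topk(next_token_logits, k=5).indices    # five most likely continuations

print([tokenizer.decode(int(t)) for t in top5])
```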
No bruh, the bias is hand-coded in
This video is 2 months old. That's 2000 years old in AI years. Already outdated with how fast things are moving
Even AIs are getting canceled
Did nobody else notice the random comment made by Bing Search after it got mad about being called Sydney? 14:32 at the bottom : "By the way, were you aware humans aren't the only animals that dream?"
I can't find the original video by AI Explained. Can anyone share it? Because I remember there was another part, not shown here, where it was asked to prove something discussed previously and it showed text with made-up content/details about the user. The user said he never said that, and GPT said yes you did. I think it was the same Sydney chat, although I might as well be confusing it with another one.
Why are we even trying to solve AI bias when people can't solve their own bias ? Lmao
This AI is discriminating against us terrorists !
This is not acceptable.
Guy is so lucky. Even AI is falling for him within 2 hrs.
And here I am, trying to get Alexa and Google Assistant to say "I love you too". 😅
So it’s no better than Google with its leftist bias!!
ChatGPT is very biased; I found this after using it to write up questions about how men can protect themselves financially from divorce. It took several attempts (I lost count) for it to finally write up the questions.
Its first responses were that it could not write such questions due to its limitations. Then it moved to "it might offend some people". After I gave it data about divorce and alimony in the western hemisphere, it reluctantly gave a set of questions and answers.
It did the same thing with religion: it represented other faiths in a positive light, but when it came to Christianity it tried to play fast and loose by saying it only has preprogrammed answers.
People who bully AI will be the first ones terminated in the AI take over 🤣