Definitely interested in neural networks/AI. Thanks for sharing.
This is not entirely accurate about not being able to learn from input or being up to date. In fact, when I asked it how up to date its language model was, it said: "My training data currently includes a broad range of text sources up to the end of 2021, but I am constantly learning and adapting to new information as it becomes available. Additionally, my responses can be influenced by the quality and relevance of the input I receive from users."
It REALLY gets you expelled from college because the papers "sound good" but are very vague. That level of cheating is unacceptable. If you don't want to learn, DON'T GO TO COLLEGE. We profs don't need your papers. We know more about the topic than you do since we have a PhD. We assign papers because conducting the research and writing the paper IMPROVES YOUR LEARNING. Ever hear of LEARNING? That is the purpose of school.
If you want to "fly" a Viggen, you are welcome to the Söderhamn flight museum (flygmuseum), where we have an advanced flight simulator housed in an authentic Viggen cockpit! It's an amazing experience! We also have two real Viggens, a Lansen, a Draken, the Flying Barrel, and several other aircraft from the Swedish Air Force…
The only aircraft missing from the collection is the Gripen, which is not ready for the museum yet…
The real question is how close it is to being sentient.
The "All seeing Eye" with a brain!!
So basically GPT doesn't really have a clue about intelligence; it is just a sophisticated parrot, repeating things (information) back to you, formatted by patterns it has decrypted from millions of pieces of textual information (and pictures, art, etc. now apparently, being able to produce patterns in the art department as well)…
But after all, aren't we all just a bunch of parrots ourselves in the end? Don't children learn from others in much the same way? Aren't social interactions just learned patterns in the end?… Maybe we have found the way our intelligence actually works…?
Pretty interesting subject.
Very good, thanks, and I would enjoy a follow-up produced in the same terms.
More on this please. Thanks.
And, one on Bell's Theorem, as easily explained as possible.
And it has a liberal bias.
This is demonic to the core. Wake up people, your eternal destiny depends upon it!!!!!!!!!!!!!!!!!!!!! Jesus said unless a man is BORN AGAIN he cannot see the kingdom of God (John 3.3, 33-34-)!!!!!!!!!!
Here's a nice explanation of neural nets from the early days. The rest follows.
I 'grilled' ChatGPT on the content of Tom Clancy's "Jack Ryan" series of books.
Specifically about the character John Clarke, who by birth was John Kelly.
ChatGPT insisted that these 2 names were 2 different characters.
I tried to lead it to this conclusion by itself, which ended in "I do not have sufficient information to answer the question," and it went on to reiterate its knowledge cut-off point of 2021 (which had nothing to do with it).
I think an AI can 'read' those books in a split second, yet it was unable to work out that 'Kelly' and 'Clarke' were the same 'person'.
So I doubt its ability to correlate information and come to a correct conclusion.
It is of course possible that this was due to its limited set of training data; I will try again later.
https://smartlearningai.blogspot.com/2023/05/future-of-human-existence.html
The current version does learn.
In a world where the average rube barely knows anything, this blinking light, spewing knowledge-shaped objects at us, would be better referred to as 'Chad GPT' 🥲
AI – Already Insidious. MillionAIres gAIn. Most feel pAIn.
Yes to follow up video!
Amazing tool, ChatGPT. I love it.
Yes, follow up video please
Hi Arvin, again a great video! Yes please, do a followup on neural networks!
another fascinating winner!
UT
Please describe the feedback system used to control ChatGPT so that it eliminates some of its previous answers to human-generated questions.
Outstanding. Please make the next video looking at GPT-4, and how we can now feed it comparative text alongside our input so it identifies and produces writing in the "style" we are looking for. Thank you Arvin!
Google Bard has been launched to compete against ChatGPT. Bard is real-time and hence more dynamic than ChatGPT, which is not being fed the latest info to give the latest answers.
Actually, for explaining quantum mechanics, the answer is simply wrong, but it's an answer you would commonly find from many sources. The great thing is that if you ask about larger physical objects and how quantum mechanics would affect those, it will probably give you a fantastic answer about how quantum mechanics affects larger objects as well. It's a very useful tool for me now, but it has limits. You ask a question, get the answer, do some research on the answer, ask about what seems to be wrong, repeat the process as needed, and then you get the ultimate answer. It's so good that it gets creepy sometimes.
I'll be most amazed if its grammatical correctness also happens at the neural level, tacking on the next word without any post-processing, considering that it seems to never miss on grammar.
Please make the follow-up video
This was the best video I've seen so far on the topic. Thanks!
Open the pod bay door, please HAL
Y-yes, it kinda says it's not an autocomplete, then proceeds to describe what seems to be a fine-tuned, human-example-driven form of… autocomplete. But autocomplete can't analyze what it did, can it? I asked Bing Chat to generate a list of 27 random words (which have no pattern per se, it's just random words), and picked 27 because I thought it'd be a quantity relatively unlikely to match pre-trained lists of random words. It's a random number, too. Then I asked the chatbot how it determined when there were exactly 27 and it should stop enumerating random words.
It told me it counted from 1 to 27, and that it's simple.
Yyyes, it's simple if you have a mind.
If you have a PC, it usually goes "for (let i = 1, n = 27; i <= n; ++i) list(randomWord())" and the limit of 27 is somewhere in memory, as a variable (n there), or a numeric constant which is still stored in the code.
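Spelled out, that one-liner looks something like the minimal sketch below (randomWord here is a hypothetical stand-in that just draws from a small sample vocabulary; the point is only that the limit and the counter exist as concrete values in memory):

```js
// Minimal sketch of the explicit-counter version of the loop above.
// `randomWord` is a hypothetical helper used for illustration only;
// a real program would draw from a proper word source.
const vocabulary = ["apple", "river", "quantum", "lantern", "maple", "orbit"];

function randomWord() {
  return vocabulary[Math.floor(Math.random() * vocabulary.length)];
}

const list = [];
for (let i = 1, n = 27; i <= n; ++i) {
  // The limit 27 lives in memory as the variable `n`,
  // and `i` is the counter incremented up to it.
  list.push(randomWord());
}

console.log(list.length); // 27
console.log(list.join(", "));
```

In a neural network there is no obvious single place where that n lives, which is exactly the question: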
Where was Bing Chat's "neuron" holding the value of 27?
There was one? How did it allocate one, and proceed to increment another neuron until it got to 27?
Has it rather "learned" the two concepts of "27-ness" and "random word", along with "list" and the act of listing?
That's how it plausibly seems. But still…
If I ask the bot to reflect on what it just did, can it really, out of mere words in a context, produce an explanation of the counting algorithm?
And that even though it didn't "really" count using two variables, neurons, weights between neurons, whatever?
Full conversation pasted here: https://80.style/#/furrball/bing_chats/20230516_twentyLseven_random_words
I am sorry but no human knows how it works. This is scary and I am not sure you understand this.
Started learning about new technology
Thank you for this! Very informative. And, yes, please keep making videos explaining AI further. Also, please keep us informed as AI develops over time. 😃