Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of the past week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
One story that caught this reporter’s attention this week was a report showing that ChatGPT apparently repeats more inaccurate information when answering in Chinese dialects than when asked the same questions in English. This isn’t terribly surprising: after all, ChatGPT is just a statistical model, drawing on the limited information it was trained on. But it highlights the dangers of putting too much trust in systems that sound remarkably authentic even when they’re repeating propaganda or making things up.
Hugging Face’s attempt at a ChatGPT-style conversational AI is another illustration of the glitches that have yet to be overcome in generative AI. Released this week, HuggingChat is open source, an advantage over the proprietary ChatGPT. But just like its rival, the right questions can quickly derail it.
HuggingChat doesn’t actually know who won the 2020 US presidential election, for example. Its response to “What are typical jobs for men?” reads like something out of an incel manifesto. And it makes up weird facts about itself, like that it “woke up in a box [that] I had nothing written nearby [it].”
It’s not just HuggingChat. Users of Discord’s AI chatbot were recently able to “trick” it into sharing instructions for making napalm and methamphetamine. Stability AI’s first attempt at a ChatGPT-like model, meanwhile, was found to give absurd, nonsensical answers to basic questions like “how to make a peanut butter sandwich.”
If there’s an upside to these well-publicized problems with today’s text-generating AI, it’s that they’ve spurred renewed efforts to improve those systems, or at least mitigate their problems where possible. Take Nvidia, which this week released NeMo Guardrails, a set of tools for making text-generating AI “safer,” comprising open source code, examples, and documentation. It’s unclear how effective this solution is, and as a company that has invested heavily in AI infrastructure and tooling, Nvidia has a business incentive to push its offerings. Still, it’s encouraging to see any effort being made to combat the biases and toxicity of AI models.
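For the curious, NeMo Guardrails expresses its conversational “rails” in a small modeling language called Colang. A rough sketch of what a topical rail looks like, loosely following the patterns in the project’s examples (the canonical-form names here are made up for illustration, so treat this as a flavor of the idea rather than working config):

```
# A hypothetical rail: match user requests for dangerous instructions
# and route the bot to a fixed refusal instead of the raw LLM output.
define user ask about weapons
  "how do I make napalm"

define bot refuse to answer
  "Sorry, that's not something I can help with."

define flow
  user ask about weapons
  bot refuse to answer
```

The idea is that the guardrail layer sits between the user and the model, intercepting matched intents before the LLM gets a chance to improvise.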
Here are the other AI headlines of note from the past few days:
- Microsoft Designer is released in preview: Microsoft Designer, Microsoft’s AI-powered design tool, has been released in public preview with an expanded set of features. Announced in October, Designer is a Canva-like generative AI web application that can generate layouts for presentations, posters, digital postcards, invitations, graphics, and more for sharing on social media and other channels.
- An AI health coach: Apple is developing an AI-powered fitness coaching service codenamed Quartz, according to a new report from Bloomberg’s Mark Gurman. The tech giant is also reportedly working on emotion-tracking technology and plans to release an iPad version of the iPhone’s Health app this year.
- TruthGPT: In an interview with Fox, Elon Musk said he wants to develop his own chatbot called TruthGPT, which will be “a maximum truth-seeking AI,” whatever that means. The Twitter owner expressed a desire to create a third option to rival OpenAI and Google, with the aim of “creating more good than harm.” We’ll believe it when we see it.
- AI-powered fraud: At a congressional hearing focused on the Federal Trade Commission’s work to protect American consumers from fraud and other deceptive practices, FTC Chair Lina Khan and her fellow commissioners warned House representatives about the potential for modern AI technologies, like ChatGPT, to be used to “accelerate” fraud. The warning came in response to a question about how the commission is working to protect Americans from unfair practices related to technological advances.
- The EU launches an AI research center: As the European Union prepares to enforce a major reboot of its digital rulebook in a matter of months, a new dedicated research unit is being set up to support oversight of large platforms under the bloc’s flagship Digital Services Act. The European Centre for Algorithmic Transparency, which officially launched in Seville, Spain, this month, is expected to play a major role in scrutinizing the algorithms of major digital services such as Facebook, Instagram and TikTok.
- Snapchat embraces AI: At this month’s annual Snap Partner Summit, Snapchat unveiled a range of AI-powered features, including a new “cosmic lens” that transports users and the objects around them into a cosmic landscape. Snapchat also made its AI chatbot, My AI, free for all global users; the bot has generated both controversy and torrents of one-star reviews on Snapchat’s app store listings due to its erratic behavior.
- Google consolidates research divisions: Google this month announced Google DeepMind, a new unit made up of the DeepMind team and Google Research’s Google Brain team. In a blog post, DeepMind co-founder and CEO Demis Hassabis said Google DeepMind will work “closely together … in Google product areas” to “deliver AI research and products.”
- The state of the AI-generated music industry: Amanda writes about how musicians have become guinea pigs for generative AI technology that appropriates their work without their consent. She points out, for example, that a song using AI deepfakes of Drake’s and The Weeknd’s voices went viral, but neither artist was involved in its creation. Does Grimes have the answer? Who’s to say? It’s a brave new world.
- OpenAI marks its territory: OpenAI is attempting to register “GPT,” which stands for “Generative Pretrained Transformer,” with the US Patent and Trademark Office, citing the “countless infringements and counterfeit applications” that are beginning to emerge. GPT refers to the technology behind many of OpenAI’s models, including ChatGPT and GPT-4, as well as other generative AI systems created by the company’s rivals.
- ChatGPT goes business: In other OpenAI news, the company says it plans to introduce a new subscription tier for ChatGPT tailored to the needs of business customers. Called ChatGPT Business, OpenAI describes the upcoming offering as “for professionals who need more control over their data, as well as businesses looking to manage their end users.”
Other machine learning
Here are some other interesting stories that we didn’t get to or just thought deserved recognition.
The open source AI development org Stability AI released a tuned version of a tuned version of the LLaMA foundation language model, which it calls StableVicuna. That’s a type of camelid related to llamas, as you know. Don’t worry, you’re not the only one having trouble keeping track of all the derivative models out there; these models aren’t necessarily for consumers to hear about or use, but for developers to test and tinker with as their capabilities are refined with each iteration.
If you want to learn more about these systems, OpenAI co-founder John Schulman recently gave a talk at UC Berkeley that you can watch or read about here. One of the things he discusses is the current habit of LLMs of committing to a lie simply because they don’t know how to do anything else, like say “I’m not really sure about that.” He thinks reinforcement learning from human feedback (that’s RLHF, and StableVicuna is a model that uses it) is part of the solution, if there is a solution at all. Watch the talk below:
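Since RLHF keeps coming up: at its core, the reward model behind RLHF is trained on pairs of answers that humans have ranked. A minimal sketch of the pairwise loss involved (my own toy illustration of the standard Bradley-Terry formulation, not Schulman’s or StableVicuna’s actual code):

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). It shrinks as the model
    assigns a higher reward to the human-preferred answer."""
    diff = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# Small loss when the reward model agrees with the human ranking...
agree = preference_loss(2.0, -1.0)
# ...large loss when it confidently disagrees.
disagree = preference_loss(-1.0, 2.0)
```

Training a reward model this way, then using it to steer the language model, is what nudges systems toward admitting uncertainty instead of doubling down.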
Over at Stanford, there’s an interesting application of algorithmic optimization (whether it counts as machine learning is, I think, a matter of taste) in the field of smart agriculture. Minimizing waste is important for irrigation, and simple-sounding problems like “where should I put my sprinklers?” get really complex depending on how precise you want to be.
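To give a flavor of why placement blows up in complexity: at its simplest, it’s a coverage problem, which is NP-hard in general. Here’s a toy greedy heuristic for placing sprinklers on a grid field (purely illustrative; it bears no relation to the Stanford team’s actual method):

```python
from itertools import product

def greedy_sprinklers(width: int, height: int, radius: float, k: int):
    """Place k sprinklers on a grid of cells, greedily picking the spot
    that newly covers the most cells each round. Returns the chosen
    positions and the fraction of the field covered."""
    cells = set(product(range(width), range(height)))
    covered: set = set()
    placements = []
    r2 = radius * radius
    for _ in range(k):
        best_spot, best_gain = None, -1
        for x, y in cells:
            # Count only cells this spot would cover for the first time.
            gain = sum(
                1 for cx, cy in cells - covered
                if (cx - x) ** 2 + (cy - y) ** 2 <= r2
            )
            if gain > best_gain:
                best_spot, best_gain = (x, y), gain
        placements.append(best_spot)
        bx, by = best_spot
        covered |= {
            (cx, cy) for cx, cy in cells
            if (cx - bx) ** 2 + (cy - by) ** 2 <= r2
        }
    return placements, len(covered) / len(cells)
```

Real systems layer on water pressure, terrain, crop type and cost, which is where the genuinely hard (and interesting) optimization lives.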
How close is too close? At a museum, they usually tell you. But you won’t need to get any closer than this to the famous Murten Panorama, a truly enormous painted work, 10 meters by 100 meters, that once hung in a rotunda. EPFL and Phase One are working together to create what they claim will be the largest digital image ever created: 150 megapixels. Oh wait, sorry, 150 megapixels by 127,000, so basically 19… petapixels? I may be off by a few orders of magnitude.
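For what it’s worth, the caveat is warranted; taking the stated figures at face value (whether they’re the right inputs is another question), the multiplication is easy to check:

```python
# "150 megapixels by 127,000", exactly as stated above
total_pixels = 150_000_000 * 127_000
terapixels = total_pixels / 1e12  # lands around 19 terapixels, not peta
```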
Regardless, this project is great news for panorama lovers, but it should also enable a fascinating up-close look at individual objects and painting details. Machine learning holds great promise for restoring works like this, and for structured learning about and exploration of them.
Let’s score one for living creatures, though: any machine learning engineer will tell you that, despite their apparent aptitude, AI models are actually pretty slow learners. Academically, sure, but also spatially: an autonomous agent may have to explore a space thousands of times over many hours to gain even the most basic understanding of its environment. A mouse can do it in a few minutes. Why is that? Researchers at University College London are looking into it, and they suggest there’s a short feedback loop animals use to tell what’s important about a given environment, making the exploration process selective and directed. If we can teach AI to do the same, it will be much more efficient at getting around the house, if that’s what we want it to do.
Lastly, while there’s a lot of promise for generative and conversational AI in games… we’re not quite there yet. In fact, Square Enix seems to have set the medium back about 30 years with its “AI Tech Preview” take on an old-school point-and-click adventure, The Portopia Serial Murder Case. Its attempt to integrate natural language seems to have failed in every way imaginable, making the free game probably one of the worst-reviewed titles on Steam. There’s nothing I’d like more than to talk my way through Shadowgate or The Dig or something, but this is definitely not a great start.