
This Week in AI: AI heavyweights attempt to tip the regulatory balance



Keeping up with an industry that moves as fast as AI is a tall order. So until an AI can do it for you, here’s a helpful roundup of the past week’s stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

This week, the movers and shakers of the AI industry, including OpenAI CEO Sam Altman, embarked on a goodwill tour with lawmakers, defending their respective views on AI regulation. Speaking to reporters in London, Altman warned that the EU’s proposed AI Act, due to be finalized next year, could lead OpenAI to withdraw its services from the bloc.

“We will try to comply, but if we can’t comply, we will stop operating,” he said.

Google CEO Sundar Pichai, also in London, stressed the need for “proper” AI guardrails that don’t stifle innovation. And Microsoft’s Brad Smith, meeting with lawmakers in Washington, proposed a five-point plan for the public governance of AI.

To the extent there is a common thread, the tech titans have expressed a willingness to be regulated, as long as it doesn’t interfere with their business ambitions. Smith, for example, declined to address the unresolved legal question of whether training AI on copyrighted data (as Microsoft does) is permitted under the fair use doctrine in the US. An unfavorable answer could prove costly for Microsoft and its rivals doing the same.

Altman, for his part, appeared to take issue with provisions in the AI Act that would require companies to publish summaries of the copyrighted data they used to train their AI models, and that would hold them partially responsible for how their models are deployed in downstream systems. He also questioned requirements to reduce the power consumption and resource use of AI training, a notoriously compute-intensive process.

The regulatory path abroad remains uncertain. But in the US, the OpenAIs of the world may get their way in the end. Last week, Altman courted members of the Senate Judiciary Committee with carefully crafted statements about the dangers of AI and his recommendations for regulating it. Sen. John Kennedy (R-LA) was particularly deferential: “Folks, this is your chance to tell us how to do this right … Speak in plain language and tell us what rules to implement,” he said.

In comments to The Daily Beast, Suresh Venkatasubramanian, director of the Center for Technology Responsibility at Brown University, may have summed it up best: “We don’t ask arsonists to be in charge of the fire department.” And yet that is what is in danger of happening here with AI. It will be up to lawmakers to resist the sweet talk of tech executives and clamp down where necessary. Only time will tell whether they do.

Here are the other AI headlines of note from the past few days:

  • ChatGPT reaches more devices: Despite being available only in the US and only on iOS ahead of an expansion to 11 more global markets, OpenAI’s ChatGPT app is off to a stellar start, Sarah writes. The app has already surpassed half a million downloads in its first six days, app trackers say. That ranks it as one of the top-performing new app launches both this year and last, second only to the February 2022 arrival of the Trump-backed Twitter clone Truth Social.
  • OpenAI proposes a regulatory body: AI is developing fast enough, and the dangers it can pose are clear enough, that the OpenAI leadership believes the world needs an international regulatory body similar to the one that governs nuclear power. The OpenAI co-founders argued this week that the pace of AI innovation is so fast that we can’t expect existing authorities to properly police the technology, so we need new ones.
  • Generative AI comes to Google Search: Google announced this week that it’s starting to open up access to new generative AI capabilities in Search, after teasing them at its I/O event earlier this month. With this new update, Google says users can easily get up to speed on a new or tricky topic, discover quick tips for specific questions, or get insights like customer ratings and prices on product searches.
  • TikTok tests a bot: Chatbots are all the rage, so it’s no surprise to learn that TikTok is testing its own too. Called “Tako,” the bot is in limited testing in select markets, where it will appear on the right side of the TikTok interface above a user’s profile and other buttons for likes, comments, and bookmarks. When tapped, users can ask Tako various questions about the video they’re watching or discover new content by asking for recommendations.
  • Google in an AI pact: Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what’s being called an “AI Pact,” ostensibly an interim set of voluntary rules or standards while formal AI regulations are developed. According to a memo, the bloc intends to launch the AI Pact “involving all major European and non-European AI players on a voluntary basis” ahead of the legal deadline of the aforementioned pan-European AI Act.
  • People, but made with AI: With its AI DJ, Spotify trained an AI on the voice of a real person: that of its head of Cultural Partnerships and podcast host, Xavier “X” Jernigan. Now the streamer may turn that same technology to advertising, it seems. According to statements made by The Ringer founder Bill Simmons, the streaming service is developing AI technology that can use a podcast host’s voice to generate host-read ads, without the host actually having to read and record the ad copy.
  • Product images via generative AI: At its Google Marketing Live event this week, Google announced Product Studio, a new tool that lets merchants easily create product imagery using generative AI. Brands will be able to create images within Merchant Center Next, Google’s platform for businesses to manage how their products show up on Google Search.
  • Microsoft bakes a chatbot into Windows: Microsoft is building its ChatGPT-based Bing experience directly into Windows 11, and adding changes that let users ask the agent for help navigating the operating system. The new Windows Copilot is meant to make it easier for Windows users to find and change settings without having to dig into Windows submenus. But the tools will also let users summarize the contents of the clipboard or compose text.
  • Anthropic raises more cash: Anthropic, the prominent generative AI startup co-founded by OpenAI veterans, has raised $450 million in a Series C funding round led by Spark Capital. Anthropic declined to disclose how the round valued its business, but a pitch deck we obtained in March suggests it could be in the $4 billion ballpark.
  • Adobe brings generative AI to Photoshop: Photoshop got an infusion of generative AI this week with the addition of a number of features that allow users to stretch images beyond their borders with AI-generated backgrounds, add objects to images, or remove them with a new generative fill feature, with far more precision than the previously available content-aware fill. For now, the features will only be available in the beta version of Photoshop, but they’re already leaving some graphic designers appalled about the future of their industry.

Other machine learning

Bill Gates may not be an AI expert, but he is very rich, and he’s been right about things before. It turns out he is bullish on AI personal agents, as he told Fortune: “Whoever wins the personal agent, that’s the big thing, because you’ll never go to a search site again, you’ll never go to a productivity site again, you’ll never go to Amazon again.” Exactly how this would play out is not stated, but his instinct that people would rather not bother with a search engine or productivity site is probably not far off base.

Risk assessment in AI models is an evolving science, which is to say we know next to nothing about it. Google DeepMind (the newly formed superentity comprising Google Brain and DeepMind) and collaborators around the world are trying to get the ball rolling, and have produced a model evaluation framework for “extreme risks” such as “strong skills in manipulation, deception, cybercrime, or other dangerous capabilities.” Well, it’s a start.

Image Credits: SLAC

Particle physicists are finding interesting ways to apply machine learning to their work: “We have shown that we can infer very complicated high-dimensional beam shapes from astonishingly small amounts of data,” says Auralee Edelen of SLAC. They built a model that helps them predict the shape of the particle beam in the accelerator, something that ordinarily requires thousands of data points and lots of computing time. This approach is much more efficient and could help make accelerators everywhere easier to use. Next up: “demonstrating the algorithm experimentally on reconstructing full 6D phase space distributions.” OK!

Adobe Research and MIT collaborated on an interesting computer vision problem: telling which pixels in an image represent the same material. Since an object can comprise multiple materials, as well as colors and other visual aspects, this is a pretty subtle yet intuitive distinction. They needed to build a new synthetic dataset to do it, but at first it didn’t work. So they ended up fine-tuning an existing CV model on that data, and it succeeded. Why is it useful? Hard to say, but it’s cool.

From left: selection of materials; source video; segmentation; mask. Image Credits: Adobe/MIT

Large language models are usually trained primarily on English for a variety of reasons, but obviously the sooner they work as well in Spanish, Japanese, and Hindi, the better. BLOOMChat is a new model built on top of BLOOM that works with 46 languages today and is competitive with GPT-4 and others. It’s still pretty experimental, so don’t go to production with it, but it could be great for testing out a multi-language AI-adjacent product.

NASA just announced a new crop of SBIR II funding, and there are a couple of interesting pieces of AI in there:

Geolabe is detecting and predicting groundwater variation using AI trained on satellite data, and hopes to apply the model to a new constellation of NASA satellites launching later this year.

Zeus AI is working on algorithmically producing “3D weather profiles” from satellite imagery, essentially a volumetric version of the 2D maps we already have of temperature, humidity, and so on.

In space, your computing power is very limited, and while we can run some inference up there, training is out of the question. But IEEE researchers want to build a SWaP-efficient (size, weight, and power) neuromorphic processor for training AI models in situ.

Robots that operate autonomously in high-risk situations generally need a human minder, and PickNik is looking to make these bots communicate their intentions visually, such as showing that they are about to open a door, so the minder doesn’t have to intervene as much. Probably a good idea.
