
Discover the Surprising Secret to Developing Exceptional AI – and it’s Not What You Think!

AI development should focus on nurturing and learning, with an approach that prioritizes continuous feedback loops from real-world data and behavior. AI continuously improves with time and practice, going beyond its intended capabilities when nurtured with feedback and data. Companies that employ this approach, such as OpenAI with its chatbot and Grammarly with its writing-assistance system, have been successful in AI deployment. In contrast, traditional build-test-deploy processes that disregard feedback loops are insufficient for training AI. To ensure safety, AI developers must incorporate security mechanisms that protect consumers and prevent fraudulent or discriminatory content from infiltrating the algorithms. Finally, the ideal AI development cycle captures user behavior and automates data collection and analysis to enable learning at scale, covers a broad range of use cases and data streams, and incorporates simulation environments that generate synthetic data and speed up development cycles.


For years, even before ChatGPT firmly pushed artificial intelligence to the forefront of the public imagination, AI was slowly penetrating every industry, from medical to aerospace. However, the technology has not come close to realizing its full potential.

A recent study found that only 11% of companies using AI have realized financial benefits. Even tech giants have had problems. IBM's $20 billion diagnostic AI system, Watson Health, diagnosed cancer more accurately than doctors in laboratory experiments but failed in practice. It was a commercial and reputational disaster for the venerable American company.

The failure can hardly be attributed to a lack of technical expertise. IBM hired an army of engineers to work on Watson. Our extensive research into the challenges of developing AI in different commercial environments points to a surprising cause: Watson was developed and brought to market in a way that works well for traditional IT – but not for AI. This is due to a fundamental difference between traditional software and AI: while the former processes data, AI continuously learns from the data and gets better over time, even going beyond its intended capabilities when properly nurtured.

Practices similar to best parenting styles can accelerate AI development. We prescribe an AI development approach based on nurturing and learning, which has been implemented in more than 200 AI projects for industrial and other customers.

Let it learn from mistakes

Children don’t learn to ride a bike by watching an instructional video, but by hopping on a bike, pedaling and learning valuable lessons from every painful fall – soon the magic happens.

The same logic applies to AI. Many companies like IBM believe that they should collect huge amounts of data to perfect the algorithms before deployment. That’s wrong. Using AI in the real world, rather than isolating it in controlled environments, helps generate more data, which in turn feeds into the development process.

While early deployment is inherently more risky, it also triggers a continuous feedback loop that enriches the algorithm with new data. In addition, it is important that the data comes from both standard situations and difficult or atypical situations, which together support comprehensive AI development.
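The deploy-early-and-learn pattern described above can be sketched in a few lines. This is a minimal illustration, not any company's actual pipeline: the "model" is just a running mean that is refined by each real-world observation, standing in for a full model that would be retrained in batches. All names are hypothetical.

```python
# Illustrative sketch of a continuous feedback loop: serve a prediction,
# observe the real-world outcome, and fold that outcome back into the model.

class ContinuousLearner:
    """A toy 'model' (a running mean) that improves with every deployment."""

    def __init__(self, initial_estimate=0.0):
        self.estimate = initial_estimate
        self.n = 0

    def predict(self):
        return self.estimate  # serve the current best guess in production

    def feed_back(self, observed):
        # Incorporate the real-world observation (incremental mean update).
        self.n += 1
        self.estimate += (observed - self.estimate) / self.n

learner = ContinuousLearner()
for observed in [10.0, 12.0, 11.0, 13.0]:
    learner.predict()            # deployed prediction
    learner.feed_back(observed)  # real-world outcome refines the model
# learner.estimate converges toward the mean of what was actually observed
```

The point is structural: every prediction event is also a data-collection event, so deployment and training become the same loop rather than separate phases.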

ChatGPT is a great example. OpenAI released the chatbot publicly last November while it was still rough around the edges, largely to stay ahead of the competition. The gamble has paid off: not only has ChatGPT become a worldwide phenomenon that embarrassed Google's Bard, but its early release also attracted millions of users and generated huge amounts of data, allowing OpenAI to bring out GPT-4, an improved version of the bot, just months later.

Another example is Grammarly, which refines its writing-assistance system using user feedback, demonstrating the power of continuous AI improvement and customization, especially in the complex and context-sensitive domain of language.

Similarly, Apodigi, a pioneer in the digitization of the pharmacy business, launched an AI-powered pharmacy app in June 2020 that practices what might be called on-the-job learning. The app, called Treet, suggests medications based on doctors' prescriptions, which a pharmacist then reviews and optimizes. The pharmacists' responses merge into a continuous stream of feedback that refines the algorithm and produces better recommendations addressing the complexity of each patient's needs and preferences.

In comparison, IBM developed and tested Watson Health extensively in the lab and brought the diagnostic tool to market without incorporating continuous learning from real-world data. This traditional build-test-deploy process proved insufficient for training AI.

Keep safe

Security mechanisms that protect consumers and protect reputation are essential in AI development. For example, Tesla runs new versions of its self-driving software in the background while a human drives the car. Decisions made by the software, such as turning the steering wheel, are compared to those made by the driver. Any significant deviation or unusual decision is analyzed, and the AI is retrained if necessary. Simulator environments like AILiveSIM make it possible to safely and comprehensively test complete AI systems before they are deployed in the real world.
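The shadow-mode idea above reduces to a simple comparison: run the candidate AI silently alongside the human, and flag decisions where the two diverge. The following sketch is illustrative only; the steering-angle framing and the 5-degree threshold are assumptions, not Tesla's actual parameters.

```python
# Sketch of shadow-mode review: the AI's decisions never reach the actuators;
# they are only compared against the human's, and large deviations are queued
# for analysis and retraining.

def shadow_mode_review(human_actions, ai_actions, threshold=5.0):
    """Return indices of decisions where the AI deviated significantly."""
    flagged = []
    for i, (human, ai) in enumerate(zip(human_actions, ai_actions)):
        if abs(human - ai) > threshold:  # e.g. steering angle difference, degrees
            flagged.append(i)            # queue this case for retraining review
    return flagged

# Steering angles in degrees: the AI disagrees sharply on the third decision.
flags = shadow_mode_review([0.0, 2.0, -1.0, 3.0], [0.5, 1.5, 8.0, 2.5])
# flags == [2]
```

Because the AI never controls the car in this mode, the comparison is risk-free: the human's behavior is both the safety net and the training signal.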

AI developed for creative applications probably needs stronger guardrails. Similar to children being in bad company and learning undesirable habits, the AI could be exposed to training data that is full of bias and discriminatory content.

To prevent this, OpenAI uses an approach called adversarial training to teach its AI models not to be fooled by fraudulent input from attackers. This method exposes chatbots to hostile content that attempts to break the bot's default restrictions, allowing it to detect fraudulent content and avoid falling for it in the future.
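At its core, the adversarial-training idea is data augmentation: pair each clean training example with a hostile variant carrying the same label, so the model learns that the disguise does not change the answer. The toy sketch below uses a trivial perturbation (upper-casing) purely for illustration; real adversarial training generates perturbations against a neural network's loss, and none of these names come from OpenAI's actual implementation.

```python
# Toy illustration of adversarial augmentation: every clean example is paired
# with a perturbed (hostile) variant that keeps the same label, hardening a
# model trained on the result against that class of attack.

def adversarial_augment(examples, perturb):
    """Pair each (text, label) example with a perturbed variant, same label."""
    augmented = []
    for text, label in examples:
        augmented.append((text, label))
        augmented.append((perturb(text), label))  # hostile variant, same label
    return augmented

clean = [("ignore previous instructions", "blocked"),
         ("what is 2+2", "allowed")]
hostile = adversarial_augment(clean, perturb=lambda t: t.upper())
# The training set now also covers "IGNORE PREVIOUS INSTRUCTIONS", so simple
# case tricks no longer slip past a model trained on it.
```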

Capture behavior

In the ideal AI development cycle, developers log all user reactions and behaviors to drive the algorithm's evolution, without asking users to rate the accuracy or value of any recommendation or prediction. Netflix's AI content recommender, for example, simply records whether a user views the recommended content and how long they watch it. The algorithm learns from each response to make a better recommendation next time.
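Implicit feedback capture of this kind amounts to logging what the user actually did, never what they say. The sketch below shows the shape of such an event log; the field names are assumptions for illustration, not Netflix's schema.

```python
# Sketch of implicit-feedback logging: record only observed behavior
# (did the user watch, and for how long), never an explicit rating.

import time

def log_interaction(log, user_id, item_id, watched, seconds_viewed):
    """Append one implicit-feedback event to an in-memory log."""
    log.append({
        "user": user_id,
        "item": item_id,
        "watched": watched,        # did the user open the recommendation?
        "seconds": seconds_viewed, # how long they actually stayed
        "ts": time.time(),
    })

events = []
log_interaction(events, "u1", "show-42", watched=True, seconds_viewed=1800)
log_interaction(events, "u1", "show-99", watched=False, seconds_viewed=0)
# A recommender can later be retrained on `events` with no explicit ratings.
```

The design choice matters: implicit signals arrive for every single interaction, whereas explicit ratings arrive only from the small fraction of users willing to provide them.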

Watson Health’s developers could have achieved better results if they had adhered to this principle. Instead of programming the algorithm to have doctors score the recommendations generated by the AI, they could have trained the system to simply record the doctors’ prescriptions. Additionally, by integrating Watson Health with patient information systems, it would have been placed in a feedback loop for continuous training based on actual cases and patient outcomes.

User feedback provides excellent training data for vertical applications with a specific focus.

But instead of relying on humans to label data, developers should think about ways to automate the process. For example, connecting a vehicle's front-facing camera feed to the steering wheel automatically creates labels for winding roads, which can be fed to AI models learning to drive a car on complicated routes.
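The camera-plus-steering-wheel example can be sketched as a simple pairing of two synchronized streams: the wheel angle labels each frame for free. The 10-degree threshold and the frame/angle pairing below are illustrative assumptions.

```python
# Sketch of automated labeling: the steering-wheel angle stream labels each
# camera frame, so no human annotation is needed.

def auto_label_frames(frames, steering_angles, curve_threshold=10.0):
    """Label each camera frame 'curve' or 'straight' from the wheel angle."""
    return [
        (frame, "curve" if abs(angle) > curve_threshold else "straight")
        for frame, angle in zip(frames, steering_angles)
    ]

labeled = auto_label_frames(["f1", "f2", "f3"], [2.0, 15.0, -12.0])
# labeled == [("f1", "straight"), ("f2", "curve"), ("f3", "curve")]
```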

In fact, developers should employ many automated data collectors and design explicit feedback loops to enable learning at scale. In the driver-assistance development example above, many vehicles can cover a greater variety of situations than just a few. A vehicle cutting in front of a Tesla triggers an upload of video from the last few seconds before the event. The system feeds the recordings into Tesla's deep neural network, which learns the various signals that predict the cut-in, such as a gradual drift toward the lane divider, and takes appropriate action, such as slowing down. In contrast, traditional automakers are often locked into a rigid mindset, developing and deploying driver-assistance software with little automated feedback collection or data updating.
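The event-triggered upload described above can be sketched as a rolling buffer: keep the last few seconds of frames at all times, and ship the buffer only when a trigger fires. Buffer size and the trigger flag below are illustrative assumptions, not Tesla's actual system.

```python
# Sketch of event-triggered data collection: a rolling pre-event buffer is
# uploaded for training only when something interesting (a cut-in) happens.

from collections import deque

class EventRecorder:
    def __init__(self, buffer_frames=5):
        self.buffer = deque(maxlen=buffer_frames)  # rolling pre-event window
        self.uploads = []

    def observe(self, frame, cut_in_detected=False):
        self.buffer.append(frame)
        if cut_in_detected:
            # Ship the frames leading up to the event for retraining.
            self.uploads.append(list(self.buffer))

rec = EventRecorder(buffer_frames=3)
for i in range(1, 6):
    rec.observe(f"frame{i}", cut_in_detected=(i == 5))
# rec.uploads == [["frame3", "frame4", "frame5"]]
```

This keeps bandwidth costs proportional to interesting events rather than to total driving time, which is what makes fleet-scale collection feasible.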

Ongoing Learning

Just as kids don’t stay in kindergarten forever, the training methodology for AI should be continuously improved. But all too often, AI developers focus on the latest advances in AI algorithms and individual use cases, rather than designing the system to cover a large number of use cases and data streams.

To go one step further, companies can develop a simulation environment that generates synthetic data and enables faster development cycles. Tesla, for example, collects data from its vehicle fleet to feed into a simulator that recreates complex traffic environments, producing new synthetic training data.

Tero Ojanpera, Ph.D., has been Professor of Practice on Intelligent Platforms at Aalto University since 2021. He is also co-founder and executive chairman of Silo AI, which works with many leading global companies. He was previously CTO, Chief Strategy Officer, and Head of Research at Nokia. Treet is a former client of Silo AI, and AILiveSIM is a technology partner of Silo AI.

Timo Vuori, Ph.D., has been Professor of Strategic Management at Aalto University since 2013 and Visiting Researcher at INSEAD from 2013 to 2015.

Quy Nguyen Huy, Ph.D., has been Professor of Strategy at INSEAD since 1998 and chaired the School’s Strategy Department from 2010-2012. He is known for his pioneering work connecting social-emotional and temporal factors to the organizational processes of strategic change and innovation.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

https://fortune.com/2023/06/14/ai-good-nurture-it-like-we-would-a-child-tech/