Skip to content

Google brings generative AI to cybersecurity


A new trend is emerging in the generative AI space, generative AI for cybersecurity, and Google is among those looking to get in on the ground floor.

At today’s RSA Conference 2023, Google Announced Cloud Security AI Workbench, a cybersecurity suite powered by a specialized “security” AI language model called Sec-PaLM. A branch of Google Palm model, Sec-PaLM is “tuned for security use cases,” Google says, incorporating security intelligence such as research on software vulnerabilities, malware, threat indicators, and behavioral threat actor profiles.

Cloud Security AI Workbench encompasses a range of new AI-powered tools, such as Mandiant’s Threat Intelligence AI, which will leverage Sec-PaLM to find, summarize and act on security threats. (Remember that Google bought Mandiant in 2022 for $5.4 billion). VirusTotal, another Google property, will use Sec-PaLM to help subscribers analyze and explain the behavior of malicious scripts.

Elsewhere, Sec-PaLM will help customers of Chronicle, Google’s cloud cybersecurity service, search for security events and interact “conservatively” with the results. Meanwhile, users of Google’s Security Command Center AI will get “human-readable” explanations of attack exposure courtesy of Sec-PaLM, including affected assets, recommended mitigations, and risk summaries for security findings, compliance and privacy.

“While generative AI has recently captured the imagination, Sec-PaLM builds on years of fundamental AI research from Google and DeepMind, and the deep expertise of our security teams,” Google wrote in a blog post this morning. “We have only just begun to realize the power of applying generative AI to security, and we look forward to continuing to leverage this experience for our customers and driving advancements in the security community.”

Those are pretty bold ambitions, particularly considering that VirusTotal Code Insight, the first tool in the Cloud Security AI Workbench, is only available in limited preview right now. (Google says it plans to roll out the rest of the offerings to “trusted testers” in the coming months.) Frankly, it’s not clear how well Sec-PaLM works, or doesn’t work, in practice. Sure, the “recommended mitigations and risk summaries” sound helpful, but are the suggestions that much better or more accurate because they were produced by an AI model?

After all, AI language models, no matter how advanced, make mistakes. And they are susceptible to attacks like immediate injectionwhich can cause them to behave in ways their creators did not intend.

That doesn’t stop the tech giants, of course. In March, Microsoft thrown out Security Copilot, a new tool that aims to “summarize” and “make sense” of threat intelligence using OpenAI generative AI models, including GPT-4. In press materials, Microsoft, similar to Google, claimed that generative AI would better equip security professionals to combat new threats.

The jury is very out on that. In truth, generative AI for cybersecurity might be more of a hype than anything else: studies on its effectiveness are dearth. We’ll see the results very soon with any luck, but in the meantime, take the claims from Google and Microsoft with a grain of salt.


—————————————————-

Source link

For more news and articles, click here to see our full list.