NSA cybersecurity director says ‘buckle up’ for generative AI


At the RSA security conference in San Francisco this week, there has been a sense of inevitability in the air. In talks and panels at the sprawling Moscone Convention Center, at every vendor booth on the show floor, and in casual conversations in the hallways, you just know someone is going to mention generative AI and its potential impact on digital security and malicious hacking. The NSA’s director of cybersecurity, Rob Joyce, has felt it too.

“You can’t walk around RSA without talking about AI and malware,” he said Wednesday afternoon during his now-annual “State of the Hack” presentation. “I think we have all seen the explosion. I won’t say it’s delivered yet, but this is truly game-changing technology.”

In recent months, chatbots powered by large language models, such as OpenAI’s ChatGPT, have made years of machine learning research and development feel more concrete and accessible to people around the world. But there are practical questions about how these new tools will be manipulated and abused by bad actors: to develop and spread malware, fuel the creation of misinformation and inauthentic content, and extend attackers’ capabilities to automate their attacks. At the same time, the security community is eager to leverage generative AI to defend systems and gain a protective advantage. In these early days, however, it’s hard to say exactly what will happen next.

Joyce said the National Security Agency expects generative AI to supercharge already effective scams like phishing. Such attacks rely on convincing, compelling content to trick victims into inadvertently helping attackers, so generative AI has obvious uses for quickly creating tailored communications and materials.

“That native Russian hacker who doesn’t speak English well is no longer going to craft a crappy email to your employees,” Joyce said. “It’s going to be native-language English, it’s going to make sense, it’s going to pass the sniff test… So that’s here today, and we’re seeing adversaries, both nation-state and criminal, beginning to experiment with ChatGPT-type generation to give them English-language opportunities.”

Meanwhile, while AI chatbots may not be able to develop perfectly crafted novel malware from scratch, Joyce noted that attackers can use the coding capabilities these platforms do have to make smaller changes that could have a big effect. The idea would be to modify existing malware with generative AI, changing its characteristics and behavior enough that scanning tools like antivirus software won’t recognize and flag the new iteration.

“It’s going to help rewrite code and make it in ways that will change the signature and the attributes of it,” Joyce said. “That [is] going to be a challenge for us in the near term.”

In terms of defense, Joyce seemed hopeful about the potential for generative AI to aid big data analysis and automation. He cited three areas where the technology is “showing real promise” as an “accelerant for defense”: scanning digital logs, finding patterns in vulnerability exploitation, and helping organizations triage security issues. He cautioned, though, that before defenders and communities at large come to rely on these tools in daily life, they must first study how generative AI systems themselves can be manipulated and exploited.

Mostly, Joyce emphasized the murky and unpredictable nature of the current moment for AI and security, advising the security community to “buckle up” for what’s likely to come.

“I don’t expect some magical AI-generated technical capability that blows away all things,” he said. But “next year, if we’re here talking about a similar year in review, I think we’ll have a bunch of examples of where it’s been weaponized, where it’s been used, and where it’s succeeded.”
