

The Risks of AI and Deepfakes: A Wake-Up Call for Investors and Governments

Introduction

The recent viral deepfake video of an alleged explosion near the Pentagon has sent shockwaves across the world, highlighting the security risks posed by AI and deepfakes. As AI becomes increasingly intelligent and capable of self-directed activities, it is crucial to address the risks and challenges it poses. In this article, we will explore the dangers of AI and deepfakes, the impact they can have on assets and financial markets, and the initiatives governments are taking to mitigate these risks.

The Dangers of AI and Deepfakes

AI holds immense promise for transforming industries across the world, but it comes with significant risks and challenges. It can also be misused for malicious activities such as hacking, fraud, and disinformation, exacerbating existing cyber problems. The deepfake video of the alleged explosion near the Pentagon shows how AI can be abused to spread disinformation and create fake news, sowing chaos in financial markets and undermining public trust.

The Impact of Deepfakes on Financial Markets

The deepfake video of the alleged explosion near the Pentagon shows how deepfakes can affect traditional asset markets. The cybersecurity firm Kaspersky has published a study of the dark web revealing significant demand for deepfakes, with per-minute prices for deepfake videos ranging from $300 to $20,000. So far, deepfakes have mostly been used for cryptocurrency scams, but criminals could easily use them to deliberately manipulate financial markets, leading to significant losses for investors.

Initiatives by Governments to Mitigate the Risks

Given the risks posed by AI and deepfakes, governments around the world are taking steps to mitigate them. The US and UK, in particular, are working on a joint initiative to address the security challenges posed by self-directed AI. The initiative includes the establishment of an international research institution focused on AI, similar to CERN, the European particle-physics laboratory. The aim is to develop AI safely and to create AI-enabled tools to combat misuse such as disinformation.

There is also a proposal to establish a global AI monitoring body, similar to the International Atomic Energy Agency, to develop regulations and standards for the development and deployment of AI tools. This would include measures to establish watermarks to identify deepfakes and show where online content comes from.

The Challenges of Implementing Initiatives

Despite these promising initiatives, implementing them faces significant challenges. Drawing smaller corporate users and criminal groups into a licensing network would be far more complicated, and a great deal of open-source AI software that can be abused is already in circulation. Setting up a CERN-style AI institution could be costly, and it will be difficult to win international support for an IAEA-style monitoring body.

Conclusion

The risks of AI and deepfakes are ever-increasing, but mitigating these risks requires a coordinated effort by governments, businesses, and investors. While AI holds immense promise for transforming industries, including the financial sector, it comes with significant risks that can undermine its benefits. Initiatives such as the establishment of a joint US-UK international research institution focused on AI and a global AI monitoring body can go a long way in mitigating these risks. Still, they require significant investments and international support, making their implementation challenging.

Summary

The deepfake video of the alleged explosion near the Pentagon highlights the risks posed by AI and deepfakes to financial markets and public trust. Governments are taking steps to address these risks by establishing international research institutions focused on AI and a global monitoring body modelled on the International Atomic Energy Agency. Still, the challenges of implementing these initiatives are significant, and investors and businesses must be vigilant in their due diligence. Left unchecked, the risks posed by AI and deepfakes can lead to significant losses and undermine trust in technology.

—————————————————-


Something happened online last month that should make any investor gasp. A deepfake video of an alleged explosion near the Pentagon went viral after being retweeted by outlets such as Russia Today, sending US stock markets reeling.

Thankfully, US authorities quickly flooded social media with statements declaring the video fake and RT issued an embarrassed statement admitting that “it’s just an AI-generated image”. The markets then rebounded.

However, the episode has created a sobering backdrop to this week’s visit by British Prime Minister Rishi Sunak to Washington and his bid for a joint US-UK initiative to address the risks of AI.

There has recently been a growing chorus of alarm, both within and outside the tech industry, about the dangers of hyperintelligent, self-directed AI. Last week, more than 350 scientists issued a joint statement warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

These long-term “extinction” threats are grabbing the headlines. But Geoffrey Hinton, an academic and former Google employee who is considered one of the “godfathers of AI”, thinks the most immediate danger we should be concerned about is not that machines act independently, but that humans misuse them.

In particular, as Hinton recently said in a meeting at the University of Cambridge, the proliferation of artificial intelligence tools could drastically exacerbate existing cyber problems such as crime, hacking and disinformation.

There is already deep concern in Washington that deepfakes will poison the 2024 election race. Deepfakes that emerged this spring have already had an impact on Venezuelan politics. And this week Ukrainian hackers broadcast a deepfake video of Vladimir Putin on some Russian TV channels.

But the financial sphere is now emerging as another cause for concern. Last month the cybersecurity firm Kaspersky published an ethnographic study of the dark web, which found “significant demand for deepfakes,” with “per-minute prices of deepfake videos [ranging] from $300 to $20,000”. So far they have mostly been used for cryptocurrency scams, the study says. But the Pentagon deepfake video shows how they could affect traditional asset markets as well. “We may see criminals using it deliberately for [market] manipulation,” as a US security official told me.

So is there anything Sunak and US President Joe Biden can do? Not easily. The White House recently held formal discussions on transatlantic AI policies with the EU (from which Britain, as a non-EU member, was excluded). But this initiative has not yet produced any tangible pact. Both sides acknowledge the desperate need for cross-border AI policies, but EU authorities are keener on top-down regulatory controls than Washington is, and determined to keep US tech groups at arm’s length.

So some US officials suspect it may be easier to kick-start international coordination with a bilateral AI initiative with the UK, given the recent release of a more business-friendly policy paper. There are close pre-existing intelligence ties, via the so-called Five Eyes security pact, and the two countries are home to a large chunk of the Western AI ecosystem (not to mention its financial markets).

Several ideas have been floated. One, pushed by Sunak, is to create a publicly funded international AI research institution similar to CERN, the European particle-physics laboratory. The hope is that this could develop AI in a safe way, as well as create AI-enabled tools to combat misuse such as misinformation.

There is also a proposal to establish a global AI monitoring body similar to the International Atomic Energy Agency; Sunak wants it to be based in London. A third idea is to create a global licensing framework for the development and deployment of artificial intelligence tools. This could include measures to establish “watermarks” that show where online content comes from and identify deepfakes.
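To make the watermarking idea concrete, here is a minimal sketch of signature-based content provenance, the principle behind proposals such as C2PA-style content credentials. Everything in it (the key, the helper names) is hypothetical and illustrative; it uses a symmetric HMAC from Python’s standard library for brevity, whereas real provenance schemes use public-key signatures so that anyone can verify content without holding the signing key.

```python
# Toy sketch of provenance tagging: a publisher attaches a keyed digest to
# content at publication time; any later edit to the bytes breaks the tag.
# NOTE: the signing key and helper names are hypothetical; real schemes
# (e.g. C2PA) use public-key signatures, not a shared secret.
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # hypothetical key held by the publisher

def sign_content(content: bytes) -> str:
    """Produce a provenance tag: a keyed digest of the content."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag it was published with."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

video_frame = b"...raw bytes of a published image or video frame..."
tag = sign_content(video_frame)

print(verify_content(video_frame, tag))                # True: provenance intact
print(verify_content(video_frame + b"edited", tag))    # False: content altered
```

The point is simply that any modification to the bytes invalidates the tag, so a platform could flag content whose credentials no longer verify, or that carries no credentials at all.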

These are all very sensible ideas that could – and should – be implemented. But that is unlikely to happen quickly or easily. Setting up a CERN-style AI institution could be very costly, and it will be difficult to win quick international support for an IAEA-style monitoring body.

And the big problem that haunts any licensing system is how to bring the wider ecosystem on board. The tech groups that dominate cutting-edge AI research in the West, such as Microsoft, Google and OpenAI, have indicated to the White House that they will collaborate on licensing ideas. Their business users would almost certainly fall in line as well.

However, drawing smaller corporate players – and criminal groups – into a licensing net would be much more difficult. And there is already plenty of open-source AI software in circulation that can be abused. The Pentagon deepfake video, for example, appears to have been made with rudimentary systems.

So the unfortunate truth is that, in the short term, the only realistic way to counter the risk of market manipulation is for financiers (and journalists) to do more due diligence and for government investigators to go after cybercriminals. If this week’s rhetoric from Sunak and Biden helps raise awareness of that, so much the better. But no one should be led to think that awareness alone will remove the threat. Investor, beware.
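As one concrete example of what such due diligence can look like in practice, here is a minimal sketch of a perceptual “average hash” check that a newsroom or trading desk could use to compare a viral image against a known reference photo. It assumes the Pillow imaging library; the file names are hypothetical placeholders, and a check like this only flags near-duplicates and crude edits, not sophisticated deepfakes.

```python
# Toy sketch of a perceptual "average hash" (aHash) comparison.
# The input file names are hypothetical placeholders.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint of its coarse structure."""
    img = Image.open(path).convert("L").resize((size, size))  # grayscale 8x8
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # Each pixel contributes one bit: brighter or darker than the mean.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

viral = average_hash("viral_post.jpg")      # hypothetical viral image
reference = average_hash("wire_photo.jpg")  # hypothetical known original
print(hamming(viral, reference))
```

A small Hamming distance (say, under 5 of 64 bits) suggests the viral image is a near copy of the reference; a large distance is inconclusive rather than proof of fabrication, which is why such checks complement, not replace, human verification.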

gillian.tett@ft.com


https://www.ft.com/content/7b352945-9295-42f5-a5d1-a01edf48ba51
—————————————————-