Large Institutional Investors Push Tech Companies for Ethical AI
Introduction
- Large institutional investors are pressuring tech companies to take responsibility for the potential misuse of AI.
- Investors worry about accountability for human rights issues related to AI software.
- The Collective Impact Coalition for Digital Inclusion, whose 32 member financial institutions represent $6.9tn in assets under management, is leading the push for ethical AI.
Meeting with Tech Companies to Strengthen Protections
- Aviva Investors, Fidelity International, and HSBC Asset Management are part of the Coalition advocating for ethical AI.
- Aviva Investors has held meetings with tech companies, including chip makers, to warn them about AI-related human rights risks.
- Risks include surveillance, discrimination, unauthorized facial recognition, and mass layoffs.
- Louise Piffaut, head of environmental, social and governance (ESG) integration at Aviva Investors, emphasizes the need for stronger protections.
The Growing Concerns over Generative AI
- Meetings with tech companies have become more intense due to fears surrounding generative AI, such as ChatGPT.
- Potential misuse of generative AI raises concerns about accountability and responsibility.
- If engagement fails, Aviva Investors may take actions such as voting against management, raising concerns with regulators, or selling its shares.
AI and Responsible Investing
- Investment bank Jefferies suggests that AI could replace climate change as a major concern for responsible investors.
- Investors are increasingly worried about the impact of AI on society and democracy.
The Role of Aviva Investors in AI Investments
- Aviva Investors, managing over £226 billion, holds stakes in major tech companies involved in AI development.
- Its holdings include Taiwan Semiconductor Manufacturing Company, Tencent Holdings, Samsung Electronics, and Microsoft.
Pushing for Worker Retraining in the Face of AI Efficiencies
- Aviva Investors is meeting with consumer, media, and industrial companies to ensure they are committed to retraining workers rather than laying them off as AI-related efficiencies eliminate roles.
- Jenn-Hui Tan from Fidelity International highlights concerns over job security and the impact on the future of democracy and humanity.
Expanding the Focus on Ethical AI
- Legal & General Investment Management is also working on a paper on artificial intelligence.
- Kieron Boyle from the Impact Investing Institute warns about the potential reduction of opportunities for women and ethnic minorities due to AI.
Ensuring Ethical and Regulatory Compliance
- Richard Gardiner from the World Benchmarking Alliance emphasizes the need for tech companies to address ethical and regulatory risks.
- Aviva Investors and other institutional investors are concerned about potential accountability for human rights abuses.
- Only 44 out of 200 tech companies assessed by the WBA published a framework on ethical AI.
- Examples of good practices include Sony’s ethical guidelines, Vodafone’s customer compensation rights, and Deutsche Telekom’s “kill switch” for AI systems.
Regulatory Expectations and Guidelines
- The EU directive on corporate due diligence may require chip makers and other tech companies to consider human rights risks.
- The OECD updated its voluntary guidelines for multinational corporations to include considerations for AI-related harm to the environment and society.
Expanding on the Topic: The Future of Ethical AI
Artificial intelligence has become a crucial and pervasive technology in our rapidly evolving world. As its capabilities increase, so do the concerns surrounding its ethical implications. The push from institutional investors for tech companies to be accountable for the potential misuse of AI highlights both the opportunities and challenges we face.
While AI offers immense potential for positive change, it also raises concerns about accountability, responsibility, and the protection of human rights. The increasing pressure on tech companies to prioritize ethical AI reflects a growing awareness of the potential risks and impacts associated with this technology.
One of the key areas of concern is the potential for AI to infringe upon human rights. Surveillance, discrimination, unauthorized facial recognition, and mass layoffs are just a few examples of the risks associated with AI. These risks can have significant consequences for individuals and society as a whole, raising questions about the accountability of tech companies in preventing and addressing these issues.
Aviva Investors, Fidelity International, and HSBC Asset Management, among others, are leading the charge in advocating for ethical AI. Aviva Investors, with its significant investments in tech companies, has been actively engaging with these companies to strengthen protections against AI-related human rights risks. Their efforts include meetings with chip makers and discussions on topics like generative AI and its potential for misuse.
While these engagements are an important step, the investors are prepared to take further action if necessary. Aviva Investors, for example, may vote against management at annual general meetings, raise concerns with regulators, or even sell shares if tech companies fail to address the ethical concerns surrounding AI. This level of accountability demonstrates the seriousness of the investors’ commitment to responsible investing and their determination to ensure that tech companies prioritize ethical considerations.
However, the concerns about AI extend beyond immediate human rights issues. Jefferies, an investment bank, suggests that AI could replace climate change as the “big new thing” responsible investors are worried about. This highlights the significance of AI’s societal impact and the need for ethical guidelines and regulations to prevent harm.
Aviva Investors’ holdings in major tech companies involved in AI development further underline the importance of ethical considerations. These investments give it a stake in shaping the future of AI and the responsible use of the technology. By engaging with consumer, media, and industrial companies, Aviva Investors also aims to verify their commitment to retraining workers instead of resorting to mass layoffs.
One of the key concerns raised by Jenn-Hui Tan from Fidelity International is the impact of AI on job security and the future of democracy and humanity. As AI-driven efficiencies increase, there is a risk of job displacement, particularly for vulnerable groups such as women and ethnic minorities. The investors’ focus on worker retraining demonstrates their commitment to ensuring that the societal impact of AI is carefully managed.
The need for ethical AI extends beyond individual companies; it encompasses the entire supply chain. Investors are pushing tech companies to take responsibility for ethical and regulatory risks throughout their value chain. This includes considering not only the immediate impacts of AI but also the potential consequences across industries and society as a whole.
While some companies have shown signs of progress in adopting ethical AI practices, there is still much work to be done. The World Benchmarking Alliance found that only a fraction of tech companies assessed had published ethical AI frameworks. This highlights the need for greater transparency and accountability in the industry.
Regulatory expectations and guidelines are also evolving to address the ethical implications of AI. The EU directive on corporate due diligence, for example, may require companies to consider human rights risks in their value chain. The updated OECD guidelines for multinational corporations also emphasize the need for tech companies to prevent harm to the environment and society through their products.
In conclusion, the push for ethical AI by large institutional investors reflects a growing recognition of the potential risks and impacts associated with AI. The focus on human rights, worker retraining, and the broader societal implications of AI demonstrates the investors’ commitment to responsible investing. As AI continues to advance, it is crucial for tech companies to prioritize ethical considerations and ensure that AI is developed and deployed responsibly. By doing so, we can harness the transformative power of AI while minimizing its potential risks and safeguarding the rights and well-being of individuals and society as a whole.
Summary: Institutional investors, represented by the Collective Impact Coalition for Digital Inclusion, are pressuring tech companies to take responsibility for the potential misuse of AI. Investors are concerned about human rights issues related to AI software and are advocating for ethical AI practices. Aviva Investors, Fidelity International, and HSBC Asset Management are among the leading institutional investors pushing for ethical AI. Aviva Investors has been engaging with tech companies, including chip makers, to strengthen protections against AI-related human rights risks. These risks include surveillance, discrimination, unauthorized facial recognition, and mass layoffs. The investors are prepared to take action if necessary, including voting against management, raising concerns with regulators, or selling shares. The increasing focus on ethical AI reflects mounting concerns about the impact of AI on society and the need for accountability and responsible investing.
—————————————————-
Large institutional investors are mounting pressure on tech companies to take responsibility for the potential misuse of AI as they worry about accountability for human rights issues related to the software.
The Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions representing $6.9tn in assets under management, including Aviva Investors, Fidelity International and HSBC Asset Management, is among those leading the push to get tech companies to commit to ethical artificial intelligence.
Aviva Investors has held meetings with tech companies, including chip makers, in recent months to warn them to strengthen protections against AI-related human rights risks, including surveillance, discrimination, unauthorized facial recognition and mass layoffs.
Louise Piffaut, head of environmental, social and governance (ESG) integration at the UK insurer’s asset management arm, said meetings with companies on the topic had “picked up the pace and pitch” because of fears over generative AI such as ChatGPT. If engagement fails, as with any company it engages with, Aviva Investors may vote against management at annual general meetings, raise concerns with regulators or sell its shares, she said.
“It’s easy for companies to walk away from accountability by saying it’s not my fault they misuse my product. This is where the conversation gets tougher,” Piffaut said.
Artificial intelligence could replace climate change as the “big new thing” responsible investors have been worried about, investment bank Jefferies said in a statement last week.
The coalition’s intensified activity comes two months after Nicolai Tangen, chief executive of the $1.4 trillion Norwegian oil fund, revealed that it would set guidelines for how the 9,000 companies it invests in should use AI “ethically”, as he called for more regulation of the rapidly growing industry.
Aviva Investors, which manages more than £226 billion, has a small stake in the world’s largest contract chip maker, Taiwan Semiconductor Manufacturing Company, which has seen increased demand for the advanced chips used to train large AI models such as the one behind ChatGPT.
It also owns stakes in hardware and software companies Tencent Holdings, Samsung Electronics, MediaTek and Nvidia, as well as in technology companies developing generative AI tools, such as Alphabet and Microsoft.
The asset manager is also meeting with consumer, media and industrial companies to verify that they are committed to retraining workers rather than dismissing them if their jobs are at risk of being eliminated by AI-related efficiencies.
Jenn-Hui Tan, head of sustainable investment and management at Fidelity International, said fears about social issues such as “privacy concerns, algorithmic bias and job security” had given way to “real existential concerns about the future of democracy and even of humanity”.
The UK-based group has met with hardware, software and internet companies to discuss these topics, it said, and will consider divestiture if it believes insufficient progress has been made.
Legal & General Investment Management, the UK’s largest asset manager, which has stewardship codes for issues such as deforestation and arms supply, said it was working on a similar paper on artificial intelligence.
Kieron Boyle, chief executive of the Impact Investing Institute, a UK government-funded think tank, said a “growing number of impact investors” were concerned that AI could reduce entry opportunities for women and ethnic minorities across sectors, setting workforce diversity back by years.
Investors pushing tech companies to focus on their entire supply chains want to stay ahead of possible ethical and regulatory risks, said Richard Gardiner, EU public policy officer at the Dutch non-profit World Benchmarking Alliance, which launched the Collective Impact Coalition. Investors such as Aviva were probably worried that, if they did not act, they could one day be held accountable for human rights abuses by investee companies, he said.
“If you make a bullet that does nothing in your hand, but you put it in someone else’s hand and they shoot someone, to what extent are you tracking the use of the product?” he added. “Investors want assurances that standards are in place in case they themselves become liable.”
Only 44 out of 200 tech companies assessed by the WBA in March had published a framework on ethical AI.
Some have shown signs of good practice, the alliance said. Sony had ethical guidelines on artificial intelligence that all group employees had to follow; Vodafone gave customers a right to compensation if they felt they had been treated unfairly as a result of a decision made by an artificial intelligence system; and Deutsche Telekom had a “kill switch” to disable AI systems at any time.
While industries like mining have long been expected to take responsibility for human rights issues along the entire supply chain, regulators have pushed to extend that expectation to tech companies and lenders.
The EU directive on corporate due diligence, which is being negotiated between member states, the European Commission and lawmakers, is expected to require companies such as chip makers to consider human rights risks in their value chains.
The OECD updated its voluntary guidelines for multinational corporations earlier this month to state that tech companies should seek to prevent harm to the environment and society related to their products, including those related to artificial intelligence.
https://www.ft.com/content/a6926bb3-5615-4b93-95a8-77db943c7cf1
—————————————————-