
Employers struggle to police workers’ use of AI

Matt had a secret helping hand when he started his new job at a pharmaceutical company in September.

The 27-year-old researcher, who asked to be identified by a pseudonym, kept pace with his more experienced colleagues by turning to OpenAI’s ChatGPT to write the code they needed for their work.

“Part of it was pure laziness. Part of that was genuinely believing that I could make my work better and more accurate,” he says.

Matt still doesn’t know for sure if this was allowed. His boss had not explicitly prohibited him from accessing generative AI tools like ChatGPT, but he had also not encouraged him to do so, nor set specific guidelines for what uses of the technology might be appropriate.

“I couldn’t see any reason why it should be a problem, but I still felt embarrassed,” he says. “I didn’t want to admit that I used shortcuts.”

Employers have been struggling to keep up as workers embrace generative AI at a much faster pace than corporate policies are written. An August survey by the Federal Reserve Bank of St. Louis found that nearly a quarter of the American workforce was already using the technology on a weekly basis, a figure that rose to nearly 50 percent in the financial and software industries. Most of these users turned to tools like ChatGPT to help with writing and research, often as an alternative to Google, as well as for translation and coding assistance.

But researchers warn that much of this early adoption has occurred in the shadows, as workers chart their own paths in the absence of clear corporate guidelines, comprehensive training or cybersecurity protection. In September, almost two years after ChatGPT launched, fewer than half of executives surveyed by US employment law firm Littler said their organizations had introduced rules for how employees should use generative AI.

Among the minority that has implemented a specific policy, the first impulse of many employers was to impose a blanket ban. According to Fortune, companies such as Apple, Samsung, Goldman Sachs, and Bank of America barred their employees from using ChatGPT in 2023, primarily over data privacy concerns. But as AI models have become more popular and more powerful, and are increasingly seen as key to staying competitive in saturated industries, business leaders have become convinced that such prohibitive policies are not a sustainable solution.

“We started with ‘lockdown’ but we didn’t want to keep ‘lockdown’,” says Jerry Geisler, chief information security officer at US retailer Walmart. “We just needed to give ourselves time to build . . . an internal environment to give people an alternative.”

Walmart prefers staff to use its internal systems, including an AI-powered chatbot called ‘My Assistant’ built for secure internal use, but it does not prohibit workers from using external platforms, as long as they do not include private or proprietary information in their prompts. It has, however, installed systems to monitor the requests that workers send to external chatbots on their corporate devices. Security team members intercept unacceptable behavior and “interact with that associate in real time,” Geisler says.

He believes instituting a “non-punitive” policy is the best option to keep up with the ever-changing landscape of AI. “We don’t want them to think they are in trouble because security has contacted them. We just want to say, ‘Hey, we look at this activity. Help us understand what you’re trying to do and we’ll probably be able to provide you with a better resource that will reduce your risk but still allow you to achieve your goal.’”

“I would say we probably see almost zero recidivism when we have those commitments,” he says.

Walmart isn’t the only one developing what Geisler calls an “enclosed internal playground” for employees to experiment with generative AI. Among other big companies, McKinsey launched a chatbot called Lilli, Linklaters launched one called Laila, and JPMorgan Chase launched the somewhat less creatively named “LLM Suite.”

Companies that don’t have the resources to develop their own tools face even more questions: from what services, if any, to purchase for their staff, to the risk of becoming dependent on external platforms.

Victoria Usher, founder and CEO of communications agency GingerMay, says she has tried to maintain a “cautious approach” while moving beyond the “initial knee-jerk panic” inspired by the arrival of ChatGPT in November 2022.

GingerMay started with a blanket ban, but last year began relaxing this policy. Staff can now use generative AI for internal purposes, but only with the express permission of an executive, and only through the company’s ChatGPT Pro subscription.

“The worst case scenario is that people use their own ChatGPT account and you lose control of what is put into it,” Usher says.

She recognizes that her current approach of asking employees to request approval for each individual use of generative AI may not be sustainable as the technology becomes a more established part of people’s work processes. “We are very happy to continue changing our policies,” she says.

Even with more permissive strategies, workers who have been privately using AI to speed up their work may not be willing to share what they’ve learned.

“They look like geniuses. They don’t want to give that up,” says Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania.

A report released last month by workplace messaging service Slack found that nearly half of desk workers would be uncomfortable telling their managers that they had used generative AI, largely because, like Matt, they did not want to be seen as incompetent or lazy, or risk being accused of cheating.

Workers surveyed by Slack also said they feared that if their bosses knew about the productivity gains achieved with AI, they would face layoffs, and that those who survived future cuts would simply receive a heavier workload.

Geisler expects to have to constantly review Walmart’s approach to AI. “Some of our previous policies already need to be updated to reflect how technology is evolving,” he says.

He also notes that Walmart, as a large global organization, faces the challenge of establishing policies applicable to many different types of workers. “We want to share with our executives, our legal teams and our merchants very different messages about how we are going to use this technology than we would share [with] someone who works in our distribution centers or our stores,” he says.

The changing legal landscape may also make it difficult for companies to implement a long-term strategy for AI. Legislation is being developed in regions such as the US, EU and UK, but companies still have few answers about how the technology will affect intellectual property rights or how it will fit into existing data privacy and transparency regulations. “The uncertainty is simply leading some companies to try to ban anything that has to do with AI,” says Michelle Roberts Gonzales, an employment lawyer at Hogan Lovells.

For those trying to develop some kind of strategy, Rose Luckin, a professor at University College London’s Knowledge Lab, says the “first hurdle” is simply figuring out who within the organization is best placed to investigate which types of AI will be useful for its work. Luckin says she has so far seen this task assigned to everyone from a chief executive to an apprentice, as companies make widely divergent assessments of how crucial AI will be to their businesses.

Sarah, a paralegal at a boutique law firm in London, was surprised when she was asked to research and draw up the rulebook for how her senior colleagues should use AI. “It’s strange that it became my job,” she says. “I’m literally the youngest member of staff.”
