Managers, It's Time to Establish a Formal A.I. Policy

Anyone who manages tech pros at any level is probably wondering how to best integrate a growing suite of artificial intelligence (A.I.) tools into their teams’ workflows. A company that gets A.I. “right” could enjoy immense gains in productivity. However, poorly deployed A.I. could result in an existential-level disaster.

According to a new survey by The Conference Board (based on feedback from 1,100 U.S. employees), only 26 percent of organizations have a general policy related to the use of generative A.I., while 23 percent have a policy under development. Despite the widespread lack of A.I. policies, 40 percent of employees told the organization that their managers are “fully aware that they’re using A.I. tools at work.”

How are workers using those tools? Common uses include:

  • Drafting written content (68 percent)
  • Brainstorming ideas (60 percent)
  • Conducting background research (50 percent)

A selection of workers also rely on A.I. for more technical tasks.

While A.I. tools have their benefits, they also present numerous pitfalls for managers. Chief among them: generative A.I. can potentially scrape sensitive or protected content, which is why companies such as Google are putting guardrails on how employees use these tools internally. Generative A.I. may also violate others’ intellectual property rights; for example, image generators such as Midjourney are trained on copyrighted visual works, which could lead to a tangled legal morass at some point.  

In addition, A.I. platforms such as ChatGPT can’t adjudicate truth, and may deliver factually wrong answers in response to queries. That’s extremely problematic if a manager’s team is relying on a chatbot for answers to mission-critical questions, for instance. It’s a similar issue with code: although tools such as Meta’s Llama 2 offer increasingly sophisticated code generation abilities, the output may still contain vulnerabilities. 
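
To make that last risk concrete, here is a minimal, hypothetical Python illustration (the table, column, and function names are invented, not drawn from any real tool’s output) of the kind of database lookup a code assistant might draft, next to the parameterized version a human reviewer should insist on:

```python
# Hypothetical illustration: the kind of lookup an A.I. assistant might draft.
import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Building the query with an f-string lets a crafted username rewrite the SQL
    # (classic SQL injection), and a reviewer may not notice if the code "works."
    cursor = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
    return cursor.fetchall()


def find_user_reviewed(conn: sqlite3.Connection, username: str):
    # The reviewed version passes the username as a bound parameter, so the
    # database treats it as data rather than executable SQL.
    cursor = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```

The point isn’t this particular bug; it’s that generated code should pass through the same review and testing gates as human-written code before it ships.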

Building an Effective A.I. Company Policy

So, what goes into an effective A.I. policy? As you might expect, it’s complicated, but here are some things to consider:

  • Establish what contractors using A.I. can do: This advice comes from the Harvard Business Review: “As a starting point, [businesses] should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feed their AI. They should also demand broad indemnification for potential intellectual property infringement caused by a failure of the AI companies to properly license data input or self-reporting by the AI itself of its outputs to flag for potential infringement.”

  • Establish hard parameters: What kinds of A.I. tools can be used in an organization, and how will the output be evaluated for accuracy and safety? Is there sufficient communication between all teams potentially impacted by A.I. (i.e., not just tech teams, but also legal teams that might have to clean up a potential copyright mess)? How will the use of A.I. tools evolve in coming months and years?

  • Protect confidentiality: The last thing any organization wants or needs is to see its most valuable information, such as patient records or proprietary data, end up in an A.I. training dataset. Organizations must have policies that wall this data off from A.I. tools unless proper security procedures are in place (a minimal screening sketch appears after this list).

  • Set up an auditing system: Make sure the organization is periodically evaluating and auditing its use of A.I. tools and data, especially given the rapid evolution of the technology.
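
The last three considerations lend themselves to a lightweight technical control. The sketch below is a minimal Python example, assuming an organization defines its own approved-tool list, sensitive-data patterns, and audit log (every name and pattern here is a placeholder, not a prescribed standard): it blocks unapproved tools, screens prompts for obviously sensitive content, and records each decision for later audits.

```python
# Hypothetical sketch of a "gateway" a team could route A.I. requests through.
# Tool names, patterns, and the log location are placeholders, not a standard.
import logging
import re
from datetime import datetime, timezone

APPROVED_TOOLS = {"chatgpt", "internal-llm"}  # tools the policy explicitly allows

# Crude examples of content the confidentiality rules might block outright.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # U.S. Social Security numbers
    re.compile(r"(?i)\bpatient record\b"),   # references to patient records
    re.compile(r"(?i)\bconfidential\b"),     # documents marked confidential
]

logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)


def screen_and_log(tool: str, user: str, prompt: str) -> bool:
    """Return True if the prompt may be sent; log every decision for audits."""
    allowed = tool.lower() in APPROVED_TOOLS
    flagged = any(p.search(prompt) for p in SENSITIVE_PATTERNS)
    decision = allowed and not flagged
    logging.info(
        "%s tool=%s user=%s allowed=%s flagged=%s",
        datetime.now(timezone.utc).isoformat(), tool, user, decision, flagged,
    )
    return decision


if __name__ == "__main__":
    # This request would be blocked because the prompt mentions a patient record.
    print(screen_and_log("chatgpt", "jdoe", "Summarize this patient record for me"))
```

The audit log then gives managers and legal teams a concrete record to review when they periodically re-evaluate which tools and data flows the policy should allow.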

Although A.I. policies will necessarily differ between organizations, aligning with a few key principles can help any company use the technology in a secure and productive way. Managers should think strategically about the use (and potential misuse) of A.I. before it becomes too deeply integrated into their existing tech stacks.