When emerging technologies catch on, they disrupt the status quo and present us with transformative opportunities. If harnessed properly, these technologies can fundamentally shift the ways we work and live. They can make us more efficient and help foster creativity and innovation. They can also provide pathways to stake out competitive advantages in the marketplace. The use of emerging technologies, however, also presents complex risks — legal, regulatory, reputational and financial — and significant consequences can result when those risks are left unmitigated.
ChatGPT and other generative artificial intelligence (AI) tools appear primed to follow this general pattern. (It was recently reported that ChatGPT is the fastest-growing consumer app in history.) Does your organization have a strategy to confront these developments? Some organizations have reflexively imposed (and will continue to impose) blanket prohibitions on the use of all generative AI, ostensibly seeking to eliminate the risks associated with use of the tools. This, of course, also eliminates any opportunity to benefit from leveraging the technologies and may create new risks in the form of competitive harm (or business extinction) if market forces compel the use of AI. Other organizations may leap into the generative AI space without considering the risks, permitting the use of AI tools (expressly or tacitly) without appropriate controls in place to mitigate those risks.
We think a more prudent approach involves thoughtfully considering how using generative AI could propel your organization forward, evaluating those uses against your organization’s risk management framework, and implementing a flexible and risk-based governance model that fosters responsible use of the technologies. One component of an effective governance program is a carefully constructed AI use policy that conveys practical use guidelines and expectations to your organization’s employees — and affords your organization flexibility to refine its practices in the face of a dynamic environment that will see evolving legal and regulatory requirements and changing social attitudes about using AI.
As you already know, effective policies do not come pre-packaged as one-size-fits-all, and taking a check-the-box approach to organizational policies is not a wise strategy. There are, however, several areas that warrant consideration as an organization contemplates an AI use policy, and we want to share a few of them:
- Existing organizational policies. How does use of generative AI comport with (and how will you help employees comply with) the universe of existing organizational policies, including, for example, those pertaining to information technology, privacy, information security, marketing and human resources?
- Appropriate use of AI tools. Will employees have discretion to select which generative AI tools to use in conducting business, or will your organization maintain a list of company-approved AI tools and prohibit use beyond those approved technologies? Does your policy guide employees to determine whether use of generative AI is beneficial and suitable under the specific circumstances?
- Preserving confidentiality. Does the AI use policy safeguard against users entering trade secrets, confidential business information or customer information into an AI interface?
- Policy ownership. Who will own the policy and maintain responsibility for driving compliance? Who will be responsible for informing the organization and providing guidance and direction on the appropriate use of AI tools as the technologies develop and the legal and regulatory requirements evolve? Will your organization employ a cross-functional approach to governance (for example, an AI steering committee)? Who should employees contact with questions about the policy and its application?
- Reviewing outputs. Does the policy require human review of work generated by AI tools, both for accuracy and to protect the security of company information? Is your organization contemplating allowing the use of AI for decision-making? How will the organization evaluate and guard against potential harms, including unintended biases?
- Legal, regulatory and third-party contractual obligations. Does your policy address complying with legal requirements, including, for example, trademark and other intellectual property laws? Is your organization part of a regulated industry that may impose unique regulatory requirements? Has your organization undertaken contractual obligations (perhaps with a customer) that may prohibit the use of AI tools?
We offer these thoughts not as an attempt to develop a comprehensive list of action items, but rather to illustrate the breadth of considerations that must be carefully evaluated and managed as an organization contemplates the use of generative AI tools like ChatGPT. The AI landscape is complex and dynamic. Does your organization need support in developing a workable AI use governance model? The Faegre Drinker artificial intelligence, algorithmic decision-making & big data team is ready to address your questions.