Navigating proposed regulations for large language model tools
As lawmakers worldwide propose regulations and guidance on large language model (LLM) tools such as ChatGPT and Google Bard, Gartner, Inc. has outlined four essential areas for general counsel (GC) and legal leaders to prioritize. These areas will help organizations prepare for regulatory changes and develop effective corporate AI strategies.
Laura Cohn, Senior Principal Research Analyst at Gartner, emphasized that legal leaders need to understand the overlap among the various regulatory proposals so that senior leaders and the board can proactively navigate upcoming regulatory shifts as they shape their AI strategies.
Gartner has identified four actions legal leaders can take to establish AI oversight and guide their organizations while awaiting final regulatory guidance:
- Embed Transparency in AI Use: Transparency regarding AI use is becoming a key aspect of proposed legislation globally. Legal leaders should consider how their organizations can clearly communicate to users when they are interacting with AI. For example, updating privacy notices and terms and conditions on company websites to reflect AI use or creating a dedicated section in the organization’s online “Trust Centre” can enhance transparency. Another option is providing point-in-time notices to users when collecting data, specifically explaining how AI is utilized. Additionally, legal leaders can explore updating the supplier code of conduct to include a requirement for vendors to notify the organization if they plan to use AI.
- Ensure Continuous Risk Management: Legal leaders and GCs should actively participate in cross-functional efforts to implement risk management controls throughout the lifecycle of any high-risk AI tool. This can include conducting algorithmic impact assessments (AIAs) that document decision-making processes, demonstrate due diligence, and minimize regulatory and liability risks (a minimal sketch of an AIA record appears after this list). Involving information security, data management, data science, privacy, compliance, and relevant business units is crucial to obtain a comprehensive understanding of the risks associated with the AI tool, as legal leaders do not typically own the business processes they embed controls for.
- Build Governance with Human Oversight and Accountability: Human oversight is essential to mitigate the risk of LLM tools generating incorrect but plausible outputs. Regulators are emphasizing the importance of human oversight, which serves as an internal check on the output of AI tools. Legal leaders can designate an AI point person to collaborate with technical teams in designing and implementing human controls. This person could possess deep functional knowledge, be a member of the security or privacy team, or, if the AI initiative involves enterprise search integrations, be the digital workplace lead. Establishing a digital ethics advisory board consisting of legal, operations, IT, marketing, and external experts can also help manage ethical issues, with findings reported to the board of directors.
- Guard Against Data Privacy Risks: Protecting individual data privacy is a key concern for regulators in the context of AI use. Legal and compliance leaders must proactively manage privacy risks by incorporating privacy-by-design principles into AI initiatives. This can involve conducting privacy impact assessments early in the project and involving privacy team members from the outset to assess privacy risks. Organizations using public versions of LLM tools should inform their workforce that any information entered may become part of the training dataset, potentially resulting in sensitive or proprietary information being included in responses provided to users outside the organization. Establishing guidelines, educating staff about the associated risks, and providing direction on the secure deployment of such tools are critical (an illustrative prompt-screening sketch follows this list).
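To make the AIA recommendation concrete, here is a minimal sketch of how an entry in an AIA register might be structured in code. It is an illustration only: the `AlgorithmicImpactAssessment` class, its field names, and the risk tiers are assumptions for this example, not a regulatory standard or a Gartner template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AlgorithmicImpactAssessment:
    """One record in a hypothetical AIA register; fields are illustrative."""
    system_name: str                 # e.g., "contract-review-llm"
    owner: str                       # accountable business unit or role
    intended_use: str                # what the tool is deployed to do
    risk_level: str                  # e.g., "high" per internal or regulatory tiers
    data_categories: list[str] = field(default_factory=list)  # personal, proprietary, etc.
    reviewers: list[str] = field(default_factory=list)        # cross-functional sign-offs
    mitigations: list[str] = field(default_factory=list)      # controls applied
    review_date: date = field(default_factory=date.today)
    next_review: date | None = None  # continuous risk management implies re-review

# Hypothetical usage: one entry for a high-risk LLM contract-review tool.
assessment = AlgorithmicImpactAssessment(
    system_name="contract-review-llm",
    owner="Legal Operations",
    intended_use="First-pass summarization of supplier contracts",
    risk_level="high",
    data_categories=["contract terms", "counterparty personal data"],
    reviewers=["Privacy", "InfoSec", "Data Science", "Business Unit Lead"],
    mitigations=["human review of all outputs", "no training on inputs"],
)
print(assessment.system_name, assessment.risk_level)
```

Keeping such records in a structured form, rather than scattered documents, makes it easier to demonstrate due diligence and to schedule the re-reviews that continuous risk management implies.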
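Likewise, "providing direction on the secure deployment of such tools" can include technical guardrails. The sketch below shows one hedged approach: screening prompts for likely sensitive data before they reach a public LLM. The patterns, the `screen_prompt` helper, and the commented-out `send_to_llm` call are all hypothetical; a real deployment would rely on a dedicated data loss prevention (DLP) service and organization-specific rules.

```python
import re

# Illustrative patterns only; production rules would come from a DLP service
# and the organization's own classification scheme.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn_like": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number_like": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of patterns found; an empty list means no match."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_public_llm(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block (or warn) instead of sending; staff training explains why.
        raise ValueError(f"Prompt blocked; possible sensitive data: {findings}")
    # send_to_llm(prompt)  # hypothetical call to the external tool

submit_to_public_llm("Summarize the main themes in this press release")  # passes
# submit_to_public_llm("CONFIDENTIAL: draft merger terms")  # raises ValueError
```

A guardrail like this does not replace the guidelines and staff education the bullet describes; it simply backstops them by catching obvious cases before proprietary information leaves the organization.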
By addressing these four areas, legal leaders can navigate the evolving regulatory landscape surrounding LLM tools, establish effective governance structures, and ensure compliance with data privacy regulations. Proactive measures taken now will enable organizations to embrace AI advancements while demonstrating transparency, accountability, and responsible use of these technologies.