Priorities for CEOs and Directors Implementing AI

POMERANTZ MONITOR | SEPTEMBER/OCTOBER 2023

By Dean P. Ferrogari

Trailing behind the rapid progress of artificial intelligence (AI) are companies struggling to effectively implement the technology into everyday operations. Artificial intelligence is an umbrella term that covers a variety of capabilities such as machine learning, deep learning, natural language processing, voice recognition, and text analytics. AI technology aims to mimic human thinking by making assumptions, learning, reasoning, problem-solving, or predicting with a high degree of autonomy. In the future, legal precedents will be instructive when navigating the implementation of AI in the corporate sphere, but in the interim, AI systems are being deployed without the legal guardrails of bright-line rules. Environmental, social, and governance (ESG) protocols are a natural starting point for AI governance and risk mitigation. Corporate boards will become increasingly accountable for the legal compliance associated with implementing AI systems, and at this stage, the perspectives of CEOs provide both a picture of how legal regulations are taking shape and a set of best practices for integrating AI into company operations in the absence of clear governmental guidance.

Board directors are uniquely positioned to ensure their companies are reaping the benefits of AI while avoiding the risk of litigation. As the impact of artificial intelligence becomes clearer, recent litigation trends indicate that directors are more likely to face personal liability for problematic incidents caused by AI. Board members owe fiduciary duties of care and loyalty to their companies, which require them to make educated decisions in the best interest of the company. Directors will expose not only their businesses but also themselves to legal liability if they fail to uphold their fiduciary duties and mitigate preventable harms from AI systems implemented at their companies. At the same time, integrating AI into pivotal company operations carries an inevitable risk of legal liability. Thus, the responsible deployment of a company's artificial intelligence falls directly within the board's purview.

ESG considerations provide a starting point to combat the potential harms generated by AI. The environment, or "E," will continue to be a hot topic as AI amasses a significant carbon footprint. Corporate leadership has already begun focusing on the "S," or societal implications, where harms are pervasive and liability is imminent. Boards must ensure that AI does not perpetuate discrimination, physical harm, or liability to those utilizing the technology. If tailored correctly, AI can combat harms and support best practices for board governance, or "G." For example, board members must avoid blindly deploying AI systems that have been trained on data sets composed predominantly of Caucasian male users, as these systems tend to perpetuate bias. Companies can also enhance user equity by operating transparently, for instance by disclosing when AI programs have been tested and trained only on small subset populations.
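To make that kind of disclosure concrete, the following is a minimal sketch of a pre-deployment representation check a governance or data science team might ask for before an AI system is trained. It assumes a hypothetical tabular training set with self-reported demographic columns; the column names ("gender", "ethnicity") and the 75% dominance threshold are illustrative assumptions, not figures drawn from this article.

```python
# Illustrative sketch only: flag training data dominated by a single
# demographic group so the limitation can be remediated or disclosed.
# Column names and the 0.75 threshold are hypothetical assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, demographic_cols: list[str],
                          dominance_threshold: float = 0.75) -> list[str]:
    """Return warnings when any single group dominates a demographic column."""
    warnings = []
    for col in demographic_cols:
        shares = df[col].value_counts(normalize=True)  # sorted, largest first
        top_group, top_share = shares.index[0], shares.iloc[0]
        if top_share >= dominance_threshold:
            warnings.append(
                f"'{col}': {top_share:.0%} of training records are '{top_group}'; "
                "consider broader data collection or disclosing the limitation."
            )
    return warnings

# Example usage with a toy dataset.
if __name__ == "__main__":
    training_data = pd.DataFrame({
        "gender": ["male"] * 85 + ["female"] * 15,
        "ethnicity": ["group_a"] * 60 + ["group_b"] * 40,
    })
    for warning in representation_report(training_data, ["gender", "ethnicity"]):
        print("WARNING:", warning)
```

A report like this is not a legal safe harbor; it simply gives a board something concrete to request and review when asking whether a system was trained on a sufficiently representative population.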

There are three key steps that board members should consider in anticipation of establishing AI governance programs. First, directors must understand how extensive AI bias is, both in terms of a person's willingness to trust the technology and in terms of the biases embedded within AI. Humans find it difficult to comprehend how AI reaches a decision, and even after developers deliver AI to the user, the system continues to learn based on the data it absorbs. Thus, consumers are unconvinced that the decisions purportedly made by AI are not manipulated by the provider of the service deploying the AI, which may result in distrust of both the technology and the company.

Second, directors must comprehend how human bias naturally infiltrates AI. Artificial intelligence is neither neutral nor infallible. The mathematical formulas supporting AI algorithms are encoded with opinions and presented, or disguised, as empirical fact, which can reinforce injustices that quietly inflict harm. Biases can infiltrate artificial intelligence through human touchpoints during the AI's lifecycle. Human biases can enter throughout the process, from the initial framing of a problem deemed worthy of an AI solution to product design, data collection, development, and testing. Each stage of the AI lifecycle is limited by the experience of the overseeing team and reinforced in the data through the team's learned biases.

Each touchpoint, however, can act as an opportunity to identify and eliminate harmful biases. For example, Isabelle Bousquette from the Wall Street Journal recently reported that Levi Strauss & Co. faced public backlash after announcing that it would be testing the use of AI to generate images of more body-inclusive models. Levi Strauss's AI testing was part of an effort to create a more inclusive and diverse consumer experience; however, critics argued that the initiative deprived diverse models of work while offering only a superficial response to calls for the company to boost diversity. Levi Strauss stated that the company did not see the effort as a substitute for actions that must be taken to deliver on its diversity goals and added that it is committed to testing new technology while being mindful of feedback from consumers and stakeholders. In response to this criticism, Levi Strauss is now developing ethical guidelines for the responsible deployment of AI. Risk management should occur at each stage of the AI lifecycle. Training internal and external teams to identify AI biases will enable board members to implement the necessary precautions at different stages of the AI lifecycle.
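As one way such a team might operationalize bias identification at the testing stage, the sketch below computes a simple disparate-impact style ratio from a hypothetical binary classifier's decisions and a protected-attribute column. The variable names are invented for the example, and the 0.8 cutoff is the commonly cited "four-fifths" rule of thumb, used here only as an assumed review trigger rather than a standard endorsed by this article.

```python
# Illustrative sketch only: a testing-stage fairness spot check that compares
# favorable-outcome rates across demographic groups. The 0.8 cutoff is an
# assumed review trigger based on the common "four-fifths" rule of thumb.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Rate of favorable (1) predictions within each demographic group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        favorable[group] += int(pred == 1)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratio(predictions, groups):
    """Lowest group selection rate divided by the highest group selection rate."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]                     # model decisions
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]  # protected attribute
    ratio, rates = disparate_impact_ratio(preds, groups)
    print("Selection rates by group:", rates)
    if ratio < 0.8:  # assumed escalation threshold
        print(f"Ratio {ratio:.2f} is below 0.8; escalate for human review.")
```

A single metric like this cannot prove a system is fair, but routine checks of this kind give review teams, and ultimately the board, an auditable record that bias was looked for at defined points in the lifecycle.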

Finally, directors need to develop the strategy they intend to deploy. When a company fails to institute AI governance, shareholders will likely seek to hold liable those who were in a position to act when the harm became visible. Recent studies show that an overwhelming number of executives are not currently in a position to respond to a call to action from board members. Most companies now implement AI in an ad hoc manner that poses a significant risk to their commercial health. Boards must ensure that executives get up to speed on responsible AI governance.

While each company will craft a message that reflects its unique business and culture, there are five basic principles that CEOs can embrace: (1) Trust; (2) Inclusivity; (3) Data Protection; (4) Transparency; and (5) Accountability.

Trust. Companies continuously work at gaining and maintaining the trust of their customers, employees, and communities. Generative AI threatens to undermine that trust by replacing a human voice with an automatically generated one, so promoting a human voice at the top remains essential for companies to maintain trust with clients. Any use of generative AI must be carefully reviewed for accuracy and reliability.

Inclusivity. As a result of their algorithms and the data they draw upon, generative AI tools carry inherent biases. Companies need to review AI tools and the information they generate for inclusivity and fairness.

Data Protection. Data security has become a focal point for companies. Companies need to be equally mindful of safeguarding privacy and protecting their intellectual property, both of which are threatened by AI. CEOs should reinforce their company's data protection principles with employees and strengthen existing policies.

Transparency. Companies must be transparent about their use of AI. By acknowledging in plain language how the company intends to use AI, they will not only build trust but also mitigate unforeseen problems arising from miscommunication.

Accountability. The issues and liability that arise with generative AI will fall on the humans responsible for deploying the technology. Companies must ensure that they are using generative AI carefully, ethically, and within the parameters of applicable law and regulation. In the event of misuse or error, companies need to take corrective action and have predetermined protocols in place for reporting breaches.

Proactive mitigation measures must be taken to effectively prevent AI biases from infiltrating company operations and becoming the subject of litigation and front-page stories in the media. Companies and their governing boards are likely to face increased liability if they use AI in furtherance of critical business operations. Corporate management should decide how AI systems are used and to what extent their decision-making processes are explained. In making these critical decisions, management must assess the risks of deploying AI against its potential benefits. AI-related liabilities, litigation, and regulation are coming, and companies must prepare by establishing AI governance with the goal of reducing both the risks and harms associated with artificial intelligence while benefiting from the advantages it offers.
