Three Reputational Implications For Corporations Deploying AI
Posted on March 15, 2023
3 Min. Read
Author: Marissa Piette

The AI arms race is on. Microsoft’s release of ChatGPT-powered Bing and Google’s rushed preview of Bard are evidence that Pandora’s box is fully open when it comes to bringing generative artificial intelligence (AI) to the masses.

The repercussions of generative AI for business could be staggering. These technologies promise to transform everything from how business gets done to the way we interact with the world. This is the next technology frontier, one with great promise and, as with any frontier, great peril.

Whether or not you believe in more government regulation of technology, we’ve repeatedly seen elected officials struggle to establish rules that govern how we navigate the perils of fast-moving technological advances. Tech leaders are increasingly calling for government action to ensure the responsible evolution of AI.

In this leadership vacuum, companies that use and deploy AI will be held just as accountable for issues as the companies creating the technology. A recent Purple Strategies survey found that expectations for corporate leaders are high:

  • 91% of respondents said it was important for leaders to be transparent about their AI strategy and how investments may impact employees, customers and society.
  • 90% said it was important for leaders to solve for potential bias and misinformation in their use of AI.
  • 90% said it was important for leaders to adopt an ethical framework for implementing AI within their corporations.¹

So how can companies protect their reputations while capitalizing on AI’s potential?

There are three key elements that will influence your company’s reputation with critical stakeholders, ranging from employees and consumers to investors and policymakers.

(1) Aligning on “rules of the road”

One of the most important considerations for corporate leaders when developing an AI strategy isn’t what they will set out to accomplish, because the future is so uncertain, but what lines they won’t cross. Establishing clear guardrails to guide decisions about the investment, development, implementation and outcomes of AI can give organizations a language through which they convey their values and create alignment with skeptical audiences, even in the absence of ethical standards and strong governance models.

Start by setting guidelines that are agreed upon at the highest levels of your organization: leadership and your board of directors. From there, the rules can be communicated to managers and employees through dedicated updates and trainings. Strong organizational alignment is critical to ensuring that anyone in a position to develop or use AI is clear on when and when not to use it, what the standard operating procedures are for piloting and reviewing AI programs, and how the guidelines will adapt as the technology continues to advance and be adopted more widely.

Vanderbilt University recently found itself in hot water over its use of ChatGPT to compose an email responding to the deadly shooting at Michigan State University. The email, sent by the university’s Office of Equity, Diversity and Inclusion (EDI), was criticized for being generic and impersonal. In the absence of “rules of the road” for how to use AI, people will take the technology into their own hands – with potential negative consequences to follow.

(2) Scenario planning to get ahead of negative consequences

The potential risks of AI are immense. Many of these are known: the spread of misinformation, the proliferation of bias and the potential for human jobs to be replaced by computers. Even more are unknown, or are unique to specific groups or industries. Companies need broad, flexible scenario plans that can serve as a guide when something inevitably goes wrong, as well as proactive and reactive issues-management capabilities to manage through negative consequences and crises.

To get started, think about how your organization plans to invest in and use AI. Based on that vision, identify any negative outcomes that could expose the company to risk. How will the risks impact the business? Stakeholder perceptions? Talent? Based on those scenarios, develop action plans for how to address them.

Returning to the example above, Vanderbilt found itself in a reactive position, facing negative press and angry students and faculty. It issued an apology and temporarily placed two associate deans on leave while it investigated. The university ended up on the back foot in a media firestorm, a cautionary tale for any company or institution starting to embrace the technology.

(3) Engaging leadership with stakeholders who hold your license to operate

Communications best practice advises engaging influential audiences in dialogue to understand where they’re coming from and to manage perceptions and expectations. It’s no different with AI. Defining your AI story and messages based on your “rules of the road,” talking to critical stakeholders about what you’re doing, understanding shared concerns, and agreeing on how you can partner together will be critical to managing change with your core audiences. To manage stakeholders effectively, begin coaching executives on how to deliver AI-focused messages so they can bring internal and external audiences along on the journey and create confidence in the vision.

When it comes to AI, everyone is building the proverbial bike as they ride it. Engaging with audiences early demonstrates a willingness to listen and puts your company in a better position to manage the AI transition as it’s happening.


Marissa Piette is a Senior Director at Purple Strategies.


¹Purple Omnibus Survey (US Informed Public, N=1,000), February 2023