The Director’s Dilemma – July 2024 Edition

July 01, 2024


Produced by Julie Garland-McLellan, Consultant at AltoPartners Australia and non-executive director and board consultant based in Sydney, Australia.

Contribution by Julia Zdrahal-Urbanek, Founder and Managing Partner, ALTO Executive Search GmbH / AltoPartners Austria; a member of the AltoPartners Global Operating Committee; co-founder and initiator of the Women Corporate Directors (WCD) Austria Chapter; and a Non-Executive Director of a large family-owned business. She has extensive experience in filling board and top-management positions in Austria and across Europe for global conglomerates as well as family-owned and start-up/scale-up businesses.

This edition of the newsletter was first published on The Director’s Dilemma website, where the full newsletter is available for viewing here. To subscribe to future editions of the newsletter, click here.

The Director’s Dilemma - July 2024

This month we think about what to do if we suspect our board papers have been ‘enhanced’ by an AI.

Odette has been on the board of a government-owned utility business for almost three years. She loves being of service and solving the challenges of providing a reliable, high-quality service at a reasonable and accessible fee. She also greatly enjoys getting to know the staff and helping them to grow and develop to reach their full potential.

In the board pack for her next meeting there are a couple of papers that have her concerned. She knows the two people who wrote them, and these papers are not written the way those authors normally write. Some of the grammar is distinctly American (and she is in England), and the persuasive tone is just not what she expects from these two authors.

Odette suspects that the papers may have been enhanced using an AI tool. She knows that the company has not invested in its own proprietary tool, so - if her suspicions are well founded - this is likely a publicly available tool, possibly a free version that might be learning from, and sharing, any information put into it.

She doesn’t want to cause any problems for the staff concerned but wonders how to set some guardrails around the risks of this practice before it becomes too widespread to contain.

What is the best way for her to bring her suspicions to the board for a policy solution?

Julia’s Answer

I recommend that Odette take the following steps to address her concerns and help shape a policy solution at a board level.

1. Ensure a Clear Fact Base: Speak privately with the individuals who wrote the papers to confirm her suspicions without making public accusations. Highlight the observed changes in writing style and ask whether publicly available AI tools were used. It’s important to avoid giving the impression of condemning AI in general, as AI can be beneficial for the business. Emphasise the need for proper policies and secure systems for its correct and safe use. Inform them that a solution for the secure use of AI is being discussed, but that until then they must not process confidential content with AI tools.

2. Raise Board Awareness: At the next board meeting, Odette should introduce the topic of AI-generated content. This could be the start of a broader discussion on AI, IoT, robotics and related technologies. Invite an expert to discuss the opportunities and threats of AI, and discuss the security systems and policies that would be required.

3. Action Points: Involve the CIO and CISO and give them responsibility for selecting secure (preferably proprietary) AI tools for internal use. They should also define and implement a policy on AI tool usage and train staff on usage, opportunities, risks, and compliance. If the positions of CIO and CISO do not exist or are inadequately staffed, the potential need for recruitment should be another discussion point for the board, and an external expert should be consulted.

4. Close the Loop: Inform the individuals who used public AI tools that their actions have initiated two critical discussions: firstly, the importance of actively engaging with trends like AI; and secondly, the necessity of having appropriate systems and policies in place for secure use.

Throughout all steps, ensure that all board members are aligned and in agreement with the actions being taken.

Julie’s Answer

It is likely that staff in every organisation are ‘playing’ with AI. Some will carefully ask only general research questions. Some will use ‘internal’ AI platforms. Some will set the parameters of public platforms so that questions and answers should not be shared with other users. Some will be on public platforms asking questions, blissfully unaware that their identity (and provenance), as well as any information they give or receive, is known and noted.

All staff activity should occur in the context of a culture and a policy framework that clearly sets out expectations for behaviour, including online behaviour and the use of technology.

If Odette’s company does not have an AI policy, it urgently needs to create one.

Odette should talk with her chair about the need to find out which AI tools are being used and what risks and opportunities each may create for their business. Staff should not be punished for investigating a new technology or for using it if it appears to save time and generate good results. They should be encouraged to share with the board what they are using and what they use it for, so that the company can become aware of the potential and start to create some standards that can be applied.

A ban on using a tool that staff have proactively employed will be counterproductive and only drive the use out of sight. The board needs to set guidelines that allow staff to grow skills whilst protecting the company from any potential downsides.

The board should model the way by investing in AI education and using this to generate additional insights into both AI and the issues their business faces.