Ask Alto: How to make AI work in your organisation

October 16, 2024


Generative AI has taken the world by storm - leaders can’t afford to wait and see before deciding on an AI strategy. We share some advice for leaders on how to make it work in their organisations.

“Probably by the end of 2025, everyone who’s paying attention to AI will understand that it isn’t just technology; a social, behavioural and cultural shift is required to get the most out of it.” - Nichol Bradford, Executive-in-Residence for AI + HI at The Society for Human Resource Management.

It’s been just under two years since the initial release of ChatGPT – two years filled with articles, advice, analysis, apps and many, many numbers.

The large language model-based chatbot boomed after its November 2022 release, spurring the release of competing artificial intelligence (AI) products from the world’s largest tech companies. It is now being used by 200 million people each week (OpenAI’s own figures, reported by The Verge at the end of August 2024). Many of those millions will be employees of companies large and small, and many will be using ChatGPT and the like to help them do their work.

The money is equally eye-watering.

  • Reuters reported on October 3, 2024, that OpenAI, the maker of ChatGPT, had raised $6.6 billion from investors, valuing the company at around $157 billion and cementing its position as one of the most valuable private companies in the world.

  • Global market intelligence company International Data Corporation (IDC) predicts that business spending on artificial intelligence will have a cumulative global economic impact of $19.9 trillion through to 2030 and drive 3.5% of global GDP in 2030.

  • The global generative AI market was valued at nearly $45 billion in 2023 and is expected to be worth more than $200 billion by 2030, says AIPRM, maker of a prompt management tool and community-driven prompt library.

Exact numbers are hard to pinpoint, but it’s fair to assume that hundreds of apps based on generative AI have been launched since November 2022 in fields ranging from content creation and graphic design to programming and human resources – not to mention the release of new versions of tools like ChatGPT.

What’s happening in the workplace?

In this fast-moving field, business leaders would be forgiven for taking a wait-and-see approach. But at least one expert thinks that would be a mistake.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, notes two trends about AI use at work: studies are showing that many people are indeed using generative AI at work, and that people are reporting productivity gains.

And yet, writes Mollick, “when I talk to leaders and managers about AI use in their company, they often say they see little AI use and few productivity gains outside of narrow permitted use cases.”

His take on the contradiction? He says that while people are experimenting with AI, they aren’t sharing their results with their employers. “Instead, almost every organisation is completely infiltrated with Secret Cyborgs, people using AI at work but not telling (their employers) about it.”

Mollick lists several reasons why people hide their AI use from their employers: they may fear that their company will see productivity gains as an opportunity for cost-cutting, or that those gains will simply become an expectation that they get more work done.

This hidden use of AI at work already has an acronym: BYOAI, or bring your own AI. This points to a pervasive trend. Businesses have long-established processes for implementing new technology, usually through projects led by management. But the speed and variety of the AI boom is bringing technology into organisations from the bottom up, with no management oversight of the process.

The implications of “bring your own AI”

The Society for Human Resource Management (SHRM) sums up some of the dangers:

  • Potential copyright violations become major threats when employees bring their own AI into the workplace and use it to create content of any kind.

  • The technology can open the door to phishing and malware that can steal sensitive company information during a data breach.

The World Economic Forum reported in January 2024 that, according to a 16-country study of more than 15,000 adults, some 84% of workers who use generative AI at work have publicly exposed their company’s data in the last three months.

SHRM says that companies, including Apple, Samsung, JPMorgan Chase, Bank of America, Wells Fargo, and Citigroup, have restricted or banned employees from using generative AI platforms at work because of these risks.

But enforcing such bans is very hard to do, and they create a culture of fear – which, as Mollick points out, drives secret cyborgs further underground.

In addition, the risk of not using generative AI is the risk of falling behind, says AI consultant Melle Amade Melkumian. “The stark reality is that generative AI is not just a productivity tool for tech nerds… It’s a revolutionary force that’s changing the way we work, collaborate, and innovate. Banning it outright could mean missing out on a valuable opportunity for progress.”

The first step in introducing AI in the workplace: it’s the people, not the tech

Nichol Bradford, Executive-in-Residence for Artificial Intelligence + Human Intelligence (HI) at The Society for Human Resource Management, says that while businesses may previously have had enterprise-level experience in implementing tools like machine learning, generative AI tools like ChatGPT or Copilot sit directly in the hands of employees.

In an interview for AI and The Future of Work, part of FutureB2B’s SmartBrief AI Impact webinar series, Bradford says that while some organisations are experimenting with AI by setting up internal “sandboxes” for employees to use, there is still a considerable gap between the C-suite and the shop floor.

A study by Upwork, which surveyed 2,500 global C-suite executives, full-time employees, and freelancers in the US, UK, Australia, and Canada, found that:

  • Where companies have introduced AI, 96% of C-suite leaders say they hope the tools will increase their company’s overall productivity levels.

  • Nearly half (47%) of employees using AI say they have no idea how to achieve the productivity gains their employers expect, and 77% say these tools have decreased their productivity and added to their workload.

An Accenture study involving 7,000 C-suite leaders and 5,000 workers at large organisations in 19 countries found that while 95% of workers see value in working with generative AI, approximately 60% are also concerned about job loss, stress, and burnout.

Bradford says that developing trust between leadership and employees is key to successful implementation – as is understanding that only by working with existing workflows and looking at how things are organised will deep productivity improvements be achieved.

“There are legendary fails in the history of tech transformation – and this is a massive tech change. What succeeded in the past? Organisations that did digital transformation well entrusted the process to a person, someone who really understood the business, who had vertical and horizontal relationships and who did the human work to get buy-in, explaining to people what’s in it for them.”

Top tips:

  1. Take a human-centred approach, focusing on collaboration between humans and AI. This involves upskilling employees, fostering trust, and creating a culture that encourages experimentation and learning.

  2. Consider appointing internal AI champions, who can drive adoption by understanding both the business’s needs and the technology’s potential, and who can bridge the gap between executive expectations and employee experiences.

Tying AI to organisational goals

Bradford says a common mistake she sees is organisations doing lots of experimentation in this field without looking for the change that will fundamentally improve the business.

“If you take business architecture as your guiding light and then share areas of focus, you can then point experimentation towards those areas. You don’t tell people how to do things – you just give guidance as to what areas matter the most to senior leadership. You unleash your employee population on the issues and give them permission and agency to experiment around those areas.”

In generative AI, the people who know what needs to be done or not done are the people doing the work, says Bradford. They can help tie proposed changes to business architecture to get to real productivity. The change process is 70% people and 30% tech.

Top tips:

  1. Implement this in steps, says Bradford, adding that this was the process followed by Microsoft. First, prioritise AI fluency: provide employees with opportunities to experiment with AI tools in a safe environment, allowing them to understand the technology’s potential and gain a sense of agency.

  2. Then, decide which areas will impact the business the most and allow people to experiment freely in that space.

  3. Having identified areas that would benefit most from the implementation of generative AI, add “co-pilots” into those spaces. Because you now understand what the business wants to accomplish from start to finish, you can use well-defined use cases to test and iterate your processes.

The role of human resources

Bradford says that everyone in the AI era will have to become a value-added strategist, regardless of their department. But for human resources, a key area is employee experience.

“Employee experience is not just having a ping pong table in your break room. Good employee experience comes from the employer understanding the deep human desire to enjoy one’s work. HR leaders are the circulatory system of the organisation and they can really help people with reinventing workflows.”

Bradford adds that HR practitioners will need to be well-versed in responsible and ethical AI use. “The execution of this might sit in the technology department, but the heart of it is ensuring people know how to collaborate because they understand what the rules are. That’s going to come from HR.”

Top tip:

Don’t assume that a collaboration between the IT department and HR can implement these changes. Senior leadership needs to take charge of the process by thinking transformatively. They should be seen to be learning the new skills and tools – but they should also be thinking and talking about stressful issues like the impact of AI on jobs within the company. The C-suite should also be involved in establishing an AI policy for the organisation.

There are challenges too

Alongside an iterative and human-centred approach to integrating AI into company practice, companies will need to work on governance issues. That means getting to grips with a fast-moving regulatory landscape, writes Scout Moran, Senior Counsel, Product and Privacy, at Grammarly. As things stand now, the AI regulatory environment in the United States comprises a mix of White House executive orders, federal and state initiatives, and statements by existing regulatory agencies, such as the Federal Trade Commission.

In Europe, the European Union’s AI Act is already in effect, with a staggered rollout for compliance milestones. The Act has broad implications; EY Switzerland says all parties involved in the development, usage, import, distribution, or manufacturing of AI systems are covered by the law – which also applies to AI systems located outside of the EU if the output produced by the system is intended to be used in the EU.

Top tip:

The compliance requirements for companies will be complex. Global law firm Norton Rose Fulbright recommends that companies begin by compiling an inventory of their current AI systems and models.

Leadership in the age of AI

Bradford says she’s heard work described as “what you hoped to do plus all the things you didn’t sign up for”. If generative AI can remove the things that leaders and their employees didn’t sign up for so that they can do the work where their expertise and value lie, that’s when it becomes fun, she says.

“We’ve always had a need for leadership, but these shifts require something different. Leaders need to be able to create trust and an environment where people are constantly experimenting and failing, and they need to have the organisation be okay with that. I think it’s about to get very interesting!”