CDW Blog

AI in the Workspace - Part 3: Adoption Challenges of AI

10 October, 2023 / by Tim Russell

 

The introduction of AI capabilities may be something you are doing for your internal audience or for your customers. But how do you ensure a balanced, safe adoption of this technology? 

From the digitally excluded to the early adopters, any new technological capability must be provisioned in a way that supports all users, delivers the benefits it was intended for and, above all, aligns with the business goals. 

I’m going to split the adoption challenges into two separate threads: one focusing on how employee experience (EX) can be addressed, and the other on customer experience (CX). 

Adoption challenges – Employee experience (EX) 

When implementing AI capabilities for employees, as stated in the previous blog post on this subject, education is paramount. Although safety mechanisms and security protocols can be introduced, unless you are planning to allow AI direct access to publish, it will have a human agent: a requestor of AI content, activities or actions. The ownership must be clear to anyone who engages in the use of AI capabilities, and it must also be understood that the responses, results, or actions must be tempered with human control. 

Previously, we have seen internal SharePoint sites become overrun with outdated documentation, orphaned references and the like; this trend was too often the result of negligence around data ownership and publication. Often, a document was required to meet a stage gate in a particular process within a project. The document would be created, placed on a shared site, and the stage gate marked as complete. This somewhat complacent attitude to data must be extinguished when it comes to utilising AI for dynamic content creation. 

As I mentioned in the previous blogs, content created today quite often becomes part of the learning data set for future users. Bad or incorrect data will continue to propagate and create hallucinations that become increasingly difficult to distinguish from fact. To mitigate this, education at the earliest stage is critical; if we do not start here, the remedial work required to filter data could undermine any productivity gains from the use of AI. 

The second stage of AI adoption for employees is to define a protocol for use: are you going to allow AI-published content? Are you going to rely on AI data to create customer insights? Is AI going to be utilised for image creation in articles? 

The education piece above refers to understanding the potential impact; the protocol is intended to give all your employees a clear list of suitable scenarios where AI can be engaged, and the stage gates that must be passed to ensure the data and content is representative, accurate and unbiased. This is no different from the approach to content creation generally; however, it is easy to fall into the trap of thinking that a well-written, easy-to-read document with tables and references is ready for release. With AI platforms, I have seen that this is demonstrably not the case. The individual requesting AI content is ultimately responsible, and I will keep repeating this as it is critical to the adoption of AI. 
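One way to make such a protocol concrete is to express it as a simple policy table mapping each AI-use scenario to the stage gates it must pass. This is a minimal illustrative sketch only; the scenario names and gates below are hypothetical, not a real policy.

```python
# Hypothetical AI-use protocol expressed as a policy table.
# Scenario names and gate names are purely illustrative.
POLICY = {
    "draft-internal-doc": ["human-review"],
    "customer-insight":   ["human-review", "bias-check"],
    "published-article":  ["human-review", "bias-check", "legal-signoff"],
    "image-generation":   ["human-review", "brand-check"],
}

def gates_cleared(scenario: str, completed: set) -> bool:
    """AI output may ship only when every required gate is complete.
    Scenarios not listed in the policy are not permitted at all."""
    required = POLICY.get(scenario)
    if required is None:
        return False
    return set(required) <= completed

print(gates_cleared("customer-insight", {"human-review", "bias-check"}))  # True
print(gates_cleared("published-article", {"human-review"}))               # False
```

The "default deny" for unlisted scenarios mirrors the point above: if a use of AI has not been explicitly approved and gated, it should not happen.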

Having decided, as an organisation, that formalised AI capabilities will be engaged, the next decision is what sort of AI capability. Are you going to use public-facing AI models with their predefined learning models, or are you going to create your own? Building your own AI capability varies in complexity: the processing capability and interface are manageable questions, but the content and modelling data are anything but simple. I will not demonstrate this here, but if there is a requirement to look at building your own AI capability, reach out to us and we will be glad to help you on this journey. 

Finally, and most importantly, there is the security of the data submitted, stored, processed and created by any AI platform. There is a prime example from March 2023, when a large manufacturing company’s tech team submitted sensitive corporate information to a public AI engine, which resulted in the information leaking into the public domain. 

Obviously, this is a bad thing and one I imagine all organisations would look to avoid! I’m not going to specify technology or vendor solutions here, but what you need to ensure is that, through web proxies or end-user protection capabilities, any access to AI engines is governed by clear data rules. This may mean that internal data needs to be correctly labelled to prevent accidental disclosure, or that proxies inspect and classify the data in real time. Either way, everything must be done to ensure internal data is checked before reaching the public domain. 
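The label-based check described above can be sketched very simply: before anything is sent to a public AI engine, refuse the submission if it carries a blocked sensitivity label or matches a known sensitive pattern. The label names and patterns here are hypothetical, chosen only to illustrate the gating logic.

```python
import re

# Illustrative only: labels and patterns would come from your
# own data-classification scheme, not these hard-coded values.
BLOCKED_LABELS = {"confidential", "internal-only", "restricted"}
SENSITIVE_PATTERNS = [
    re.compile(r"\bproject\s+codename\b", re.IGNORECASE),
    re.compile(r"\b\d{16}\b"),  # a card-number-like run of 16 digits
]

def may_submit(text: str, labels: set) -> bool:
    """Allow submission to a public AI engine only if the text
    carries no blocked label and matches no sensitive pattern."""
    if labels & BLOCKED_LABELS:
        return False
    return not any(p.search(text) for p in SENSITIVE_PATTERNS)

print(may_submit("Quarterly roadmap summary", {"public"}))      # True
print(may_submit("Board minutes", {"confidential"}))            # False
print(may_submit("Card 1234567890123456 on file", {"public"}))  # False
```

A real proxy or DLP product would apply far richer classification, but the principle is the same: the check happens before the data leaves the organisation, not after.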

Adoption challenges – Customer experience (CX) 

In my opinion, the customer experience is where the benefits of AI can be realised most quickly. I’ll qualify this statement slightly to help explain my perspective. 

Imagine you are a customer of your own organisation; what would you consider most important in any interaction with the business? 

I would consider the most important aspects to be speed of response, accuracy of the information provided, speed to resolution, and a personalised interaction, regardless of whether it’s with a human or AI. 

It would be logical to assume that an interaction, unless specifically labelled as ‘AI’, should feel human in nature. We have probably all, at one point or another, had a customer service interaction where responses were obviously canned and probably copied and pasted from a repository or repeated from a call script. These scenarios quite often feel cold and lifeless. Even when delivered by a human, you know it’s a script! 

Assuming your organisation implements AI with secure, controlled access to customer data, this information could be provided either directly or via an agent to augment and improve customer interactions. Humans can only process so much information at a time; an AI-supported interaction could monitor the conversation, existing customer data, real-world events, and trends, all in the blink of an eye. Imagine having a conversation with a customer and having real-time information fed to you about sentiment analysis in the voice print, or local issues near the customer. 
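To make the augmentation idea concrete, here is a deliberately toy sketch of the flow: score the customer’s latest utterance and surface a cue to the agent. A real deployment would use a trained sentiment model or a vendor API; the word lists, threshold and cue text below are entirely illustrative.

```python
import re

# Illustrative word lists only; real systems use trained models.
NEGATIVE = {"frustrated", "angry", "cancel", "unacceptable", "slow"}
POSITIVE = {"thanks", "great", "happy", "resolved", "perfect"}

def sentiment_score(utterance: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def agent_cue(utterance: str) -> str:
    """Turn the score into a real-time prompt for the human agent."""
    score = sentiment_score(utterance)
    if score < 0:
        return "Customer may be unhappy: acknowledge and offer escalation."
    if score > 0:
        return "Customer is positive: confirm resolution and close."
    return "Neutral tone: continue gathering details."

print(agent_cue("I am frustrated, this is unacceptable"))
```

The point is not the scoring method but the loop: the AI watches the conversation continuously and feeds the agent cues faster than any human could assemble them.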

Clearly, the potential for AI in the customer contact arena is vast. Now, how do you go about adopting this into the live environment? The answer, again, is education, but not just internally. You need to be open and honest about the utilisation of AI platforms to drive an improved service to customers. 

For adoption that impacts your customers and the outside world, you need to think about how this will be perceived: is this the inflexible world of IVRs that has long been present in customer contact, or will you offer free-text interaction that delivers personalised content? A customer is more likely to accept automation if it delivers a better service to them. You will still need to assure them of data integrity, but an improved service response time, a shorter time to resolution, and the flexibility to work outside normal parameters are all aspects that will be experienced positively. 

Think about the ability to offer 24/7 support to customers, with the caveat that out-of-hours support will be just as capable and beneficial, but AI-driven (I still would consider implementing human validation of output, initially). 

I want to suggest another scenario for you, one that revolves around customer interaction with the digitally excluded.  

Have you ever had to explain something complicated? Did you get frustrated? Did you try to explain something when you were short of time? Did you rush? 

AI will not get frustrated, nor impatient, nor bored.  

This is where I believe AI can deliver real impact: providing support to the digitally excluded, or those less experienced at using digital interaction. I quite often look at the interfaces for public services; more and more, this interaction is web-based and there is less scope for face-to-face contact. The telephone interface is still used, but carries a time cost far greater than chat-based interactions: a human agent can handle only one voice call at a time, whereas it is normal to operate three or four chat channels. 

An augmented AI environment can help optimise agent interaction times: from dedicated pre-call screening to in-call prompting and support, quality can be improved and call times reduced. The same concept could be taken into public service kiosks: enclosed, private, street-based environments offering face-to-face-style contact, reducing the cost overhead of large buildings while still providing service availability to those who prefer physical rather than digital support. This contact would be driven by AI and not be physical in nature, but it could address the extra support some in our community require. 

Education, education, education! 

Adoption was the main topic of this piece. We touched on several areas, but hopefully the core message of ‘education’ came through. Underlying this is ‘purpose’; implementing AI must deliver a benefit to a user or customer group, and be responsible, ethical, and aligned to your business goals.   

If you can clearly articulate the three Ps (the Purpose, Policies, and Protections you have in place around AI), acceptance by both internal and external audiences will be simplified. 

Change in an organisation is always a challenge; CDW is a partner capable of helping you on the journey of discovery, design, and delivery of AI into your organisation. Reach out to us to help you plan, prepare, and operate AI responsibly within your business. 

This is the third in a 4-part blog series written by Tim Russell, the Chief Technologist for Modern Workspace at CDW UK.   

Please visit the Office of the CTO at CDW UK to view other content in this series.