CDW Blog

AI in the Workspace - Part 2: AI - A Business Dilemma

3 October, 2023 / by Tim Russell

 

Businesses want to empower their staff to achieve more through the productive use of AI and ML, but they also need to protect confidential data and ensure a safe environment for their people. How do you balance these concerns? 

Time is the true constant, but how do we give people time to achieve more, and to be more productive and creative? In this article I will explore these concerns and how businesses can address them, while also looking at the opportunities that AI and ML offer. 

Why AI?  

For the last 30 years I have worked in the technology field, and I have lost count of the number of times a technology was selected because it was going to solve the problem the customer was facing at the time.  

It could be improving customer experience, lifting employee pulse scores, or trying to save money. Often, this has resulted in revenue for a supplier and a headache for the customer. Any technology that is to be invested in must support a specific business outcome; just because it is the latest and greatest piece of tech doesn’t mean your business needs it. 

Artificial Intelligence, Machine Learning, and all that these terms encompass are amazingly powerful and will lead to significant time savings, but at what cost? 

As mentioned earlier, time is the thing we are trying to save. Time saved on the mundane frees it up for the creative, the innovative, the space to think. I’m going to split this into three sections: 

  • For AI 
  • Against AI 
  • Introducing AI to your business – the dilemma 

First let’s look at the ‘for’ side of this dilemma.   

For AI 

AI and ML can deliver insights, shorten the lead time to document creation, remove writer’s block, automate processes and take away mundane tasks. I have used AI in all these scenarios recently, and it also works very well as a sounding board.   

With the prevalence of asynchronous working (any time, any place) we sometimes find ourselves having creative spurts at times of the day when we may not be able to speak to a colleague; 6am and 10pm seem to be my most innovative hours, but we are all different. 

At these times, if I have an idea I can document it, but I can also use some of the available AI chat engines to gain another view on my idea, to give a counter-argument, or even to check my meandering thoughts for sanity! The machine that I use is not emotional, the responses are well-written and even if it decides to call me a blithering idiot for writing such drivel, I will not be offended; after all, it is a machine and therefore, I am in a safe space. Both my innovation and idiocy are shared with something I view as no more than a mirror of my mind, and my ego is left intact. 

Take this document for example; if I had struggled to begin creating it, even though I had a title and idea for content, I could have used a chat engine to help remove writer’s block by providing key bullet points to cover. Fortunately, on this subject I am not at a loss for words, but there are times where I have used this exact process to kickstart my productivity. 

Robotic Process Automation (RPA) requires a touch more than a prompt and response; however, the potential to remove mundane interruptions (email and the like) from our daily lives will give us this all-important time back to focus on our productivity.  

Some RPA may already be in the tools we use, some we develop ourselves. The net result is the same; we are given room to do more with the time we have available. When it comes to business process automation and the digital journey, my colleague Jaro Tomik here at CDW can provide immense insight on the correct and beneficial implementation of RPA for businesses. You can see his articles on the subject here.
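To give a flavour of what even lightweight, self-developed automation can look like, here is a minimal, hypothetical Python sketch of rule-based email triage. The folder names, senders and rules are purely illustrative assumptions on my part, not any specific RPA product or tool:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str

# Illustrative routing rules: file each message into a folder so a
# human only has to look at what genuinely needs their attention.
RULES = [
    (lambda m: "unsubscribe" in m.subject.lower(), "Newsletters"),
    (lambda m: m.sender.endswith("@noreply.example.com"), "Automated"),
    (lambda m: "invoice" in m.subject.lower(), "Finance"),
]

def triage(message: Message) -> str:
    """Return the folder a message should be filed into."""
    for matches, folder in RULES:
        if matches(message):
            return folder
    return "Inbox"  # anything unmatched still reaches a human

inbox = [
    Message("news@noreply.example.com", "Weekly digest"),
    Message("supplier@example.co.uk", "Invoice 1042 attached"),
    Message("colleague@example.co.uk", "Thoughts on the proposal?"),
]
for m in inbox:
    print(m.subject, "->", triage(m))
```

Real RPA platforms go far beyond this, of course, but the principle is the same: encode the mundane decision once, and reclaim the time it used to consume.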

I’m sure you will agree that the three examples I have given above are suitable and worthy uses of AI, and without too much of a stretch I think you could find applications for all these scenarios.  

Now on to the counter-argument: ‘against AI’.

Against AI 

Let’s take the examples above and place “what-if…?”, “but…” and “that will never work!” against these specific use cases. 

The first example was the mirror for my thoughts, but when I submit this to an online AI engine, that data can then be used to train the large language model (LLM) and influence the chat engine’s responses for another user. 

If this information contained confidential data or intellectual property, this would be a huge security risk, potentially devaluing the content I am creating, or even the company I work for, because confidential information has leaked into the ‘outside world’.  

There is also the potential for AI hallucinations: a scenario whereby the AI provides what appears to be coherent and reliable information, but when checked is nothing more than a concoction of various data sources, potentially with no common point of reference, that the AI has joined together. There is a recent occurrence of this type of hallucination where a lawyer, claiming to be unaware of the potential for unreliable data in AI, relied on it to produce a court brief. Although the content seemed reliable and read as a perfectly formed court argument, the precedents mentioned were fictitious. More can be read on the BBC website about this here.

Recently, I tried to use chatbots to find part number cross-references for one of my hobbies. The response from the AI chatbot looked sensible, with valid model numbers, data pertinent to the question I was asking, and manufacturer part numbers for the components I sought. To the uninitiated, this looked like a perfect response; the results, however, were anything but. The model numbers referenced were correct, but 50% of the product codes led to a product that matched the description but not the dimensions of the item I was looking for; in short, the result was wrong and unable to provide the answer I needed. I took more time checking the results than I would have spent finding the answer manually in the first place! 

This is one of the biggest risks with using AI: reliability of the response.  

We have the human checker, the user, who can read the responses from AI, but this is little-to-no help when looking at large-scale implementations.   

My colleague Rob Sims focuses on enabling organisations to utilise closed-loop AI systems and specific LLMs, improving the relevance of AI responses by narrowing the quantity, and raising the quality, of the data used to create them. You can find Rob’s articles on the CDW news site here.

Moving on to the writer’s block example: there is the risk that the content AI suggests is not unique, or is based on biased information. What if you were to use and share content created like this? If it was tested (by AI) and shown not to be human-created, do you lose reputation? Do you diminish the value you provide as an individual?   

There is a real challenge in discerning human content from AI content. I have seen content that was human-created being scored as 78% AI-generated and I have seen the opposite where AI content was tested as being almost 90% ‘human’.  

However, if we were to use either a human or an AI to create documentation around a structured process, such as service management, the results, especially from a closed-loop LLM defined for this purpose, would likely be of similar quality and accuracy, deliver the same value, and probably be indistinguishable from each other. When I write content, I want it to be clear that I wrote it, that it is my thought process, investigation, and insight. With ever-more content being AI-created, I want to ensure mine is discernible as unique and human-generated. 

I foresee a very real scenario where the information we rely on to make decisions and the content we read daily is nothing more than an amalgamation of all the available information streams. Over time, this will be biased by having more AI- than human-generated content, resulting in a narrative that is AI-led. Now, this isn’t like the films where computers take over physical machines, but it can result in directing human thought processes, intentional or otherwise.  

‘Facts rule the world, emotions run the world’, is a phrase my father taught me, and it is worth bearing in mind when we consider the influence AI has on the content we digest, especially on the impressionable. A machine cannot experience an emotion (yet), but it can elicit one; are we ready to let AI have a controlling capability in the content we read, and, in turn, influence our decisions and emotions? 

Introducing AI – the next dilemma 

I imagine that, after reading this far, you will have thoughts about how you can personally use AI; but how do you enable others in your business? There are effectively four steps involved in enabling and safely utilising AI in your environments. 

  1. Readiness – CDW can offer readiness assessments for your organisation. These look at the systems, data sets and processes you have to define how AI-ready you are as a business. 
  2. Education – There is enough internet hype already; now you need to make sure training is available to, and consumed by, your employees. This training must highlight the onus of ownership when it comes to AI content. Even if AI has created content at your request, it is still an individual’s responsibility to ensure the data is correct, unbiased, and suitable for use. 
  3. Data protection – There are already several tools in place to prevent data loss. Organisations must ensure that any information submitted by employees to an AI engine passes through a data loss prevention (DLP) portal; ideally one that can distinguish between content destined for a public or a private AI engine. 
  4. Private AI capability – A lot of the consumer AI engines in use are public capabilities built on model and data sets defined by their creator. A private AI capability can instead be modelled on internal, pre-defined data sets, keeping results aligned to a specific business or sector and lowering the risk of hallucination. 
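To make step 3 a little more concrete, the principle behind a DLP check in front of an AI engine can be as simple as inspecting outbound prompts before they leave the business. The patterns, codename and destinations below are purely illustrative assumptions; real DLP tooling uses classification labels and far more sophisticated detection than this Python sketch:

```python
import re

# Illustrative markers of confidential material; a real DLP portal
# would rely on document classification, not just regexes.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b(?:confidential|internal only)\b", re.IGNORECASE),
    re.compile(r"\b\d{16}\b"),                        # card-number-like token
    re.compile(r"project\s+falcon", re.IGNORECASE),   # hypothetical codename
]

def allow_prompt(prompt: str, destination: str) -> bool:
    """Block prompts containing confidential markers from public AI engines."""
    if destination == "private":
        return True  # in-house engine: the data stays inside the business
    return not any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

print(allow_prompt("Summarise this INTERNAL ONLY roadmap", "public"))   # False
print(allow_prompt("Summarise this INTERNAL ONLY roadmap", "private"))  # True
print(allow_prompt("Draft a polite reminder email", "public"))          # True
```

Note how the same prompt is blocked for a public engine but allowed for a private one, which is exactly the public/private distinction the DLP portal in step 3 needs to make.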

Summary 

When it comes to creating a bespoke or private AI offering, the requirements to design, build and operate such a system are too large a topic to cover in this article. However, please do reach out to us if you would like to investigate the possibilities further, and perhaps run one of our assessments to look at your AI readiness. 

This is the second in a 4-part blog series written by Tim Russell, the Chief Technologist for Modern Workspace at CDW UK.   

Other blogs in this series: