How LLMs Enhance Customer Experience & Business Value

The rise of LLMs and Generative AI models ushers in the era of hyper-personalisation at scale.

Increasingly, customers expect fast and effective responses, whether for customer support matters or for helpful and tailored recommendations, product search and discovery, and information.

Large Language Models (LLMs) have been key to advancing Generative AI as the natural language capabilities of computing machines have rapidly advanced.

[Image: LLMs and Generative AI]

Advancing language capabilities has been a key aspect of recent AI development and is set to continue in 2024. Language capabilities allow AI agents to personalise engagement for the customer and, on the internal operations side, to assist enterprise staff in their work and help them complete tasks more efficiently.

McKinsey have estimated, in a report entitled ‘The economic potential of generative AI: The next productivity frontier’, that Generative AI may add between $2.6 trillion and $4.4 trillion to the global economy annually, and that potentially up to three quarters of that total annual value will come from use cases in marketing and sales, customer operations, R&D and software engineering.


[Image: Generative AI]

BCG points out how Generative AI may generate content, improve efficiency (for example, summarising documents) and personalise experiences by identifying patterns in a given customer’s behaviour and generating tailored information for that particular person. However, BCG also notes that wider adoption of Generative AI may create challenges for organisations that have not implemented suitable governance arrangements. These include bias and toxicity, data leakage whereby sensitive, proprietary data may be entered into a Generative AI model and reappear in the public domain, and a lack of transparency.
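As an illustrative mitigation for the data-leakage risk described above, the short sketch below (my addition, not from BCG or IBM) masks obvious personal identifiers before any text is sent to an external generative model; the regular expressions and the surrounding workflow are simplified assumptions rather than a production-grade solution.

```python
# Illustrative sketch only: mask obvious personal data before text is sent
# to a third-party generative model, one simple mitigation for data leakage.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Customer john.doe@example.com on +44 7700 900123 asked about a refund."
safe_prompt = redact(prompt)
print(safe_prompt)
# Customer <EMAIL> on <PHONE> asked about a refund.
# safe_prompt, not the raw text, would then be passed to the model.
```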

Therefore, it is important for businesses that their offerings provide reliable and trustworthy solutions for their external customers and internal operational needs. With this in mind, it is worth considering the OTTES criteria:

  • Open: whereby there is sufficient model variety to meet the needs of enterprise use cases and compliance obligations.

  • Trusted: whereby customers may apply their own proprietary data, develop an AI model and, in the process, ensure that ethical, regulatory and legal matters are managed appropriately.

  • Targeted: whereby the solution is designed for business use cases with the intention of unlocking new value.

  • Empowering: the model allows the user to become a value creator across training, fine-tuning, deployment and data governance.

  • Sustainability: working with an LLM provider that is committed to achieving net-zero greenhouse gas emissions.

[Image: IBM watsonx]

I am delighted to collaborate with IBM watsonx and to set out how OTTES with an LLM is enabled by watsonx. Furthermore, I am also pleased that IBM have committed to sustainability, with a target of net-zero operational greenhouse gas emissions by 2030, and have reported that the firm has reduced operational greenhouse gas emissions by 61% since 2010.

In addition, IBM have set out the following core principles for Generative AI for enterprise:

[Image: IBM's core principles for Generative AI for enterprise]

What is Natural Language Processing (NLP)? And what is a Large Language Model (LLM)?

NLP is the area of computer science, and in particular Artificial Intelligence (AI), concerned with giving computers the ability to understand text and speech in a manner similar to humans.

Natural Language Understanding (NLU) and Natural Language Generation (NLG) are subsets of NLP. NLU focuses on deriving understanding from language (speech or text).  NLG is applied towards the generation of language (speech or text) that is capable of being understood by humans.


An LLM is a type of foundation model that has been trained on a vast set of data, enabling it to understand and generate natural language and, increasingly, to handle wider tasks (multimodal, multitask).

Many firms, including IBM, have been developing NLP capabilities over an extended period, along with the powerful Machine Learning and, in particular, the Deep Neural Network architectures required to advance NLP; most notably the Transformer with its Self-Attention Mechanism, which LLMs typically apply.

The paper ‘Attention Is All You Need’ (2017) introduced the Transformer architecture and its Self-Attention Mechanism.
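To make the self-attention idea concrete, the sketch below (my addition, not from the paper or IBM) implements scaled dot-product self-attention with NumPy; real Transformer layers add learned query, key and value projections, multiple heads, masking and residual connections on top of this core operation.

```python
# Minimal sketch of scaled dot-product self-attention ("Attention Is All You Need", 2017).
import numpy as np

def self_attention(X: np.ndarray) -> np.ndarray:
    """X has shape (sequence_length, d_model); queries, keys and values
    are taken as X itself for simplicity (no learned weight matrices)."""
    d_k = X.shape[-1]
    scores = X @ X.T / np.sqrt(d_k)                  # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ X                               # weighted mix of token vectors

tokens = np.random.randn(5, 8)        # 5 tokens, 8-dimensional embeddings
print(self_attention(tokens).shape)   # (5, 8)
```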

Transformers are giant models, and building a foundation LLM from scratch requires large amounts of data, powerful server capabilities and software engineering expertise (data engineers, natural language processing experts).

The attention of the public and many C-suites has been captured by the likes of OpenAI’s ChatGPT, GPT-3 and GPT-4, while the developer community has also been excited by Meta’s Llama models and other models available via open source.

There are firms who seek easily implementable solutions from a reliable and trustworthy enterprise supplier with a strong balance sheet.


Furthermore, there are firms in the Small and Medium Enterprise (SME) segment that could benefit enormously from the opportunities LLMs offer but lack the internal resources to access and scale the more complex models.

For firms in the above categories seeking a solution from a reliable and secure counterparty, IBM have launched the Granite model series on watsonx.ai as the Generative AI offering for IBM services including watsonx Assistant and watsonx Orchestrate.

Application of LLMs

[Image: LLMs in 2024 (IBM)]

LLMs provide language understanding, text generation and other types of content generation, utilising huge amounts of data for training. From a use case perspective, LLMs may infer from context and coherently generate contextually relevant responses, as well as perform text summarisation, language translation, sentiment analysis, question answering, code generation and creative writing.
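As a hedged illustration of how such tasks are typically driven in practice, the sketch below wraps a few of them in simple prompt templates; the generate() helper is a hypothetical stand-in for whichever hosted LLM endpoint or SDK an enterprise actually uses, not a specific vendor API.

```python
# Illustrative prompt templates for common LLM tasks; generate() is a placeholder.
def generate(prompt: str) -> str:
    # Substitute a call to the chosen LLM service here.
    return f"<model response to: {prompt.splitlines()[0]}>"

def summarise(document: str) -> str:
    return generate(f"Summarise the following text in three bullet points:\n\n{document}")

def sentiment(review: str) -> str:
    return generate(
        "Classify the sentiment of this customer review as positive, "
        f"negative or neutral, with a one-line reason:\n\n{review}"
    )

def translate(text: str, language: str = "French") -> str:
    return generate(f"Translate into {language}:\n\n{text}")

print(sentiment("Delivery was late but support sorted it out quickly."))
```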

[Image: LLM overview]

This ability is due to the vast size of LLMs (often billions of parameters), and building and training an LLM from scratch typically requires significant resources (data, time, engineers, servers with GPUs and money). Hence firms (in particular those outside the tech sector) are using LLM services available in the market.

The section below sets out how, according to IBM, watsonx may enable the user to work with LLM technology that is secure, reliable and relatively easy to adopt.

An Overview of Watsonx

Further advantages of IBM watsonx are set out below:


In essence, one may surmise that the key advantage of the Granite model series for enterprise is that these are business-targeted, IBM-developed foundation models built from sound data.

Embeddable AI Partnership: Independent Software Vendors (ISVs) and Managed Service Providers (MSPs)

Furthermore, a software company can look at embedding watsonx into its commercial AI software solutions. Partnership opportunities allow solutions to be elevated via hybrid cloud and AI technology, enabling expansion into new markets and access to established go-to-market tools that amplify sales strategies.

Moreover, the NLP solutions from watsonx may accelerate business value generated via AI through a flexible portfolio of services, applications and libraries. A particular advantage of the solution is that it makes the technology accessible to those who are not data scientists, including non-technical users, thereby enhancing productivity and business processes.

Use Case Examples: Retail, E-Commerce, Marketing, Advertising Sectors

Brands and firms can now engage with customers in a more meaningful and customised manner. LLMs can provide potential customers with useful information and helpful advice about a product or service, and brands can use an LLM to create personalised campaigns and targeted promotions.

Furthermore, LLMs can be used to generate content, whether social media posts and blogs or more creative material such as poetry, that may engage the potential customer. Moreover, LLMs can power more capable and effective chatbots that operate 24/7 and assist with customer queries, thereby improving customer experience and user engagement.
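To illustrate the personalised-campaign idea above, the following sketch (an assumption-laden example, not a vendor API) builds a campaign prompt from a simple customer profile; in practice the prompt would be passed to the chosen generative model.

```python
# Hypothetical sketch: build a personalised campaign prompt from a customer profile.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    name: str
    recent_purchases: list[str]
    preferred_channel: str   # e.g. "email" or "social"

def campaign_prompt(profile: CustomerProfile, promotion: str) -> str:
    return (
        f"Write a short, friendly {profile.preferred_channel} message for "
        f"{profile.name}, who recently bought {', '.join(profile.recent_purchases)}. "
        f"Promote this offer without sounding pushy: {promotion}"
    )

profile = CustomerProfile("Alex", ["hiking boots", "rain jacket"], "email")
print(campaign_prompt(profile, "20% off camping gear this weekend"))
```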

An example of enhanced customer support for the retail sector from watsonx Assistant is provided below, whereby the enterprise may inform every interaction, for example by enabling a customer to discover the nearest stores:


[Image: "Looking for retail near me" conversation]

Source: IBM

A further example is to advise on every possibility: meaningful, proactive engagement that provides customers with recommendations and accurately informs them about the relevant options.

[Image: "Red Hensley shirt" recommendation conversation]

Source: IBM

And to guide every experience: accurately responding to customer inquiries results in personalised guidance for the customer and captures sales leads in a meaningful manner, by understanding customer intent and enabling personalised follow-up engagement by an agent.

[Image: "Trouble placing an order" conversation]

Source: IBM

Case studies are provided by:

  • Camping World: a call-centre transformation that reportedly increased customer engagement by 40% across all platforms, reduced waiting times to 33 seconds and delivered a 33% increase in agent efficiency.

  • Stiky: providing 24×7 customer service, answering up to 90% of queries automatically, conducting on average 165 conversations daily and achieving a 92% positive customer satisfaction (CSAT) rating.

  • Bestseller India: boosting fashion forecasting.

Furthermore, it is worth noting that according to Google “Consumers are 40% more likely to spend more than they originally planned when their retail experience is highly personalized to their needs.”

Use Case Examples: Banking, Financial Services / Fintech

LLMs may provide insights from analyst reports, news articles, social media content and industry publications.

The watsonx service may be applied towards empowering customer self-service, so that customers gain rapid access to core banking actions such as searching branch locations, checking account balances, making payments and transfers, and independently resolving their support issues.

[Image: banking self-service conversation]

Source: IBM
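As a simplified sketch of the self-service flow described above, the example below classifies a customer's intent and answers routine banking requests directly, handing anything else to a human agent; the keyword-based classify_intent() is a stand-in for the LLM or assistant platform that would normally perform this step, and the responses are placeholders.

```python
# Simplified intent-routing sketch for banking self-service; not a real assistant API.
def classify_intent(message: str) -> str:
    text = message.lower()
    if "balance" in text:
        return "check_balance"
    if "branch" in text or "near me" in text:
        return "find_branch"
    if "transfer" in text or "payment" in text:
        return "move_money"
    return "handoff_to_agent"

def respond(message: str) -> str:
    intent = classify_intent(message)
    responses = {
        "check_balance": "Your current balance is shown in the app under Accounts.",
        "find_branch": "The nearest branch can be found via the branch locator.",
        "move_money": "I can set up that transfer; please confirm the amount.",
    }
    # Anything unrecognised is routed to a human agent.
    return responses.get(intent, "Let me connect you with an agent who can help.")

print(respond("Where is the nearest branch to me?"))
```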

A further example is contextualising experiences to drive outcomes in banking, for example by providing relevant suggestions and helpful guidance in responses to the customer, thereby improving the customer experience (CX).

[Image: contextualised banking conversation with a customer]

Source: IBM


Moreover, the watsonx agent may also be applied to suggesting helpful next steps, whereby customers are provided with intelligent recommendations and proactively informed about opportunities, enabling them to understand all the contextual possibilities.

[Image: suggested next steps conversation]

Source: IBM

An advantage of this process is the delivery of human-like interactions irrespective of where the customer inquiry arrives or the language spoken, with the examples provided by watsonx Assistant applying natural language processing (NLP) to bring customer engagements towards human-level quality with quick, effectively real-time responses.

Case study examples are provided by:

Use Case Examples: Insurance / Insurtech

Use case examples include agent assist, which enables customers to ask basic queries, such as resetting a password or seeking information about their insurance policy, and furthermore provides rapid responses to requests for quotes and pricing, coverage checks, claims processing and the handling of policy-related matters.

[Image: policy coverage conversation]

Source: IBM


Moreover, insurance firms may also apply watsonx towards digital customer support to enable customer engagement and integrate an AI agent within existing systems and processes.

[Image: auto insurance claim conversation]

Source: IBM

Case study examples include:

IBM commissioned a Forrester Total Economic Impact (TEI) study, which found that watsonx Assistant customers saw the following benefits:

Increasingly, enterprises face a race to adopt Generative AI capabilities or risk being left behind by competitors, in particular if rivals can offer enhanced CX and customer engagement.

However, a business, whether a large corporation or an SME, must carefully evaluate which option best suits its needs. As noted, many firms will lack the internal resources and core competences of a technology major needed to develop a state-of-the-art LLM. Equally, many such firms will seek to evaluate the available open-source models, weighing the pipeline architecture, the engineering resources needed for a real-world production system, and the governance requirements. For these firms a trusted counterparty who can deliver reliably and securely is key.


IBM have summarised how watsonx meets the OTTES criteria set out at the outset of this article:

  • Open: according to IBM, watsonx is based on the best open technologies available, providing model variety to cover enterprise use cases and compliance requirements.

  • Trusted: IBM state that watsonx is designed with principles of transparency, responsibility and governance enabling customers to build AI models trained on their own trusted data to help manage legal, regulatory, ethical and inaccuracy concerns.

  • Targeted: IBM point out that watsonx is designed for enterprise and targeted at business domains; watsonx is designed for business use cases that unlock new value.

  • Empowering: according to IBM, watsonx allows the user to go beyond just being an AI consumer and become an AI value creator, training, fine-tuning, deploying and governing the data and AI models they bring to the platform, and fully owning the value these create.

  • Sustainability: as noted above, IBM is targeting net-zero operational greenhouse gas emissions by 2030.

About the Author

Imtiaz Adam is a Hybrid Strategist and Data Scientist. He is focussed on the latest developments in artificial intelligence and machine learning techniques, with a particular focus on deep learning. Imtiaz holds an MSc in Computer Science with research in AI (Distinction) from the University of London, an MBA (Distinction) and Sloan Fellowship in Strategy from London Business School, and an MSc in Finance with Quantitative Econometric Modelling (Distinction) from Cass Business School. He is the Founder of Deep Learn Strategies Limited, and served as Director & Global Head of a business he founded at Morgan Stanley in Climate Finance & ESG Strategic Advisory. He has strong expertise in enterprise sales & marketing, data science, and corporate & business strategy.

#IBMPartner

Source for performance data is IBM. Results may vary and are provided on a non-reliance basis; although the author has made reasonable efforts to research the information, no representations, warranties or guarantees, whether express or implied, are provided in relation to the content of this article.

