It is your customer service, not a chatbot
If you come across an online chatbot that actually serves you well, please let me know. I have never used a chatbot that, despite supposedly existing to assist me as a customer with my inquiry, works in a meaningful way.
When online chatbots were first introduced, they were programmed around conditional logic: an expression evaluates a condition, then executes one block of instructions if the condition is true and another if it is false. For example, if a customer mentions trouble with a product, the chatbot, instead of trying to resolve the issue, might simply present a list of FAQs with options to retrieve information relevant to the question. Most chatbots still work on this rule-based model.
Traditional software development based on the IF-THEN-ELSE conditional model has a limitation: the linear assumptions it makes about the relationships between variables may not accurately represent real-world outcomes. A linear conditional model retrieves answers from pre-indexed information, using keywords and phrases that point to specific fields and records, and its outcome is always binary, either true or false. A machine learning model, by comparison, has more power to process information: it uses a graph-based structure whose many branches and nodes cluster information into layers and networks, so its computation is naturally better suited to complex cognitive problems.
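To make the contrast concrete, here is a minimal sketch of the rule-based approach. The keywords, canned answers, and function name are hypothetical, purely for illustration; the point is that every reply is the outcome of a keyword condition evaluating to true or false, which is why such a bot can only point you back to the FAQ.

```python
# Minimal sketch of a rule-based (IF-THEN-ELSE) chatbot.
# The rules and answers below are hypothetical examples, not a real product.

FAQ_RULES = {
    "refund": "Refunds are processed within 5-7 business days. See our refund FAQ.",
    "shipping": "Standard shipping takes 3-5 business days. See our shipping FAQ.",
    "broken": "Sorry to hear that. Please check the troubleshooting FAQ or open a ticket.",
}

def rule_based_reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in FAQ_RULES.items():
        if keyword in text:      # condition true -> this branch
            return answer
    # every condition false -> fall back to a generic FAQ pointer
    return "Please browse our FAQ page or contact support during business hours."

print(rule_based_reply("My order arrived broken, what can I do?"))
```

No matter how the customer phrases the problem, the bot can only land on one of the pre-written branches or the generic fallback; it never actually engages with the issue.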
Moving forward, there will likely be more chatbots powered by generative A.I., yet many commercial chatbots still perform poorly; they are little more than a static FAQ page.
Furthermore, just because a chatbot is speaking to our customers on our behalf, we cannot expect every mistake or inadequacy to be forgiven. Unfortunately, many organizations still use a chatbot to hide the insufficiency of their customer service. This is a classic "better-than-nothing" mindset. If your organization adds technology for the sake of appearance, you won't achieve a good outcome; even worse, you will have successfully turned a customer inquiry into a customer complaint. From the customer's side, it feels like waiting at your storefront in a vulnerable and helpless moment. Customers need help and empathy, not just a Q&A.
It's not my intention to throw all the technicalities at you. Let's evaluate the whole scenario as a customer seeking assistance from a chatbot. Your customer is asking for help, and this is the moment when that customer is most likely to make a business decision with you. Hence, the information you provide must meet the following criteria:
- Speedy and responsive.
- Informative.
- Clear and concise.
- Not only helpful but also empathetic.
As for generative A.I., it's hot but no longer new. Why haven't we seen a movement to use generative A.I. to enhance our chatbot capabilities? I believe cost is a concern, and I can break it down into three economic issues:
- Operation and management -- The expertise and programming costs differ between the traditional conditional model and the ML model. Customizing your own LLM is not as cheap as paying for an off-the-shelf chatbot solution, and the ongoing API license can range from hundreds to thousands of dollars (USD) per month.
- Data availability and scalability -- Preparing the data you will use to train your own LLM is not simple. The volume of data requires expertise to design how it should be structured, tabulated, and fed into the LLM. The time spent on preparation is substantial, and so is the cost of that expertise. This is not the traditional IT administration work that your in-house IT team can handle.
- Infrastructure -- The infrastructure needed to support a capable A.I. bot serving your customers is a significant investment too. Robust infrastructure and cloud hosting incur substantial costs that scale with the on-demand usage of the service bot.
Nonetheless, economics tells us that as technological production scales up, the cost per unit decreases, so I am confident the cost bottleneck will be resolved. Not to mention the open-source options you can use to lower the cost of developing your own LLM; a small sketch of that route follows below. In the end, it comes down to your own effort and commitment to make better use of the technology.
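As a rough sketch of the open-source route, the snippet below assumes the Hugging Face `transformers` library and an openly available model small enough to run on your own hardware; the model name and prompt are placeholders, not a recommendation, and a real deployment would use a far more capable instruction-tuned model. The point is simply that the per-call API fee disappears and you pay only for your own compute.

```python
# Illustrative sketch only: generating a reply with an open-source model
# running locally via the Hugging Face `transformers` pipeline.
from transformers import pipeline

# "distilgpt2" is just a small placeholder model; swap in a capable
# instruction-tuned open model for anything customer-facing.
generator = pipeline("text-generation", model="distilgpt2")

prompt = (
    "Customer: My order arrived broken. What should I do?\n"
    "Support agent:"
)
reply = generator(prompt, max_new_tokens=60)[0]["generated_text"]
print(reply)
```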
Or, to boil it down to the fundamental issue: is your intention to serve your customers better, or do you want to use the technology as a cover-up for your customer service deficiency?