
ChatGPT and the future of Customer Support

Kay - Welcome to the Experience Dialogue. In these interactions, we pick a hot topic that doesn't really have a straightforward answer. We then bring in speakers who have been there and seen it, but approached it in very different ways. This is a space for healthy disagreement and discussion, conducted respectfully. By the nature of how we have conceived this, you will see passionate voicing of opinions: friends having a dialogue, even interrupting each other or finishing each other's sentences. At the end of each dialogue, we want our audience to leave with valuable insights and approaches you can try at your workplace, and to continue the discourse on our social media channels.

A little bit about Ascendo: it addresses the optimization of support operations within enterprises so that they can serve their customers better. We enable enterprises to optimize workflows for agents, and we provide dashboards that give senior managers visibility into risk and churn analysis. We are revolutionizing support ops the same way DevOps and RevOps have transformed other areas of the business. In the last three years, we have created a G2 category and are ranked #1 in user satisfaction. We are very proud to be loved by our users. And now, to the topic: ChatGPT and the future of customer support.

There is excitement across many tech and business channels about ChatGPT from OpenAI. It saw enormous adoption within the first five days of its release. We've been following OpenAI and GPT-3 for some time. We will discuss the technology, explore its impact on the customer support experience space, and look at its possible limitations and opportunities. So join us, and bring your questions to our LinkedIn and Slack channels.

Now it is a pleasure to introduce the speaker. Ramki is the co-founder and CTO of Ascendo.AI and comes with a deep data science and support background. He ran managed services for Oracle Cloud, created a proactive support platform for NetApp's multimillion-dollar business, and is respected for both his mathematical and business thinking in data science. At Ascendo, his mission is to give meaning to each and every customer interaction and elevate the experience of customers and support agents. Welcome, Ramki.

Ramki - Thank you, Kay. Glad to be here.

Kay - So let's start with the basics, Ramki. What actually is ChatGPT?

Ramki - You know, I created a slide that shows what ChatGPT could be, and I know it comes from a comic strip, but let's talk about what ChatGPT is. It's essentially a modern variation of a chatbot. We all know and have been living with chatbots. Typically, chatbots require you to set up rules: based on a question the person might ask, rules match the content, and the whole exchange happens in a scripted way. The difference with ChatGPT is that instead of knowing only a little bit about the website you are on, ChatGPT knows just about everything, and it's more articulate than the average human. It comes across as saying: hey, I've consumed all of the internet, and I can provide answers in a conversational way. Now, the technicality of it: it's essentially a language model that has been trained to interact in a conversational way. It's a sibling model to InstructGPT, which was trained to follow an instruction in a prompt and provide a detailed response. What I mean by that is, it remembers the thread of your dialogue, using the previous questions and answers to inform what the next responses could be. The answers are derived from the volume of data it was trained on, which is what we had on the internet. So that's the technical answer. You can think of it this way: it understands the conversation, it has consumed the internet, it knows the history of your dialogue, and it can automatically predict what the next sentence could be.

Kay - So we've been following ChatGPT and GPT-3 for quite some time, right? There was GPT-3, and now ChatGPT. Tell me the difference, please.

Ramki - Yeah, as I said, it's a language model, right? Underneath, it's using GPT, and GPT is a Transformer model. What that means is it's predicting what the next words would be based on what it has seen. The difference is that GPT-3 uses 175 billion parameters, whereas InstructGPT is about 1.3 billion parameters. You can look at that as a hundred times fewer, and it still performs quite well because of the way it was trained. But at the same time, everybody knows the excitement is great, but OpenAI warns: hey, the answer may not be correct all the time. So you have to be watchful of what you're seeing; you have to take what it says and judge for yourself whether it makes sense or not.
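The "predicting the next words based on what it has seen" idea Ramki describes can be shown with a deliberately tiny sketch. This is not how GPT works internally; it is a bigram word counter, the simplest possible stand-in for a next-word predictor, with a made-up one-sentence corpus:

```python
from collections import Counter, defaultdict

# Toy corpus (hypothetical). A real model trains on internet-scale text.
corpus = "the customer opened a ticket and the agent closed the ticket".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("customer"))  # -> "opened"
print(predict_next("closed"))    # -> "the"
```

GPT replaces the frequency table with 175 billion learned parameters and conditions on the whole preceding dialogue rather than one word, but the task, scoring candidate continuations of the text so far, is the same.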

Kay - A lot of people here are new to data science, Ramki. When you talk about a Transformer model, are we talking about transferring learning from one task to another? Would you like to add any other definition for Transformer?

Ramki - Transformer is really a technical term for how the training is done. Essentially, you can look at all the words in one sweep, so the training time is less. You are looking at the whole sentence, or the whole piece of information, masking one set of tokens, understanding the relationships, and then predicting what the next one would be. So it's a combination of training faster, having fewer parameters, doing it with a lot of content, and also building a model that reinforces the better behavior, what is correct and better guarded.
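The "all the words in one sweep" point can be made concrete with a minimal sketch of self-attention, the core Transformer operation. This is a simplification under stated assumptions: queries, keys, and values are the raw embeddings themselves, with none of the learned projection matrices a real Transformer would have, and the sizes are made up:

```python
import numpy as np

def self_attention(x):
    """x: (seq_len, d) token embeddings.
    Every token is scored against every other token simultaneously
    (one matrix multiply), instead of reading left to right step by step."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)              # (seq_len, seq_len) pairwise scores
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ x                          # attention-weighted mix of tokens

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8-dim embeddings
out = self_attention(x)
print(out.shape)  # (4, 8): one updated vector per token, computed in one sweep
```

Because the whole sequence is processed as one matrix operation rather than a sequential loop, training parallelizes well, which is the speed advantage Ramki mentions.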

Kay - So a lot of people have interacted with ChatGPT, right? They ask a question, they type it in, they use the content it gives as a result, and they give feedback, which feeds into how it's trained. So in a way it's kind of Google, but also not Google. Can you describe that a little more?

Ramki - It goes like this: we all go to Google and say, hey, I want to know something. Then we search, we look at the results, we put our thoughts into them and make sense out of them, right? The most notable limitation you're going to find is that ChatGPT doesn't have access to the live internet. It's loaded with content prior to 2021, data-wise, but it cannot look at the current picture. In fact, OpenAI tells you that. For example, if I want to know when my next train is going to leave, I cannot get that. But you can ask pretty much anything else: hey, I want to write a poem; I have an issue with this code, does it make sense? Those types of things one can ask, and one can ask it to fix them. In fact, the very first day it was out, one of our teammates asked it to write a poem on Ascendo, and it actually did a pretty decent job.

Kay - I would love to see that at the end. I was playing around with it too, and I will share that in a bit. Now, the adoption of ChatGPT has been pretty exponential, right? We see millions of people using it. What are some of the key differences you would point out in terms of its output?

Ramki - Recently I was listening to several people, one of them being Stephen Marche. Just last week he wrote an opinion column in the New York Times, and before that he was on Intelligence Squared, a British podcast. He's been using similar technology for some time; it's not like he just looked at this once. He has looked at different variations, not only OpenAI but other companies as well. He says it in a very succinct way: ChatGPT is a great product that can provide what he calls a "filler" response. By filler response he means it's not junk and it's not trivia; it leverages how people are taught to write essays in a structured manner, with an opening sentence and so on. The key point he brings up is that ChatGPT does not have an intention. It's not like an author. When you write an article, you're thinking about the point you want to convey: I want to say this, I want to show that to you. That's not what you're going to get. ChatGPT gives you filler, but it gives you a starting point. One can take that starting point and add the rest of the information from one's own vantage point. We may be entering an era of doing things differently, like when we started with the internet. When the internet came, and then Google came, I remember going places where people would essentially say, "Hey, the computer tells me this, so this must be the truth." Then open source came, and all of that, right? It's the same way here. We are entering a different era where you may ask, get some responses, use them as a starting point, and go from there.

Kay - So some could say that GPT-3 is the base model and ChatGPT is the bot version, the conversational version of GPT-3, already indexed and modeled with internet data up to, as you mentioned, September 2021. Would that be a correct statement?

Ramki - It's kind of yes and no. GPT is the base, but ChatGPT is not simply a bot version of GPT-3. It's essentially a smaller model, created by fine-tuning GPT-3. In other words, it leverages what GPT has to offer and mixes in its own conversational layer to give this whole intelligent conversational experience. Does that make sense?

Kay - Yeah, absolutely. So, you know, RPA came in, right? That was the first iteration of introducing AI, and I love to equate this to the autonomous driving experience, which I will also bring up in a second. RPA came in, but it became too rules-based and very cumbersome to maintain; RPA got very hot, and then it faded away. Then came chatbots. I remember at one point we were counting 318 chatbot companies, and they were the chat versions of RPA, again very rules-based: you had to pretty much codify the questions and answers. And they were very widely used within the customer service context. So tell me a little more about bots in the customer service context.

Ramki - You know, you're right there. There are a lot of chatbots. In fact, when people have a question, they always think of a bot as one of the options, but bots carry a lot of baggage. Companies have tried, with limited success, to use them instead of humans to handle customer service work. There is potential in these bots to alleviate the pressure of answering mundane questions. But recently there was a survey of 1,700 Americans sponsored by Ujet, a company whose technology handles customer contacts, and what they found was very interesting: 72% of people found chatbots to be a waste of time. That is a very serious finding. The biggest reason is that people don't like the feeling of having to work with a robot. When I talk to many of our customers, yes, there is potential for a lot of self-service and self-answering, but the reality is that as soon as you give people the option to talk to somebody, they just click that. That's what people want. They don't like to work in a bot-like environment.

Kay - They want an answer, right?

Ramki-Exactly.

Kay - You know, it's like: I'm having an interaction, why can't it just be an answer? Why does it have to be a conversation with a machine-like thing that has to be maintained and codified extensively? And on top of that, I don't even want to go in and extend this process by ultimately creating a ticket, right? So, yeah, elaborate.

Ramki - If you look at ChatGPT, on the other hand, it sounds like a human, and it uses what you are saying to form the response. It is not pre-coded with a response. It really works off what you're saying, and that makes the whole discussion more conversational. But that doesn't make its responses always right. Again, OpenAI says that: you have to look at the response and draw your own conclusion.

Kay - You also talked about ChatGPT's initial, audacious claim. Elaborate a little more on that.

Ramki - I'm going to share one slide on this. It's interesting; you will get a chuckle out of it. In fact, I went and asked ChatGPT: hey, tell me about customer support. I've put its exact response on the slide. It first makes a very audacious claim: it says it is not capable of making a mistake. That's a big statement. But at the same time, it also admits that it cannot help with real-world tasks. That is essentially what I want readers to understand. It will appear that it is not making mistakes; it's giving answers. But you have to know that it may not have the ability, at least as of now, to provide customer support for real-world tasks: where is my train, what is the issue? Because in a real customer support scenario, things change; what is relevant now may not be in its training, and it may not have all the answers. That's where the big difference is, I would say.

Kay - There is a question from Shree. He's asking: what is the current state of the art in ChatGPT integration with knowledge-graph enterprise solutions? He continues: particularly around explainability for conversational problem solving, in domains that have high compliance bars, like healthcare or finance?

Ramki - You know, you can't just wing it. When you and I have a conversation, we use the knowledge we have gained, we just talk, and there is no fact-checking. So we have to be conscious of that. Just because you get a response, and the response may look somewhat legit, doesn't mean it is right, especially where there is a high compliance bar. So I would strongly suggest caution, and in fact OpenAI would concur with this. It is giving answers based on what it was trained on, but for real-world tasks, when there is something you actually need done, you should contact that particular customer support and get the answers. That is what ChatGPT itself says: it is audacious enough to claim it will never make a mistake, but it also tells you that you are on your own.

Kay - Yeah, and it's good that the model actually understands its own limitations and states them; I bring that up because there is the explainability component of it. So, absolutely.
Now that we've talked about Transformer models, GPT-3, and ChatGPT, tell us a little about how Ascendo works.

Ramki - If you look at Ascendo.AI, at the core it also uses a Transformer model. We essentially developed our model based on the domain expertise that we have; many of our key people come from a customer support or customer service background. That is a great advantage, because we know how the support model works, how the technical support organizations of large companies, and smaller ones as well, should handle things. And we know the nuances of finding the answer to a customer's question or issue. Sometimes it is a simple request: explain to me what this is and what the product does. Sometimes it is an actual issue: I'm doing this, I'm facing a problem, what should I do, how can you help me? Our Transformer model looks at the knowledge and the other data points within the company we are implementing for, the company we provide the Ascendo service on top of, and it looks at all the content within that company to work out what the answer should be.
For example, there may be a new issue brewing, right? It probably never happened before, but it's coming. There may be new knowledge that got updated: somebody found an answer and connected the dots, or maybe a bug came in, somebody answered it, and that became knowledge. All of these things happen as time goes by. There are some similarities with ChatGPT here, because we also use human feedback to make sure we can constantly evolve, self-correct, and self-learn. That part is very similar. But we are using actual, factual data from within the company, not from the entire internet, to provide an answer.

Kay - Very specific to the enterprise, very specific to the product, etc. So the analogy is very similar to autonomous driving, right? We start by giving the triggers: predictive actions, escalations, impact, risk, intent, and context. Then our agents and leaders still make the decision on what they use and when they use it. So in a way, we automate the data-aggregation aspect for humans. I always equate it to what an engineering calculator did for basic calculations, but on an advanced scale. It helps remove bias, it enhances collaboration whether people are together or remote, and it helps with faster problem-solving. Essentially, we are automating support ops, the way DevOps and RevOps did for their areas.
Back to ChatGPT. Explain a few challenges of ChatGPT, like the media piece I alluded to earlier.

Ramki - One of the biggest issues we are all going to face, and it happened even with the internet: when you see something, you may actually believe it. The way we unknowingly got caught in the early days of the internet, just because something is said multiple times, opinions may appear to become the truth. Fact-checking will have to be at the forefront. If someone keeps repeating the same thing, or it gets amplified through multiple channels, and more information piles on top, people may think that is the truth, and the actual truth will be hidden. That's where we have to be watchful. Just because it says something so nicely, and it feels correct and eloquent, doesn't mean it's always right. We have to remember: it's a nice way of saying things, but it is not necessarily the truth. You have to do a fact check.

Kay - Yeah, a model is only as good as what we feed it, and ChatGPT is fed with internet data, where there is a lot of information that needs fact-checking, whether by humans or by a machine. At Ascendo we always talk about metrics versus data. Data helps tell the story. From a story standpoint, aggregating all of this customer data and bringing out the ability to tell a story is something that models like Ascendo's do. But the actual story is told by humans, not by the data itself. That's where the human connection is.

So, thank you, I think this has been helpful. I actually asked ChatGPT to write about holidays in 2022, and it responded by saying that its data only goes up to September 2021, so it cannot write about 2022. But you talked about the poem it wrote about Ascendo AI. Want to share it before we end?

Ramki - Let me. You know, it's interesting. We asked it: hey, tell me about Ascendo.AI. As Stephen would say, it did a pretty decent job; filler information, you could call it. Now you can take it, use it, and change it the way you want to convey it. But here it is. It did a great job, I would say.

Kay - I like that, so let people read it while we stop the livestream. Thank you very much for tuning in. We want to continue the conversation on our LinkedIn and Slack channels, so feel free to post your questions and comments. What else can we do to help continue this engagement?

Ramki - Thanks. Absolutely.
