The Future of Service Support: Unveiling the Power of Knowledge Intelligence | Transcription
Hey, Brett, it's a pleasure to have a discussion with you on Knowledge Intelligence.
Thank you so much for coming.
Thank you for having me.
It's exciting, with everything that's going on.
We really appreciate the time.
Yeah.
Brett, just looking at your background and your experience, you've served very large companies like HP EDS, Dell, Xerox, and Avaya, and now Veritas. You've led very large service teams and improved service operations quite a bit. So it's wonderful to hear your insights in this session of the Experience Dialogue.
So thank you.
I want to start off with some level setting about knowledge, right?
So knowledge is an integral piece of any service and support organization. But a lot of times, teams have a couple of knowledge management people building the content, and then agents, customers, experts, and product groups all contribute to it. That's kind of how a lot of companies think of knowledge.
How do you think of knowledge and think about knowledge intelligence?
A lot of it is not only the uniqueness of the knowledge, but also how you unlock it.
How do you reuse it, especially in the services industry, which is where I spent most of my career, and now in support?
You find a lot of similarities between supporting a product and servicing a customer; the approach is very similar. Unlocking knowledge is about who has it, who has the answer to the particular challenge you're working on at a point in time. And unlocking it is locating it: where is it?
And as we went through our AI proof of concept this past summer, and now the full implementation of it, what we ended up finding as we inventoried our knowledge, our intellectual knowledge, was that we had 290,000 items of knowledge.
And then when we started to unlock the organizational silos in the support organization, we found even more content, much of it rich.
So we started really focusing on: how do you collect all of this?
How do you make it available instantaneously to help resolve support issues?
And then how do you create live articles from closing out cases?
So when you do 180,000 or 200,000 cases a year over the course of time supporting customers, you create all sorts of knowledge.
So how do you unlock that?
Make it available because it's relevant at a particular point in time, not only for one customer.
We found it being relevant for multiple customers.
So how do you create that content?
How do you get it through the publication process and make it available internally?
And then what do you use as a standard to finally decide you're ready to make it externally available for your customers to access?
And so you find yourself moving from a very slow, methodical pace to a really high pace, and recognizing that if you can unlock that content at the relevant point in time, its relevance spans multiple cases.
So you get a high reuse factor out of it.
Yeah, you know, we've had very good conversations around this.
And I can see the three buckets that you and I have discussed.
The first bucket is how do you summarize all of the cases and all of the knowledge so it can be reused by everybody, which is the one that you just talked about.
The second bucket is how do you start creating new pieces of knowledge from everyday interactions: agents with agents, agents with product teams, agents with customers, etcetera.
And then the third bucket is how do you make this proactive so you can start looking at the list of customer interactions that are coming in in real time and then creating knowledge to map to the interaction.
So you take it to a completely proactive approach.
Those are the three buckets that you and I discussed earlier.
It would be great to get your opinion on how you bring in a level of quality, accuracy, searchability, and all of that within these three buckets.
Well, I think, like anyone would expect, most companies have this today.
They have a knowledge management team.
They have some sort of hierarchy set up for the purposes of creating, reviewing, and then publishing.
Veritas is no different.
We have the same in the support organization.
The only difference is that what used to take days of creating articles and going through that process, we're now doing in a matter of hours.
So as cases are closed, the content is flushed through AI.
It looks at the relevance of the content and its relationship to what we already have, and decides, OK, this is a unique article, predicated on a standard we've established.
It creates the article and publishes it for the KM team to review.
So from the time a case is closed, 40 minutes later we can have a full article published internally for the TSE engineering community to use in servicing customers and support requests.
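To make that flow concrete, here is a minimal sketch of what a closed-case-to-draft pipeline like the one Brett describes could look like. This is not Veritas's actual implementation: the data fields, the word-overlap similarity check, and the 0.8 threshold are hypothetical stand-ins for the relevance and standards checks he mentions.

```python
from dataclasses import dataclass

@dataclass
class ClosedCase:
    case_id: str
    summary: str
    resolution: str

@dataclass
class DraftArticle:
    case_id: str
    title: str
    body: str
    status: str = "PENDING_KM_REVIEW"

def is_unique(case: ClosedCase, existing_titles: list[str], threshold: float = 0.8) -> bool:
    """Stand-in for the relevance check against the existing knowledge base."""
    # A real system would use search scores or embeddings; here we fake it with word overlap.
    case_words = set(case.summary.lower().split())
    for title in existing_titles:
        title_words = set(title.lower().split())
        overlap = len(case_words & title_words) / max(len(title_words), 1)
        if overlap >= threshold:
            return False  # Too similar to an existing article; skip creation.
    return True

def draft_from_case(case: ClosedCase) -> DraftArticle:
    """Hypothetical LLM step: turn the closed case into a draft article for KM review."""
    body = f"Problem:\n{case.summary}\n\nResolution:\n{case.resolution}"
    return DraftArticle(case_id=case.case_id, title=case.summary[:80], body=body)

def on_case_closed(case: ClosedCase, existing_titles: list[str]) -> DraftArticle | None:
    """Event handler: runs when a case closes, so a draft can be published internally within minutes."""
    if not is_unique(case, existing_titles):
        return None
    return draft_from_case(case)

if __name__ == "__main__":
    case = ClosedCase("01234", "Backup job fails with status 96",
                      "Expanded the disk pool and reran the job.")
    print(on_case_closed(case, existing_titles=["Restore hangs at 99 percent"]))
```

In practice the uniqueness check would be backed by the same search index used for retrieval rather than a word-overlap heuristic; the point is only the shape of the close-case, check-relevance, draft, queue-for-review flow.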
And the reviewers are very focused now on the quality aspects of the article.
In fact, oftentimes when an article is rejected, it's because it didn't meet a standard that we have.
And the rejection process is either a full rejection, where it's no longer useful content, or a rejection for rework, which then requires manually cleaning up the part that needs to be resolved.
The beauty of it, though, is that we're maintaining 60 to 70% content-rich creation.
That means we're only working on 20 or 30% of the article, refining the quality aspects to meet our standard before we publish it.
And then it has to be referenced a certain number of times internally by our TSE community before we publish it externally in our knowledge base for our customers to access.
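As a rough illustration of that review and promotion logic (again a sketch, not the actual workflow; the five-reference threshold and the status names are invented for the example):

```python
from enum import Enum

class ReviewOutcome(Enum):
    APPROVED = "approved"
    REJECTED_FULL = "rejected_full"       # no longer useful content
    REJECTED_REWORK = "rejected_rework"   # needs manual cleanup before re-review

def review_article(meets_standard: bool, salvageable: bool) -> ReviewOutcome:
    """KM reviewer decision: approve, reject outright, or send back for rework."""
    if meets_standard:
        return ReviewOutcome.APPROVED
    return ReviewOutcome.REJECTED_REWORK if salvageable else ReviewOutcome.REJECTED_FULL

def ready_for_external(internal_references: int, threshold: int = 5) -> bool:
    """Promote to the customer-facing knowledge base only after enough internal reuse."""
    return internal_references >= threshold

print(review_article(meets_standard=False, salvageable=True))  # ReviewOutcome.REJECTED_REWORK
print(ready_for_external(internal_references=7))               # True
```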
So we've established a standard, and we're working toward it.
We've always had one, but we had to establish some standards around the AI aspects of it.
And so far, we have an internal measure called a business value index that we use to rate the articles, and the AI content is scoring high in that business value index every day.
So we know we're creating relevant content.
It's timely.
It's not 100% manual creation like it used to be, as 70% of it is created from the closed case.
And we're finding high reuse, meaning that the content is used in more than one or two cases.
It's because of its relevancy.
It's used multiple times throughout the course of the week.
Yeah, go ahead, please.
No, no, go ahead.
So tell us a little bit more about the business value index and how it is calculated.
Sure.
The business value index is predicated on the number of times an article is referenced internally and how many times it's referenced externally by a customer.
Another part of the index is a sentiment rating that the customer or the TSE provides.
And it is also calculated based on the number of times that AI uses the article to produce an answer in Veritas support.
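The conversation doesn't spell out the exact weighting, but a weighted index over those four signals could be computed along these lines; the weights, normalization caps, and parameter names below are purely illustrative assumptions, not the real BVI formula.

```python
def business_value_index(internal_refs: int,
                         external_refs: int,
                         avg_sentiment: float,   # assumed 0.0-1.0 from customer/TSE ratings
                         ai_answer_uses: int,
                         weights=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Toy weighted score combining the four signals described above."""
    w_int, w_ext, w_sent, w_ai = weights
    # Cap the raw counts so no single signal dominates (caps are arbitrary).
    return (w_int * min(internal_refs / 50, 1.0)
            + w_ext * min(external_refs / 50, 1.0)
            + w_sent * avg_sentiment
            + w_ai * min(ai_answer_uses / 50, 1.0))

print(round(business_value_index(internal_refs=40, external_refs=10,
                                 avg_sentiment=0.9, ai_answer_uses=25), 3))
```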
We took a slightly different approach in that we created a hybrid approach, using Elasticsearch as our search engine.
If you were looking at your screen, on the left-hand side we rank the articles one through ten, most relevant to least relevant, predicated on the prompt question.
On the right-hand side, we have a bot that produces an LLM response from Veritas content, and everything we've deployed is predicated on our content.
So while we're using GPT-3.5 Turbo and the ChatGPT engine, all of the content that we search against and use to create the LLM response is coming from Veritas-certified content.
So that way we know we've got a high degree of accuracy.
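Conceptually, this hybrid pattern is retrieval-augmented generation: ranked Elasticsearch hits on one side, and an LLM answer constrained to the retrieved, certified content on the other. A minimal sketch, assuming a local Elasticsearch index named kb_articles with title and body fields and an OpenAI-compatible client; the index name, fields, and prompt wording are assumptions, not Veritas's actual configuration.

```python
from elasticsearch import Elasticsearch
from openai import OpenAI

es = Elasticsearch("http://localhost:9200")   # assumed local cluster
llm = OpenAI()                                # assumes OPENAI_API_KEY is set

def answer_from_certified_content(question: str, k: int = 10) -> tuple[list[str], str]:
    """Return the top-k ranked articles and an LLM answer grounded only in that content."""
    hits = es.search(index="kb_articles",                      # hypothetical index name
                     query={"match": {"body": question}},
                     size=k)["hits"]["hits"]
    ranked_titles = [h["_source"]["title"] for h in hits]      # left-hand side: 1..k by relevance
    context = "\n\n".join(h["_source"]["body"] for h in hits)

    completion = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided knowledge base content. "
                        "If the answer is not in the content, say you don't know."},
            {"role": "user", "content": f"Content:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return ranked_titles, completion.choices[0].message.content  # right-hand side: grounded answer
```

Constraining the model to retrieved, certified content is what keeps accuracy high, though, as Brett notes next, it does not eliminate hallucination entirely.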
We have had some hallucination, because AI will give you an answer.
You ask, and it's going to give you an answer.
And you want a high degree of accuracy.
Obviously, because we're working with customer data, we don't want our product to delete something.
We want to make sure the integrity of the data is always intact.
So the quality of the content is an important aspect of what we do.
And the BVI is just a real-time view of how the articles are performing, not only internally but externally.
And it's weighted like any index.
It's a weighted business value index.
Yeah, and it's interesting, because we were talking about knowledge from old cases, knowledge from interactions, and then identifying which pieces of knowledge to create, the three buckets of taking it all to proactiveness, right?
But what I'm curious is at what point is it useful for training?
And I have a subsequent question about product, too, but let's talk about training a little bit.
At what point is that knowledge used to train new support teams or new employees or new products or product changes etcetera?
Yeah, we tested that too.
In fact, some of the content that we have indexed is our training material.
It's also included in the documents that the AI searches for answers. We took what we call our level 1 support, the care team.
Level 1 obviously has the entry-level technical knowledge of the product. And what we found, in using the content and helping educate them on the use of a prompt and how to problem-solve iteratively as you go through the question you asked the very first time, is that we've seen a doubling of throughput in cases.
So you're taking a level 1 support engineer who used to do 20 to 23 cases a day, and now they've doubled that.
You're doing 52, 53 cases a day using the prompt and using the content.
And keep in mind that a portion of that content comes from the training material that we use to train level 1 support.
So what does that mean?
That means the folks are getting on the phone faster, from new-hire time to starting to engage customers. You now spend more time teaching the prompt, the questions, and the iterative process you go through in diagnosing the problem up front, and they can immediately start becoming productive.
Now we're working on making sure that we've got a definite way of identifying the boundaries where we've crossed into a complicated problem, not one that level 1 support could handle itself.
So when you get into file movement, file deletion, commands of that sort, you like to move that to level 2 and level 3 support, not only to validate what the system is saying, but to have an expert look at it and validate, OK, this is the right answer.
So a lot of training has occurred in and around the handoff in that space.
Although it's being worked on right now, we have not started creating training content from the live activity that we're performing on a daily basis.
We have folks working on it.
But the first phase of what we wanted to get done was the proof of concept.
We wanted to prove we could create content from closed cases, and we wanted to prove that we could enhance the capability of the TSEs internal to Veritas using AI by getting quicker answers.
So those are the first two things that we wanted to accomplish.
We went live on the KM portion of it in November.
We went live on the AAR portion in February, and we're now fully operational on those two aspects, with training being worked on.
The next thing we'd like to do is work with Ascendo and you guys, and really start looking at how we take the large amount of data, correlate the patterns that we know we have, and start creating content out of those patterns.
Because as we get into the patterns, you obviously get higher reuse.
And as you get into those patterns, you can also give a lot more clarity back to the product team about things we may need to resolve within our product itself, not just solving problems, but hopefully working toward making an even better product.
Yeah, you know what?
The synergy I find every time I talk to you is that people look at knowledge and workflow as two separate things.
We look at them together, right?
Knowledge created from closed cases, knowledge created from interactions, knowledge created from the customer interactions coming in and predicting what they will need, knowledge used by L1, knowledge measured with a business value index.
Knowledge used for training, knowledge used for self-service.
Knowledge is the integral part of every workflow within support and service operations.
And to some extent, that's the crux of the entire thing that you are talking about.
It is, and it's pretty powerful when you start to move it fast. It's even more fascinating to watch the prompt queries that we get. We go and review the queries, and you can look at the queries at the time we introduced AI and look at the queries today, and in just six weeks,
seven or eight weeks I guess, there's such a huge difference from when we started.
And then out of that is the knowledge that's getting created just from the prompts, because you can now see, you know, 700 or 800 people, with 300 or 400 at one time on the system.
You can see the queries coming in.
You can see the types of questions that are being asked at the prompt.
It gives you some real time feel for what your customer base is calling about related to your product.
Now, how do you convert that into knowledge that you can actually do something with in that immediacy of time? That's the next step that we're looking at as well.
Yeah.
And we also talked about, you know, how this can be utilized to facilitate product improvements.
Do you want to touch on that a little bit?
Yeah, so one of the areas that we're working on right now is logs. The Veritas product, like any software, has a lot of logs that hold immense information.
And so the next step of what we're working on with AI is how we ingest those logs that we get from our customers, let AI interpret that large amount of data, and then come back with an action plan.
And early indications are that it's going to take us minutes to ingest it, process it, and come back with a set of answers.
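A simplified sketch of that log-triage idea: filter a large customer log down to its error lines, then ask an LLM for a proposed action plan. This is illustrative only; the regex heuristic, model choice, and prompt are assumptions rather than the actual Veritas pipeline.

```python
import re
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set

ERROR_PATTERN = re.compile(r"(error|fail|exception|status\s+\d+)", re.IGNORECASE)

def extract_error_lines(log_text: str, max_lines: int = 200) -> list[str]:
    """Keep only lines that look like errors so the prompt stays small."""
    return [line for line in log_text.splitlines() if ERROR_PATTERN.search(line)][:max_lines]

def propose_action_plan(log_text: str) -> str:
    """Ask the model for a short, ordered action plan based on the extracted error lines."""
    errors = extract_error_lines(log_text)
    completion = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a support engineer. Given error lines from a product log, "
                        "return a numbered action plan. Do not invent log entries."},
            {"role": "user", "content": "\n".join(errors)},
        ],
    )
    return completion.choices[0].message.content
```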
And as we work through the process, we obviously have error conditions, and we've worked closely with the product team to make sure that the error conditions are also documented and in our indexes related to AI.
So what we can do, as we solve cases, is take the content that's being created from closed cases.
And you can look at it and generally tell what types of issues are coming in on a particular release that was just put out to the customer base.
And as adoption allows customers to upgrade the software,
and as we're getting the inbound calls, AI is creating new article content related to the release that we just published.
And so you can start to see, as time progresses and customers adopt, the issues that are coming in and what areas of the product they're in, and the content creation for that new release is again done in minutes, not days or weeks.
So the ability to get relevant information related to a release, and the issues that customers have had, published and available to the support organization happens through the process that we talked about earlier.
And at the same time, we can provide the product team a more in-depth understanding of what we're working on in a particular area of a new release.
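That kind of release-level rollup is essentially a group-and-count over the articles generated from closed cases. A toy version, with invented release and product_area fields, might look like this:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class GeneratedArticle:
    release: str        # product release the underlying case was logged against
    product_area: str   # e.g. "backup", "restore", "dedupe"

def issue_hotspots(articles: list[GeneratedArticle], release: str) -> list[tuple[str, int]]:
    """Count AI-generated articles per product area for one release, most frequent first."""
    counts = Counter(a.product_area for a in articles if a.release == release)
    return counts.most_common()

articles = [
    GeneratedArticle("10.4", "backup"), GeneratedArticle("10.4", "backup"),
    GeneratedArticle("10.4", "dedupe"), GeneratedArticle("10.3", "restore"),
]
print(issue_hotspots(articles, "10.4"))  # [('backup', 2), ('dedupe', 1)]
```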
And you know, all of it has just accelerated what we used to do.
It just makes it all faster.
And at some point you can start to bring solutions faster into the product cycle.
So you're finding the support organization playing a more relevant role in helping with product maturity, if you will, as you go through the life cycle.
That's very, very, very true.
Brett, thank you so much for taking the time to talk about how we are revolutionizing the support content creation process so knowledge can be valuable, actionable and can be aligned with customer needs.
And thank you for discussing this in depth.
Thank you for your time.
Thank you, really appreciate it.
Enjoyed working with you and the team, so thanks again.
We appreciate the time.