My goal so far has been to show you the fundamental concepts of chatbot building,
and demonstrate how, by leveraging Watson Conversation,
it's possible to build and deploy
a simple but useful chatbot for your own website or business.
If a Floras client asked me to build a chatbot for them,
I would of course spend much more time fleshing out the chatbot.
I would make it stronger at handling chitchat interactions,
and have it handle more occasions, relationships,
and scenarios in general.
My suggestion to you is to spend enough time to build a solid chatbot for your own site,
and above all to do extensive testing.
Speaking of testing, so far we have tested
the chatbot by giving it the right answers.
In this module, we'll test the chatbot by intentionally trying to make it fail,
and then discuss strategies to solve those problems.
Pay close attention to how I find these problems by
avoiding the answers that the chatbot expects.
The goal of this type of testing is to collect shortcomings.
You can then decide whether a fix is needed for a given failure,
or if it's an edge case that we can live with.
No chatbot is going to be able to handle every single interaction from the user.
Despite Florence's cognitive capabilities,
we have not built a replica of a human being.
It can be argued that even a skilled human being could fail at handling some cases,
though the resulting recovery would certainly be smoother.
Without further ado, let's try to break our chatbot.
We can start by asking for flower suggestions for an anniversary.
And when asked whether it's our anniversary,
we can reply with, "Wait a second."
Florence doesn't understand this phrase.
So we're sent to the anything else node.
Now if we reply yes,
thinking that we are addressing the existing question,
the chatbot waves us goodbye.
That's not really good, is it?
This happens due to a couple of problems or shortcomings if you will.
The smaller problem is that yes on its own is interpreted as a goodbye intent,
which rather rudely concludes the conversation with the user,
when the user didn't actually express the intent to leave.
Yes, we could mark this input as irrelevant.
We can do so for common cases that we come across,
but we can't realistically mark every possible input as irrelevant.
You may have noticed by now how Watson will quite often interpret
simple vague answers as having a generic intent such as greetings or goodbyes.
This is because by default,
it tries to find the best match among the existing intents,
even if its confidence level in the match is quite low.
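To make this concrete, here is roughly what the intents portion of Watson's response could look like for a vague input such as "yes" (the intent name and the confidence number here are purely illustrative):

  "intents": [
    {
      "intent": "goodbyes",
      "confidence": 0.34
    }
  ]

Even with such a low confidence, Watson still reports its best guess, and the dialog happily acts on it.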
We also have a more complex problem.
Our yes was correctly classified as a positive response_types entity value,
which is good.
But by the time we replied, it was too late.
We were already outside of the anniversary node's children.
So the yes was interpreted out of context if you will,
and treated as an independent answer and not as a follow-up reply.
Essentially, the problem is caused by the fact that
an entity is tied to the current user input,
which in turn is tied to a certain position in the dialog.
In other words, the chatbot knows what to do with the information we collect through
follow-up questions only within certain child nodes of the dialog.
If we get out of that position without a relevant reply, like we did,
our reply will be analyzed and evaluated as
if it were a brand new utterance and not a reply.
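To picture why, here is a simplified sketch of that branch of the dialog, using the same condition notation we used when building it (the entity and node names are illustrative):

  anniversary question node
    child node condition: @response_types:positive
    child node condition: @response_types:negative

While the dialog is positioned inside these child nodes, a plain "yes" triggers the child conditioned on @response_types:positive and is handled as a follow-up. Once our "Wait a second" detour sends us back to the top level, no node in scope looks at @response_types anymore, so the same "yes" is matched against the top-level nodes instead, where the goodbye intent happens to win.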
In short, we have a couple of problems.
Let's proceed in order though by first addressing the generic intent detection problem.
The solution to this problem is accessing and using
Watson's confidence level in the detected intent.
Let's start with a couple of examples of the problem.
"What time is it in Paris?"
and "yes" are both wrongly interpreted as having the
greetings and goodbyes intents, respectively.
What we can do, is create a node just below welcome,
and above all other nodes.
Within it, we can check how confident Watson is
that the given input matches the intent that Watson is proposing.
If the confidence level is low,
we'll just jump to the anything else node,
to notify the user that we are not sure about what they are asking.
If the confidence level is higher than the threshold we set,
the condition would be false and Watson will
continue evaluating the other nodes as usual.
This node will essentially act as a gatekeeper for situations
in which Watson is not confident in the user intent.
We create a node as usual.
We give it a name such as Intent Confidence Check.
For the condition, we can use
intents[0].confidence < 0.7.
What this means is that we want to check the confidence level of
the first intent in the list of detected intents for the current input.
The number you see between square brackets indicates the position
of the intent that we want in the list of intents.
Quite often the list will only have one intent,
but it's possible to have more than one.
It's at zero rather than one,
because most programming languages start counting indexes at zero rather than one.
The code you see here is Watson Conversation's SpEL notation,
which also adopts zero-based indexing.
We want to do the jump
if the confidence level is less than, say, 70 percent.
So we're checking that the confidence level is below 0.7.
You can experiment with this number for your own chatbot if you decide to
implement a confidence check node like I did.
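For reference, if you exported the workspace JSON at this point, the new node would look roughly like the sketch below. The node IDs are simplified here for readability, and the next_step block anticipates the jump we are about to configure:

  {
    "title": "Intent Confidence Check",
    "dialog_node": "intent_confidence_check",
    "conditions": "intents[0].confidence < 0.7",
    "previous_sibling": "Welcome",
    "next_step": {
      "behavior": "jump_to",
      "selector": "body",
      "dialog_node": "Anything else"
    }
  }

The body selector is what corresponds to jumping to the target node's response, rather than re-evaluating its condition or waiting for user input.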
Next, we'll specify that the jump should be towards the anything else node's response,