Bot Essentials, Chatbots

Bot Essentials 8: Chatbots NLP aspects – the deep dive 2

In the previous blog, we covered advances in algorithmic science that improve the accuracy of Natural Language Processing (NLP) constructs, and how vector scoring could be improved. In this blog, we trace the evolution path further: the base elements of the network, and how adding a memory for long-term dependencies optimises it further.

Evolution 4

A further evolution was the use of semantic matching through a global word-to-word co-occurrence matrix, the idea behind the GloVe model. The base algorithm relies on difference vectors between words: the model is trained so that the difference vector, multiplied by a context word's vector, matches the ratio of the two words' co-occurrence probabilities with that context.
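To make the idea concrete, here is a minimal Python sketch, with an invented toy corpus and window size, of building the global co-occurrence matrix and inspecting the probability ratios the model is trained to reproduce. It illustrates the mechanics only, not the full training objective.

```python
# Illustrative sketch (not the production algorithm): build a global
# word-to-word co-occurrence matrix from a toy corpus and inspect the
# co-occurrence probability ratios that difference vectors are trained
# to reproduce. The corpus and window size are invented for the demo.
from collections import Counter, defaultdict

corpus = "the ice is cold the steam is hot ice and steam are water".split()
window = 2

cooc = defaultdict(Counter)  # cooc[word][context] = co-occurrence count

for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

def p(context, word):
    """P(context | word): probability that `context` co-occurs with `word`."""
    total = sum(cooc[word].values())
    return cooc[word][context] / total if total else 0.0

# In a large corpus, a probe word related to "ice" but not "steam" gives
# a large ratio, and a word related to both gives a ratio near 1; this
# toy corpus only shows the mechanics of computing the ratios.
for k in ("cold", "hot", "water"):
    pi, pj = p(k, "ice"), p(k, "steam")
    ratio = pi / pj if pj else float("inf")
    print(f"P({k}|ice)={pi:.2f}  P({k}|steam)={pj:.2f}  ratio={ratio:.2f}")
```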

(Figure: NLP algorithmic advances)

Evolution 5

Though the use of the global word-to-word co-occurrence matrix in Evolution 4 increased the accuracy of base model predictions, long- and short-term dependency determination was still found to be lacking.

This evolution called for the application of Recurrent Neural Networks (RNNs), which feed text data into the network as a sequence. The RNN was an improvement since it handled local temporal dependencies quite effectively; the problem arose in figuring out long sentence forms. To address long-form sentences, researchers designed a technique called Long Short-Term Memory (LSTM). This algorithm introduced a memory cell that stores the long-term dependencies of the sentence being processed. Think of it as an indicator score that updates with every word's context to maintain the overall vector across a long sentence. Gates that saturate towards the binary extremes of 0 and 1 allow the network to forget information or weight it higher for relevance, making the RNN further optimised.
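To make the memory cell concrete, here is a minimal single-step LSTM sketch in NumPy. The toy dimensions and random weights are assumptions for illustration; a real network learns these weights by backpropagation and processes batches of sequences.

```python
# Minimal sketch of one LSTM step in NumPy, to show the memory cell
# and its gates. Toy sizes and random weights are assumed for the demo.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3  # toy sizes: input (word vector) and hidden widths

# One weight matrix and bias per gate, acting on [h_prev, x] concatenated.
W = {g: rng.normal(0, 0.1, (n_hid, n_hid + n_in)) for g in "fioc"}
b = {g: np.zeros(n_hid) for g in "fioc"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev):
    z = np.concatenate([h_prev, x])
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate: what to erase
    i = sigmoid(W["i"] @ z + b["i"])        # input gate: what to write
    o = sigmoid(W["o"] @ z + b["o"])        # output gate: what to expose
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate memory content
    c = f * c_prev + i * c_tilde            # memory cell carries long-term state
    h = o * np.tanh(c)                      # hidden state for this time step
    return h, c

# Feed a sequence of stand-in word vectors; the cell state c persists
# across steps, which is what preserves long-range dependencies.
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x, h, c)
print("final hidden state:", h)
```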

Evolution 6

Researchers were advancing the use of RNN and LSTM techniques for word context and intent. In parallel, another approach was producing results: Convolutional Neural Networks, or CNNs. Would it be possible to use a CNN in an NLP solution framework to further enhance accuracy?

Consider the 2D convolutions that image applications use, where a small filter is applied to one segment of the image at a time. Could we use the same technique, as a 1D filter, over linear sequences of words to predict context and intent?


It sure did seem so. 1D CNNs proved more accurate than RNNs. Multiple CNN filters working on a construct create a feature map, and the network learns from the features that appear most often. CNNs remain a work in progress, with promising results on the default Q&A mechanisms used in bots for lightweight matching of questions to the answers that fit the search intent and context.
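As a rough illustration of how a 1D filter slides over a sentence to build a feature map, here is a NumPy sketch; the stand-in embeddings, filter count, and filter width are invented for the example.

```python
# Rough sketch of a 1D convolution over a sentence of word vectors,
# followed by max-over-time pooling: the basic building block of text
# CNNs. All dimensions and values here are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
seq_len, emb_dim = 7, 5          # 7-word sentence, 5-dim embeddings
n_filters, width = 3, 3          # 3 filters, each spanning 3 words

sentence = rng.normal(size=(seq_len, emb_dim))        # stand-in embeddings
filters = rng.normal(size=(n_filters, width, emb_dim))

# Slide each filter along the word axis to build the feature map.
feature_map = np.array([
    [np.sum(f * sentence[t:t + width]) for t in range(seq_len - width + 1)]
    for f in filters
])

# Max-over-time pooling: keep each filter's strongest activation, i.e.
# the feature it detected most strongly anywhere in the sentence.
pooled = feature_map.max(axis=1)
print("feature map shape:", feature_map.shape)  # (3, 5)
print("pooled features:", pooled)
```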

One other use of NLP…

…is sentiment analysis: figuring out the sentiment of the text the user has typed. This is quite useful for determining an empathetic response to the user. Platforms that build bots allow you to incorporate sentiment analysis; with intelligent path flows, you can set the emotive and empathetic context of the response the user receives.
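As an illustrative sketch only, here is a toy lexicon-based sentiment score steering the tone of a bot's reply. The lexicon, thresholds, and reply prefixes are invented for the example; bot platforms typically expose a trained sentiment score rather than a word list.

```python
# Toy sketch: lexicon-based sentiment scoring steering a bot's reply
# tone. The lexicon and responses are invented for this example.
POSITIVE = {"great", "love", "thanks", "happy"}
NEGATIVE = {"broken", "angry", "terrible", "refund"}

def sentiment(text: str) -> int:
    """Crude score: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def empathetic_prefix(text: str) -> str:
    score = sentiment(text)
    if score < 0:
        return "I'm sorry you're having trouble. "   # de-escalate
    if score > 0:
        return "Glad to hear it! "                   # mirror the mood
    return ""                                        # neutral: no prefix

print(empathetic_prefix("my order arrived broken and I am angry"))
```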

For further use of bots, and to utilise an NLP engine for better accuracy, please sign up at www.engati.com to start your bot journey.

If you would like to know more about chatbot technology, register for a free demo.
