Understand the basics of how computers comprehend and respond to human language.
‘Alexa, play my favorite song.’ You can utter these five words without moving an inch, and within seconds your favorite song will start playing. But have you ever wondered how Alexa, your virtual assistant, understands what you’re saying?
Whether you are using Alexa or Siri or asking questions of Google, you are essentially communicating with a machine. But machines or computers do not speak human languages, so how do they understand your commands?
The system that helps computers process human language, understand it, and generate a response in the same language is called Natural Language Processing (NLP). The use of NLP is not limited to virtual assistants or search engines. It is widely used in chatbots, customer service, language translation tools like Google Translate, writing tools like Microsoft Word and Grammarly, and, most importantly, the autocorrect and autocomplete mechanisms of search engines like Google. All these applications use NLP to help machines understand your queries and respond to them appropriately.
NLP does not sound complex, since we interact with machines almost every day. However, NLP is one of the most difficult subfields of artificial intelligence (AI) and machine learning (ML).
That should not come as a surprise, since human language is ever-changing and evolving, and riddled with nuances that are difficult for computers to understand. The rules that govern human language and speech are often abstract, making the language complex to model.
Take sarcasm, for example. A sarcastic remark often conveys the opposite of its literal words. But if a computer analyzed only the words, and not the sentiment, it would arrive at the wrong conclusion.
If someone says, ‘Oh! I’m fantastic!’ sarcastically, we can perceive the sarcasm in the tone, but a computer might take the sentence at face value and understand it to be true. This is why areas like sentiment and intent analysis are gaining traction: the goal is not just to capture what is being said, but to comprehend the full meaning behind the text or speech.
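To see why word-level analysis falls short, here is a minimal lexicon-based sentiment sketch in Python. The word lists are tiny, hypothetical samples; real sentiment systems use trained models and far richer signals, but they share this blind spot when they ignore tone and context.

```python
# Illustrative only: a toy sentiment scorer built on two hypothetical
# word lists. It counts positive vs. negative words and nothing else.

POSITIVE = {"fantastic", "great", "love", "wonderful"}
NEGATIVE = {"terrible", "awful", "hate", "bad"}

def naive_sentiment(text: str) -> str:
    # Lowercase and strip punctuation so words match the lexicon.
    words = {w.strip(".,!?'").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# The literal words look positive, so a words-only analysis is fooled
# even when the speaker is being sarcastic:
print(naive_sentiment("Oh! I'm fantastic!"))  # positive
```

The tone that signals sarcasm never reaches this function, which is exactly the gap that sentiment and intent analysis try to close.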
How does Natural Language Processing work?
Human language can be ambiguous. In English, one word can have two or more meanings depending on the context. For instance, the word ‘crane’ can refer to the bird, to the machine used to lift heavy objects, or to the act of stretching one’s neck toward an object of attention. We understand which one a speaker means because we understand context, tone, and body language, but computers don’t.
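One classic way computers approximate this kind of disambiguation is to compare the sentence with a short description (gloss) of each candidate sense and pick the sense with the most words in common, in the spirit of the Lesk algorithm. A toy sketch, with hand-written hypothetical glosses for ‘crane’:

```python
# Toy word-sense disambiguation: choose the sense whose gloss overlaps
# most with the sentence. The glosses below are hypothetical examples.

SENSES = {
    "bird":    "a tall wading bird with long legs and neck",
    "machine": "a machine with a long arm used to lift heavy loads on a construction site",
    "stretch": "to stretch your neck to see something",
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES.items():
        overlap = len(context & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(disambiguate("the crane lifted the steel beam at the construction site"))  # machine
```

Modern NLP systems replace the hand-written glosses with learned word representations, but the underlying idea — let the surrounding words decide — is the same.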
Just like humans, machines also learn through experience. But the basic first step a machine performs to understand a sentence in natural language is to convert speech to text when the input is audio. This means that when you say something, the computer first converts it into text.
The machine then breaks down the sentence into its components to extract its meaning. This mainly includes a syntactic analysis and a semantic analysis, although other processes may be involved as well.
Syntactic analysis refers to the analysis of the sentence structure and the arrangement of words. Semantic analysis, on the other hand, is used to extract the meaning of each word in the sentence. It is important to remember that the inferred meaning may not be accurate, since semantic analysis yields a dictionary meaning of the sentence without regard to context.
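The two steps can be sketched in a few lines of Python. This is a deliberately minimal illustration, assuming a tiny hand-written dictionary; real systems use trained parsers and lexicons with hundreds of thousands of entries.

```python
# Syntactic step (simplified): break the sentence into tokens.
def tokenize(sentence: str) -> list[str]:
    return [w.strip(".,!?").lower() for w in sentence.split()]

# Semantic step (simplified): look up a dictionary meaning per word.
# The entries are hypothetical; note that context plays no role here,
# which is exactly the limitation of a purely dictionary-based reading.
DICTIONARY = {
    "alexa": "a virtual assistant",
    "play": "to start audio or video",
    "song": "a short piece of music",
}

def analyze(sentence: str) -> dict:
    tokens = tokenize(sentence)
    return {t: DICTIONARY.get(t, "<unknown>") for t in tokens}

print(analyze("Alexa, play my favorite song."))
```

Words outside the dictionary come back as `<unknown>`, a small reminder of how brittle a lookup-only approach is compared to models that infer meaning from context.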
The process of NLP differs from system to system based on the rules coded into the algorithms to analyze natural language. After the interpretation is over, the computer then translates its response into human language for output. That’s how Alexa or Siri are able to answer your queries in your language.
What are the subfields of Natural Language Processing?
NLP is generally divided into two major subfields: Natural Language Understanding (NLU) and Natural Language Generation (NLG).
As the name suggests, NLU refers to the process by which machines comprehend natural language; it involves mapping the natural language input into a useful representation and analyzing the different aspects of the language.
NLG refers to the process followed by machines to produce meaningful sentences in natural language. This essentially means that the computer translates its response to natural language in order to communicate with us.
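Putting the two together, a virtual assistant's round trip can be sketched as NLU in, NLG out. The intent names, slot extraction, and response template below are hypothetical stand-ins; real assistants use trained models for both halves.

```python
# NLU: map free-form text to a structured representation (intent + slot).
def understand(text: str) -> dict:
    text = text.lower()
    if "play" in text:
        # Everything after "play" is treated as the requested song.
        return {"intent": "play_music",
                "song": text.split("play", 1)[1].strip(" .!")}
    return {"intent": "unknown"}

# NLG: turn the structured result back into a natural-language sentence.
def generate(meaning: dict) -> str:
    if meaning["intent"] == "play_music":
        return f"Now playing {meaning['song']}."
    return "Sorry, I didn't understand that."

print(generate(understand("Alexa, play my favorite song.")))
# Now playing my favorite song.
```

The structured dictionary in the middle is the "useful representation" that NLU produces and NLG consumes; everything the assistant does (searching a music library, setting a timer) happens against that representation, not the raw text.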
The long road ahead
Have you ever noticed that when Siri or Alexa are unable to understand you, you have to try different phrases or terms to get the correct response? That’s because machines are still a long way from fully comprehending natural language, even though NLP has been studied and improved upon for over 50 years.
Still, NLP remains a promising field in artificial intelligence that has helped businesses automate daily processes, extract meaningful insights from customer conversations, and ultimately improve customer experience and satisfaction. And with the constant increase in computational power and cloud computing, the growth of NLP will continue for the foreseeable future.
Header Image by Soner Eker on Unsplash.