Ever stepped out of a meeting and wondered how it really went? What if intelligent software or a gadget could synthesize your conversation and report on how it went: the good, the bad, and the ugly?
With Artificial Intelligence, you can do just that (well, almost)!
The Power of NLP
Natural Language Processing (NLP) is a branch of artificial intelligence that gives computers the ability to understand human language, and to process and analyze large amounts of natural language data.
(Read our earlier blog “NLP for All” for more details on the basics)
One of the foundational steps in any AI project is making the software intelligent: using data sets to train NLP algorithms and build an NLP model that can be shared with the community. Newcomers to NLP can use an existing data set if one fits their problem, or create a new one if none exists, as in our case.
Training the NLP Algorithm — Building the NLP Model
Our objective is to study the undercurrents of a long conversation and provide a meaningful interpretation. For our particular scenario, we will use a set of interviews drawn from earnings call transcripts as our data set to train the NLP algorithm and build the NLP model.
The Challenge — Using Long Interviews as NLP Training Data
Double problem: your corpus is huge, and each document is an interview
The approach we used is to treat the data set as a classification problem. Using machine learning techniques, the software groups data together by a particular criterion; our aim is to find this grouping or classification.
Problems while solving a classification problem where your corpus is a set of interviews
(1) Large text: A document with more than 5,000 words poses a far harder challenge than, say, a tweet (around 50 words on average). Imagine the chances of sentiment polarities being lost amid thousands of words in a simple sentiment analysis.
(2) Interviews do not contain discernible topics: Interviews often contain hidden information and tend to focus on positive things. For example, if a CXO is being interviewed, rest assured the CXO will not dwell on negative topics; even when a difficult question is put to the CXO, the answer is likely spun in a positive manner.
(3) Interviews are mostly Neutral
Most NLP problems can be viewed as having two aspects: the HOW aspect and the WHAT aspect.
(1) The “How” aspect refers to how a person replies to a question. Consider the difference between a person who is not confident about the answer and one who is.
(2) The “What” aspect refers to what the person is talking about, i.e., the topics being discussed: for example, whether the CXO is talking about new markets, revenue, losses, etc.
Because interviews reveal more through the “How” aspect, we created features that capture it. With such features, a machine learning model can review transcripts and, for instance, predict the risk of a class action lawsuit.
Feature Creation to capture the “how” aspect:
We capture the “how” by creating features which fall into the following categories:
(1) Tone of the text
(2) Modality of the text
(3) Complexity of the text
(4) Sentiment extraction on small snippets/chunks of the entire interview, at locations determined using NLP processes as well as domain knowledge
Tone of the text
Tone, in written composition, is the attitude of a writer toward a subject or an audience. Tone is generally conveyed through the choice of words or the writer’s viewpoint on a specific subject, and it can be formal, informal, serious, sarcastic, sad, cheerful, or any other attitude. One marker of informal tone is the use of contractions such as we’ve, I’d, they’re, and it’s: using ‘it’s’ instead of ‘it is’ signals informality.
Consider the following examples of tone:
“I want to ask the authorities what the big deal is? Why are they not controlling this epidemic? It is eating up lives like a monster.”
“I want to draw the attention of the appropriate authorities towards the damage caused by this epidemic. If steps are not taken to curb it, it will continue to injure our community.”
The theme of both examples is the same; the only way to differentiate between them is their tone. The tone in the first example is casual or informal, while the tone in the second is more formal.
“Can someone tell me what the hell is going on here?” has an aggressive tone.
“That’s great!!!” has a cheerful tone, but it can sometimes be sarcastic as well; it depends on the complete context of the paragraph.
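As an illustration, tone cues like these can be turned into numeric features. The sketch below is a minimal example; the marker word lists are invented for illustration, and a real system would use a curated lexicon:

```python
import re

# Illustrative marker lists (assumptions, not from a real lexicon).
CONTRACTIONS = {"we've", "i'd", "they're", "it's", "can't", "won't", "don't"}
AGGRESSIVE = {"hell", "damn", "what the"}

def tone_features(text: str) -> dict:
    """Crude tone features: contraction ratio (informality) and aggression markers."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    contractions = sum(1 for w in words if w in CONTRACTIONS)
    aggressive = sum(1 for marker in AGGRESSIVE if marker in text.lower())
    return {
        "contraction_ratio": contractions / n,
        "aggressive_hits": aggressive,
    }

print(tone_features("Can someone tell me what the hell is going on here?"))
# → {'contraction_ratio': 0.0, 'aggressive_hits': 2}
```

A higher contraction ratio suggests an informal register; aggression markers flag passages worth closer review.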
Modality of the text
Modality is the semantic information associated with the speaker’s opinion on what was said. It captures certainty, possibility, willingness, obligation, and necessity in the text. It is the non-propositional part of the sentence, concerned with the factual status of what is asserted.
Low modality shows less certainty and high modality shows more certainty. We create flags for high-, medium-, and low-modality words; based on these flags, we count the number of high-, medium-, and low-modality words in the transcript.
“I am almost certain; every investor has to look at their own situation and conclude the things that they think are best. I think the active devices are definitely…” In the text above, words like ‘almost’ and ‘definitely’ are strong modal words.
Modality is connected to the speech acts of assertives, directives, commissives, declaratives, and expressives:
Assertives: ‘Socrates is bald.’
Directives: ‘Pass the mic.’
Commissives: ‘I promise to complete it by today.’
Declaratives: ‘I now pronounce you husband and wife.’
Expressives: ‘Thank you for giving me this opportunity.’
For instance, given the phrases ‘might happen’, ‘could possibly happen’, and ‘will definitely happen’, the modality of the sentence increases from low to high.
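The flag-based counting described above can be sketched as follows; the graded word lists here are illustrative assumptions, not a complete modal lexicon:

```python
# Illustrative modal word lists graded by certainty (assumed, not exhaustive).
HIGH_MODALITY = {"definitely", "certainly", "will", "must", "always"}
MEDIUM_MODALITY = {"probably", "likely", "should", "can"}
LOW_MODALITY = {"might", "could", "possibly", "perhaps", "may"}

def modality_counts(text: str) -> dict:
    """Count high-, medium-, and low-modality words in a transcript."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    return {
        "high": sum(w in HIGH_MODALITY for w in words),
        "medium": sum(w in MEDIUM_MODALITY for w in words),
        "low": sum(w in LOW_MODALITY for w in words),
    }

print(modality_counts("It might happen, it could possibly happen, it will definitely happen."))
# → {'high': 2, 'medium': 0, 'low': 3}
```

Each count becomes one feature; normalizing by transcript length makes the counts comparable across interviews of different sizes.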
Complexity of the text
Complexity measures how easily the text can be understood. It is important to analyze the complexity of statements, since it is difficult to capture the actual sense of the text when the speaker is trying to hide information. For example, if an analyst asks a difficult question and the executive wants to evade it, the sentence structure is less likely to be straightforward and more likely to be complex.
Complexity arises from different motivations at both the micro and macro level of meaning. One motivation can be to divert the topic; the most common is the human impulse to evade, or a feeling of uneasiness. Such speakers tend to use complicated, ambiguous words in their answers, or make insignificant yet sophisticated statements instead of giving simpler ones. For example, consider the following two sentences, whose meaning is the same but whose word difficulty differs, as answers to a manager asking an employee about the progress of a task:
‘I completed the task and will forward it to you immediately.’ (less complex and clearer)
‘The task is quite strenuous, but I am almost done with that and I am going through it so there won’t be any white spots in it.’ (unnecessary words, hiding information)
Here, we use a readability index to measure the complexity of the text. Complexity can also be captured using quantitative features such as:
- Word length (e.g., simple versus multisyllabic words)
- Frequency of unfamiliar or new vocabulary terms
- Sentence length and syllables used
- Text cohesion
Qualitative evaluation of the text is possible by assessing levels of meaning, structure, and language clarity.
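One common readability index is the Flesch Reading Ease score, which combines sentence length and syllables per word. Below is a rough sketch with a naive syllable counter (counting vowel groups); the specific text samples are invented for illustration:

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

plain = "The cat sat on the mat."
evasive = "Incomprehensible circumlocution obfuscates straightforward communication."
print(flesch_reading_ease(plain) > flesch_reading_ease(evasive))  # the plain sentence scores higher
```

Libraries such as `textstat` provide production-grade versions of this and related indices; the point here is only the shape of the feature.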
Sentiment Analysis of specific sections of the text
Sentiment is the opinion of the speaker on a certain topic. Many techniques exist for extracting sentiment from text; however, it is difficult to extract sentiment from long discussions and interviews, and sentiment analysis alone does not always reveal the exact viewpoint.
Ex: ‘The company performed very well in the last 2 quarters and the revenue has increased significantly.’ This gives a positive sentiment about the company.
One approach is to split the whole text into small paragraphs and even smaller chunks based on a predefined rule set, and extract sentiment on each chunk individually to make the analysis robust. Defining a proper rule set is very important, and it will vary for different kinds of problems or text.
For example, consider extracting sentiment from an interview between an analyst and the CXOs of a company. Our hypothesis is that a CXO will answer positively about the company, even when the analyst asks a complex question, rather than use negative terms about it. Similarly, if we compare answers given by a CEO and a Vice President, we can assume that the CEO’s answers will be stronger and more positive, given without hesitation and with less use of unnecessary words, compared to the Vice President’s. These can form the rule set used to extract information from interviews, and the rule set will vary with the problem statement.
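The chunk-and-score idea can be sketched as below. Here the chunking rule is simply a fixed word count and the sentiment lexicon is a toy stand-in (both are assumptions; a real pipeline would use rule-based chunking and a tool like VADER or TextBlob):

```python
# Toy sentiment lexicon (illustrative assumption only).
POSITIVE = {"well", "increased", "significantly", "great", "strong", "growth"}
NEGATIVE = {"loss", "decline", "weak", "risk", "lawsuit", "difficult"}

def chunk_sentiments(text: str, chunk_size: int = 25) -> list:
    """Split a transcript into fixed-size word chunks and score each chunk."""
    words = text.lower().split()
    scores = []
    for i in range(0, len(words), chunk_size):
        chunk = [w.strip(".,!?") for w in words[i:i + chunk_size]]
        score = sum(w in POSITIVE for w in chunk) - sum(w in NEGATIVE for w in chunk)
        scores.append(score)
    return scores

print(chunk_sentiments("The company performed very well in the last 2 quarters "
                       "and the revenue has increased significantly."))
# → [3]  (one chunk, three positive hits, no negative hits)
```

Scoring per chunk preserves local polarity that would wash out if the full 5,000-word transcript were scored as a single unit.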
As discussed above, when it comes to extracting information from interviews, the “how” aspect matters significantly. In fact, interviews related to the financial health of a company may not disclose the bad aspects, which means that extracting the “What” aspect via topic modeling is not fruitful. However, if we address how the replies are given and extract information along that dimension, we can engineer a wealth of features for an ML/DL model. This is captured by tone, modality, and complexity, plus sentiment on important parts of the document.
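Pulling this together, the engineered “how” features can be assembled into a single vector per transcript for a downstream classifier. The sketch below uses trivial stand-in features (contraction ratio, a high-modality count, and average word length as a crude complexity proxy); all names and word lists are illustrative assumptions:

```python
import re

def feature_vector(text: str) -> list:
    """Toy 'how'-aspect vector: [contraction ratio, high-modality count, avg word length]."""
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    contractions = sum("'" in w for w in words)          # tone: informality proxy
    high_modal = sum(w in {"definitely", "certainly", "must", "will"} for w in words)
    avg_word_len = sum(len(w) for w in words) / n        # complexity proxy
    return [contractions / n, high_modal, round(avg_word_len, 2)]

print(feature_vector("It's done. We will definitely ship it."))
```

Vectors like this, computed per transcript, become the training matrix for whatever ML/DL classifier is chosen.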
Note: For each of Modality, Tone and Complexity there will be multiple features.
Affine is a provider of analytical solutions, working with global organizations to solve their strategic and day-to-day business problems.
We combine data-driven statistical and AI analysis with heuristic domain expertise to provide actionable insights empowering organizations to make informed decisions and take proactive action. Our value lies in enabling our clientele to implement and realize the ROI of the recommendations.
About Humans For AI
Humans For AI is a non-profit organization focused on building a more diverse workforce for the future leveraging AI technologies. Learn more about us and join us as we embark on this journey to make a difference!