Part II – AI for Media Intelligence: The Bad

Be careful. If you’re now assuming AI can do anything, you might be disappointed. Bad AI might lead you to make bad decisions. Why? Simple: AI struggles to interpret complex human communications that don’t have simple yes-or-no answers.


Here are some examples of where AI is not ready for prime time:

If the answer is not known, it can’t be fed back to the computer.

For example, say you’re looking to hire a new employee, and the AI says you should make an offer to a particular candidate based on the data. If you hire that person, whether it works out or not, that’s one piece of data. But what about the people you didn’t hire? You will never know whether they would have worked out, and the AI never gets confirmation that your rejections were correct. It is hard for AI to determine the best hire when it only gets feedback on the people you chose to hire.

This is the challenge of what are called Type I vs. Type II errors. A Type I error is a false positive: someone the AI recommended who turned out to be a bad hire. We can learn from that type of error. A Type II error, on the other hand, is a false negative: someone the AI passed on who would have been good, but we will never know that for sure. We cannot learn from that type of error. When an AI system cannot be given information on Type II errors, it has only half the learning set it needs to improve.
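This one-sided feedback loop is easy to see in a toy simulation. The sketch below is purely illustrative (the candidate pool, scores, and thresholds are all made up): because outcomes are observed only for the people who were hired, Type I errors show up in the training data, while Type II errors leave no trace at all.

```python
import random

random.seed(0)

# Hypothetical pool: each candidate has a true quality the model never sees
# directly, plus a noisy resume score the model uses to decide whom to hire.
pool = []
for i in range(200):
    quality = random.random()
    score = quality + random.gauss(0, 0.3)  # imperfect signal of quality
    pool.append({"id": i, "quality": quality, "score": score})

hired = [c for c in pool if c["score"] > 0.8]
rejected = [c for c in pool if c["score"] <= 0.8]

# Job outcomes ("did it work out?") exist only for the people we hired.
labelled = [(c, c["quality"] > 0.5) for c in hired]

# Type I errors (bad hires) are visible in the data and can be learned from...
type_i = sum(1 for _, worked_out in labelled if not worked_out)

# ...but Type II errors (good candidates we passed on) never produce a label.
# We can only count them here because the simulation knows each candidate's
# true quality; in real life this number is unknowable.
type_ii = sum(1 for c in rejected if c["quality"] > 0.5)

print(f"hires with feedback: {len(labelled)}, visible Type I errors: {type_i}")
print(f"rejections with no feedback: {len(rejected)}, hidden Type II errors: {type_ii}")
```

Whatever the exact numbers, the training set the model can actually learn from is only the `labelled` list; everything in `rejected` is invisible to it.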

Another variation of the AI challenge in hiring arises when the system is exposed to all-new data. For example, if your resumes to date have all come from East Coast schools and applicants with engineering degrees, what does your system predict when it sees a Stanford graduate with a physics degree? AI struggles to reach a conclusion when exposed to data points far outside anything it has seen before.

Can AI still learn in these circumstances? Yes, to a degree, but it does not see (and cannot learn from) the missed opportunities, and it needs enough of the new data points to begin to model and predict outcomes. The data collected from hiring decisions represents a fatally incomplete training set.

If the data sets are small.

For example, if you are making a one-time life decision such as which house to buy (not what price to pay, which AI is good at, but which house will work for you and your family), the data set is not large. The data might suggest you will like the house for its community and its features. If you buy the house, whether it works out or not, you still have only a single piece of feedback to learn from. It is hard to learn from tiny data sets: machine learning typically needs thousands, if not tens of thousands, of data points to train a model to make informed decisions.

If the answer is indeterminate rather than a simple yes or no.

This is probably the biggest area where unassisted AI fails at proper classification. And it is the problem that most affects those of us seeking trustworthy media analytics.

How a person sees content frequently depends on their perspective. ‘Good’ things can in fact be ‘bad’ and vice versa, and computers can’t be taught a one-answer-fits-all approach, which is what most AI-powered automated media intelligence solutions use today. Two people can read the same story and come away with very different opinions of its sentiment. Their take may depend on their political or educational background, their role in a company, or even the message the company wants heard in public; positive discussion of a taboo topic, for instance, can be seen as a bad thing.

In addition, AI can’t reliably interpret many language structures and usages, including even simple phrases like “I love it,” which can be serious or sarcastic. AI also struggles with double meanings and plays on words. And AI is unable to address the contextual and temporal nature of text, and how the words, topics, and phrases used in content change over time. For example, a comparison to Tiger Woods might be positive when referring to his early career, less positive in his later career, and perhaps quite negative in a comparison to him as a husband.

If the subject matter is evolving.

Most AI solutions applied to media analysis today use what can be called a ‘static dictionary’ approach. They choose a defined set of topical words (or Boolean queries) and a defined set of semantically linked emotional trigger words. The AI determines the topic and the sentiment by comparing the words in the content to the static dictionary. Recent studies such as “The Future of Coding: A Comparison of Hand-Coding and Three Types of Computer-Assisted Text Analysis Methods” (Nelson et al., 2017) have shown that the dictionary methodology does not work reliably and that its error rate increases over time.
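To see why, consider a minimal sketch of a static-dictionary sentiment scorer. This is an illustrative toy, not any vendor’s actual implementation: fixed word lists, no notion of context, sarcasm, or shifting usage.

```python
# Fixed, hand-picked word lists: the 'static dictionary'.
POSITIVE = {"love", "great", "win", "success"}
NEGATIVE = {"hate", "fail", "loss", "scandal"}

def dictionary_sentiment(text: str) -> int:
    """Score text by counting dictionary hits: +1 per positive word,
    -1 per negative word. No context is considered."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

# Works on blunt statements...
print(dictionary_sentiment("Customers love the great new product"))  # positive

# ...but sarcasm scores exactly the same way, because only the word
# 'love' is seen, not the meaning of the sentence around it.
print(dictionary_sentiment("Oh, I just love waiting on hold for an hour"))  # also 'positive'
```

And because the word lists are frozen at build time, a new slang term, product name, or vein of discussion scores zero until a human updates the dictionary, which is exactly the drift problem described below.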

The fundamental flaw in this method is that the static dictionary doesn’t evolve as topics and concepts shift over time and new veins of discussion are introduced. Unless there is a way to regularly provide feedback to the AI solution, it cannot learn, and its margin of error grows and compounds quickly. It is a bit like trying to explain Facebook to someone transported from 2004 who only understands structured publishing: they cannot grasp what you are talking about in any meaningful way, because mass social media did not yet exist.

As these examples show, AI struggles to interpret complex situations involving small data sets, indeterminate answers, or subject matter that evolves over time. So what does this mean for you as a professional communicator?

This article is part of a three part series, AI for Media Intelligence: The Good, the Bad, and the Ugly.

About us: PublicRelay delivers a world-class media intelligence solution to big brands worldwide by leveraging both technology and highly-trained analysts.  It is a leader on the path to superior AI analytics through supervised machine learning. Contact PublicRelay to learn more.

Eric Koefoot

President & CEO
Find me on LinkedIn