We, at NextBigWhat, are attempting to drive forward the conversation around Artificial Intelligence in India beyond headlines that scream for attention but don't offer much, buzzwords that make your head buzz after a while, and countless utopian tales that vie for your eyes only to leave you scratching your head with a lingering itch to actually get to the meat of the matter.
In this spirit, the #AIBoss series intends to educate, inform and elevate our readers' understanding of the subject as well as the space. If you're part of the leadership team of an AI-based product, whether enterprise or startup, and willing to share your AI learnings, then reach out to us and become part of this initiative (you can apply using this form).
Here is our interview with Ranjan Kumar, the founder & CEO of Entropik, a company that specializes in 'Emotion AI' – a suite of technologies that allow brands and creatives to truly gauge the emotional response of consumers through brainwave mapping, facial coding & eye tracking.
NextBigWhat caught up with him for an extensive discussion on Entropik’s product, the ethics of emotion tracking technology, where AI is headed and how their proprietary technologies are making an impact in various business verticals.
How would you describe Entropik?
At Entropik we work in the space of emotion intelligence. What we do, basically, is track consumer emotion at its point of origin in a scalable way. We've built technologies around brainwave mapping, facial expression analysis and eye tracking; these are the fundamental proprietary technologies which actually allow us to decipher the subconscious aspect of human behaviour. And 95% of the decisions that a consumer actually makes are subconscious, yet most analytics align around the conscious aspect of user behaviour. So we're taking our shot at tracking the subconscious aspect of human behaviour.
What are the current and upcoming applications for your product suite?
Right now we're very focused on media experience testing. So anything video – right from ads to trailers to long-format TV shows and even 3-hour films – where you get to see, on a second-by-second level, what the emotional response of the viewer is, be it happy, sad, excited or bored. In addition to that, we also track the attention level and where the mental engagement stands, so you can quickly identify the parts of the video that least engage viewers.
UX testing is another area where we offer great value. Some other areas are chatbot experience testing and shopper experience testing. So while you’re walking in the store, we can track where you’re looking and what sort of brainwaves are being generated. Whether you’re attentive and feeling excited or bored etc.
Edtech too is emerging as a big market. We’re using the technology to validate psychometrically the cognitive learning index of a student. Retail is at an early stage and automotive is an upcoming market where companies want to track fatigue and stress levels of drivers.
What was the pain point you set out to solve specifically with Entropik in its early stage?
One of the reasons why products fail is that they don't resonate with consumers. For every consumer brand, the biggest pain point, the fundamental question, is: can I decipher what consumers are feeling and what their preferences are, and hence create a product which is likely to resonate at a more cognitive level with the user? The first step to solving that problem is knowing whether you understand your consumer very well, and whether you understand not just the conscious aspect of their expression but also the subconscious behaviour and subconscious elements which tie into the buying pattern.
We looked at what makes a purchase decision or what makes a content consumption decision: fundamentally 95 percent of your decisions are subconscious. That means we need to have analytics to pick up on this aspect of consumer preference. So that’s a huge problem.
Look at the kind of losses companies face when they actually roll out a product and it fails. Or if a movie is released and it fails – we're talking about huge losses. The success rate of product content today is around 2 percent. So we set out to solve this 98% inefficiency problem.
Given the sweep of what you could do with this kind of technology at scale, could we be heading towards a Huxleyan world where everyone's emotions are fine-tuned to within an inch of what is intended to be evoked – pleasure, sadness and so on?
I don't think that will happen. So, you have AI, which is about intelligence, i.e. it is logically intelligent. It has a bunch of 'if x' rules, that is, 'if x then do y'. A huge collection of such rules is called artificial intelligence. What it lacks is empathy and emotional intelligence, which means that it is answering a question but without the context of my current emotion.
When two people are interacting, emotion is the factor which makes human communication unique. That aspect is missing with machines. So I think emotion brings a lot of resonance, but I don't think technology will be modifying people's behaviour in the way that you envision. I think it is solving a gap: the 98% inefficiency of interaction between man and machine. That's about it.
When it comes to ads, there is an explicitly commercial element to them, so optimizing for it in the manner that Entropik allows makes sense to a great degree. With movies or TV shows, however, what if production houses begin to edit films based on the responses they see in a system, rather than those films being the relatively untrammelled creative vision of a director – how do you see that playing out?
That's something we face day in and day out. We currently work with almost every major broadcaster in the country, and we've faced this question before. Our product can do two things: a. help marketers optimize how they reach customers and improve business outcomes; b. help creative teams optimize the content itself. We've been very cautious not to step on the toes of those in charge, because the relationship should not trample on their creative vision. We look at ourselves as enablers.
For example, if we recommend that these are the 10 things that came out of analyzing a piece of content, they mostly agree or disagree but still do what they have to do. So that freedom is never taken away from them. And we're receiving a lot of intelligence from their end as well, which we need to make sure we're incorporating into our system to make it even more intelligent. The other important thing is that most of these studios are sitting on hundreds of scripts, and they require a system that can evaluate those ideas as quickly as possible – which is where we come in as enablers.
You can watch the full interview in the embed above. Make sure to subscribe to our YouTube channel for more such engaging videos!