[Guest article by Sameer Shishodia, cofounder of Zook and now an independent consultant. This article is meant for those who live on either side of the divide – i.e. driven purely by data, or purely by instinct (and I mean corporates too!). Do share your opinion.]
It's very hip to talk about being data driven, and as engineers, we find it very appealing as well. It's also a more provable way of justifying a decision. A/B tests on various designs prove a point like no amount of brainstorming can, and clickstreams and sales numbers do not lie!
Yet there are highly accomplished folks who disagree vehemently with the data-centric approach to decisions, and that makes for a good debate :). Here's a very good 360 around that particular one.
So, is the “versus” in the debate actually called for at all? As engineers, do we beat too many things down with the data club? (I suspect the opposite is not as strikingly true for designers, decision makers, etc. who are seen as “following instinct”.)
Let’s deconstruct this data-driven decision thingy, shall we?
You gotta decide about a product idea, or a sales strategy, or a marketing message. Typically, this is what you do (explicitly, or otherwise):
- Imagine who might consume the outcome of the decision
- Try and figure out something about the above set. Data? Helps a lot!
- Take a call on what’ll work.
The data is an input (and comes in many forms!). Over this input sits a layer of interpretation and analysis, and the final decision is a reaction to this process. Data can help decide, but data cannot decide!
At what level is the decision?
Are you deciding about a UI attribute for an existing, popular product? Sure, run bucket tests and let the audience speak. Is it about the product’s copy? Hmm, a little less black and white. The positioning and concept you’re trying to communicate about the product? Well – there’s no getting away from hard work and decision making on that. If you expect data to always provide all the answers in black-and-white terms, you’re likely to freeze when working on a lot of things at a more “zoomed out” level.
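To make the bucket-test end of that spectrum concrete, here’s a minimal sketch of comparing two UI variants with a two-proportion z-test. The click and conversion counts are entirely hypothetical, and a plain normal approximation is assumed:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    meaningfully different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 200 of 5000 visitors converted on design A,
# 260 of 5000 on design B.
z = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 5% level
```

At this level of granularity the audience really can speak for itself; the judgement calls creep back in as soon as the question gets less zoomed in.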
For entrepreneurs, it’s often a question of whether the product (which is usually the company itself) is working or not. If you need data to tell you that, you have other serious problems 🙂 Of course you will be going through data, and what we popularly refer to as “instinct” or “intuition” does not develop in isolation from data. On the contrary, it’s something you develop as you learn to sift through data, read between its lines, and deal with the apparent conflicts it sometimes throws at you.
Conflict? Aren’t numbers black and white?
Are they?
Here are two data points from an example I remember reading about recently (sorry, I do not recall the source). A survey of numerous car buyers in India put safety high up in the list of influencers for the buying decision. Yet the actual purchase decisions (vis-à-vis the information requests) for models with ABS, airbags, or until some time ago even rear seatbelts, said it was not necessarily an important factor.
The case study surmised that the latter was to be believed, and the former ignored. It’s probably the more trustworthy data point! So the models which get ABS etc. continued to be those at the top end of the offerings.
There – you already needed to pick and choose, and not trust every bit of data as is. But is that all?
Weren’t the consumers also communicating an aspirational need, or one that got negated merely because the cost differentials between the (usually high-end) models with ABS etc. and those without were extremely high? Would these people pay a smaller difference for the same features in an otherwise lower-spec’d model? Is there a way to lower the cost of including the safety add-ons?
Do you have enough data? Qualified?
Very often, a new venture, idea or product does not even have enough data to start with. Product and entrepreneurship decisions are full of such cases. You still do your best to understand what the picture is, and base a call on that. Would you like to base your calls on tiny samples? Or would you rather ignore them, unless a clear message emerges?
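Just how noisy a tiny sample is can be made concrete with a quick confidence-interval check. The numbers below are hypothetical, and the simple normal approximation is assumed for illustration:

```python
import math

def conf_interval(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# 7 of 10 trial users liked the feature... but the interval is enormous:
print(conf_interval(7, 10))      # roughly (0.42, 0.98)
# The same rate at n = 1000 is far more informative:
print(conf_interval(700, 1000))  # roughly (0.67, 0.73)
```

A 7-out-of-10 result is consistent with almost anything from “half hated it” to “nearly everyone loved it” – which is exactly why a tiny sample alone rarely settles a call.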
And then, let’s say you launched and got a couple of hundred thousand clicks on a new page/feature. Of course, you SEO’ed, maybe ran an ad or two, and maybe got some initial coverage. What was the quality/value of those clicks? Did they all come from ad-clicks, with little repeat usage? Are they from the right target audience, which might help in engagement and drive usage/consumption of the important features? Should a transactional site be very happy if a lot of folks turn up looking for info alone? Should it be despondent?
The clickstream is quite useless during the early life of a product unless qualified. Yes, you can collect more data for it, but each qualifier and context for data that you measure stems from a judgement call about why a number is important, and when it must be factored in. Of course, you can build an analytics framework to crack that as well, but at some stage or the other, it’s impossible to keep human judgement and interpretation of data out completely.
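A sketch of the simplest kind of qualification – slicing raw clicks by acquisition channel and repeat usage. The records and channel names are hypothetical; which slices matter at all is exactly the judgement call the paragraph describes:

```python
from collections import Counter

# Hypothetical click records: (visitor_id, source, is_repeat_visit)
clicks = [
    ("u1", "ad", False), ("u2", "organic", False), ("u2", "organic", True),
    ("u3", "ad", False), ("u4", "press", False), ("u4", "press", True),
]

# Raw volume per acquisition channel
by_source = Counter(source for _, source, _ in clicks)

# Share of distinct visitors who ever came back
repeat_visitors = {uid for uid, _, rep in clicks if rep}
all_visitors = {uid for uid, _, _ in clicks}
repeat_rate = len(repeat_visitors) / len(all_visitors)

print(by_source)
print(f"repeat rate: {repeat_rate:.0%}")
```

The code is trivial; the hard part is deciding that “repeat rate” and “channel mix” are the qualifiers worth measuring in the first place.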
It’s not “data vs intuition”.
There’s a huge risk in merely following data – in case you’ve not set the right context. You can be lulled into a false sense of safety, or driven into needless panic, merely because the data says you’re doing great (or horribly) and you forgot to check whether you were missing some part of the picture.
Data is only about the stated. The rest is conjecture or extrapolation!
You can collect data about what’s out there. What did the user leave unsaid? Did they like your logo? Why did the dropoff happen at some point? Did something trigger word of mouth? What about the not-clicked links – why did those get ignored?
Some will need another round of hypothesis and data collection. Others will need some level of an educated guess so you can move on. You’ve got finite time and resources, and your product is not a grant funded lab!
Data is valuable. But the value is elsewhere.
A great many businesses have been built on instinct. That does not discount the research their founders did as preparation, or the amount of information they might have collected. It only highlights that the ability to make a decision with whatever is available is far more valuable. With the same data, the same inputs and the same scenario, there will be multiple decisions that succeed to varying degrees, and numerous possible ones which might fail. It is an art, and attempts to reduce it to a science can only work for low-granularity decisions, and even then only for existing, running-state stuff.
What’s probably more critical, and to be practised to improve one’s judgement calls, is to actively seek to verify hypotheses through launch-and-test iterations, and to do these cheaply. Developing a good sense of what data to collect, and how to read it in the context of the other numbers and words around it, will serve you much better than being a data-driven automaton. It’s a great help to have numbers, but remember to use them both judiciously and honestly.
It’s not yet time to let the machines take over 😉
[Reproduced from Sameer’s blog]