Interview with Tim Macer, Managing Director of meaning ltd "Automation is important, but it is not the magic answer."

Tim Macer (meaning ltd)

marktforschung.dossier: Mr. Macer, we once had punch cards, now everybody seems to be talking about Big Data. What have been the most influential changes regarding the use of software and technology in market research in the past?

Tim Macer:
I think it’s what I’d term the democratisation of technology, as computers have consistently become smaller and cheaper. The changes have come in several waves. The first was the arrival of the personal computer. Mainframe computers were for the elite, locked away for a few highly trained professionals to use. PCs meant computers could right there in front of the researcher or the interviewer. These provided so much more computing power that software designers could start to design cross-tab or data collection software with graphical user interfaces.

At the time that seemed like a very extravagant use of resources, but it made software so much easier to use – so anyone and everyone could start to use software themselves. Researchers could interrogate their own data and run their own analyses. Even CATI, which had started as a mainframe technology, became more agile and easier to use. We went from training courses that lasted several weeks to maybe a day for some of the more complex tasks – and now, software that needs essentially no training.

As PCs got smaller, and laptops arrived, CAPI became possible. But the real game changer was consumer computing – people buying desktops and laptops to have at home. It’s what gave rise to online research. And that trend continues now with mobile, which I think research is still struggling to work out is a friend or an enemy.

marktforschung.dossier: What are the main challenges for technology providers at the moment?

Tim Macer: The big one is getting research firms to spend on technology. A lot of research companies seem to expect their software costs to follow the same downward price curve that hardware has. That is unrealistic because any market research software needs to be able to do so much more today, and that requires a lot of development effort.

Of course, cost is always important, but what matters more is the return you can make if you invest in technology. Or put the other way round, the competitive disadvantage you will suffer in the future when others are outbidding you because they can turn the work round faster and with less effort, and can throw in other value-adds like dashboards or text analytics, which clients want, but maybe can’t justify paying for.

The tech providers I speak to often find research firms are not only reluctant to spend, but are also very slow to decide, compared to other industries. Two-year procurement cycles for, say, a new data collection or analytical tool, are not unusual. Add another year for implementation, and that benefits no one – the technology has often moved on in the meantime.

Another problem is the “wanting it to do everything” approach, which drives up both complexity and cost, and means most developers have a wish list of features which is much longer than their list of clients. Another approach, which helps both software provider and buyer, is to bring together a portfolio of different software tools that complement each other. A lot of today’s software offers open interfaces for others to develop custom plug-ins, and providers are increasingly talking to one another to achieve some integration between their tools.

marktforschung.dossier: How do you view the impact of Big Data in general, and of initiatives such as Google Consumer Surveys, on the market research industry?

Tim Macer: I see it as just the next phase in the democratisation of market research. It is making research more affordable, and what I think that does is create an appetite and demand for more research – especially among people that could not previously afford market research. But this means the profession needs to refocus its attention to where it can add value: attitude, opinion and emotion. Big data, and the abundance of existing, transactional data, mean there is no longer a need to ask measurement questions in surveys, unless you really want to measure the difference between perception and reality. Recall has always been a poor way of estimating actual behaviour.

People are worried about Google Consumer Surveys, but again, I think these can be very useful tools for the research companies to use for themselves. Why limit a survey just to questions in one survey instrument, or by one channel? I believe research firms should be confident in the unique expertise they can bring – which Big Data or automated providers can’t – and that is getting to the heart of the “why?” question, providing context and testing relevance. Market researchers are particularly good at looking across a lot of different data points, and then being able to deliver grounded, objective and verifiable insights.

marktforschung.dossier: The research market in Central Europe is not doing very well at the moment. Is that an opportunity for software providers, since a focus on automation might be a way for the institutes to save money?

Tim Macer: I am going to contradict myself slightly on what I have just said about cost, because price actually is a big issue in those countries where research is not well developed. Here, software providers need to come up with imaginative ways to repackage product offerings and their pricing. It’s too simplistic to think of market research as being a global market – it is a series of interconnected markets each with its own challenges and achievable price-points for the technology, which will not be the same in every country.  

Automation is important, but it is not the magic answer. I suspect it’s more to do with ease of use and specifically making the software robust and ‘fail safe’ – so it actively discourages making mistakes. I think the researchers working in these countries often have to cover more bases than their counterparts in a research firm in Germany or the UK. They need to be experts in everything, and may well be deploying their own surveys, and doing everything from the survey design and pulling the data, to turning out the PowerPoint.

So the tools they use need to really simplify each step, and actively encourage accuracy – and make it hard to make mistakes. I do see some technology providers now working hard on these issues – on how to make something that used to take ten clicks just take one or two, or provide visible feedback at each stage, so the user can check everything along the way. Much more of this is needed – and that will help everyone in the industry.

marktforschung.dossier: Market research has always been considered a “people’s business”. As far as research methods are concerned: where are the limits of automation? Will the “human factor” always be needed for conducting research?

Tim Macer: I don’t see anything to fear here! I’ve always seen technology as a liberator. I don't think we have reached the point where it has liberated the researcher enough, in fact. People are still finding there is too much friction in getting most research projects through the system. There is often still an irritating amount of manual intervention required, which also means the potential for error to creep in.  

Technology has had a major impact on improving the efficiency of quantitative research, but is still barely used in qualitative research, and it is in the qualitative arena where you hear the fear expressed most strongly that computer-based methods will take away the “thinking”. But I think that confuses process with outcomes.

We must make the processes as efficient as possible – quant and qual. And that can include advanced analytical techniques that can help us to make connections where we might not otherwise. None of this is new, really – there never was a golden age of the manual conjoint. Some very useful methods have always relied on automation. But now we have powerful text processing tools, which we can apply to qualitative data on a quantitative scale and which feel as if they are replacing human thinking. It would be irresponsible to ignore the potential of these tools. They can find more things that might be interesting than we can, but it is only us humans that know how to work out whether they are interesting or not.

Put it another way, it’s always been easy to do bad research, but it takes human skill and experience to do great research. Hopefully, automation makes it harder to do bad research, but it takes people, working together, to produce great research – to make sense of it, provide context and test its relevance.

Technology is also changing the dynamics of that relationship. Research clients may now have access to research findings delivered via automated dashboard reporting systems even before the researcher on the project has looked at them. And researchers can also feel squeezed out by “black box” solutions, which is how some social media analysis and sentiment analysis tools can present themselves. We need to be on our guard against these closed systems where researchers are not able to understand the provenance of the outputs. But the answer to both these situations is to ensure that there is transparency and openness – and that is built on good working relationships.

So yes, I believe a human factor is assured for a long time yet – but it won’t always be the same human factor as it has been.

marktforschung.dossier: Mr. Macer, thank you very much for this interview!
