BA 850
Sustainability Driven Innovation

Introduction to Methodologies

All Failures are Human in Origin

(Image: lines of different thicknesses and colors.)
Credit: "roscolux paper spectrum" is licensed under CC BY-SA 2.0 DEED

There was a famous automotive engineer and designer named Carroll Smith, whose fastidious attention to detail won his teams major victories at virtually every level. I do not use the phrase "fastidious attention to detail" lightly: he wrote a 224-page book devoted solely to nut and bolt selection.

There is one phrase he is most famous for: "There is no such thing as material failure. All failures are human in origin." He argued that the role of the engineer is to account for everything from metal porosity to poor machining to the tendencies of the driver, and that if a component fails at a crucial moment, the responsibility lies with a human, not an inert material.

While this is certainly a compelling platform for engineering, his statement can be adapted and applied to consumer research as well. Research methodologies do not fail; research design is typically where the failure lies, and humans are responsible for that design. As we will see, each methodology has strengths and weaknesses, and it is our responsibility not only to account for those, but to be smart about research design.

How do we tend to fail in consumer research design?

  • Asking too many questions. The survey that goes on for 40 minutes? Not only will few participants complete it, but the validity of responses from those who do drops off a cliff as they get bored and disengage. Open-ended questions become one-word answers, at best.
  • Asking the wrong level of question. Using a survey to ask people deep or emotional questions and expecting them to write tomes of explanation is unrealistic.
  • Not accounting for self-reporting bias. Humans are notoriously bad at self-reporting even the most basic behaviors. Asking people hypothetical questions about hypothetical products purchased with hypothetical money held in a hypothetical bank account can be a very dangerous path if the research is not structured for high statistical validity.
  • Poor participant selection. This is a perennial failure, as there is a bias toward selecting groups who are more engaged with the product, or who are current customers, simply because they are easier to reach. Ever consider that much academic research relies on 18-to-22-year-old undergraduates as its core participant pool?
  • Poor participant screening. Especially online, people will say virtually anything if they think it will make them eligible for the research. This can lead to very dirty data (see the screening sketch below).
  • Poor segmentation. Neglecting groups of interest, or not appropriately weighting one segment or another, can significantly distort the research findings. You can even have ironclad statistical significance... but with the wrong customers.
  • Imprecise or confusing questions. "Have you or have you not encountered questions which are not designed to illuminate, but appear to be designed to obfuscate a straightforward answer?"
  • Survey fatigue. Much like the Tragedy of the Commons, researchers who are inconsiderate of participants and oversurvey them reduce the chances of those people participating in research in the future. This is a very real problem when you are dealing with a limited pool of participants to begin with.
  • ...and others

Note that none of the above are methodological failures; they are failures on the part of the researcher. While professionals and professors alike devote their lives to the art and science of research design, in our case we will look to perform quick, but "clean," research. In essence, we will seek to perform research which will not be thrown away after a month, but which may be built upon to create a research narrative accompanying our revisions to the offering over time.
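To make the screening and fatigue pitfalls more concrete, below is a minimal sketch of the kind of data-quality check a researcher might run before trusting survey results. It assumes responses sit in a pandas DataFrame; every column name and threshold here (duration_sec, the q1-q5 rating items, the 30%-of-median speed cutoff) is a hypothetical choice for illustration, not a standard from any particular survey platform.

    import pandas as pd

    # Hypothetical raw survey export: one row per participant.
    responses = pd.DataFrame({
        "participant_id": [1, 2, 3, 4],
        "duration_sec":   [612, 48, 540, 95],  # time to complete the survey
        "q1": [4, 5, 2, 3],
        "q2": [4, 5, 3, 3],
        "q3": [5, 5, 2, 3],
        "q4": [3, 5, 4, 3],
        "q5": [4, 5, 2, 3],
    })
    rating_cols = ["q1", "q2", "q3", "q4", "q5"]

    # "Speeders" finished implausibly fast relative to the median,
    # a common sign of disengagement or of gaming the screener.
    median_time = responses["duration_sec"].median()
    responses["speeder"] = responses["duration_sec"] < 0.3 * median_time

    # "Straight-liners" gave the same answer to every rating item,
    # which often signals fatigue rather than a real opinion.
    responses["straight_liner"] = responses[rating_cols].nunique(axis=1) == 1

    # Flag rather than silently drop, so the cleaning decision is documented.
    flagged = responses[responses["speeder"] | responses["straight_liner"]]
    print(flagged[["participant_id", "speeder", "straight_liner"]])

Flagging rather than deleting keeps the decision visible and reversible, which matters when the same limited pool of participants will be asked back for future rounds of research.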

What is freeing about research design is that if it is sound and done well, the results are the results. No explanation is needed for negative outcomes, no congratulations accepted for positive outcomes.

Our role is to think, structure, execute, and learn as much as possible. Only afterward may we interpret results.

A Spectrum of Options

In the following pages, we will examine a few methodologies and techniques especially well-suited to testing early-phase ideas, messages, and offerings.

When discussing the realities and underpinnings of these research techniques, we will consider where each falls along a few different spectrums (a summary sketch follows the list):

  • Speed. How long does it take to go from zero to final findings?
  • Cost. Where does the methodology fall on the cost spectrum? Do you have the ability to run a battery of projects on your offering and the offerings of competitors, or is it a more involved, limited-scope "deep dive" to explore an idea?
  • Level of insight. Are the insights provided by the findings likely to be shallow and superficial, or deep and subconscious?
  • Verbatims. Does the methodology provide brief participant sound bites or deep transcripts?
  • Remote/In-person. Can you start the research from your office and monitor results in real-time, or does the research require face-to-face interaction with participants?
  • Ideation. What are the chances participants will come up with related ideas or concepts to your offering during the course of the research?
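
One lightweight way to keep these spectrums in view while comparing methodologies is to record them in a simple structure, as in the sketch below. The MethodologyProfile type and every score shown are placeholder assumptions for illustration; they are not ratings drawn from the course material.

    from dataclasses import dataclass

    @dataclass
    class MethodologyProfile:
        """Illustrative 1 (low) to 5 (high) scores on the spectrums above."""
        name: str
        speed: int               # how quickly you reach final findings
        cost: int                # higher means more expensive
        insight_depth: int       # shallow and superficial vs. deep and subconscious
        verbatim_richness: int   # brief sound bites vs. deep transcripts
        remote_friendly: int     # fully remote vs. face-to-face required
        ideation_potential: int  # chance participants generate related ideas

    # Placeholder scores for two commonly contrasted approaches.
    survey = MethodologyProfile("Online survey", speed=5, cost=1,
                                insight_depth=2, verbatim_richness=1,
                                remote_friendly=5, ideation_potential=1)
    interview = MethodologyProfile("In-depth interview", speed=2, cost=4,
                                   insight_depth=5, verbatim_richness=5,
                                   remote_friendly=3, ideation_potential=4)

    for m in (survey, interview):
        print(f"{m.name}: speed={m.speed}, cost={m.cost}, depth={m.insight_depth}")

A structure like this makes the trade-offs explicit when you are deciding whether a fast, cheap battery of tests or a single deeper dive better fits the question at hand.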

I will also share some experiences with the methodologies and applications of technology you may find interesting. Some methodologies may be timeless, but there are ways of deploying them which can pay dividends for the research and generally make things more efficient.

Five word summary - Many potential research design pitfalls