When you use secondary research, you need to distinguish between valid data and
data that lacks credibility.
Let's discuss some tips on how to check the validity of the research.
First and foremost is, obviously, the source.
Knowing that the source or the site is credible is one thing.
But when you search for secondary research, you must also look at
the original source, because a person could be extrapolating from
a source or making judgments based on it.
Ultimately, you've got to find where the data originated, and
after you've done that, you also need to look for convergence.
If multiple people are saying the same thing and referencing the same source,
you've got to find that origin, as that makes for more valid secondary research.
We rarely use a blog post for research, for
example, unless we go to the blog's original source.
Doing otherwise can be very dangerous.
So all these factors play into validity.
But usually, when we're searching online, we're finding interpretations.
Or we're finding the actual report that was done.
At that point, if it's not a name-brand company like Roper or Gallup, and
it's instead some company without established credibility,
you've got to look for the motivations of that company.
That company may be doing biased research so
it can sell its product; you've got to check for that.
Many times, when you put together secondary sources and
draw conclusions, you've got to look for the sponsor or the original source.
It could be Joe's House of Mugs.
If they're saying mug sales are going to be skyrocketing in 2018
because of the ceramic industry and the possibility of lower
material costs, you should greatly question their objectivity.
We could take that with a grain of salt, but if other people without
bias are saying the same thing, then it adds to the weight of the research.
The more dots you can connect along the way, the more likely
the research is credible.
That said, suppose you go back to the original source, and it
is a questionable source, but everybody's drawing from that questionable source.
I came across one article all about shopping during the holidays and
employee attitudes about shopping during the holidays.
Everybody was citing the same study, but when you traced it back,
it was all coming from a single source.
That source was an HR firm and they had done eight interviews with employees.
It wasn't 800 or 8,000, it was 8 total.
From those eight interviews they drew what they thought was a very safe
conclusion about holiday shopping, and the media latched onto it.
Outlets built off each other, and it snowballed.
Soon everybody was saying people shop in a particular way during the holidays, or
have certain beliefs, but
it was based on this one study with woefully inadequate data.
In that case, tracking the secondary research back to its original source
was very valuable for judging the validity of the conclusions,
which turned out not to be valid at all.
Weighing your secondary research involves getting a sense of
how much risk your client can afford for the project.
It comes down to how much risk you allow in your secondary research, or how
much you can extrapolate from it. You're connecting a lot of dots,
and all those dots connect in a particular way.
Hopefully it's not like the example where it all came from the same place.
But if you've got a lot of dots or arrows pointing in the same direction,
you can draw some conclusions, and the more arrows point that same way,
the more risk you can afford to take.
At that point you can advance your thinking to say,
okay we've learned a lot from this secondary research.
We've learned a lot from the internal research.
Now we've got to take a step further and really formulate what we want to do.
Where are the gaps in our knowledge?
Where are the blind spots in our mirror, and so on?
In some situations a client may assume they need one type of research, or
even hire you to do a particular kind of research that turns out not to be what is
actually needed.
Doing internal or secondary research can reveal this during the planning
process, so you don't waste a lot of time, effort, and money.
Here's an example.
Several years ago I was working for
a nonprofit that came out of President Clinton's administration.
It was a job service program and I was contracted to do
a customer satisfaction survey of their volunteer leaders.
The organization provided me with internal research about what they had done in
the past, how their volunteer leaders reviewed each other in 360 reviews, and
other types of information that they passed along to me.
Reviewing the internal research, I quickly learned that what was needed wasn't
a customer satisfaction survey; it was an organizational structure study.
The internal research revealed that there were some leadership organizational
problems that needed to be addressed as part of the research.
Just doing a customer satisfaction survey, or just talking to volunteers or
leaders within the organization, would not address the issues
revealed by the internal research.
If you looked at all the past research that had been done,
there were a lot of signs saying you cannot just survey volunteer leaders.
You've got to survey people within the organization, paid staff, and
other folks like that, and really try to get at a different measurement
process, because the problem was much bigger than one specific group.
So in that case, a review of internal research during the planning stage
revealed that the client needed to shift away from what they thought was needed
to a different research approach altogether.
This, too, shows the value of doing internal and
secondary research when you initiate the market research process.