Peytchev and Hill also successfully replicated a well-known study by Schwarz and
his colleagues, which asked people to report how frequently they watch
television using one of two response scales: one with what Schwarz and
colleagues called low-frequency alternatives, and one with high-frequency
alternatives.
And as you can see, the two scales are actually very similar, but
the low-frequency scale starts with an increment of half an hour, while
the high-frequency scale starts with an increment of up to 2.5 hours;
both scales then proceed in increments of half an hour.
What Schwarz and his colleagues found was that with the low-frequency scale,
more people picked a response option of less than two and
a half hours than with the high-frequency scale.
And they attributed this to respondents not really paying attention to
the numerical properties of the scale, but instead picking a position in the scale.
So, for example, if a respondent reasoned, "Well,
I watch about as much TV as most people, so I'll pick the middle option,"
that would lead to very different numerical values depending on the scale.
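To make that reasoning concrete, here is a minimal sketch in Python; the exact option wordings are an assumption reconstructed from the increments described above, not taken from the lecture slides or the original questionnaire.

```python
# Illustrative sketch: the option wordings below are assumed from the
# increments described above, not copied from the original study materials.
low_frequency_scale = [
    "up to 1/2 hour", "1/2 to 1 hour", "1 to 1 1/2 hours",
    "1 1/2 to 2 hours", "2 to 2 1/2 hours", "more than 2 1/2 hours",
]
high_frequency_scale = [
    "up to 2 1/2 hours", "2 1/2 to 3 hours", "3 to 3 1/2 hours",
    "3 1/2 to 4 hours", "4 to 4 1/2 hours", "more than 4 1/2 hours",
]

# A respondent who reasons "I watch about as much TV as most people, so I'll
# pick a middle option" ends up reporting very different amounts of viewing
# depending on which scale they happened to receive.
middle = len(low_frequency_scale) // 2
print(low_frequency_scale[middle])   # 1 1/2 to 2 hours
print(high_frequency_scale[middle])  # 3 1/2 to 4 hours
```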
This is exactly what Peytchev and Hill replicated:
they observed the same pattern when presenting these scales on a mobile device.
They found that with the low-frequency scale,
only 14% of people picked a value of more than 2.5 hours, while on
the high-frequency scale, 44% picked an option greater than 2.5 hours.
This is not surprising if you look at the scales: on the high-frequency scale,
picking more than 2.5 hours means picking anything besides the first option.
So there seems to be some evidence that the basic response processes are similar
in mobile web and other modes, at least according to these replication findings
by Peytchev and Hill. But there are features of mobile devices, and of the way
they are used, that really could affect measurement or response quality.
More recently, Antoun investigated this in a direct comparison between mobile
and conventional web surveys. He used a crossover design in which the same
respondents answered the same set of questions in both modes: they were either
first assigned to the mobile condition and then answered the questions on a PC,
or vice versa. This within-subjects comparison strengthens the ability to make
inferences, and it gives him a much more powerful kind of comparison than some of
the other studies that we've talked about.
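As a rough illustration of why this design is more powerful, here is a minimal sketch with entirely made-up data (the variable names and quality scores are hypothetical, not Antoun's): because each respondent answers in both modes, the mode effect can be estimated from within-person differences, which removes stable respondent characteristics from the comparison.

```python
import random

random.seed(42)

# Hypothetical respondents: each answers in both modes, with the order
# counterbalanced (half mobile-first, half PC-first), as in a crossover design.
respondents = []
for i in range(10):
    person_effect = random.gauss(0, 10)   # stable differences between respondents
    respondents.append({
        "order": "mobile_first" if i % 2 == 0 else "pc_first",
        "mobile_score": 50 + person_effect + random.gauss(0, 2),
        "pc_score": 50 + person_effect + random.gauss(0, 2),
    })

# Within-person differences cancel out the person effect, so the mobile-vs-PC
# comparison is far less noisy than comparing two separate samples would be.
diffs = [r["mobile_score"] - r["pc_score"] for r in respondents]
print(sum(diffs) / len(diffs))
```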
The findings were that there was no difference in satisficing.
Remember, these are mental shortcuts that survey respondents often take.
It's not that he didn't find any evidence of satisficing,
it's just that it was no different between the two types of web surveys.
There was one reversal.
He used the length of open responses, the number of words or characters,
as an indication of satisficing, the idea being that it is easier for
respondents to write less. He found that open responses were actually longer
on mobile than on PC, which is not what one might expect:
one might think that entering text on a small device is more difficult.
He doesn't really have an explanation, but that was the finding,
and it is considered a reversal.
But for the other measures of satisficing, such as straight-lining
(giving the same answer to a whole series of questions) and primacy effects,
there was really no difference between mobile and PC web.
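To make these indicators a bit more concrete, here is a minimal sketch of how two of them might be computed; the function names and example data are hypothetical illustrations, not Antoun's actual measures.

```python
def is_straight_lining(grid_answers):
    """True if a respondent gave the identical answer to every item in a grid."""
    return len(set(grid_answers)) == 1

def open_response_length(text):
    """Length of an open-ended answer in words; very short answers can signal satisficing."""
    return len(text.split())

# Hypothetical example responses:
print(is_straight_lining([4, 4, 4, 4, 4]))             # True -> possible straight-lining
print(open_response_length("It was fine, I guess."))   # 5 words
```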
There were also no differences in the disclosure of sensitive information.
We talked earlier about the fact that in this study,
the mobile respondents were far more engaged with their
surroundings than when they answered the questions on a PC.
And yet, there was no effect on disclosure.
So the concern that being around others, or being away from home,
would inhibit honest responses to sensitive questions
seems not to be borne out. But
Antoun did find a couple of pieces of evidence that
response quality is lower on mobile devices than on PCs.
In particular, answers to questions about the respondent's age and
birth year were less accurate.
The reason he used these questions is that he had the true values available in the sampling frame.
What he concluded was that it was the widgets, the data-entry features
that respondents used on the mobile device,
that were responsible for the lower accuracy.
Respondents used a slider to report their age and a date picker to indicate their birth year,
and the idea is that these require considerable dexterity and
can be challenging to operate on a small screen.
So his conclusion was that while most of the indicators of data quality
that survey researchers use were no different between mobile and
PC web, where screen size was an issue there was a difference, and
it was in the direction of apparently lower quality in mobile than in PC data collection.
So the small screen seems to be the culprit in the Antoun study.
This is actually consistent with some of the early evidence from Peytchev and Hill.
One finding in the literature that they were unable to replicate was
a visual contrast effect that Couper and colleagues had demonstrated.
We talked about this when we discussed measurement error in conventional
web surveys, and you may recall the finding
was that people's self-rated health was affected by an image, on the same screen or
actually on the previous screen, of a person who was either quite healthy-looking or
quite unhealthy-looking.
Peytchev and Hill ran essentially the same study and
were unable to produce the same contrast effect, that is, the finding that
people rate their health higher when the image depicts someone who is quite
unhealthy than when it depicts someone who is quite healthy.
So they were unable to replicate this effect with similar images, and
they suggested, first, that this was a positive outcome,
because this is really a source of error that you would hope would not occur.
But it is essentially a null finding, which makes it hard to interpret.
Does this mean that there is never an effect of images on a mobile device,
on a small screen?
That is one limitation of this finding.
And if the take-home message is that images are not noticed on a small screen,
that is not necessarily good news,
because even when designing for a small screen,
one would like to be able to present images judiciously.
Over all the studies we've now discussed,
response quality seems very much the same in mobile and
PC modes as long as the mobile response format is designed for a small screen.
One other aspect of Mobile Web surveys is that, particularly in the early
days of Mobile Web responding, designers actually intended for
all responses to be entered through conventional devices, not mobile devices.
So responses via mobile devices were considered unintended, and
really unwanted, and designers did not optimize the design for mobile devices.
Bosnjak and colleagues compared these so-called unintended mobile
respondents to desktop and laptop respondents in two meta-analyses,
that is, they looked across a number of studies and summarized the findings.
In one of the meta-analyses,
they found that the average participation rate of unintended mobile users was 5.8%.
They found that younger respondents were generally more likely to
participate via mobile devices, and
that males were more likely than females to do so.
The break-off rate was actually higher for mobile than for desktop devices, and
this could well be because the design was not optimized for mobile devices;
that is, the screens were not designed for a small display.
Despite the higher break-off rate for mobile than desktop devices, the number
of pages completed before break-off was actually higher on mobile than on desktop.
It is not clear exactly why, but it could be that these respondents were quite
motivated to complete the questionnaire but eventually gave up.
In the other meta-analysis, Bosnjak and his colleagues found that about
8% of respondents used a mobile device, which was unintended, at least once,
and that 2.1% of respondents participated via a mobile device in at least seven
out of eight waves of this longitudinal study.
So in terms of prevalence, this unintended mode of responding was relatively common.
They also found that older respondents and women were less likely to
use mobile devices, consistent with the findings of the other meta-analysis.
So to summarize our discussion of Mobile Web surveys: smartphone and
tablet coverage of the general population is far from complete, but it is growing rapidly, and
these devices are potentially an alternative to in-home internet access for at least
certain subgroups of the population who may have no other way to go online.
We talked about SMS as a contact mode, and it does seem to be promising;
that is, sending people a text message about an upcoming web survey seems
to be more effective than other contact modes such as email or a physical letter.
The other advantage of SMS is that it may allow sampling of phone
numbers, which could overcome a number of the coverage issues that we spoke about
with web surveys.
The difficulty, or the challenge, in using SMS is that
it may be hampered by legal restrictions: unsolicited
SMS messages may be prohibited in certain countries,
and researchers may be required to obtain respondents' prior consent to
being texted before trying to contact them that way.
When we looked at measurement, there was really very little evidence that mobile
and conventional web surveys differed in the quality of individual responses;
that is, measurement error seemed to be about the same in the two
types of web surveys.
Finally, it may make sense to use Mobile Web for short location- or
context-based surveys, rather than for the longer questionnaires that are
the norm, or at least more common, with conventional web surveys.