Welcome back to Securing Digital Democracy.
In today's lecture, we're going to focus again on some of the human elements of secure and trustworthy voting. Because voters are people, voting systems have to be designed so that they're easy for people to use.
I mean, this sounds obvious, but if voting systems aren't easy to use, lots of things
can go wrong and some of those things have really important security implications.
Before we begin, I'd like to remind you to keep up with the reading in Broken Ballots; the chapters for this week are posted online.
The first element of human factors that I want to talk about today is usability. Usability has been studied very extensively in computing systems, and in voting systems too. It consists of several different things: how quickly people can use a particular system, and how satisfied they are with their interactions with it. But usability also includes, and this will be the focus of our discussion today, how correctly and accurately people can interact with a particular system.
The correctness and accuracy of a voting system's interface ends up having a big impact on how accurately votes can be counted.
To see why, let's think again about our definition of the integrity security property for voting. We defined integrity with two parts: first, votes have to be cast as intended, and second, they have to be counted as cast.
The cast-as-intended requirement turns out to be largely a usability problem: does the system provide an interface that allows voters to correctly tell it who they wanted to vote for?
Now, we've seen several examples already where systems just couldn't do that. One example was the punch card voting system used in Florida during the 2000 presidential election: the famous butterfly ballot. Because of the design of the ballot, thousands of voters apparently voted for a candidate they didn't intend to. We can see a little bit of why this design is bad right here, but for a more accurate view, let's think about how the system would actually look to a voter.
It's not just presented flat in front of the voter's face; it's sitting on a table, angled slightly towards the voter. If you look at the system at that angle, the usability problems are even easier to spot. Look at how poorly aligned the punch card locations are with the candidates. This kind of design, where the ballot itself creates obstacles to being used correctly, can lead to major problems, as it did in Florida. There are other problems in this design, too.
You can see that the choices alternate between the left and right sides of the page, another thing that likely compounded the voter confusion.
We've seen examples of this kind of usability problem with DRE electronic voting machines as well.
I'm going to show you an example that comes from Sarasota, Florida, in the 2006 general election. The Sarasota ballots had a multiple-page design, like the Diebold DREs we saw in previous lectures, and voters voted by touching square-shaped targets on the screen on each page. I'm going to flip through a couple of pages of this ballot now. I want you to pay close attention, because afterward I'm going to ask you a question about them.
Okay, let's see if you were paying attention.
Did you notice the race for the House of Representatives?
It's right there at the top of the second page.
It turned out that a lot of voters didn't notice that race.
The reason we think that is the undervote rate: the fraction of voters who didn't mark the permitted number of candidates or who just left the race blank. For the governor's race, located on the bottom half of this page, the undervote rate was one percent. But for the race for Congress, for US Representative, the undervote rate was fourteen percent. Fourteen percent of voters left that very important race blank.
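To see how stark that gap is in absolute terms, here's a quick back-of-the-envelope calculation. The turnout figure is invented for illustration; only the two undervote percentages come from this race.

```python
# Hypothetical turnout figure; only the undervote percentages are from the race.
ballots_cast = 100_000

for race, undervote_rate in [("Governor", 0.01), ("US Representative", 0.14)]:
    blank = round(ballots_cast * undervote_rate)
    print(f"{race}: {blank:,} of {ballots_cast:,} ballots left blank")

# Governor: 1,000 of 100,000 ballots left blank
# US Representative: 14,000 of 100,000 ballots left blank
```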
Now, why was that? Well, one possibility is that the machines were dishonest. But based on testing that people in the usability field have done since this race, that kind of fraud doesn't seem like the most likely explanation.
Instead, the reason we think fourteen percent of voters appeared to miss this race has to do with the design of the ballot.
Look at the way these two pages of the ballot were laid out: the first page had only one race, and the second page had two races on it. That's the first usability problem here: after the first page, the voter was expecting only one race per page. But there's an even more complicated and interesting problem going on with this design, an effect called banner blindness. Look how prominent this heading in the middle of the ballot is: the heading that says "State."
This is the first thing your eye is drawn to.
Banner blindness is an effect that's been very widely documented in usability studies: people tend to subconsciously ignore text, and especially rectangular shapes, that appear above the main heading on a page. If you're on a website that has banner-shaped advertisements above the content, you usually ignore them, and every such encounter just trains you to do it more.
The banner blindness effect is so strong that in studies after this election, we've been able to reproduce the same effect with volunteer voters: they simply don't see what's up there. So, let us know in the forums if you also missed that race when you looked quickly at the pages.
Another problem with these machines, one that compounded this sort of thing, was very slow response: the machines displayed a lot of lag moving from page to page.
Furthermore, there's often a problem with DREs that the displays aren't accurately calibrated. In other races, this has been documented as causing what voters perceive as vote flipping: you touch one place on the screen, and the machine registers and shows a check in another box. That's usually caused by a miscalibrated display. Unlike, say, your iPad or your smartphone, the displays on voting machines tend to use a much older technology that requires periodic manual calibration. Often this step is just skipped in the election procedures, or it's not done correctly, and that can leave the display miscalibrated.
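To make this concrete, here's a minimal sketch of the kind of linear raw-to-screen mapping a resistive touchscreen driver applies, and of how stale calibration constants can shift a touch into the wrong target. Every constant and coordinate below is invented for illustration; none of it comes from real voting machine firmware.

```python
# Minimal sketch of resistive touchscreen calibration (all values invented).

def map_touch(raw_x, raw_y, cal):
    """Linear calibration: screen = (raw reading - offset) * scale."""
    screen_x = (raw_x - cal["x_offset"]) * cal["x_scale"]
    screen_y = (raw_y - cal["y_offset"]) * cal["y_scale"]
    return screen_x, screen_y

# Constants measured when the machine was properly calibrated...
good_cal = {"x_offset": 120, "x_scale": 0.25, "y_offset": 80, "y_scale": 0.25}
# ...versus constants that drifted and were never re-measured.
stale_cal = {"x_offset": 120, "x_scale": 0.25, "y_offset": 240, "y_scale": 0.25}

raw = (2120, 1680)  # voter touches the center of candidate A's checkbox
print(map_touch(*raw, good_cal))   # (500.0, 400.0) -- inside candidate A's box
print(map_touch(*raw, stale_cal))  # (500.0, 360.0) -- registers in the box above it
```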
A miscalibrated display slows the voter down further, adds aggravation, and adds confusion to the process, creating still more usability problems.
Finally, notice that this ballot is 21 pages long. The voter has to go through page after page, and the likelihood of mistakes is compounded as the voter becomes fatigued. All of these factors combine to make for significant usability problems in this style of DRE.
Let's look at another kind of usability problem with DREs, though. Remember from our discussion of the DRE voting process that the last screen voters are presented with is typically a review screen, like this one, that lists all of the choices the voter has made. Based on what you've learned about usability and what you know about human nature, what might limit the usefulness and effectiveness of this kind of review screen in actual voting? Let me give you a chance to think about that, and then we'll come back and talk about it.
While you were gone, we took the liberty of setting up this DRE voting machine, which I'm going to use to demonstrate some of the problems we'll talk about. But before we get to that, I want to look again at the review screen you saw in the slides. This kind of review screen was introduced to help voters catch any mistakes or errors they made on the ballot.
But it's an open question whether voters will actually take the time, and have the concentration, to catch errors on such a screen. A related question this raises is how many voters would check the printout on a VVPAT device.
So, we've taken the Diebold AccuVote-TSX voting machine that I showed you the other week and added this device, which wasn't here before. This is a VVPAT add-on that Diebold made to allow jurisdictions to have a printed paper trail to go along with the election; it just attaches to the side of the machine. While the voter is at the review screen, the unit prints a record of each of the voter's selections on a cash-register-style tape.
Now, the design of this VVPAT unit raises several other usability questions. It actually creates an impediment to voters checking the tape, because in order to even see the tape, you have to open this door on the VVPAT unit. And there's no button or switch or anything to force you to do that before you cast your ballot; it's entirely voluntary. So, a large fraction of voters aren't even going to bother to open the door.
Even if you do open it, actually seeing the choices printed on the tape might be pretty hard, because they're printed in very small, crude type on this cash-register-style paper. Diebold has provided a little magnifying glass; in fact, the print is so small that unless you have excellent vision, you need it to make anything out. But even then, the design of the magnifier is such that unless you're looking at it head on, you're not going to be able to see the printing. So, there are a lot of physical obstacles to usability here.
But there's another kind of problem with this VVPAT design, and it has to do with the review screen issues; it's actually a pretty subtle but pretty scary attack. The VVPAT idea is that if we have a paper trail, a physical piece of paper that the voting machine's software can't go back and change, then that provides a pretty strong defense against software cheating, because people can audit the results on the tape and make sure they match the count on the computer.
But what if you wanted to commit fraud on a device that had a VVPAT? How could you go about doing that? Well, one thing you might try is to have it print out some choices that weren't what the voter selected and hope the voter wouldn't catch it. If very few voters actually look at the tape, that gives you the chance to commit some level of fraud without getting caught. But what if some voters do look at it?
I mean, that still provides a defense, right? If one percent of voters actually look at the tape, and they're a random selection of voters, then it's as if you're getting a one percent audit on the correspondence between what's on the screen and what's on the tape.
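Even a small random audit like that is surprisingly powerful. Here's a quick calculation, under the simplifying assumption that each voter inspects the tape independently with probability one percent:

```python
# If each voter checks the tape independently with probability p, a machine
# that misprints k ballots escapes detection only if none of those k voters looks.
p = 0.01                      # 1% of voters check the tape
for k in [10, 100, 500]:      # number of ballots the machine misprints
    p_caught = 1 - (1 - p) ** k
    print(f"misprint {k:>3} ballots -> caught with probability {p_caught:.0%}")

# misprint  10 ballots -> caught with probability 10%
# misprint 100 ballots -> caught with probability 63%
# misprint 500 ballots -> caught with probability 99%
```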
that what it's showing on the review screen matches what's on the tape.
That way if there is, it, it has to make sure that those match.
Otherwise, if someone, if someone notices a discrepancy.
If a voter checks and notices, they can just get a poll worker to come over and
see that they're not the same. And, that's going to be an obvious way to
catch, that the software is cheating. So, that the, if the machine wants to
cheat, it has to show the same fraudulent choice, something the voter didn't intend
to vote for, on the review screen, as it prints on the tape.
Now, it might just decide to do this for a certain fraction of voters. But if the poll workers have good procedures in place, that would be something they could catch, because they would notice a very high rate of voters having to go back and change their choices. So, what the machine might do, if the designers of the vote-stealing software were clever, is take into account the usability failings of the voting population. It could try, for instance, to cheat only in cases where it thinks the voter is not going to check the review screen. We might see from research that voters who are really in a hurry, for instance, tend not to check the screens. So, if you fly through everything, that might increase the chances that the machine tries this kind of attack and changes something on the screen. The theory here is that the machine decides whether or not to cheat based on whether it thinks you're going to look. Another thing it could try is to cheat only if it thinks no one's going to believe the voter. If you were having a lot of trouble, if you had to press several times to get a button to respond, that might be a sign that you're a voter who's not familiar with touch screen technology, or maybe an older voter who's having difficulty. In a case like that, even if you do notice the fraud and complain, the poll workers may simply assume you made a mistake.
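To tie these ideas together, here's a minimal sketch of what such vote-stealing logic might look like. Everything in it is hypothetical: the names, thresholds, and behavioral signals are invented for illustration, not taken from any real machine or study.

```python
# Hypothetical sketch of vote-stealing logic that exploits usability failings.
from dataclasses import dataclass

@dataclass
class Session:
    seconds_per_page: float   # how quickly the voter is moving
    repeated_touches: int     # touches that didn't register the first time

def should_cheat(s: Session) -> bool:
    # Cheat only when the voter seems unlikely to check the review screen
    # (hurried) or unlikely to be believed if they complain (struggling
    # with the touch screen).
    return s.seconds_per_page < 5 or s.repeated_touches > 3

def finalize(s: Session, choices: dict, race: str, favored: str) -> dict:
    final = dict(choices)
    if should_cheat(s):
        final[race] = favored
    # Crucially, the SAME (possibly fraudulent) choices would go to the
    # review screen, the VVPAT tape, and the electronic record, so a voter
    # who does check sees no discrepancy between screen and paper.
    return final

ballot = {"US Representative": "Candidate A"}
hurried = Session(seconds_per_page=3, repeated_touches=0)
careful = Session(seconds_per_page=20, repeated_touches=0)
print(finalize(hurried, ballot, "US Representative", "Candidate B"))
# {'US Representative': 'Candidate B'}  -- stolen; voter unlikely to notice
print(finalize(careful, ballot, "US Representative", "Candidate B"))
# {'US Representative': 'Candidate A'}  -- honest; too risky to cheat
```

Notice that nothing here ever creates a screen-versus-tape discrepancy, so the only remaining line of defense is a voter who actually reads the record and is believed when they object.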