Is Parallel Testing an Effective Method of Assuring the Accuracy of Electronic Voting Machines?
General Reference (not clearly pro or con)
The National Academy of Sciences' 2005 report "Asking the Right Questions About Electronic Voting," stated:
"Parallel testing, which is intended to uncover malicious attack [sic] on a system, involves testing a number of randomly selected voting stations under conditions that simulate actual Election Day usage as closely as possible, except that the actual ballots seen by 'test voters' and the voting behavior of the 'test voters' are known to the testers and can be compared to the results that these voting stations tabulate and report... Note also that Election Day conditions must be simulated using real names on the ballots (not George Washington and Abe Lincoln), patterns of voter usage at the voting station that approximate Election Day usage (e.g., more voters after work hours, fewer voters in mid-afternoon, or whatever the pattern is for the precinct in question), and setting of all system clocks to the date of Election Day. Parallel testing is a check against the possibility that a system could recognize when it is being tested at any other time."
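The reconciliation step the Academy describes, comparing the ballots the testers know they cast against the totals the voting station reports, can be sketched in a few lines of Python. The candidate names and counts below are invented purely for illustration:

```python
from collections import Counter

# Ballots the "test voters" cast (known to the testers in advance).
cast_ballots = ["Smith", "Jones", "Smith", "Garcia", "Smith", "Jones"]

# Tallies the voting station reports at the close of the simulated election.
reported_tally = {"Smith": 3, "Jones": 2, "Garcia": 1}

expected_tally = Counter(cast_ballots)

# Compare over the union of candidates, so both dropped and invented
# votes show up; any mismatch is evidence of vote switching or loss.
all_candidates = set(expected_tally) | set(reported_tally)
discrepancies = {
    c: (expected_tally[c], reported_tally.get(c, 0))
    for c in all_candidates
    if expected_tally[c] != reported_tally.get(c, 0)
}
print(discrepancies)  # prints {} -> the tallies reconcile
```

An empty result means the machine reported exactly what was cast; any entry would name the candidate and the expected-versus-reported pair.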
The Maryland State Board of Elections issued a report intended as an informational resource to the residents of the state titled Voting Systems, available on the State Board of Elections website (accessed June 14, 2006), which explained:
"[Parallel] testing is a method of testing an electronic voting unit by producing an independent set of results that can be compared against the results produced by the voting unit and is cited as a best practice by election administration and computer experts... [On] Election Day, over 1,300 votes were cast during parallel testing... The vote totals and hand-tallies for the...units also matched. [This] kind of testing confirms the accuracy of the voting unit in recording and tabulating votes. Given the fact that every voting unit in the State uses the exact same software, voters in Maryland can be confident that their votes are accurately counted."
Michael Shamos, PhD, JD, Distinguished Career Professor of Computer Science at Carnegie Mellon University, wrote in his paper "Paper v. Electronic Voting Records - An Assessment," published in the Proceedings of the 14th ACM Conference on Computers, Freedom and Privacy, 2004:
"More than 15 years ago, in a Pennsylvania certification report, I wrote of the possibility that a DRE machine could contain an on-board clock and that an intruder could rig the machine so that it behaved perfectly in all pre- and post-election tests, but switched votes during an election...
[A] solution is to employ parallel testing... At the normal close of polls, the votes on the test machine are tabulated and compared with the expected totals. If any software is present that is switching or losing votes, it will be exposed... [Parallel testing] is designed to detect the nightmare scenario in which some agent has tampered with every machine in the jurisdiction undetectably, a major risk cited by DRE opponents to justify the addition of paper trails."
The Brennan Center for Justice was commissioned by the Leadership Conference on Civil Rights to prepare the report "Recommendations of the Brennan Center for Justice and the Leadership Conference on Civil Rights for Improving Reliability of Direct Recording Electronic Voting Systems," 2004, which stated:
"[Parallel] testing is the only procedure available to detect non-routine code bugs or malicious code on DRE systems. In addition to laboratory testing during the certification process, it is essential that DRE systems get tested during real elections, using so-called parallel testing procedures. Parallel testing is needed for two separate purposes: (a) to test the myriad parts of the system that get used during a real election but not in a laboratory testing situation, and (b) to check for the possible presence of malicious code or insider manipulation that is designed specifically to avoid detection in a laboratory or testing situation, but to modify votes surreptitiously during a real election. Where possible, parallel testing should be performed in every jurisdiction, for each distinct kind of DRE system."
R&G Associates stated in the "Report of Findings for the California General Election Parallel Monitoring Program," prepared at the request of then California Secretary of State Kevin Shelley and available on the California Secretary of State's website, released Nov. 30, 2004, to assess the parallel monitoring program employed during the Nov. 2, 2004 California General Election:
"[F]ederal, state, and county accuracy testing of Direct Recording Electronic (DRE) voting systems occurs prior to elections and does not mirror actual voting conditions. The...Parallel Monitoring Program was developed as a supplement to the current logic and accuracy testing processes. The goal was to determine the presence of malicious code by testing the accuracy of the machines to record, tabulate, and report votes using a sample of DRE equipment in selected counties under simulated voting conditions on Election Day... [The] analysis of the data and the reconciliation of actual to expected results began on November 3, 2004. The analysis included a review of the discrepancy report for all counties and the videotapes, as necessary, to determine the source of all discrepancies. Results of the reconciliation analysis indicate that the DRE equipment tested on November 2, 2004 recorded the votes as cast with 100% accuracy."
Kevin Shelley, former Secretary of State of California, released the "Report on Mar. 2, 2004 Statewide Primary Election" in Apr. 2004, which identified problems in the 2004 California Primary Election and included the following explanation:
"[Parallel monitoring] is only able to assess the ability of the software tested to generate accurate results from the votes entered. Parallel monitoring was not designed to detect whether all touch screens used at the March Primary recorded votes accurately. Nor does it exclude the possibility that other sequences of votes or behaviors might trigger a different result. [Further,] parallel monitoring does not address whether (a) touch screen machines were running firmware with uncertified modifications or patches, (b) security holes exist in the firmware that could be exploited, (c) machines in use on Election Day were tampered with and/or used in a manner that exploits such security holes, or (d) systems tabulating the votes were tampered with, apart from the accuracy of the DRE machines."
Ronald Crane, JD, Software Engineer, and Arthur Keller, PhD, Co-founder and Secretary of the Open Voting Consortium, in their Dec. 2005 paper "A Deeper Look: Rebutting Shamos on e-Voting," available at the Verified Voting Foundation website, stated:
"While [parallel] testing is valuable, and should be used during every election, it can fail to achieve its purpose in a number of ways. First, the testing conditions, including the vote stream, must be truly indistinguishable from those that obtain during regular voting... A cheat might observe the vote stream and notice that votes are being cast much more rapidly than during regular voting, or that a block of votes was cast during the first hour of the election, then none for the remainder of the day... It is very difficult to design a test that, even when executed perfectly, outwits a determined adversary.
Second, a communications device could be used to tell the voting station how to detect the latest testing procedures. Third, the test must actually be performed diligently and in a statistically-significant set of precincts... Given this, it seems likely that parallel testing often will fail of its purpose. This failure will be particularly acute if the first few tests 'find nothing,' since that will create a natural tendency to view the process as pointless. Fourth, elections officials must actually believe, and act properly upon, the suggestion or discovery of cheating. Far too often officials treat anomalies (such as the one in which 600 voters somehow cast 3,900 votes for a Presidential candidate in Ohio - using DREs) as 'glitches,' and merely remove the offending machine(s) from service."
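The first failure mode Crane and Keller describe is easy to make concrete: a rigged machine needs only a crude statistical check on the timing of the vote stream to guess it is being tested. The heuristic and thresholds below are entirely hypothetical, a sketch of the attacker's logic rather than any known exploit:

```python
def looks_like_a_test(vote_timestamps_hours, poll_hours=13.0):
    """Crude heuristic a rigged machine might apply to spot a parallel test:
    votes cast far faster than real turnout, or bunched into an early block
    with nothing for the rest of the day."""
    if len(vote_timestamps_hours) < 2:
        return False
    rate = len(vote_timestamps_hours) / poll_hours  # votes per hour
    last_vote = max(vote_timestamps_hours)
    # Hypothetical thresholds: a single real station rarely sustains more
    # than ~30 votes/hour, and real voting continues until polls close.
    return rate > 30 or last_vote < poll_hours / 2

# A sloppy test session: 100 votes all cast within the first hour.
sloppy = [i * 0.01 for i in range(100)]
print(looks_like_a_test(sloppy))  # prints True -> the cheat stays dormant
```

A test stream spread realistically across all thirteen polling hours would pass this check, which is exactly why the authors insist the simulated vote stream be indistinguishable from Election Day usage.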
Ronald Rivest, PhD, Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), wrote in a Dec. 24, 2004 communication on an Institute for Electrical and Electronics Engineers (IEEE) online discussion group:
"A security mechanism that can 'detect' errors can be not terribly helpful if there is no rational way to repair or correct those errors. Parallel testing, by itself, provides no good repair or corrective mechanism; it is for problem detection only. Parallel testing, coupled with a corrective mechanism (such as an independent record of voter's intent), may also be better than just the corrective mechanism alone."
Verified Voting Foundation stated in an Aug. 23, 2005 letter to Bruce McPherson, California Secretary of State:
"[Neither] logic and accuracy [testing] nor parallel monitoring are sufficient to detect all types of malfunctions or tampering, nor do they provide a means of recovery in the case where a malfunction or tampering is discovered. For example, parallel monitoring on election day would not have discovered the type of DRE voting system malfunction as occurred in Carteret County, North Carolina in the November 2004 election. In that incident, over 4,000 valid votes were irretrievably lost by a DRE... [Also,] parallel monitoring is typically conducted on only a small number of machines (usually representing much less than 1% of the machines deployed in each county) and a small number of DRE counties. It does not provide the same level of statistical sampling as provided by the 1% mandatory audit."
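The letter's sampling objection can be quantified with a standard hypergeometric calculation: the probability that a small random sample of machines contains at least one compromised unit. The county size and compromise rate below are hypothetical:

```python
from math import comb

def detection_probability(total_machines, compromised, sampled):
    """Chance that a uniformly random sample of `sampled` machines
    includes at least one of the `compromised` ones (hypergeometric)."""
    clean = total_machines - compromised
    if sampled > clean:
        return 1.0  # sample cannot avoid every compromised machine
    # P(miss all) = C(clean, sampled) / C(total, sampled)
    return 1 - comb(clean, sampled) / comb(total_machines, sampled)

# Hypothetical county: 1,000 DREs, 50 of them rigged (5%), 5 tested (0.5%).
p = detection_probability(1000, 50, 5)
print(round(p, 3))
```

Under these assumed numbers, testing half a percent of the machines catches the tampering less than a quarter of the time, which illustrates why the letter contrasts parallel monitoring with the broader 1% mandatory audit.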