What Are the Stages of Testing Electronic Voting Machines?


General Reference (not clearly pro or con)
The six stages of testing electronic voting machines are:

1.   Manufacturer
2.   Federal
3.   State
4.   Acceptance
5.   Logic and Accuracy
6.   Parallel

1. Manufacturer:

Cathy Cox, the former Georgia Secretary of State, released a document titled "Multilevel Equipment Testing Program Designed to Assure Accuracy and Reliability to Touch Screen Voting System," available on the Georgia Secretary of State's website (accessed Feb. 27, 2007), which stated:

"Before leaving the factory, each touch screen terminal receives a diagnostic test. Upon arrival at Diebold's central warehouse in Atlanta, each unit was put through a diagnostic sequence to test a variety of functions, including the card reader, serial port, printer, the internal clock and the calibration of the touch screen itself."

Feb. 27, 2007 - Cathy Cox, JD 
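A factory diagnostic of this kind can be pictured as a fixed checklist run against each terminal before it ships. The sketch below is purely illustrative and assumes hypothetical check functions for the components Cox lists; it is not vendor code.

```python
# Illustrative per-terminal diagnostic sequence (hypothetical; not Diebold's code).
# Each check function is assumed to return True when its component passes.

def check_card_reader(): return True        # stand-in: read/write a test voter card
def check_serial_port(): return True        # stand-in: loopback test on the serial port
def check_printer(): return True            # stand-in: print and verify a test page
def check_internal_clock(): return True     # stand-in: compare clock drift to a reference
def check_touch_calibration(): return True  # stand-in: verify touch targets register correctly

DIAGNOSTICS = {
    "card reader": check_card_reader,
    "serial port": check_serial_port,
    "printer": check_printer,
    "internal clock": check_internal_clock,
    "touch screen calibration": check_touch_calibration,
}

def run_diagnostics(terminal_id):
    """Run every check and report which components, if any, failed."""
    failures = [name for name, check in DIAGNOSTICS.items() if not check()]
    status = "PASS" if not failures else "FAIL: " + ", ".join(failures)
    print(f"Terminal {terminal_id}: {status}")
    return not failures

run_diagnostics("GA-000123")
```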

Doug Jones, PhD, Associate Professor of Computer Science at the University of Iowa, wrote in his article "Testing Voting Systems," available on his website (accessed Feb. 23, 2007):

"All responsible product developers intensively test their products prior to allowing any outsiders to use or test them. The most responsible software development methodologies ask the system developers to develop suites of tests for each software component even before that component is developed. The greatest weakness of these tests is that they are developed by the system developers themselves, so they rarely contain surprises."

Feb. 23, 2007 - Douglas W. Jones, PhD 
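Jones's point about developer-written test suites describes the familiar test-first discipline: the expected behavior of a component is written down as executable checks before the component itself exists. A minimal, hypothetical illustration for a vote-tallying component:

```python
import unittest

def tally(ballots):
    """Count votes per candidate; in a test-first process this is written to satisfy
    tests that were specified beforehand."""
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

class TallyTests(unittest.TestCase):
    # Tests like these are drafted by the developers themselves, which is why,
    # as Jones notes, they rarely contain surprises.
    def test_empty_ballot_box(self):
        self.assertEqual(tally([]), {})

    def test_counts_each_choice(self):
        self.assertEqual(tally(["A", "B", "A"]), {"A": 2, "B": 1})

if __name__ == "__main__":
    unittest.main()
```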

2. Federal:

The U.S. Election Assistance Commission released a media advisory on Dec. 13, 2005 which stated:

"The U.S. Election Assistance Commission (EAC) unanimously adopted the 2005 Voluntary Voting System Guidelines, which significantly increase security requirements for voting systems and expand access, including opportunities to vote privately and independently, for individuals with disabilities.

The guidelines will take effect in December 2007 (24 months), at which time voting systems will no longer be tested against the 2002 Voting System Standards (VSS) developed by the Federal Election Commission. However, states may decide to adopt these guidelines before the effective date...

These guidelines are voluntary. States may decide to adopt them entirely, in part or not at all. States may also choose to enact stricter performance requirements for voting systems. Currently, at least 39 states require voting systems to be certified at the national level."

Dec. 13, 2005 - US Election Assistance Commission (EAC) 

The 2005 Voluntary Voting System Guidelines included the following overview:

"The United States Congress passed the Help America Vote Act of 2002 (HAVA) to modernize the administration of federal elections, marking the first time in our nation's history that the federal government has funded an election reform effort... Section 202 directs the EAC [U.S. Election Assistance Commission] to adopt voluntary voting systems guidelines, and to provide for the testing, certification, decertification, and recertification of voting system hardware and software. The purpose of the guidelines is to provide a set of specifications and requirements against which voting systems can be tested to determine if they provide all the basic functionality, accessibility, and security capabilities required of voting systems."

2005 - Voluntary Voting System Guidelines 

Deirdre Mulligan, JD, Director of the Samuelson Law, Technology and Public Policy Clinic at the University of California, Berkeley School of Law, and Joseph Lorenzo Hall, PhD candidate at the University of California, Berkeley, stated in their 2004 white paper "Preliminary Analysis of e-Voting Problems Highlights Need for Heightened Standards and Testing," submitted to the National Research Council of the National Academy of Sciences' Committee on Electronic Voting:

"Voting systems are tested against the Voting System Standards [until December 2007] by Independent Testing Authorities (ITAs) that are certified to conduct these tests by the National Association of State Election Directors (NASED)... These ITAs conduct manual and automated source code review, documentation review, environmental 'shake-and-bake' testing, and some systems-level testing of the full voting system... Each voting system, that does not predate the VSS themselves, must pass both hardware and software testing by an ITA before it is considered 'federally qualified' and given a NASED identification number."

2004 - Joseph Lorenzo Hall 
Deirdre Mulligan, JD 

3. State:

The Maryland State Board of Elections explained the state certification process in its report Voting Systems, available at the Board's website (accessed Feb. 23, 2007):

"In addition to ITA testing and NASED qualification, a voting system used in Maryland must complete state certification. This is a state testing process to ensure that the voting system meets all of Maryland's statutory and other voting system requirements. To be certified, a voting system must:
  • Protect the secrecy of the ballot
  • Protect the security of the voting process
  • Count and record all votes accurately
  • Accommodate any ballot used in Maryland
  • Protect all other rights of voters and candidates
  • Be capable of creating a paper record of all votes cast in order that an audit trail is available in the event of a recount."

Feb. 23, 2007 - Maryland State Board of Elections 

Doug Jones, PhD, Associate Professor of Computer Science at the University of Iowa, explained in his article "Testing Voting Systems," available on his website (accessed Feb. 23, 2007):

"While some states allow any voting system to be offered for sale that has been certified to meet the 'voluntary' federal standards, many states impose additional requirements. In these states, vendors must demonstrate that they have met these additional standards before offering their machines for sale in that state. Some states contract out to the ITAs [independent testing authorities] to test to these additional standards, some states have their own testing labs, some states hire consultants, some states have boards of examiners that determine if state requirements are met...

State qualification testing should ideally include a demonstration that the voting machine can be configured for a demonstration election that exercises all of the distinctive features of that state's election law, for example, straight party voting, ballot rotation, correct handling of multi-seat races, and open or closed primaries, as the case may be. Enough ballots should be voted in these elections to verify that the required features are present."

Feb. 23, 2007 - Douglas W. Jones, PhD 
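One way to picture such a demonstration election is as a configuration that deliberately exercises every state-specific feature, with scripted test ballots covering each one. The sketch below is a hypothetical illustration, not any state's actual test plan; the feature names and ballot script are invented.

```python
# Hypothetical demonstration-election configuration for state qualification testing.
# Each entry corresponds to a distinctive feature of (imaginary) state election law
# that the scripted test ballots must exercise.
demo_election_features = {
    "straight_party_voting": True,   # one mark votes a full party slate
    "ballot_rotation": True,         # candidate order rotates across precincts
    "multi_seat_races": True,        # e.g. a "vote for 3" city council race
    "closed_primary": True,          # primary ballots restricted by party registration
}

# Each scripted ballot records which features it exercises.
scripted_ballots = [
    {"features": ["straight_party_voting", "ballot_rotation"]},
    {"features": ["multi_seat_races"]},
    {"features": ["closed_primary", "ballot_rotation"]},
]

required = {name for name, enabled in demo_election_features.items() if enabled}
exercised = {f for ballot in scripted_ballots for f in ballot["features"]}
missing = required - exercised
print("All required features exercised" if not missing else f"Not exercised: {sorted(missing)}")
```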

Britain Williams, PhD, voting machine examiner for the State of Georgia, offered the following explanation in his May 5, 2004 testimony before the U.S. Election Assistance Commission (EAC):

"When the [electronic voting] system successfully completes ITA qualification testing and is issued a NASED qualification number, it can be brought into Georgia for State Certification Testing...

The Kennesaw State University (KSU) Center for Election Systems conducts a series of tests on the system. Some tests examine the level of difficulty associated with operating the system. Another tests the capacity of the system to accommodate the maximum number of ballots that might be cast in a large precinct or at an in-person absentee voting location. One test is specifically designed by the KSU Center for Information Security, Education, and Awareness to detect fraudulent or malicious code that might be present in the system. This test is designed to wake up any, so called, Trojan horse that might be present. In all of these tests a known pattern of votes is cast and then compared with the output of the system.

If any of these tests result in a modification to the system, the entire system is returned to the vendor for correction and the NASED Qualification/State Certification test cycle is repeated."

May 5, 2004 - Britain Williams, PhD 
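The common thread in the tests Williams describes is casting a known pattern of votes and comparing it with what the system reports. A minimal, hypothetical sketch of that comparison step (the candidate names and totals are invented):

```python
# Hypothetical known-pattern test: the expected totals are fixed in advance,
# so any divergence in the reported totals flags the system for investigation.

expected = {"Candidate A": 150, "Candidate B": 150, "Write-in": 5}

def compare_totals(reported, expected):
    """Return a list of (choice, expected, reported) mismatches."""
    mismatches = []
    for choice, want in expected.items():
        got = reported.get(choice, 0)
        if got != want:
            mismatches.append((choice, want, got))
    return mismatches

reported = {"Candidate A": 150, "Candidate B": 149, "Write-in": 6}  # e.g. read from the system
problems = compare_totals(reported, expected)
if problems:
    print("FAIL - return system to vendor; repeat the qualification/certification cycle:")
    for choice, want, got in problems:
        print(f"  {choice}: expected {want}, reported {got}")
else:
    print("PASS - reported totals match the known vote pattern")
```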

4. Acceptance:

The Maryland State Board of Elections explained the state's testing protocol in its report Voting Systems, available at the Board's website (accessed Feb. 23, 2007):

"Every voting unit used in Maryland undergoes a comprehensive, two-part State acceptance test. The first part involves a diagnostic test to ensure that each voting unit and all of its components are performing to the required specifications. The second part involves casting hundreds of votes on each voting unit. A report showing the contest totals is printed from unit and compared against the expected results. This test ensures that the voting unit is accurately recording and counting votes. The results from the voting units are then transferred to the central tabulating computer that counts all the votes from the voting units. This ensures that every vote put onto the voting unit during acceptance testing is counted by the central tabulating computer."

Feb. 23, 2007 - Maryland State Board of Elections 
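The second part of such a test amounts to two reconciliations: each unit's printed totals against the test script, and the central tabulator's totals against the sum of all units. A hypothetical sketch of the second reconciliation (unit names and totals are invented):

```python
from collections import Counter

# Hypothetical per-unit totals recorded during acceptance testing.
unit_totals = {
    "unit-001": {"Candidate A": 120, "Candidate B": 130},
    "unit-002": {"Candidate A": 125, "Candidate B": 125},
}

def expected_central_totals(unit_totals):
    """Sum the totals reported by every voting unit."""
    combined = Counter()
    for totals in unit_totals.values():
        combined.update(totals)
    return dict(combined)

# Totals reported by the central tabulating computer after the upload.
central_reported = {"Candidate A": 245, "Candidate B": 255}

assert expected_central_totals(unit_totals) == central_reported, \
    "Central tabulator totals do not match the sum of the voting units"
print("Every acceptance-test vote was counted by the central tabulating computer")
```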

Britain Williams, PhD, voting machine examiner for the State of Georgia, described the acceptance testing process in his May 5, 2004 testimony before the U.S. Election Assistance Commission (EAC):

"When the vendor notifies the State that they have completed installation in a particular county, the KSU Center for Election Systems sends a team to the county to conduct Acceptance Tests. These tests verify that the hardware is operating correctly and that the correct version of the software has been installed. During these tests the electronic signature of the software installed in the county is compared with the electronic signature of the software archived by the KSU Center for Election Systems to validate that the county system is identical to the system that was State certified."

May 5, 2004 - Britain Williams, PhD 
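Comparing "electronic signatures" in this sense is typically a cryptographic-hash comparison: the installed software image is hashed and checked against the hash of the archived, certified version. A minimal sketch, assuming the software images are available as files (the file names are hypothetical):

```python
import hashlib

def fingerprint(path):
    """Compute a SHA-256 digest of a software image, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the image installed in the county vs. the archived,
# state-certified image held by the testing authority.
installed = fingerprint("county_installed_image.bin")
certified = fingerprint("archived_certified_image.bin")

if installed == certified:
    print("Installed software matches the certified version")
else:
    print("MISMATCH - installed software is not the certified version")
```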

The Election Technology Council stated in its Nov. 2005 article "Voting System Independent Testing and Certification Process," available at its website:

"After production testing and upon delivery from a vendor, local election authorities conduct acceptance testing to ensure the voting system equipment performs properly and is certified."

Nov. 2005 - Election Technology Council 

5. Logic and Accuracy:

Cathy Cox, the former Georgia Secretary of State, released a document titled "Multilevel Equipment Testing Program Designed to Assure Accuracy and Reliability to Touch Screen Voting System," available on the Georgia Secretary of State's website (accessed Feb. 27, 2007), which stated:

"Georgia law requires that before an election, each of the 22,052 voting units also undergo 'Logic and Accuracy' testing which examines system features, insures that votes that are cast are properly recorded, and assures that all candidates and questions for each ballot style in each precinct are properly loaded onto the system. Sample votes are cast on the equipment and these totals are verified. (Logic and Accuracy differs from the previous rounds of examination because the testing is specific to the exact ballot that will be displayed in a specific precinct on election day)... At least one memory card from each precinct is uploaded to the county server to ensure that the upload features necessary to compile and count the votes are working properly."

Feb. 27, 2007 - Cathy Cox, JD 
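Because Logic and Accuracy testing is tied to the exact ballot each precinct will display, one part of it can be pictured as checking the contests loaded on a unit against the certified ballot style for that precinct. A hypothetical sketch (precinct names and contests are invented):

```python
# Hypothetical certified ballot styles: what each precinct's ballot should contain.
certified_ballots = {
    "Precinct 12": ["Governor: Smith", "Governor: Jones", "Question 1"],
    "Precinct 13": ["Governor: Smith", "Governor: Jones", "Question 1", "School Board: Lee"],
}

def check_loaded_ballot(precinct, loaded_contests):
    """Verify the contests loaded on a voting unit match the certified ballot style."""
    expected = certified_ballots[precinct]
    missing = [c for c in expected if c not in loaded_contests]
    extra = [c for c in loaded_contests if c not in expected]
    return missing, extra

# What one voting unit assigned to Precinct 12 actually displays (read from the unit).
loaded = ["Governor: Smith", "Governor: Jones", "Question 1"]
missing, extra = check_loaded_ballot("Precinct 12", loaded)
print("L&A ballot check:", "PASS" if not (missing or extra) else f"missing={missing}, extra={extra}")
```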

Dana DeBeauvoir, Travis County (Texas) Clerk, submitted a paper titled "Prevention of Attack, Not Detection After the Fact: A Note on Risk Assessment and Risk Mitigation" in conjunction with her public testimony before the U.S. Election Assistance Commission on May 5, 2004, which stated:

"L&A [Logic and Accuracy] testing proofs the ballot and proves that the system is properly adding votes to each candidate in the same quantity as the votes were manually entered. The system result is compared to a known set of data and must match... L&A testing increases the confidence that the system properly attributes votes and that the tally will be repeated exactly the same way each time the system is voted.

Logic and Accuracy testing confirms that each candidate appears in the proper precinct, including split precincts, and does not appear in precincts outside that candidate's jurisdiction."

May 5, 2004 - Dana DeBeauvoir, MA 
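The precinct-proofing step DeBeauvoir describes can be thought of as a containment check: every precinct in which a candidate appears must lie inside that candidate's jurisdiction, including any split precincts, and no precinct in the jurisdiction may be omitted. A hypothetical sketch:

```python
# Hypothetical district definitions and ballot assignments used for proofing.
district_precincts = {
    "County Commissioner, Dist. 2": {"P-101", "P-102", "P-103A"},  # P-103A is a split precinct
}
ballot_appearances = {
    "County Commissioner, Dist. 2": {"P-101", "P-102", "P-103A", "P-104"},  # P-104 is an error
}

for contest, precincts in ballot_appearances.items():
    allowed = district_precincts[contest]
    outside = precincts - allowed   # contest appears outside its jurisdiction
    omitted = allowed - precincts   # contest missing from a precinct it should be in
    if outside or omitted:
        print(f"{contest}: outside jurisdiction {sorted(outside)}, omitted from {sorted(omitted)}")
    else:
        print(f"{contest}: appears in exactly the right precincts")
```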

6. Parallel:

The National Academy of Sciences' 2005 report Asking the Right Questions About Electronic Voting stated:

"Parallel testing, which is intended to uncover malicious attack [sic] on a system, involves testing a number of randomly selected voting stations under conditions that simulate actual Election Day usage as closely as possible, except that the actual ballots seen by 'test voters' and the voting behavior of the 'test voters' are known to the testers and can be compared to the results that these voting stations tabulate and report... Note also that Election Day conditions must be simulated using real names on the ballots (not George Washington and Abe Lincoln), patterns of voter usage at the voting station that approximate Election Day usage (e.g., more voters after work hours, fewer voters in mid-afternoon, or whatever the pattern is for the precinct in question), and setting of all system clocks to the date of Election Day. Parallel testing is a check against the possibility that a system could recognize when it is being tested at any other time."

2005 - National Academy of Sciences (NAS) 
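A parallel test script has to look like a real Election Day to the machine: realistic candidate names, a realistic hourly arrival pattern, and the clock set to the actual election date. The sketch below generates such a schedule of scripted test votes and the totals to compare at the close of polls; the turnout pattern and names are invented, and this is an illustration, not any state's procedure.

```python
import random
from datetime import datetime, timedelta

# Hypothetical hourly turnout (votes per hour, 7 a.m. to 8 p.m.): heavier before and
# after work hours, lighter mid-afternoon, as the NAS report describes.
hourly_pattern = [18, 14, 10, 9, 8, 7, 6, 7, 9, 12, 16, 20, 15]
candidates = ["Maria Alvarez", "John Carter", "Grace Kim"]  # realistic names, not placeholders

def build_test_script(election_day=datetime(2004, 11, 2, 7, 0)):
    """Produce (timestamp, choice) pairs simulating Election Day usage."""
    script = []
    for hour, n_votes in enumerate(hourly_pattern):
        for _ in range(n_votes):
            cast_at = election_day + timedelta(hours=hour, minutes=random.randint(0, 59))
            script.append((cast_at, random.choice(candidates)))
    return sorted(script)

script = build_test_script()
expected_totals = {c: sum(1 for _, choice in script if choice == c) for c in candidates}
# At the close of polls, these expected totals are compared against what the tested
# machine reports; any unresolved discrepancy suggests vote-switching logic.
print(len(script), "scripted votes; expected totals:", expected_totals)
```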

Michael Shamos, PhD, JD, Distinguished Career Professor of Computer Science at Carnegie Mellon University, wrote in his paper "Paper v. Electronic Voting Records - An Assessment," published in the Proceedings of the 14th ACM Conference on Computers, Freedom and Privacy, 2004:

"I wrote of the possibility that a DRE machine could contain an on-board clock and that an intruder could rig the machine so that it behaved perfectly in all pre- and post-election tests, but switched votes during an election...

A better solution is to employ parallel testing... Under this method, a set of examiners is empowered to enter any polling place at the start of voting and commandeer any voting machine for test purposes. No actual voters cast votes on the selected machine. No change whatsoever is made to the test machine - it is not even moved from its position (to counter the argument that it might contain a motion sensor to warn that it was under test). The examiner votes a number of predetermined ballots comparable to the number that would be voted on a typical machine in that precinct. Of course, manual entry of votes by a human is an error-prone process, so a video camera is used to capture his actual vote entries. At the normal close of polls, the votes on the test machine are tabulated and compared with the expected totals. If any software is present that is switching or losing votes, it will be exposed."

2004 - Michael I. Shamos, PhD, JD 

Kevin Shelley, former Secretary of State of California, released the "Report on March 2, 2004 Statewide Primary Election" in Apr. 2004, which explained the parallel monitoring program employed by California during the March 2004 primary election:

"The parallel monitoring program is specifically designed to detect the potential presence of malicious code in the software of a voting machine that would otherwise not be detected by other testing processes...

Under the parallel monitoring procedures, two touch screen machines of each model used by a California county on Election Day were randomly selected and removed shortly before the election to be tested... Voters did not use these machines at the March Primary. Instead, they were test-voted on Election Day in a simulated election conducted at the same time and in the manner as the actual election. Parallel monitoring minimized the risk that malicious software would detect that the machines were not being used by actual voters, and thus not execute its malicious code. All test-votes were videotaped to compare the results reported by the machine against the votes actually entered on the machine by Secretary of State testers. Any unresolved discrepancy found during this procedure would indicate the presence of potential malicious code in the voting machines."

Apr. 2004 - Kevin Shelley, JD