Andrew Myers, Professor in the Department of Computer Science in the Ann S. Bowers College of Computing and Information Science, discussed the New York City mayoral election with Aaron Mak of Slate in an interview titled "How Did New York City’s Election Count Go So Very, Very Wrong?" The interview is part of Future Tense, "a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society."
On Tuesday, New York City’s Board of Elections released an incorrect tally of votes in the Democratic primary for mayor that appeared to show a near-dead heat in an automatic runoff, before the count was revealed as a fiasco. The results seemed to show that front-runner Eric Adams’ lead over Kathryn Garcia and Maya Wiley had narrowed significantly. Adams’ campaign, however, pointed out a discrepancy in the tally, which led the board to withdraw the report. The board later disclosed that the error arose because it had accidentally included 135,000 test ballots alongside the actual ballots. The snafu comes after the city’s first use of ranked choice voting, in which voters rank the candidates from first to last according to preference; in each round of counting, the last-place candidate is eliminated and those ballots are reassigned to each voter’s next surviving choice, until a candidate attains a majority. The board is expected to release a corrected intermediate vote count later on Wednesday, though the final results likely won’t come for a few weeks.
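The reassignment process described above is essentially an instant-runoff tally. As a rough illustration only, here is a minimal Python sketch of that elimination-and-redistribution loop; it is not the Board of Elections’ actual tabulation software, it breaks last-place ties arbitrarily, and the ballots in the example are made up.

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked-choice ballots by instant runoff.

    Each ballot is a list of candidate names ordered from most to
    least preferred. The last-place candidate is eliminated round by
    round, and each of their ballots is reassigned to that voter's
    next surviving choice, until someone holds a majority of the
    ballots still in play.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        candidates = set(tallies)  # drop anyone with no remaining support
        active = sum(tallies.values())  # ballots that still rank someone
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > active or len(candidates) == 1:
            return leader
        # Eliminate the last-place candidate (ties broken arbitrarily here;
        # real election rules specify formal tie-breaking procedures).
        loser = min(tallies, key=tallies.get)
        candidates.discard(loser)

# Example with made-up ballots: Wiley is eliminated first, and the
# third ballot's vote is reassigned to Garcia, who then has a majority.
ballots = [
    ["Adams", "Garcia"],
    ["Garcia", "Wiley", "Adams"],
    ["Wiley", "Garcia"],
    ["Garcia"],
    ["Adams", "Wiley"],
]
print(instant_runoff(ballots))  # Garcia
```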
To get a better sense of what went wrong, I spoke to Cornell University computer science professor Andrew Myers, who has spent the last 18 years running an open-source online ranked choice voting system called CIVS that organizations can use to run their own elections. Our conversation has been condensed and edited for clarity.
Slate: Given your experience with ranked choice voting, what was your sense of the voting infrastructure and systems that New York City had constructed for the race? Did it seem promising?
Andrew Myers: I don’t know the details about the software they set up, but any time you’re fielding a large system that has software and humans involved and lots of complex processes, it’s hard to get it right the first time. They accidentally counted test ballots, which seems like a mistake they should have ironed out earlier in their process and noticed before they actually released results.
Slate: Did this seem like a careless human error, or could it be a more complicated technical bug?
Andrew Myers: It’s hard to speculate, but what it suggests to me is that they hadn’t done a really thorough dry run of the system before they put it online. Even if they had, I might still expect some kinds of problems, because if you test the system with 10,000 voters and then scale it up, there are changes and new kinds of bugs that show up. But this seems like the kind of bug you could’ve caught with a 10,000-vote test.
It seems to me that they should have noticed that the results didn’t look right. That, I think, is a human error, but it’s one that shouldn’t have had a chance to occur. When you’re running something like this, it should really be set up to be push-button: all the processes are well understood, and everybody knows what they’re supposed to do at each step. To include fake data accidentally suggests they didn’t have their processes rock-solid.
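One concrete form the push-button discipline Myers describes could take is a validation step that refuses to tally any batch still containing ballots marked as test data. The sketch below is hypothetical: the is_test flag and the record layout are assumptions made for illustration, not features of the board’s actual system.

```python
class TestDataError(Exception):
    """Raised when test ballots are found in a production tally."""

def validate_batch(ballots):
    """Reject the whole batch if any ballot is flagged as test data.

    Assumes each ballot record carries an `is_test` flag (a made-up
    field for this sketch); a real system needs a reliable way to
    mark and segregate test ballots, which is exactly the process
    discipline at issue here.
    """
    test_count = sum(1 for b in ballots if b.get("is_test"))
    if test_count:
        raise TestDataError(
            f"{test_count} test ballot(s) found; refusing to tally."
        )
    return ballots

# The check runs before any counting, so a mixed batch fails loudly
# instead of producing a plausible-looking but wrong total.
batch = [
    {"id": 1, "is_test": False, "rankings": ["Adams", "Garcia"]},
    {"id": 2, "is_test": True, "rankings": ["Wiley"]},
]
try:
    validate_batch(batch)
except TestDataError as err:
    print(err)  # 1 test ballot(s) found; refusing to tally.
```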
Continue reading at Slate, where Myers discusses the usual procedure for test ballots; whether ranked choice voting systems are significantly more complicated or prone to error; and what the Board of Elections can do to prevent this sort of error from happening again.