Opinion | Cameras establish facts, combat unreliability


Photo Courtesy of Cody Logan

Security camera on the Mount Vernon, Washington river walk.

By Mark Toledano, Columnist

Security cameras carry a certain stigma. It’s easy to feel unsettled about being perpetually watched, but the reality of cameras is not like the movies. Rarely is someone on the other end watching the footage live, scanning for every slight transgression. That would be too tedious and expensive.

Instead, cameras serve a useful purpose that humans cannot. They are mostly consulted after an incident occurs to confirm the facts of the event and to pursue the responsible parties.

Eyewitness testimony is one of the surest ways to convict a defendant in criminal court. Yet studies dating back to the 1960s have concluded that human recollection is a shoddy, unreliable way to determine the facts of an event.

According to the University of Michigan Law School, “eyewitness misidentification is the single greatest cause of wrongful convictions nationwide.” When DNA testing was accepted by the courts in the 1990s, scores of wrongfully convicted people (who were convicted on bad eyewitness accounts) were set free after the tests proved their innocence.

How could eyewitnesses, who were so confident of what they saw, be wrong? Elizabeth Loftus, a psychology professor at the University of California Irvine, says the problem is how memory works. When remembering an event, the mind does not replay images; it recreates them. Every time a memory is recalled, the mind could make minute changes from the actual sensory perceptions the viewer experienced when the event occurred.


This is how an eyewitness might misidentify a suspect in a police lineup and end up testifying against an innocent person. During the lineup, the witness is asked whether he or she recognizes any of the assembled suspects. Presuming that at least one of them must be guilty, witnesses commonly approximate the perpetrator’s appearance and splice together the memory of the perpetrator with the image of the person standing right in front of them.

Cameras, on the other hand, don’t have human memory. They replay their images rather than recreating them. Until machines become self-aware, Westworld-style, their ability to replay events is more credible than ours.

You don’t have to search far for a case study on how effective security cameras can be in confirming identities. It’s happened right here on campus. Not even two years ago, fellow Illini Yingying Zhang was abducted and murdered by a doctoral student.

Law enforcement used footage from a security camera near the abduction site to identify a cracked hubcap on the suspect’s car. With that unique detail and the car’s make and model, both drawn from the footage, investigators narrowed the search and ended the manhunt in a matter of days. Without the security footage, Yingying’s killer might still be wandering around the Main Quad.

Cameras don’t collect more information than they have to. They will store your facial image, but they aren’t programmed to do much more than that. Google knows orders of magnitude more about you than security cameras do. Those concerned about privacy have bigger fish to fry with big tech companies than with cameras.

To keep innocent people free and criminals behind bars, we must abandon the assumption that the human mind is good at remembering. It’s not. Cameras don’t lie, and they don’t succumb to the whims of imagination. Only cold, hard facts can save our broken justice system.

Mark is a junior in ACES.

[email protected]