By Andreas Weigend
More than three-quarters of American adults own a smartphone, and on average, they spend about two hours each day on it. In fact, it’s estimated that we touch our phones between 200 and 300 times a day—for many of us, far more often than we touch our partners.
That means that when we’re on our phones, we aren’t just killing time or keeping in touch. We’re “sensorizing” the world in ways that we may not yet fully comprehend.
There are now networked cameras and microphones everywhere: More than 1 billion smartphones are out there, virtually every one of them equipped with a camera. Most of the photos shared online are taken with a phone; by my calculations, about 1 billion photos are uploaded each day to Facebook alone.
Even if you do not tag the people in an image, photo-recognition systems can do so. Facebook’s DeepFace algorithm can match a face to one that has appeared in previously uploaded images, including photos taken in dramatically different lighting and from dramatically different points of view. Using identified profile photos, previously tagged photos, and social-graph relationships, the system can attach a highly probable name to the face.
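Facebook has not published DeepFace’s internals beyond its research paper, but the general pipeline it exemplifies—convert each face into a numeric embedding, then find the closest labeled embedding—can be sketched in a few lines. Everything below is illustrative rather than Facebook’s method: `embed_face` is a hypothetical stand-in for whatever deep network produces the vectors, and the names and distance threshold are invented for the example.

```python
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical stand-in for a deep face-embedding model
    (a DeepFace-style network) that maps a cropped face to a vector."""
    raise NotImplementedError("plug in a real face-embedding model here")

def identify(face_embedding, gallery, threshold=0.6):
    """Return the best-matching name from a gallery of labeled embeddings.

    gallery: dict mapping a name to the embedding of a tagged or
    profile photo. A match counts only if the Euclidean distance
    beats the threshold; otherwise the face is treated as unknown.
    """
    best_name, best_dist = None, float("inf")
    for name, known in gallery.items():
        dist = np.linalg.norm(face_embedding - known)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Illustrative usage: embeddings harvested from tagged photos form
# the gallery; any new upload is embedded and compared against it.
# gallery = {"alice": embed_face(tagged_photo_of_alice), ...}
# print(identify(embed_face(new_upload), gallery))
```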
All of this might seem innocuous, but these photos of our friends, family, and surroundings reveal a great deal. The issue took on heightened significance with the publication, in February, of a paper from the Google Brain team, which describes an impressive new method for extrapolating a high-resolution image from a pixelated or very low-resolution photograph—what the researchers call “super resolution.”
So what can be gleaned from a photo? To start, most people activate the GPS on their phones in order to get directions. By default, the metadata associated with a photo taken on a GPS-enabled phone include the longitude and latitude where the photo was taken. While it is possible to delete these data from your own photos, you can’t control the metadata of images taken by others.
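To see how little effort it takes to read—or scrub—this information, here is a minimal sketch using the open-source Pillow imaging library. The filenames are placeholders, and note that re-saving the image recompresses it; this is one simple approach, not the only one.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

GPS_IFD = 0x8825  # EXIF tag that points to the GPS sub-directory

def read_gps(path):
    """Return decimal (lat, lon) from a JPEG's EXIF, or None if absent."""
    exif = Image.open(path).getexif()
    ifd = exif.get_ifd(GPS_IFD)
    if not ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in ifd.items()}

    def to_degrees(dms, ref):
        # EXIF stores degrees/minutes/seconds as rationals.
        deg, minutes, seconds = (float(x) for x in dms)
        value = deg + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

def strip_metadata(src, dst):
    """Write a copy without the EXIF block (Pillow drops EXIF on save
    unless it is passed back explicitly)."""
    Image.open(src).save(dst)

print(read_gps("photo.jpg"))          # e.g. (37.7749, -122.4194)
strip_metadata("photo.jpg", "photo_clean.jpg")
```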
Geolocation metadata are not the only means of pinpointing your location, however. A well-known landmark in the background, a street sign, or a restaurant’s menu can give away where you are. The length of the shadows on the ground indicates the approximate time of day. It doesn’t take a human to make these observations; photo-recognition systems can do it. Algorithms are even being trained on video from low-resolution surveillance and mobile cameras to identify individuals—and not by their faces. A person pounding the pavement of a city street can be identified and tracked block to block by the unique characteristics of her gait.
Photo-recognition systems can also be used to interpret the environment in which a photo was taken. Several years ago, a small tech company called Jetpac identified and categorized the content of 150 million photos posted publicly on Instagram to build a directory of businesses searchable by their characteristics. If the photos taken at a restaurant showed a lot of mouths wearing lipstick, Jetpac’s app would tag the spot as “dressy.” If most of the faces in a photo of a bar were male, it would tag the spot as a gay bar. (Jetpac was acquired by Google in 2014.)
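Jetpac’s actual pipeline was proprietary, but the aggregation step described above is straightforward: run per-photo detectors, then roll the results up by venue. Here is a toy sketch of that step, with made-up detector outputs and an arbitrary cutoff; the venue IDs and attribute labels are invented for illustration.

```python
from collections import defaultdict

# Hypothetical per-photo detector outputs: (venue_id, attribute) pairs,
# as a real system might emit after running classifiers on each image.
detections = [
    ("cafe_12", "lipstick"), ("cafe_12", "lipstick"), ("cafe_12", "plain"),
    ("bar_7", "male_face"), ("bar_7", "male_face"), ("bar_7", "female_face"),
]

def tag_venues(detections, min_share=0.6):
    """Tag each venue with any attribute seen in >= min_share of its photos."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for venue, attribute in detections:
        counts[venue][attribute] += 1
        totals[venue] += 1
    return {
        venue: [a for a, n in attrs.items() if n / totals[venue] >= min_share]
        for venue, attrs in counts.items()
    }

print(tag_venues(detections))
# {'cafe_12': ['lipstick'], 'bar_7': ['male_face']}
```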
Many of the Instagram photos that Jetpac was analyzing had geolocation data attached to them. By combining its photo recognition results and tags with location IDs, Jetpac realized it could create a listing of gay bars in Tehran. Would sharing such a directory be a service or a disservice to Jetpac’s users? It might be a welcome development to an Iranian who didn’t want to risk coming out to the wrong person by asking either friends or strangers. But the consequences could be terrible for the gay community if the list of bars or users accessing the list got into the hands of the mullahs. If Jetpac was able to develop this refining capability, what’s to stop another company or government from doing the same?
Algorithms can also identify the emotion you’re feeling in a photo or video. Decades ago, Paul Ekman, professor emeritus at the University of California–San Francisco, observed that people around the world make distinct facial expressions, some lasting less than a second, in response to specific emotionally charged situations. More recently, Ekman served as an adviser to Emotient, a San Diego company acquired by Apple in 2016 that developed software to identify emotional sentiment in camera feeds in real time. With a single high-resolution camera, Emotient’s algorithms can simultaneously “read” the emotional microexpressions on the faces of 400 people gathered in an area—say, a lecture hall or shopping mall. Emotient has also been working on adapting its algorithms for use in hospitals to detect pain on patients’ faces.
Researchers at Oxford University and the University of Edinburgh’s Medical Research Council Institute of Genetics and Molecular Medicine have developed a phone app that can be used to analyze photos against a database of rare genetic conditions, helping patients discover undiagnosed health conditions—including Fragile X syndrome, a learning disability affecting 1 in 4,000 boys and 1 in 6,000 girls that is associated with large ears and a long face.
The app Im2Calories, developed by research scientist Kevin Murphy’s group at Google, turns food photos into a food diary and calorie count, so that all those photos of you enjoying a meal with friends could add up to an assessment of your future health.
And this past January researchers at Japan’s National Institute of Informatics announced that they had copied a person’s fingerprint by taking a photo with a standard digital camera from nearly 10 feet away. The team was able to replicate the arches, loops, and whorls of skin on the finger pads well enough to unlock an identity authentication system. The researchers suggested that in two years’ time, people will be able to affix films containing titanium oxide to their finger pads to guard themselves from identity theft.
These are just a few examples of the research projects that are gleaning unexpected insights from the images and videos people post online without a second thought. Many of the algorithms being developed will improve our lives—helping us make better decisions about our personal relationships, work lives, and health by alerting us to signals we are not yet aware of. The problem comes when others have access to these data, too, and make decisions about us based on them, potentially without our knowledge.
Taking a photo or video in public isn’t illegal, nor is taking one with a person’s permission. It’s also not illegal to upload the file or store it in the cloud. Applying optical character recognition, facial recognition, or a super-resolution algorithm isn’t illegal, either. There’s simply no place for us to hide anymore.
For more than a century, we’ve depended on the “right to privacy” to protect us from the threats of unwanted attention. The case for a right to privacy was first made back in 1890, when former law-firm partners Samuel Warren and Louis Brandeis railed in the Harvard Law Review against increasing intrusions into people’s lives. The offenders? “Recent inventions and business methods”—including photographs and circulation-hungry newspapers trading in gossip. As with many inventions, the right to privacy was devised to solve a personal problem: Warren and his family had recently been the victims of unflattering and unwanted sketches in the society columns. They clearly didn’t live at a time when 1 billion photos a day were being posted to Facebook alone.
The right to privacy was a great idea, but it was an idea of its time, when data were scarce, communities were localized, and communicating was costly. Life is different now. We aren’t going to be able to stop everyone from uploading photos and videos: Indeed, few of us would want to, as doing so would terribly constrain personal expression and social interaction. Instead, we need to start thinking about how these images of us might be used to make decisions about us—and focus on protections against discrimination rather than restrictions against collection.
Unfortunately, current U.S. data-use laws are a patchwork of feeble protections determined sector by sector. Most of these laws—including the Health Insurance Portability and Accountability Act, under which you must authorize the sharing of health data with your insurer, and the Fair Credit Reporting Act, which lets you see your credit report—provide only a small amount of access to or control over data about you. Withholding these data isn’t really an option. Most recent legislative attempts to protect data have been aimed at requiring companies to notify people when there’s been a security breach and at holding companies to account for the financial consequences of any breaches. These are all worthy goals, but they are woefully inadequate for the sensorized age of social data.
It’s unlikely that Congress will pass laws with more robust protections around the use of image data. But we can demand that companies be more transparent about the algorithms that they use to identify images of us and what might be learned about us, pixel by pixel.
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.