Cars are getting smarter. Some can show you a video of what is behind you to help you park in a tight spot. Others can automatically apply the brakes if you are about to run into the car in front of you.
Now cars have a new power. They can snitch to an insurance company about your driving. A tracking device can be installed in your car to monitor how, when, and how far you drive. Progressive and other insurers offer discounts on car insurance to drivers based on data from such devices.
Do you accelerate sharply, corner too closely, travel at night or drive great distances? Those traits can be used against you and prevent you from getting a discount. But many of those factors are beyond your control. If your job requires you to work in the evening, why should you be penalized by your insurer?
Most insurers’ devices are installed in the car’s data port, under the driver’s side of the dashboard, which limits their use to cars sold after 1998. The Canadian insurer Desjardins, however, uses a mobile phone app, Ajusto, that doesn’t need to be installed in the car at all. Phone apps raise additional issues. Nothing prevents an insurer from matching data from the driving app with other information on the phone. Nearly two-thirds of smartphone owners look up health information on their devices. What if you’ve done a Google search for the side effects of an allergy medication? The insurer might take that to mean you are using the medication while driving, despite the drug’s warnings about drowsiness.
Who else will ultimately get the driving information? Will the police want to know who is driving faster than the speed limit? As a phone app, Ajusto can tap into location information. Will spouses and employers want to know where the driver has been? Already, information from toll passes has been used as evidence in criminal cases and divorce cases. If you get into an accident while using Progressive’s Snapshot device, Progressive will turn over its information about your driving style and history to the court.
These programs to reward safe drivers might actually lead to more accidents. A friend who used the Progressive device heard a series of beeps from his car if he braked too quickly. The only way to avoid the beeps was to stay four car lengths behind the car in front of him, but that meant other cars were constantly swerving in front of him. It also greatly increased the chance of his being rear-ended.
The tracking devices for cars are touted as a way to save you money. But the data they collect can be used against you. Progressive announced that it will start charging higher rates to drivers who volunteer to use its Snapshot device, but whose driving does not measure up. Courts can order that you turn over your driving information to someone who sues you. Tracking devices have real risks. What you might save in premiums, you’ll lose in privacy.
PROPOSED CHICAGO DATA SENSORS RAISE CONCERNS OVER PRIVACY, HIDDEN BIAS, Guest Blog by Michael Holloway and John McElligott
Beginning in mid-July, Chicagoans may notice decorative metal boxes appearing on downtown light poles. They may not know that the boxes will contain sophisticated data sensors that will continuously collect a stream of data on “air quality, light intensity, sound volume, heat, precipitation, and wind.” The sensors will also collect data on nearby foot traffic by counting signals from passing cell phones. According to the Chicago Tribune, project leader Charlie Catlett says the project will “give scientists the tools to make Chicago a safer, more efficient and cleaner place to live.” Catlett’s group is seeking funding to install hundreds of the sensors throughout the city. But the sensors raise issues concerning potential invasions of privacy, as well as the creation of data sets with hidden biases that may then be used to guide policy to the disadvantage of poor and elderly people and members of minority groups.
Project leaders and City officials deny that the sensors raise privacy concerns. According to Catlett, a computer scientist, the sensors will “count contact with the signal rather than record the digital address of each device,” and “information collected by the sensors will not be connected to a specific device or IP address.” Brenna Berman, the city’s commissioner of information and technology, said that “privacy concerns are unfounded because no identifying data will be collected.” However, Alderman Robert Fioretti has called for a public hearing on the data sensors. Fioretti notes that the City Council was never consulted about the plan, an Emanuel administration initiative, and states that the sensors raise “obvious invasion-of-privacy concerns.”
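The privacy design the project describes, counting contacts with a signal without recording each device’s address, can be sketched in code. The class below is a hypothetical illustration, not the Chicago sensors’ actual (unpublished) scheme: it hashes each observed MAC address with a random salt that is discarded when the counting interval resets, so distinct devices can be counted without the raw addresses ever being stored and without linking the same device across intervals.

```python
import hashlib
import secrets


class FootTrafficCounter:
    """Counts distinct devices per interval without storing raw MAC addresses.

    Hypothetical sketch: a random per-interval salt, never persisted, makes
    the stored hashes unusable for tracking a device across intervals.
    """

    def __init__(self):
        self.reset()

    def reset(self):
        """Start a new counting interval with a fresh, secret salt."""
        self._salt = secrets.token_bytes(16)  # discarded on next reset
        self._seen = set()

    def observe(self, mac: str):
        """Record a contact; only a salted hash of the address is kept."""
        digest = hashlib.sha256(self._salt + mac.encode()).hexdigest()
        self._seen.add(digest)

    def count(self) -> int:
        """Number of distinct devices seen this interval."""
        return len(self._seen)
```

Seeing the same device twice adds nothing to the count, while the set of stored hashes reveals neither the addresses nor, after a reset, whether the same device returned.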
Raising a note of skepticism about the City’s privacy assurances, Professor Fred Cate of Indiana University’s Maurer School of Law noted the difficulty of avoiding the collection of personally identifiable information, even when protections intended to prevent such collection are in place: “Almost any data that starts with an individual is going to be identifiable.” Cate’s statement accords with scientific research showing that, in practice, supposedly anonymous or anonymized data can in many cases be linked back to an individual. Cate also raised the question of oversight: “If you spend a million dollars wiring these boxes, and a company comes in and says ‘We’ll pay you a million dollars to collect personally identifiable information,’ what’s the oversight over those companies?”
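The reidentification research Cate’s statement accords with often turns on joining a “de-identified” dataset with a public one on quasi-identifiers such as ZIP code, birth date, and sex. The toy example below uses entirely made-up records to show how mechanical that join is:

```python
# De-identified records: names removed, but quasi-identifiers remain.
# (All data below is invented for illustration.)
medical = [
    {"zip": "60616", "birth": "1975-03-02", "sex": "F", "diagnosis": "asthma"},
    {"zip": "60601", "birth": "1980-11-19", "sex": "M", "diagnosis": "flu"},
]

# A public record, such as a voter roll, that includes names.
voters = [
    {"name": "Jane Doe", "zip": "60616", "birth": "1975-03-02", "sex": "F"},
    {"name": "John Roe", "zip": "60601", "birth": "1980-11-19", "sex": "M"},
]


def reidentify(medical, voters):
    """Link 'anonymous' records to names via zip + birth date + sex."""
    keyed = {(v["zip"], v["birth"], v["sex"]): v["name"] for v in voters}
    return [
        (keyed.get((m["zip"], m["birth"], m["sex"])), m["diagnosis"])
        for m in medical
    ]
```

When each quasi-identifier combination is unique in both datasets, every “anonymous” record resolves to a name, which is why removing names alone is rarely enough.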
In light of the potential privacy concerns, Dean Harold Krent of IIT Chicago-Kent College of Law noted that transparency is key in Chicago’s operation of the sensors. The City must be clear about how many sensors there are and how they are used, and must ensure that the data captured by the sensors is easily accessible to public officials.
Jeremy Gillula, a staff technologist at the Electronic Frontier Foundation (EFF), pointed out that the proposed system may create unintentionally biased data sets. The proposed sensors will track contacts with signals from Wi-Fi and Bluetooth-enabled devices, but this will only reflect a subset of the overall foot traffic, since not all passers-by will be carrying devices with Wi-Fi or Bluetooth capabilities. In Boston, the use of a mobile app called Street Bump to track potholes in the city produced biased data because smartphone owners tended to live in wealthier areas. Similarly, many Tweets during Hurricane Sandy originated in the largely affluent borough of Manhattan, giving the impression that it was among the hardest-hit areas of the storm, while in fact lower-income, outlying areas such as Breezy Point, Coney Island and Rockaway were harder hit.
These examples reflect the fact that large datasets, while seemingly objective and abstract, are “intricately linked to physical place and human culture.” As the EFF has noted, “many groups are under-represented in today’s digital world (especially the elderly, minorities, and the poor). These groups run the risk of being disadvantaged if community resources are allocated based on big data, since there may not be any data about them in the first place.” Chicago will need to carefully validate the data collected from the proposed sensors to avoid introducing similar biases into policy and planning decisions.
Michael Holloway is a Legal Fellow at IIT Chicago-Kent’s Institute for Science, Law and Technology.
John McElligott is a Research Assistant at the IIT Chicago-Kent Institute for Science, Law and Technology and a second-year law student at IIT Chicago-Kent College of Law.
What is the NSA collecting about activists, reporters and you? The NSA gathers the phone numbers, locations, and length of virtually all phone calls in the United States. It collects records of nearly everything you do online, including your browsing history and the contents of your emails and instant messages. It can create detailed graphs of your network of personal connections. It can create phony wireless connections in order to access your computer directly. It can intercept the delivery of an electronic device and add an “implant” allowing the agency to access it remotely.
Companies, too, undertake surveillance. Investigative reporter Adam Federman found that the “American Petroleum Institute (API) paid private global intelligence firm Stratfor more than $13,000 a month for weekly bulletins profiling activist organizations and their campaigns … from energy and climate change to tax policy and human rights, according to … WikiLeaks in 2012.” Federman reported that when a community group of 10 people met to screen environmental films and attend local environmental forums, a private security firm identified them as likely planning an eco-terrorism attack. A bulletin with the group’s information – where and when they met, and upcoming protests – was sent to the Pennsylvania Department of Homeland Security alongside information about other groups such as Al-Qaeda affiliated groups and pro-life activists.
When reporters cross borders, they are at increased risk of surveillance. As the 2013 federal district court case of Abidor v. Napolitano showed, border agents in much of the U.S. can search, copy, and detain a U.S. citizen’s laptop computer, cell phone, or other electronic device even when the agents have no reason to suspect any wrongdoing. The court held that the government had reasonable suspicion to search and detain Abidor’s laptop because Abidor, a Ph.D. student in Islamic history, had pictures of Hamas and Hezbollah rallies on his computer, and because he possessed both U.S. and French passports. When the laptop was returned, evidence showed that agents had examined Abidor’s personal files, including photos and chats with his girlfriend.
How, then, do reporters protect themselves and sources in an era of surveillance? At the TMC/IIT Chicago-Kent workshop, Gavin MacFadyen, Director of the Centre for Investigative Journalism at University College London, warned, “The first minute is the most crucial when the whistleblower calls a reporter.” At the workshop, a group of technical experts discussed technological tools and practices that journalists can use to protect themselves and their sources. Eva Galperin of the Electronic Frontier Foundation discussed threat modeling, in which a journalist or organization assesses potential threats to determine the level of protection needed. Threat modeling involves building a comprehensive list of people or entities who might be after information in one’s possession (say, an opposing lawyer, the NSA, or a foreign government). It then considers the nature of the information to determine the tools, such as encryption, which are required to protect it.
Once the level of threat is determined, reporters can use specific tools for defending against online surveillance. They can protect themselves and their sources by maintaining strong and unique passwords, detecting and avoiding fraudulent “phishing” emails, encrypting their laptops and other electronic devices, and using two-factor log-in authentication where available. They can protect anonymity with Tor, a powerful tool that works by obscuring the source and destination of online communications. They can use tools such as GPG to encrypt their emails and other communications and render them illegible to third parties.
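One of the tools mentioned above, two-factor log-in authentication, commonly rests on time-based one-time passwords (TOTP, RFC 6238): the server and the user’s device share a secret, and each 30-second window yields a fresh six-digit code via HMAC-SHA1. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, for_time=None, step=30, digits=6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    # The moving factor is the number of 30-second steps since the epoch.
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the last byte
    # selects a 4-byte window, masked to 31 bits.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC test secret `b"12345678901234567890"` and time 59, this yields the code 287082 from the published test vectors, so an authenticator app and a server computing the same function independently will agree. Even if a password is phished, an attacker without the current code is locked out.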
Journalists can also use tools such as ObscuraCam, developed by the Guardian Project (unaffiliated with the U.K.’s Guardian news organization) to remove potentially identifying data from digital photos, and to obscure the faces of people appearing in photos in situations in which being identified might put them in danger. And news organizations can implement SecureDrop, a secure submissions system for receiving documents from anonymous sources.
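The “potentially identifying data” such tools remove lives largely in a JPEG’s EXIF block (camera serial numbers, timestamps, GPS coordinates), stored in APP1 segments of the file. The function below is a simplified illustration of that one step, not ObscuraCam itself: it walks the JPEG marker structure and copies every segment except APP1.

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG with APP1 (EXIF/XMP) segments removed."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            out += jpeg[i:]  # unexpected data; copy the rest verbatim
            break
        marker = jpeg[i + 1]
        if marker == 0xD9:  # EOI: end of image
            out += jpeg[i:i + 2]
            break
        if marker == 0xDA:  # SOS: entropy-coded image data follows
            out += jpeg[i:]
            break
        # Segment length field counts itself plus the payload.
        seglen = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:  # keep everything except APP1
            out += jpeg[i:i + 2 + seglen]
        i += 2 + seglen
    return bytes(out)
```

Real photos may also leak identity through the pixels themselves (faces, street signs), which is why ObscuraCam pairs metadata removal with face obscuring.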
No single tool or practice can render a journalist “NSA-proof” or immune to corporate spying, but appropriate tools and strong security practices can significantly reduce exposure to surveillance and increase a reporter’s ability to deliver a well-researched, convincing story without exposing sources to harmful retaliation.
Susan, a professional woman in her 30s, met a man she thought she’d ultimately marry. Their relationship was sufficiently intimate that she sent him a naked photo of herself. When she caught him cheating, she broke up with him. He took revenge by posting that selfie on a revenge porn website, along with her name, the name of her town, and her social media contact information. She received messages from complete strangers asking for more naked photos. As she went about her daily life, she was afraid that one of those men would stalk her. She worried that her co-workers might have come across the photo. She knew that if she applied for a new job, that nude photo would come up in a Google search of her name. She’d been branded with a modern Scarlet Letter.
Across the Web, thousands of people attack their exes by posting disgusting comments about them, warnings not to date them, or nude photos of them. On October 1, California Governor Jerry Brown signed into law a bill criminalizing what has become known as revenge porn. The law assesses a thousand dollar fine in a narrow situation. It is a misdemeanor for a person to photograph “the intimate body part or parts of another identifiable person, under circumstances where the parties agree or understand that the image shall remain private, and the person subsequently distributes the image taken, with the intent to cause serious emotional distress, and the depicted person suffers serious emotional distress.”
But the law has serious limits. The law wouldn’t help Susan because it doesn’t cover selfies; it would only apply if her boyfriend had taken the photo and then later posted it. Even when an ex-boyfriend did take a photo and post it, it would be hard for the woman to prove that their understanding was that it would remain private. Didn’t she know there was at least a chance he was going to show it to his friends? And the requirement that he must have “the intent to cause serious emotional distress” is both hard to prove and too narrow. A man might evade punishment by claiming that by posting the photo he was just trying to brag that his girlfriend was hot. Or what if they were law students competing for the same job and he said he posted it to reduce her chances of winning the job? That wouldn’t be covered by the law.
And while the men who posted nude photos of their exes could be prosecuted under the law, it would provide no remedy for the women who want to get their photos removed from the web. Nude photos posted on one revenge porn site are often re-posted on dozens of other sites. A particularly ugly or revealing photo might be replicated in hundreds of places on the Web.
A state law, such as that in California, can’t reach the main offenders: the websites that host revenge porn. A federal law adopted in the infancy of the Web, Section 230 of the Communications Decency Act, says that interactive computer services are immune from the types of suits for defamation and invasion of privacy that can be brought against traditional publishers. That makes sense with providers such as Comcast and websites such as Facebook (why should they be sued if I defame you in an email or post?), but it doesn’t make sense to grant immunity to websites whose sole purpose is to defame or invade privacy. It’s time to strip those websites of the ability to digitally gang rape women whose photos they post.
On revenge porn websites, the posting is just the beginning. Hunter Moore used to run a website, Is Anyone Up?, where other men would write savage comments about the ugliness or sluttiness of the women in the photos. (“No sex with her unless she had a bag over her head” is one of the milder comments.) The more hits Moore’s site got, the more money he made through ads. “Hate can be monetized,” wrote Kelly Bourdet of Vice. Hunter Moore told the Village Voice how much he’d benefit if someone killed herself because of his posting her nude photo and comments about her: “So if someone fucking killed themselves? Do you know how much hate I’d get? All the Googling, all the redirects, all, like, the press…”
As I advocate in my book I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy, we need to revamp Section 230 to allow people to sue the revenge porn websites for defamation and invasion of privacy and to grant people the right to have their photos removed. The rationale for protecting internet service providers (that they shouldn’t have a duty to police transmissions to see if people are defaming each other) should not apply to protect websites whose whole business model is to defame and harass. Women like Susan should have the right to have a nude photo, intended for an audience of one, removed from a website that is exposing it to the world.
Are you in control of your digital self? ABA Journal web producer Lee Rawles talks with Lori Andrews, author of I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy about the lack of online privacy rights and the need for a social media constitution.
They discuss the changes that social networks have brought to all areas of the law, including evidence gathering; what evidence is admissible in courts; how social media can affect the right to a fair trial; and the right to control one’s image. Andrews touches on how secret data aggregation about your online activities can affect the price of your health insurance, the advertisements you see, what jobs you qualify for and the limits on your credit card balance.
Today is the 101st anniversary of International Women’s Day and women are facing a new threat to their rights—and sometimes even to their lives. The vast array of information available about us on the Web is leading to new forms of harassment and discrimination against women.
In a chilling revelation, a woman writes about a man who raped her years ago and was never brought to justice. She moved to another state and yet her rapist was able to find her and torment her. She speculates that he was able to find her on a website called Spokeo. The website, she said, provided “incredibly detailed” information about her and about her apartment where her rapist tracked her down. “It listed everything from the types of pets I had to my profession, and included a street-view map showing our building.”
Spokeo and other data aggregators collect personal online and offline information about individuals without their consent and sell that information. Other institutions, from employers to courts, use information from social networks and other websites against women. One third of employers say they’ve rejected job candidates because of a social network photo showing them holding a drink or wearing provocative clothing. And to whom does that standard mostly apply? Women.
Women have also lost custody of children, not because they’ve done anything wrong as a mom, but because they have posted something sexy on their boyfriend’s MySpace page. And when a male rival wanted to intimidate a woman, he posted a Google map of her house with a message that she had a rape fantasy and men should come and rape her.
The tactic of using sexual messages to put someone into harm’s way is standard on social networks and could be thought of as a new form of sexual harassment. A study by University of Maryland researchers found that users in a chat room with a female user name received twenty-five times more harassing private messages than users with a male name. Rather than being cornered and beaten up in a dark alley, women now need to be concerned about being ganged up on across the Web.
In my new book, I Know Who You Are and I Saw What You Did: Social Networks and the Death of Privacy, I call for a right to privacy on the Web and penalties for sexual harassment and discrimination on the Web. It’s time that offline rights apply online as well.
Credit: Web Ranking Images.
Social networks are transforming how relationships begin and end. One in five relationships now starts on social networks. But social networks also contribute to breakups and divorce. Instead of catching a whiff of another woman’s perfume on your husband’s shirt, you might instead find an X-rated photo that your husband accidentally tweeted to a woman in public mode rather than private mode. Or—as happened in a Connecticut case—your husband and his girlfriend might be sending each other Facebook gifts such as “Love Birds” and posting about the need for discretion. (Husband: “[n]o more Facebook. . . to public for me.” Girlfriend: “LOL o.k. under the radar . . . flying low. . . ”)
Social network information can be a smoking gun when people divorce. In an American Academy of Matrimonial Lawyers poll, 81 percent of divorce attorneys mentioned an increase in the use of social networking evidence over the past five years. Most of that evidence was found on Facebook (66 percent) or Myspace (15 percent).
Posts or photos indicating that one spouse cheated or has dangerous habits can help the other spouse receive more money in the split or gain sole custody of the kids. Divorce lawyer Linda Lea M. Viken recounted a custody battle where a father posted on his Facebook page that he was “single with no children looking for a fun time.” Divorce lawyer Kenneth Altshuler said, “Facebook has made it very easy to show lack of credibility and that is what can win a case. Once you catch them in one lie, nothing else they say is credible to the judge.”
The only way to guarantee that your posts won’t come back to haunt you in a custody case would be never to have had a social network page or to act like a Stepford parent and post only positive and glowing things about your every moment with your child. (Perhaps even doing that would backfire since it could be used to show that you are too enmeshed in your child’s life and won’t give your child enough space to grow.) Erasing a page you’ve previously created or deleting your social network presence entirely won’t help. Projects such as the Wayback Machine have probably captured snapshots of that page in its earlier incarnation.
Since parenthood is rewarding, demanding, and frustrating all at the same time, people may unthinkingly blurt out their frustrations in social media. What if you once tweeted that you didn’t want children? Should that statement be used to terminate your parental rights? In In re T.T., a Texas case, the court allowed such a statement from a dad’s Myspace profile to be used against him. What if you failed to mention kids on your Match.com profile? Would that show you were a bad mom? How about if you said, “I love my motorcycle” or “I love my iMac” but didn’t mention your children? Would that indicate that your kids played second fiddle to your possessions?
My personal view is that any social network statements about the child should be kept out of the case unless they indicate that the parent is likely to harm the child emotionally or physically. And a lack of statements about the child (or even a statement that one doesn’t have children) shouldn’t be used as a way to show parental unfitness.
U.S. Supreme Court. Credit: Mike Renlund.
As technology makes surveillance easier and cheaper, courts are grappling with how to apply the Fourth Amendment in the digital age. Prior to beepers, GPS, people checking in on Foursquare, and social networks, law enforcement monitoring of suspected offenders was limited by the constraints of manpower, budget and the risk that the officers following suspects might themselves be seen.
But now an increasing amount of information about people’s whereabouts, activities, purchases and intentions can be gleaned digitally, without an officer ever leaving the station. The U.S. Supreme Court’s decision this month in United States v. Jones provides little guidance about which activities might be considered searches, which require warrants, and which voluntary disclosures to third parties might waive Fourth Amendment rights.
Without a doubt, social networks like Facebook have enhanced the constitutionally protected freedom of association, since they allow groups to form. But social networks have also opened the door for people’s associations to be used against them. Read my post about this at the National Constitution Center's blog.
Sleigh bells ring…and people get lax about computer privacy. Your comfort and joy might be headed south if you don’t think about what you unwittingly reveal during the holidays:
Is Your Seatmate Stealing Business Secrets?
As you travel for the holidays, you’re probably focused on flight delays, how to avoid certain relatives, or the chance to see old friends and get a change of climate, along with the unfinished work you left behind. You probably aren’t thinking about your seatmate stealing business ideas or information by peeking at your screen. Unless you make an effort to protect that information when you nod off, your on-screen information could be fair game to that nosy passenger sitting next to you or across the aisle.
Lori is a law professor and the author of I KNOW WHO YOU ARE AND I SAW WHAT YOU DID: SOCIAL NETWORKS AND THE DEATH OF PRIVACY.
Sign up for Lori's newsletter.