FTC Privacy Workshop Notes

Meeting Notes

Below are notes by Aaron Titus on the FTC Privacy Workshop held on December 7, 2009, in Washington, DC, which he agreed to contribute to the P3WG.

Highlights From the FTC's Privacy Roundtable: Part 1

Posted by: Aaron Titus on Dec 8, 2009

Tagged in: Privacy Law
The FTC’s December 7th Privacy Roundtable assembled a Who’s Who of privacy luminaries, academics, advocates, and industry players. This post highlights some of the more interesting comments from the meeting. I also tweeted the event (@aarontitus, #FTC #Privacy or #ftcpriv), and the FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.

The meeting consisted of five panels. This post highlights "Panel 5: Exploring Existing Regulatory Frameworks."

  • During Session 5, Intuit's Chief Privacy Officer Barbara Lawler posited that existing regulatory frameworks unfairly place the entire burden on consumers to protect themselves. "Consumers should expect a safe marketplace. They shouldn't be the ones to police the marketplace," she said.
  • Barbara Lawler also noted that "Data is never really at rest," because it's moving between data centers and backups in multiple locations throughout the globe. It is therefore incorrect to think of data, especially Cloud data, as being in one place. Instead, "data is in one place and many places at the same time," potentially in multiple jurisdictions.
  • Evan Hendricks of Privacy Times and Marc Rotenberg of EPIC suggested that the current model of "Notice and Consent" has failed to protect consumers, and that the FTC (and legislation in general) should return to well-established Fair Information Practices (FIPs), including a prohibition on "secret databases." Mr. Rotenberg went so far as to conclude that Notice and Choice principles are not a subset of FIPs, but instead "stand in opposition to fair information practices." He also joked that "the best part of the Gramm-Leach-Bliley Act is that you get paper notices you can tape on your window and get more privacy."
  • Ira Rubinstein of New York University School of Law proposed that self-regulation is not binary or "monolithic," and that a self-regulatory scheme would be preferable, especially if viewed as a "continuum, based on government intervention." He argued that self-regulation would be especially appropriate in the United States, which has traditionally been very friendly to e-commerce.
  • Michael Donohue of the OECD gave an overview of international legal concepts of privacy, generally agreeing with Marc Rotenberg's observation that "most countries have come to surprisingly similar conclusions about privacy."
  • J. Howard Beales of the GWU School of Business argued in favor of a "harm-based model," because it is impossible to reach the best solution without first defining the harm. Marc Rotenberg responded that privacy harms are almost never financial.
  • Several panelists emphasized that privacy can be highly (and appropriately) subjective. One cited an example from a balding friend of his, "I don't care if anyone knows that I use Rogaine, but my 70-year-old grandmother would."
  • Fred Cate of the Center for Applied Cybersecurity Research emphasized that the Notice and Consent model is flawed because some activities should not be consentable. For example, one may not "consent" to be served fraudulent or misleading advertising. Likewise, some uses of personal information should be prohibited and non-consentable. Most importantly, Notice and Choice are only tools, not the goal of privacy.
  • After Panel 5 concluded, Bureau of Consumer Protection Director David C. Vladeck said the FTC would investigate whether it is better to give consumers notice of how their personal information may be used: (1) at the time of collection, or (2) at the time of use.
  • David C. Vladeck also said that the data broker industry warranted FTC attention because it is "largely invisible to the consumer."
More highlights on the other sessions to come.

Highlights From the FTC's Privacy Roundtable: Part 2

Posted by: Aaron Titus on Dec 9, 2009

Tagged in: Privacy Law
This is part 2 of highlights from the FTC’s December 7th Privacy Roundtable. Part 1 covered the panel on "Exploring Existing Regulatory Frameworks." This post highlights comments from "Benefits and Risks of Collecting, Using, and Retaining Consumer Data." This session was moderated by Jeffrey Rosen of The George Washington University Law School and Chris Olsen of the FTC's Division of Privacy and Identity Protection.

  • Leslie Harris, President and CEO of the Center for Democracy & Technology, emphasized that information taken out of context can be used to unfairly judge a person, such as a search for "marijuana" or a medical condition. The danger increases when a rich profile of search terms and surfing data is constructed over time.
  • Susan Grant, Director of Consumer Protection for the Consumer Federation of America, said that "privacy is a fundamental human right."
  • Alessandro Acquisti of Carnegie Mellon University, Heinz College, explained that the definition of "sensitive information" continues to change with technology, new uses for information, and new ways to correlate and aggregate personal information. Technology cannot stop re-identification or de-anonymization, but should be used to increase the transaction costs for re-identifying personal information. He also spoke about how companies are bypassing consumer efforts to maintain privacy and anonymity through technologies such as flash cookies.
  • Richard Purcell, CEO of the Corporate Privacy Group, emphasized that citizens' health depends on anonymized health data used for research, and that privacy must be weighed using a cost-benefit analysis.
  • Michael Hintze, Associate General Counsel for Microsoft, explained that companies log and use information for a number of legitimate reasons, such as security analysis and search result optimization. However, he admitted that search terms can reveal individuals' "innermost thoughts," and that anonymization is not a silver bullet for protecting users. Instead, retention and deletion policies, such as Microsoft's policy of deleting IP addresses and cross-cookie session information, are designed to truly anonymize search data.
  • David Hoffman, Director of Security Policy and Global Privacy Officer for Intel, stressed that we should focus on data minimization. "We have wasted time arguing about what constitutes PII," when the question should be, "what information will have an impact on an individual?"
  • Jim Harper, Director of Information Policy Studies for The Cato Institute, argued that regulating too early can stifle innovation and prevent consumers from determining what they want themselves. Instead, we should attempt to define the problem set first. Mr. Harper explained that "there is a role for trial and error in determining what the real problems are," and that intellectualizing what consumers really want can lead to problems. Instead, we should "let a thousand flowers bloom" and let the social systems and advocacy draw out and solve the real issues.
  • David Hoffman generally agreed that we don't want to frustrate innovation, and that we are not currently in a position to understand all of the problems ourselves. He explained that it took a room full of experts the better part of a day to map out data flows. "We can't expect consumers to understand how the data flows if experts can't understand it now."
  • Leslie Harris and Alessandro Acquisti said that notification and transparency are necessary but not sufficient. Mr. Acquisti noted that consumers make decisions which are harmful to long-term privacy because humans are bad at making decisions when the benefits are short-term but the harm is long-term. He compared privacy-erosive behavior to smoking, since each smoker realizes that smoking causes cancer, but any individual cigarette doesn't hurt much.
  • Susan Grant explained that consumers don't realize that their information can be used for other purposes, and that the benefits of marketing do not outweigh privacy concerns and fraud and abuse. Jim Harper countered that advertisers can introduce a new medication to vulnerable populations, and that denying them that opportunity can create silent harms. Michael Hintze added that niche ads aren't good or bad; they're responsible or irresponsible.
  • Richard Purcell also argued that companies should spend the time and money to train their customers, and create "privacy by design" rather than "privacy by default." Finally, the FTC should "regulate the hell out of" lazy companies and bad actors.
  • Richard Purcell further emphasized that we lack a cohesive taxonomy for discussing privacy, and that we need to better define concepts such as "anonymity," "de-identification," and "sensitive data."
  • The panel was asked to consider widespread customer blacklisting. Susan Grant said that consumers need tools to discover and amend secret "bad customer" lists, since they have none now. Distinctions based upon invisible information are bad for consumers. Leslie Harris agreed, saying that we need a law that provides access and correction for data brokers as well. She also criticized the FTC for failing to investigate privacy violations, saying that all of our bad examples are "accidental," rather than intentional, long-term decisions to violate privacy uncovered by the FTC.
  • In the larger context, Jim Harper said that government access to personal information is the elephant in the room that nobody has yet addressed. Governments are beginning to discover "the cloud" for their own purposes, and when data is available to government on the current terms, it constitutes surveillance on a massive scale.

I'll do a few more installments in the coming days.

The FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.

Highlights From the FTC's Privacy Roundtable: Part 3

Posted by: Aaron Titus on Dec 15, 2009

Tagged in: Untagged 
This is part 3 of highlights from the FTC’s December 7th Privacy Roundtable. Part 1 covered the panel on "Exploring Existing Regulatory Frameworks," and Part 2 covered the panel on "Benefits and Risks of Collecting, Using, and Retaining Consumer Data." This post highlights comments from "Consumer Expectations and Disclosures" and "Information Brokers."
Disclaimer: I took notes using my Twitter account. About halfway through the "Benefits and Risks" panel, Twitter decided that I was a spammer, and shut down my account. I was mad, and it meant that I did not cover the whole session.

Consumer Expectations and Disclosures

  • Lorrie Faith Cranor, Associate Professor of Computer Science, Carnegie Mellon University, commented on consumers' state of ignorance regarding how information flows, much like an unseen underground river. "Most people do not understand how information flows," or "what a third-party cookie is."
  • Alan Westin, Professor Emeritus of Public Law and Government, Columbia University, referenced several of his studies, which indicated that "…people are not prepared to equate [the need for] behavioral marketing with [funding] free services," and that "most people believe that they're being abused," but there was general consensus that most people surveyed also believed they were protected by laws and regulations that do not actually exist. Mr. Westin's research also indicates that most people are no longer willing to trade privacy for freebies on the internet, because of the disconnect between "free" services and the fact that personal information pays for most of them.
  • Alan Davidson, Director of U.S. Public Policy and Government Affairs for Google, emphasized that the industry is trying to educate consumers and give them the tools they need in order to control their privacy, as evidenced by Google's dashboard, for instance. He suggested that the audience Bing "Google Dashboard" for more information.
  • Jules Polonetsky, Co-Chair and Director of the Future of Privacy Forum, made reference to the results of several large surveys conducted by his organization. For instance, one indicated that there is a substantial public misconception about what "Behavioral Advertising" is. Among the handful of survey respondents who had heard the term, all of them mistook "Behavioral Advertising" for the concept of subliminal advertising. His organization is also attempting to generate symbols explaining how personal information is used, an approach endorsed by Privacy Commons and other groups.
  • My apologies to Joel Kelsey, Policy Analyst for the Consumers Union, and Adam Thierer, President of the Progress & Freedom Foundation. Each of these individuals actively participated, but unfortunately I was unable to capture their thoughts because I was under a temporary Twitter ban at the time.

Information Brokers

Short editorial: This session was by far the least enlightening.

  • Jennifer Barrett, Global Privacy and Public Policy Officer for Acxiom, started off the panel by discussing what constitutes "sensitive personal information." She explained that Acxiom classifies "sensitive information" as any information that could contribute to identity theft, whereas "restricted information" is, for example, an unlisted phone number.
  • Rick Erwin, President of Experian Marketing Services, explained that they consider information on children, older Americans, and self-reported ailment data to be "sensitive," adding that Experian has "three decades of experience using sensitive information for marketing," and is able to adequately balance the interests of marketers and consumers. Mr. Erwin also discounted the harms of marketing, saying "we can't point to deep consumer harm based on bad advertising."
  • Pam Dixon, Executive Director of the World Privacy Forum, disagreed. She contended that defining "sensitive information" is difficult at best because otherwise benign information can be aggregated to create sensitive information. With regard to health information, getting consent from consumers is almost illusory because consumers have no way of knowing how the information will be used in the future. Informed consent is impossible without telling consumers what "boxes" they will be put in. Consumers need the right to know on what lists they will appear and for how long, and they must have the right to revoke their consent. Pam Dixon contended that "we need to make Opt Out work for consumers," and that opting out should always be free.
  • In response, Jennifer Barrett insisted that the Information Broker industry needs no further regulation: "We're already very regulated," she said.
  • Jim Adler, Chief Privacy Officer and General Manager of Systems for Intelius explained that they offer special opt-out services to government officials.
  • Chris Jay Hoofnagle, Lecturer in Residence at the University of California, Berkeley, School of Law, was scheduled to participate but was unable to do so due to technical difficulties.

The FTC has posted the webcast if you missed it.  The next Roundtable is scheduled for January 28, 2010 in Berkeley, CA and will also be broadcast online.