Posts Tagged ‘targeted advertising’

In the mix…WSJ’s “What They Know”; data potential in healthcare; and comparing the privacy bills in Congress

Monday, August 9th, 2010

1.  The Wall Street Journal Online has a new feature section, ominously named “What They Know,” which highlights articles focused on technology and tracking.  The tone feels a little overwrought, with language that evokes spies, like “Stalking by Cellphone” and “The Web’s New Gold Mine: Your Secrets.”  Some of the methodology is also a little simplistic: the study of how much people are “exposed” online was based on simply counting the tracking tools, such as cookies and beacons, installed by certain websites.

It is interesting, though, to see that the big, bad wolves of privacy, like Facebook and Google, are pretty low on the WSJ’s exposure scale, while sites people don’t really think about, like dictionary.com, are very high.  The debate around online data collection does need to shift to include companies that aren’t so name-brand.
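
For a sense of what that counting methodology amounts to, here’s a minimal sketch in Python.  It only inspects a page’s static HTML for third-party script, image, and iframe hosts and checks them against a small, purely illustrative list of tracker domains; the WSJ’s study counted cookies, Flash cookies, and beacons set in a live browser, which a static fetch like this can’t fully see.

    # Rough sketch of the "count the trackers" approach: fetch a page, collect the
    # hosts of third-party scripts, images, and iframes, and flag the ones that
    # match a hand-made (illustrative) list of known tracking domains.
    from html.parser import HTMLParser
    from urllib.parse import urlparse
    from urllib.request import urlopen

    KNOWN_TRACKERS = {"doubleclick.net", "scorecardresearch.com", "quantserve.com"}  # illustrative only

    class SrcCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.hosts = set()

        def handle_starttag(self, tag, attrs):
            if tag in ("script", "img", "iframe"):
                for name, value in attrs:
                    if name == "src" and value:
                        host = urlparse(value).netloc
                        if host:
                            self.hosts.add(host)

    def count_exposure(url):
        """Return (number of third-party hosts, number matching known trackers)."""
        page_host = urlparse(url).netloc
        parser = SrcCollector()
        parser.feed(urlopen(url).read().decode("utf-8", errors="replace"))
        third_party = {h for h in parser.hosts if not h.endswith(page_host)}
        trackers = {h for h in third_party if any(h.endswith(t) for t in KNOWN_TRACKERS)}
        return len(third_party), len(trackers)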

2.  In response to WSJ’s feature, AdAge published this article by Erin Jo Richey, a digital marketing analyst, addressing whether “online marketers are actually spies.” She argues that she doesn’t know that much about the people she’s tracking, but she does admit she could know more:

I spend most of my time looking at trends and segments of visitors with shared characteristics rather than focusing on profiles of individual browsers. However, if I already know that Mary Smith bought a black toaster with product number 08971 on Monday morning, I can probably isolate the anonymous profile that represents Mary’s visit to my website Monday morning.
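
That “probably” is doing less work than it sounds like.  Here’s a rough Python sketch of the linkage she describes; the purchase record, sessions, and cookie IDs are all invented, but the join is nothing more than a product number plus a timestamp.

    # Sketch of linking a named purchase record to an "anonymous" analytics session.
    from datetime import datetime, timedelta

    purchase = {
        "name": "Mary Smith",
        "product_id": "08971",
        "purchased_at": datetime(2010, 8, 9, 9, 42),
    }

    # Clickstream sessions, keyed only by a cookie ID (no names anywhere).
    sessions = [
        {"cookie_id": "a91f", "product_id": "08971", "checkout_at": datetime(2010, 8, 9, 9, 41)},
        {"cookie_id": "77c2", "product_id": "13004", "checkout_at": datetime(2010, 8, 9, 10, 15)},
    ]

    def link_purchase_to_session(purchase, sessions, window=timedelta(minutes=5)):
        """Return the sessions whose checkout lines up closely enough to re-identify."""
        return [
            s for s in sessions
            if s["product_id"] == purchase["product_id"]
            and abs(s["checkout_at"] - purchase["purchased_at"]) <= window
        ]

    # The single match ties Mary Smith's name to cookie "a91f", and through it to
    # everything else that cookie has done on the site.
    print(link_purchase_to_session(purchase, sessions))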

3.  A nice graphic illustrates how data could transform healthcare.  Are there people making these kinds of detailed arguments for other industries and areas of research and policy?

4.  There are now two proposed privacy bills in Congress, the BEST PRACTICES bill proposed by Representative Bobby Rush and the draft proposed by Representatives Rick Boucher and Cliff Stearns.  CDT has released a clear and concise table breaking down the differences between these two proposed bills and what CDT recommends.  Some things that jumped out at us:

  • Both bills make exceptions for aggregated or de-identified data.  The BEST PRACTICES bill has a more descriptive definition of what that means: it excepts aggregated information and information from which identifying information has been obscured or removed, such that there is no reasonable basis to believe that the information could be used to identify an individual or a computer used by the individual.  CDT supports the BEST PRACTICES exception.  (A rough sketch of what that kind of aggregation looks like follows this list.)
  • Both bills make some, though not sweeping, provisions for consumer access to the information collected about them.  CDT endorses neither, and would support a bill that would generally require covered entities to make available to consumers the covered information possessed about them, along with a reasonable method of correction.  Some companies, including a start-up called Bynamite, have already begun to show consumers what’s being collected, albeit in rather limited ways.  We at the Common Data Project hope this push for access also includes access to the richness of the information collected from all of us, and not just the interests associated with each of us individually.  It’ll be interesting to see where this legislation goes, and how it might affect the development of our datatrust.
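
To make the aggregation exception a little more concrete, here is a minimal Python sketch.  It is not the standard set out in either bill, just the general pattern: drop the identifying columns, report only group-level counts, and suppress groups small enough that a count could still point back to an individual.

    # Illustrative aggregation: keep group counts, drop identifiers, suppress small groups.
    from collections import Counter

    records = [
        {"name": "Mary Smith", "zip": "11201", "condition": "asthma"},
        {"name": "John Doe", "zip": "11201", "condition": "asthma"},
        {"name": "Ann Lee", "zip": "11215", "condition": "diabetes"},
    ]

    def aggregate(records, group_keys=("zip", "condition"), min_count=2):
        """Return counts by group, withholding any group smaller than min_count."""
        counts = Counter(tuple(r[k] for k in group_keys) for r in records)
        return {group: n for group, n in counts.items() if n >= min_count}

    # Only the ("11201", "asthma") group survives; the lone diabetes record is
    # suppressed, since a count of one still gives a reasonable basis to identify.
    print(aggregate(records))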

In the mix…government surveillance, HIPAA updates, and user control over online data

Monday, July 19th, 2010

1) The U.S. government comes up with some, um, interesting names for its surveillance programs.  “Perfect Citizen” sounds like it’s right out of Orwell. As the article points out, there are some major unanswered questions.  How do they collect this data?  Where do they get it?  Do they use it just to look for interesting patterns that then lead them to identify specific individuals, or are all the individuals apparent and visible from the get-go?  And what are the regulations around re-use of this data?

2) Health and Human Services has issued proposed updates to the regulations under HIPAA, the law regulating how personal health information is shared. CDT has made some comments about how these regulations will affect patient privacy, data security, and enforcement.  HIPAA, to some extent, lays out some useful standards on things like how electronic health information should be transmitted.  But it has also been controversial for suppressing information-sharing, even when sharing is legal and warranted.

So instead of talking about what we can’t do, what if we started talking about what we can do with electronic health data?  I’m not imagining a list of approved uses where anything outside the list is barred, but rather an outline of the kinds of uses that are useful.  The whole point of electronic health records is to make information more easily shareable, so that care is more continuous and comprehensive and research more efficient and effective.

I love this bit from an interview with a neuroscientist who studies dog brains because, “dogs aren’t covered by Hipaa! Their records aren’t confidential!”

3) A start-up called Bynamite is trying to give users control over the information they share with advertisers online. It’s another take on something we’ve seen from Google and BlueKai, where users get to see what interests have been associated with them.  Like those services, Bynamite allows you to remove interests that don’t pertain to you or that you don’t want to share.  Bynamite then goes further by opting you out of networks that won’t let you make these choices.  That definitely sounds easier than managing P3P, and easier than reading through the policies of all the companies that participate in the National Advertising Initiative.

I agree with Professor Acquisti that all of us, when we use Google or any other free online service, are paying for our use of the service with our personal information, and that Bynamite is trying to make that transaction more explicit.  But I wonder whether the value of the data these companies gain is ever made explicit.  Is the price of the transaction fair?  Does one hour of free Google search equal x bits of personal data?  Can you even put a dollar value on that transaction, given that the true value of all this data is in the aggregate?

The accompanying blog post to this article cites a study demonstrating how hard it is to assign a dollar value to privacy.  The study subjects clearly did value “privacy,” but the price they put on it depended on how much they felt they had any privacy to begin with!

In the mix

Wednesday, June 3rd, 2009

Google is Top Tracker of Surfers in Study. (NY Times Bits Blog)

The Obama Administration’s Silence on Privacy. (NY Times Bits Blog)

This UK Sheriff Cites Officials for Serious Statistical Violations.  (WSJ The Numbers Guy)

Tuesday in the Mix

Tuesday, May 12th, 2009

Just Landed: Processing, Twitter, MetaCarta & Hidden Data (blprnt)

Greece Puts Brakes on Street View (BBC)

Developer of AdBlock Plus Proposes a Fairer Approach to Ad Blocking (ReadWriteWeb)

What Does Access to Real World Data Online Make Possible? (ReadWriteWeb)

Monday in the Mix

Monday, May 11th, 2009

Signs Your Wireless Carrier Loves You (NYT)

Calendar as filter (Dilbert.com)

New Search Service Aims at Answering Tough Queries, but Not Taking on Google (NYT)

Transparent Google?

Friday, March 27th, 2009

There’s some fascinating new stuff going on in the world of online tracking and targeted advertising.  First, Google rolled out its new behavioral targeting ad program with some features that long-time privacy advocates, like the Electronic Frontier Foundation and Michael Zimmer, found worthy of praise.

For people who choose not to be tracked, Google developed a browser plug-in that keeps the opt-out in place even after cookies are cleared.  Most other opt-out systems rely on a cookie, so the opt-out itself disappears whenever cookies are deleted.  Given that most people who are concerned about their privacy clear their cookies periodically, it was important to EFF that Google’s opt-out mechanism survive that.
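
To see why a cookie-based opt-out is so fragile, here’s a tiny sketch; the cookie name and ad-server logic are invented, but the point is simply that the opt-out signal lives in the same cookie jar people empty when they’re worried about privacy.  The plug-in’s job, as described above, is to keep that signal in place even after everything else is cleared.

    # Why cookie-based opt-outs break: the opt-out is itself just a cookie.
    def should_track(request_cookies):
        """An ad server honors the opt-out only if the opt-out cookie is present."""
        return request_cookies.get("ad_optout") != "1"

    cookies = {"ad_optout": "1", "session": "abc123"}
    print(should_track(cookies))   # False: opt-out honored

    cookies.clear()                # the user clears cookies to protect her privacy...
    print(should_track(cookies))   # True: ...and the opt-out disappears along with them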

Even more interesting was Google’s decision to link the caption “Ads by Google” to a page that explains the behavioral targeting technique and lists the interest categories that have been assigned to you.  In other words, Google is making more transparent what they know, or think they know, about you.  You can then choose to remove some of those interest categories or to opt out of tracking altogether.

As Zimmer points out, Google could show more fine-grained detail regarding what they know about you.  But it’s still a fascinating step for a major corporation to take.  Even better, Google isn’t the only one creating pages that show users how they’re being viewed for marketing purposes.

BlueKai and eXelate Media run “behavioral exchanges,” selling information to companies about website visitors.  Like Google, they both provide pages, here and here, where people can choose to opt out of tracking altogether.  Otherwise, they can monitor and edit the interests associated with them.
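
The pattern behind all three of these preference pages is simple enough to sketch.  This is not any company’s actual API, just the general shape: a set of interest categories tied to a cookie, which the person can view, prune, or wipe entirely by opting out.

    # Sketch of the preference-manager pattern: view, edit, or opt out of an interest profile.
    class InterestProfile:
        def __init__(self, cookie_id, interests):
            self.cookie_id = cookie_id
            self.interests = set(interests)
            self.opted_out = False

        def view(self):
            return [] if self.opted_out else sorted(self.interests)

        def remove(self, interest):
            self.interests.discard(interest)

        def opt_out(self):
            """Stop targeting and drop the accumulated interests."""
            self.opted_out = True
            self.interests.clear()

    profile = InterestProfile("a91f", {"travel", "cooking", "mortgages"})
    profile.remove("mortgages")      # prune an interest you'd rather not share
    print(profile.view())            # ['cooking', 'travel']
    profile.opt_out()                # or step out of tracking altogether
    print(profile.view())            # []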

It’s hard to know how “transparent” all this really is to people who are not tech and privacy geeks.  Ultimately, companies need to improve data collection practices for everyone, not just people who care enough to find out.  And I would argue that it can’t be a model where a select few opt out and protect themselves, while the companies continue to do anything they want with everyone else’s data.  But it’s still a new way of managing your life online that doesn’t require as much investment in self-education and time as many of the other methods described by EFF in its Surveillance Self-Defense site.

Will this model become the dominant one in online tracking?  Compare the transparency of these companies with RealAge, an online quiz that’s just been outed as selling information to pharmaceutical companies who want to market directly to quiz takers.  What most consumers find instinctively distasteful is the feeling of being fooled.  RealAge claimed that it protected privacy by not giving personally identifiable information to the companies, and that it was “providing value in return for the information” with ads that might interest the quiz takers, but that’s not the kind of value RealAge users consciously “paid” for.  What BlueKai, eXelate Media, and Google have shown is an understanding that for many people, their privacy is violated not just when a company knows such-and-such information is associated with Mr. Tom Smith, but when any of that information is collected and shared without the full knowledge and consent of Tom Smith.

It’s obvious why RealAge chose to be vague about where their profits came from–would 27 million people have taken the test if the website had declared prominently that the information would be sold to pharmaceutical companies?  But it’s hard to see how sustainable that business model is.  Presumably, BlueKai and eXelate Media, as well as Google, will also get somewhat less data with their more transparent strategy.  But what model of business will still be around ten, twenty, fifty years in the future?

