Posts Tagged ‘Facebook’

Open Graph, Silk, etc: Let’s stop calling it a privacy problem

Tuesday, October 4th, 2011

The recent announcements of Facebook’s Open Graph update and Amazon Silk have provoked the usual media reaction about privacy. Maybe it’s time to give up on trying to fight data collection and data use issues with privacy arguments.

Briefly, the new features: Facebook is creating more ways for you to passively track your own activity and share it with others. Amazon, in the name of speedier browsing (on their new Kindle device), has launched a service that will capture all of your online browsing activity tied to your identity, and use it to do what sounds like collaborative filtering to predict your browsing patterns and speed them up.
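To make “collaborative filtering to predict your browsing patterns” a little more concrete, here is a minimal, purely illustrative Python sketch. It is a toy under stated assumptions, not Amazon’s actual method (which has not been published): it simply counts, across many users’ logged sessions, which page most often follows the current one, the kind of aggregate signal a proxy could use to decide what to prefetch.

```python
from collections import defaultdict

# Toy stand-in for aggregate browsing logs: each list is one user's ordered page visits.
# In a Silk-like service, these would be captured server-side and tied to accounts.
sessions = [
    ["news.example/home", "news.example/tech", "shop.example/kindle"],
    ["news.example/home", "news.example/tech", "news.example/science"],
    ["news.example/home", "shop.example/kindle", "shop.example/checkout"],
]

# Count how often page B is requested immediately after page A, across all users.
transitions = defaultdict(lambda: defaultdict(int))
for visits in sessions:
    for current_page, next_page in zip(visits, visits[1:]):
        transitions[current_page][next_page] += 1

def predict_next(page):
    """Return the page most often requested after `page`, or None if unseen."""
    followers = transitions.get(page)
    if not followers:
        return None
    return max(followers, key=followers.get)

# A caching proxy could prefetch this guess while the current page is still rendering.
print(predict_next("news.example/home"))  # -> "news.example/tech"
```

The point of the sketch is not the algorithm, which is trivial, but the input: it only works if someone is logging everyone’s browsing, tied to identity, in one place.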

Who cares if they know I'm a dog? (SF Weekly)

Amazon likens what Silk is doing to the role of an Internet Service Provider, which seems reasonable, but since regulators are getting wary of how ISPs leverage the data that passes through them, Amazon may not always enjoy that association.

EPIC (Electronic Privacy Information Center) has sent a letter to the FTC requesting an investigation of Facebook’s Open Graph changes and the new Timeline.

I’m not optimistic about the response. Depending on how the default privacy settings are configured, Open Graph may fall victim to another “Facebook ruined my diamond ring surprise by advertising it on my behalf” kerfuffle, which will result in a half-hearted apology from Zuckerberg and some shuffling around of checkboxes and radio buttons. The watchdogs aren’t as used to keeping tabs on Amazon, which has done a better job of meeting expectations around its use of customer data, so Silk may provoke a bit more soul-searching.

But I doubt it. In an excerpt from his book “Nothing to Hide: The False Tradeoff Between Privacy and Security,” published in the Chronicle of Higher Education earlier this year, Daniel J. Solove does a great job of explaining why we have trouble protecting individual privacy when it is weighed against [national] security. In the course of his argument he makes two points which are useful in thinking about protecting privacy on the internet.

He quotes South Carolina law professor Ann Bartow as saying,

There are not enough privacy “dead bodies” for privacy to be weighed against other harms.

There’s plenty of media chatter monitoring the decay of personal privacy online, but the conversations have been largely theoretical, the stuff of political and social theory. We have yet to have an event that crystallizes the conversation into a debate of moral rights and wrongs.

Whatevers, See No Evil, and the OMG!’s

At one end of the “privacy theory” debate are the Whatevers, whose blasé battle cry of “No one cares about privacy any more” is bizarrely intended to be reassuring. At the other end are the OMG!’s, who only speak of data collection and online privacy in terms of degrees of personal violation, which, equally bizarrely, has the effect of inducing public equanimity in the face of “fresh violations.”

However, as per usual, the majority of people exist in the middle, where so long as they “See no evil and Hear no evil,” privacy is a tab in the Settings dialog, not a civil liberties issue. Believe it or not, this attitude hampers both companies trying to get more information out of their users AND civil liberties advocates who desperately want the public to “wake up” to what’s happening. Recently, privacy lost to free speech – but more on that in a minute.

When you look into most of the privacy concerns that are raised about legitimate web sites and software (not viruses, phishing or other malicious efforts), they usually have to do with fairly mundane personal information. Your name or address being disclosed inadvertently. Embarrassing photos. Terms you search for. The web sites you visit. Public records digitized and put on the web.

The most legally harmful examples involve identity theft, which while not unrelated to internet privacy, falls squarely in the well-understood territory of criminal activity. What’s less clear is what’s wrong with “legitimate actors” such as Google and Facebook and what they’re doing with our data.

Which brings us to a second point from Solove:

“Legal and policy solutions focus too much on the problems under the Orwellian metaphor—those of surveillance—and aren’t adequately addressing the Kafkaesque problems—those of information processing.”

In other words, who cares if the servers at Google “know” what I’m up to? We can’t as yet really even understand what it means for a computer to “know” something about human activity. Instead, the real question is: what is Google (the company, comprised of human beings) deciding to do with this data?

What are People deciding to do with data?

By and large, the data collection that happens on the internet today is feeding into one flavor or another of “targeted advertising.” Loosely, that means showing you advertisements that are intended for an individual with some of your traits, based on information that has been collected about you. A male. A parent. A music lover. The changes to Facebook’s Open Graph will create a targeting field day. Which, on some level, is a perfectly reasonable and predictable extension of age-old advertising and marketing practices.
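As a rough, hypothetical illustration of what that targeting means mechanically (the ad names and traits below are invented for the example), here is a short Python sketch that simply picks the ad whose intended audience overlaps most with the traits a service has inferred about you:

```python
# Hypothetical ad inventory: each ad lists the traits it is targeted at.
ads = {
    "minivan_lease":   {"parent", "male"},
    "concert_tickets": {"music_lover"},
    "headphones":      {"music_lover", "male"},
}

# Traits inferred about one user from collected data (profile fields, likes, browsing).
user_traits = {"male", "parent", "music_lover"}

def pick_ad(traits):
    """Choose the ad whose target traits overlap most with the user's inferred traits."""
    return max(ads, key=lambda ad: len(ads[ad] & traits))

print(pick_ad(user_traits))  # -> "minivan_lease" (ties broken by insertion order)
```

Real ad systems are vastly more elaborate than this, but the shape is the same: the more traits that can be inferred about you, the finer the match.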

In theory, advertising provides social value in bridging information gaps about useful, valuable products; data-driven services like Facebook, Google and Amazon are simply providing the technical muscle to close that gap.

However, Open Graph, Silk and other data-rich services place us at the top of a very long and shallow slide down to a much darker side of information processing, which has less to do with the processing itself than with manipulation and the balance of power. And it’s the very length and gentle slope of that slide that make it almost impossible for us to talk about what’s really going wrong, and even make it somewhat pleasant to ride down. (Yes, I’m making a slippery slide argument.)

At the top of the slide are issues of values and dehumanization.

Recently, employers have been making use of credit checks to screen potential candidates, automatically rejecting applicants with low credit scores. Perhaps this is an ingenious, if crude, way to quickly filter down a flood of job applicants. While its utility remains to be proven, it’s with good reason that we pause to consider the unintended consequences of such a policy. In many areas, we have often chosen to supplement “objective,” statistical evaluations with more humanist, subjective techniques (the college application process being one notable example). We are also a society that likes to believe in second chances.

A bit further down the slide, there are questions of fairness.

Credit card companies have been using purchase histories as a way to decide who to push to pay their debt in full and who to strike a deal with. In other words, they’re figuring out who will be susceptible to “being guilted” and who’s just going to give them the finger when they call. This is a truly ingenious and effective way to lower the cost and increase the effectiveness of debt collection efforts. But is it fair to debtors that some people “get a deal” and others don’t? Surely, such inequalities have always existed. At the very least, it’s problematic that such practices are happening behind closed doors with little to no public oversight, all in the name of protecting individual privacy.

Finally, there are issues of manipulation where information about you is used to get you to do things you don’t actually want to do.

The fast food industry has been micro-engineering the taste, smell and texture of their food products to induce a very real food addiction in the human brain. Surely, this is where online behavioral data-mining is headed, amplified by the power to deliver custom-tailored experiences to individuals.

But it’s just the Same-Old, Same-Old

This last scenario sounds bad, but isn’t this simply more of the same old advertising techniques we love to hate? Is there a bright line test we can apply so we know when we’ve “crossed the line” over into manipulation and lies?

Drawing Lines

Clearly the ethics of data use and manipulation in advertising is something we have been struggling with for a long time and something we will continue to struggle with, probably forever. However, some lines have been drawn, even if they’re not very clear.

While the original defining study on subliminal advertising has since been invalidated, when it was first publicized, the idea of messages being delivered subliminally into people’s minds was broadly condemned. In a world of imperfect definitions of “truth in advertising,” it was immediately clear to the public that subliminal messaging (if it could be done) crossed the line into pure manipulation, and that was unacceptable. It was quickly banned in the UK and Australia, and by the American networks and the National Association of Broadcasters.

Thought Experiment: If we were to impose a “code of ethics” on data practitioners, what would it look like?

Here’s a real-world, data-driven scenario:

  • Pharmacies sell customer information to drug companies so that they can identify doctors who will be most “receptive” to their marketing efforts.
  • Drug companies spend $1 billion a year advertising online to encourage individuals to “ask your doctor about [insert your favorite drug here]” with vague happy-people-in-sunshine imagery.
  • Drug companies employ 90,000 salespeople (in 2005) to visit the best target doctors and sway them to their brands.

Vermont passed a law outlawing the use of the pharmacy data without patient consent on the grounds of individual privacy. Then, this past June 23rd, the Supreme Court decided it was a free-speech problem and struck down the Vermont law.

Privacy as an argument for hemming in questionable data use will probably continue to fail.

The trouble again is that theoretical privacy harms are weak sauce in comparison to data as a way to “bridge information gaps.” If we shut down use of this data on the basis of privacy, we also prevent the government from using the same data to prioritize distribution of vaccines to clinics in high-risk areas.

Ah, but here we’ve stumbled on the real problem…

Let’s shift the conversation from Privacy to Access

Innovative health care cost reduction schemes like care management are starved for data. Privacy concerns about broad, timely analysis of tax returns have prevented effective policy evaluation. Municipalities negotiating with corporations lack data to make difficult economic stimulus decisions. Meanwhile, private companies are drowning in data whose surface they have barely scratched.

At the risk of sounding like a broken record, since we have written volumes about this already:

  • The problem does not lie in the mere fact that data is collected, but in how it is secured and processed and in whose interest it is deployed.
  • Your activity on the internet, captured in increasingly granular detail, is enormously valuable, and can be mined for a broad range of uses that as a society we may or may not approve of.
  • Privacy is an ineffective weapon to wield against the dark side of data use and instead, we should focus our efforts on (1) regulations that require companies to be more transparent about how they’re using data and (2) making personal data into a public resource that is in the hands of many.

 

In the mix…Facebook “breach” of public data, data-mining for everyone, thinking through the Panton Principles, and BEST PRACTICES Act in Congress

Friday, July 30th, 2010

1) Facebook’s in privacy trouble again. Ron Bowes created a downloadable file containing information on 100 million searchable Facebook profiles, including the URL, name, and unique ID.  What’s interesting is that it’s not exactly a breach.  As Facebook pointed out, the information was already public.  What Facebook will likely never admit, though, is that there is a qualitative difference between information that is publicly available, and information that is organized into an easily searchable database.  This is what we as a society are struggling to define — if “public” means more public than ever before, how do we balance our societal interests in both privacy and disclosure?

2) Can data mining go mainstream? The article doesn’t actually say much, but it does at least raise an important question.  The value of data and data-mining is immense, as corporations and large government agencies know well.  Will those tools ever be available to individuals?  Smaller businesses and organizations?  And what would that mean for them?  It’s a big motivator for us at the Common Data Project — if data doesn’t belong to anyone, and it’s been collected from us, shouldn’t we all be benefiting from data?

3) In the same vein is a new blog by Peter Murray-Rust discussing open knowledge/open data issues, focusing on the Panton Principles for open science data.

4) A new data privacy bill has been introduced in Congress called “Building Effective Strategies to Promote Responsibility Accountability Choice Transparency Innovation Consumer Expectations and Safeguards” Act, aka “BEST PRACTICES Act.”  The Information Law Group has posted Part One of FAQs on this proposed bill.

Although the bill is still being debated and rewritten, some of its provisions indicate that the author of the bill knows a bit more about data and privacy issues than many other Congressional representatives.

  • The information regulated by the Act goes beyond the traditional, American definition of personally identifiable information.  “The definition of “covered information” in the Act does not require such a combination – each data element stands on its own and may not need to be tied to or identify a specific person. If I, as an individual, had an email address that was wildwolf432@hotmail.com, that would appear to satisfy the definition of covered information even if my name was not associated with it.”
  • Notice is required when information will be merged or combined with other data.
  • There’s some limited push to making more information accessible to users: “covered entities, upon request, must provide individuals with access to their personal files.” However, they only have to if “the entity stores such file in a manner that makes it accessible in the normal course of business,” which I’m guessing would apply to much of the data collected by internet companies.

Mark Zuckerberg: It takes a village to build trust.

Friday, June 4th, 2010

This whole brouhaha over Facebook privacy appears to be stuck revolving around Mark Zuckerberg.

We seem to be stuck in a personal tug-of-war with the CEO of Facebook, frustrated that a 26-year-old personally has so much power over so many.

Meanwhile, Mark Z. is personally reassuring us that we can trust Facebook, which on some level implies we must trust him.

But should any single individual really be entrusted with so much? Especially “a 26 year-old nervous, sweaty guy who dodges the questions.” Harsh, but not a completely invalid point.

As users of Facebook, we all know that it is the content of all our lives and our relationships to each other that make Facebook special. As a result, we feel a sense of entitlement about Facebook policy-making that we don’t feel about services that are in many ways far more intrusive and/or less disciplined about protecting privacy (e.g. ISPs, cellphone providers, search).

Another way of putting it is: Facebook is not Apple! And as a result, it needs a CEO who is a community leader, not a dictator of cool.

So we start asking questions like, why should Facebook make the big bucks at the expense of my privacy? Shouldn’t I get a piece of that?

(Google’s been doing this for over a decade now, but the privacy exposure over at Google is invisible to the end-user.)

At some point, will we decide we would rather pay for a service than feel like we’re being manipulated by companies who know more about us than we do and can decide whether to use that information to help us or hurt us, depending on profit margin? Here’s another example.

Or are there other ways to counterbalance the corporate monopoly on personal information? We think so.

In order for us to trust Facebook, Facebook needs to stop feeling like a benevolent dictatorship – one that may be open to feedback, but that still has a dictator who looks like he’s in need of a regent.

Instead Facebook the company should consider adopting some significant community-driven governance reforms that will at least give it the patina of a democracy.


(Even if, at the end of the day, it is beholden to its owners and investors.)

For some context, this was the sum total of what Mark Z. had to say about how important decisions are made at Facebook:

We’re a company where there’s a lot of open dialogue. We have crazy dialogue and arguments. Every Friday, I have an open Q&A where people can come and ask me whatever questions they want. We try to do what we think is right, but we also listen to feedback and use it to improve. And we look at data about how people are using the site. In response to the most recent changes we made, we innovated, we did what we thought was right about the defaults, and then we listened to the feedback and then we holed up for two weeks to crank out a new privacy system.

Nothing outrageous. About par for your average web service. (But then again, Facebook isn’t your average web service.)

However, this is what should have been the meat of the discussion about how Facebook is going to address privacy concerns: community agency and decision-making, not Mark Z.’s personal vision of an interwebs brimming with serendipitous happenings.

Facebook the organization needs to be trusted. So it might be best if Mark Z. backed out of the limelight and stopped being the lone face of Facebook.

How might that D8 interview have turned out if he had come on stage with a small group of Facebook users?

What governance changes would make you feel more empowered as a Facebook user?

Building a community: who’s in charge?

Friday, May 28th, 2010

From http://xkcd.com/

We’ve seen so far that for a community to be vibrant and healthy, people have to care about the community and the roles they play in it.  A community doesn’t have to be a simple democracy, one member/one vote on all decisions, but members have to feel some sense of agency and power over what happens in the community.

Of course, agency can mean a lot of things.

On one end of the spectrum are membership-based cooperatives, like credit unions and the Park Slope Food Coop, where members, whether or not they exercise it, have decision-making power built into the infrastructure of the organization.

On the other end are most online communities, like Yelp, Facebook, and MySpace.  Because the communities are all about user-generated content, users clearly have a lot of say in how the community develops.

But generally speaking, users of for-profit online services, even ones that revolve around user-generated content, don’t have the power to actually govern the community or shape its policies.

Yelp, for example, allows more or less anyone to write a review.  But the power to monitor and remove reviews for being shills, rants, or other violations of its terms of use is centralized in Yelp’s management and staff.  The editing is done behind closed doors, rather than out in the open with community input.  Given its profit model, it’s not surprising that Yelp has been accused repeatedly of using its editing power as a form of extortion when it tries to sell ads to business owners.

Even if Yelp is innocent, it doesn’t help that the process is not transparent, which is why Yelp has responded by at least revealing which reviews have been removed.

(As for Facebook, the hostility between the company and at least some of its users is obvious.  No need to go there again.)

And then there are communities that are somewhere in between, like Wikipedia.  Wikipedia isn’t a member-based organization in a traditional sense.  Community members elect three, not all, of the board members of Wikimedia.  Each community member does not have the same amount of power as another community member – people who gain greater responsibilities and greater status also have more power.  But many who are actively involved in how Wikipedia is run are volunteers, rather than paid staff, who initially got involved the same way everybody does, as writers and editors of entries.

There are some obvious benefits to a community that largely governs itself.

It’s another way for the community to feel that it belongs to its members, not some outside management structure.  The staff that runs Wikipedia can remain relatively small, because many volunteers are out there reading, editing, and monitoring the site.

Perhaps most importantly, power is decentralized and decisions are by necessity transparent.  Although not all Wikipedia users have access to all pages, there’s an ethos of openness and collaboration.

For example, a controversy recently erupted at Wikipedia.  Wikimedia Commons was accused of holding child pornography.  Jimmy Wales, the founder of Wikipedia, then started deleting images.  A debate ensued within the Wikipedia community about whether this was appropriate, a debate any of us can read.  Ultimately, it was decided that he would no longer have “founder” editing privileges, which had allowed him to delete content without the consent of other editors.  Wikimedia also claims that he never had final editorial control to begin with.  Whether or not Wikimedia is successful, it wants and needs to project a culture of collaboration, rather than personality-driven dictatorship.

It’s hard to imagine Mark Zuckerberg giving up comparable privileges to resolve the current privacy brouhaha at Facebook.

But it’s not all puppies and roses, as anyone who’s actually been a part of such a community knows.

It’s harder to control problems, which is why a blatantly inaccurate entry on Wikipedia once sat around for 123 days.  Some community members tend to get a little too excited telling other members they’re wrong, which can be a problem in any organization, but is multiplied when everyone has the right to monitor.

Some are great at pointing out problems but not so good at taking responsibility for fixing them.

And groups of people together can rathole on insignificant issues (especially on mailing lists), stalling progress because they can’t bring themselves to resolve “What color should we paint the bikeshed?” issues.

Wikipedia has struggled with these challenges over the past ten years.  It now limits access to certain entries in order to control accuracy, but arguably at some cost to the vibrancy of the community.  It is also trying to open up Wikipedia in new directions with a redesign it hopes will encourage more diverse groups to write and edit entries (though personally, I think the new design looks a lot like the old one).

Ultimately, someone still has to be in charge.  And when you value democracy over dictatorship, it’s harder but arguably more interesting, to figure out what that looks like.

In the mix…DNA testing for college kids, Germany trying to get illegally gathered Google data, and the EFF’s privacy bill of rights for social networks

Friday, May 21st, 2010

1) UC Berkeley’s incoming class will all get DNA tests to identify genes that show how well you metabolize alcohol, lactose, and folates. “After the genetic testing, the university will offer a campuswide lecture by Mr. Rine about the three genetic markers, along with other lectures and panels with philosophers, ethicists, biologists and statisticians exploring the benefits and risks of personal genomics.”

Obviously, genetic testing is not something to take lightly, but the objections quoted sounded a little paternalistic. For example, “They may think these are noncontroversial genes, but there’s nothing noncontroversial about alcohol on campus,” said George Annas, a bioethicist at the Boston University School of Public Health. “What if someone tests negative, and they don’t have the marker, so they think that means they can drink more? Like all genetic information, it’s potentially harmful.”

Isn’t this the reasoning of people who preach abstinence-only sex education?

2) Google recently admitted they were collecting wifi information during their Streetview runs.  Germany’s reaction? To ask for the data so they can see if there’s reason to charge Google criminally.  I don’t understand this.  Private information is collected illegally so it should just be handed over to the government?  Are there useful ways to review this data and identify potential illegalities without handing the raw data over to the government?  Another example of why we can’t rest on our laurels — we need to find new ways to look at private data.

3) EFF issued a privacy bill of rights for social network users.  Short and simple.  It’s gotten me thinking, though, about what it means that we’re demanding rights from a private company. Not to get all Rand Paul on people (I really believe in the Civil Rights Act, all of it), but users’ frustrations with Facebook and their unwillingness to actually leave make clear that what Facebook is offering is not just a service provided to a customer.  danah boyd has a suggestion — let’s think of Facebook as a utility and regulate it the way we regulate electric, water, and other similar utilities.

Building a community: the costs and benefits of a community built on a quid pro quo

Tuesday, May 18th, 2010

A couple of posts ago, I wrote about how Yelp, Slashdot and Wikipedia reward their members for contributing good content with stars, karma points, and increased status, all benefits reserved just for their registered members.  All three communities, however, share the benefits of what they do with the general public.  You don’t have to contribute a single edit to a Wikipedia entry to read all the entries you want.  You don’t have to register to read Yelp reviews, nor to read Slashdot news.  For Wikipedia and Slashdot, you don’t even have to register to edit/make a comment.  You can do it anonymously.

In other communities, however, those who want to benefit from the community must also give back to the community.

Credit unions, for example, have benefits for their members and their members only.  Credit unions and banks offer a lot of the same services – accounts, mortgages, and other loans – but they often do so on better terms than banks do.  However, while a bank will offer a mortgage to a person who does not have an account at that bank, a credit union will provide services only to credit union members.

It is a quid pro quo deal – the credit union member opens an account and the credit union provides services in return.

A more particular example is the Park Slope Food Coop, a cooperative grocery store to which I belong.  Many food coops operate on multiple levels of access and benefits.  Non-members can shop, but may not get as big a discount as members.   Those who want to be members can choose to pay a fee or to volunteer their time.  The Park Slope Food Coop eliminates all those choices – you have to be a member to shop, and you have to work to be a member.  Every member of the Coop is required to work 2 hours and 45 minutes every 4 weeks.  The exact requirement can vary depending on the type of work you sign up for, and the kind of work schedule you have, but that work requirement exists for every single adult member of the Coop.  In return, you get access to the Coop’s very fresh and varied produce and goods, often of higher quality and at lower prices than other local stores.

Again, it’s quid pro quo, members work and they get access to food in return.

This is not to suggest that, under these arrangements, members of credit unions and the Coop are acting in a mercenary way.  Quid pro quo doesn’t just mean “you scratch my back, I scratch yours.”  It means you do something and get something of equal value in return.

There are some real advantages to limiting benefits for community members and community members only.

The incentive to join is clear.  The community is often more tight-knit.  Most of all, there is no conflict of interest between what’s good for the community and what’s good for the members.  A bank serves its customers, but it has an incentive to make money that goes beyond protecting its customers. Credit unions were not untouched by the financial crisis, but they were certainly not as entangled as commercial banks and are still considered good places to get loans if you have good credit.

There are also real disadvantages.

As both examples make clear, such communities tend to be small and local.  The Coop has more than 12,000 members, a lot for a physically small space, but nowhere close to the numbers that visit large supermarkets.  Credit unions boast that they serve 186 million people worldwide, but any particular credit union is much smaller.  Even the credit union associated with an employer as large as Microsoft is nowhere near as large as a national bank.  It’s difficult to scale the benefits of a credit union up.

Even if the group is kept small, the costs of monitoring this kind of community are obviously high.  In an organization like the Coop, someone needs to make sure everyone is doing their fair share of the work.  Stories about being suspended, applying for “amnesty,” and trying to hide spouses and partners abound.  The Coop is the grocery store non-members love to hate and a favorite subject in local media, with stories popping up every couple of years with headlines like, “Won’t Work for Food: Horror Stories of the World’s Largest Member-Owned Cooperative Grocery Store” and “Flunking Out at the Coop.”

Personally, I think the Coop functions surprisingly well, proven by its relative longevity among cooperative endeavors, but it’s certainly not a utopian grocery store where people hold hands and sing “Kumbaya” over artichokes.

Notably, both examples are also communities that mainly operate offline.  The Internet with its ethos of openness generally doesn’t favor sites that limit access only to members.  Registered users may need to log on to view their personal accounts, but few sites really limit the benefits of the site to members alone.

So is there any online community that limits the benefits of the community as strictly to members as my two offline examples?

The first example I could come up with was Facebook, and it’s actually a terrible one.  Facebook’s been all over the news for the changes that make its users’ information more publicly available, and new sites like Openbook are making obvious how public that information is.  At the same time, though, that public-ness is still not that obvious to the average Facebook user.  Information is primarily being accessed by third party partners (like Pandora), other sites using Facebook’s Open Graph, and other Facebook users (Community Pages, Like buttons across the Internet).  Facebook profiles can show up in public search results, but when you go to facebook.com, the first thing you see is a wall.  If you register, you can use Facebook.  If not, you can’t.

Facebook is perhaps most accurately an example of a community that looks closed but isn’t.  As danah boyd points out,

If Facebook wanted radical transparency, they could communicate to users every single person and entity who can see their content…When people think “friends-of-friends” they don’t think about all of the types of people that their friends might link to; they think of the people that their friends would bring to a dinner party if they were to host it. When they think of everyone, they think of individual people who might have an interest in them, not 3rd party services who want to monetize or redistribute their data. Users have no sense of how their data is being used and Facebook is not radically transparent about what that data is used for. Quite the opposite. Convolution works. It keeps the press out.

In a way, it shouldn’t surprise us that Facebook is pushing information public.  Its whole economic model is based on information, not on providing a service to its users.

Which leads me to the one good example of an online community where you really have to join to benefit — online dating sites.  Match.com, eHarmony, OkCupid — none of them let you look at other members’ profiles before you join.  OkCupid is free, but the others rely on an economic model of subscriptions, not advertising.

It seems dating is in that narrow realm of things people are willing to pay for on the Internet.

So I’m left wondering, is it possible to set up a free, large-scale, online community where benefits are limited to its members?  What are the other costs and benefits of a community where you have to give to get?  Closed versus open?  And do the benefits outweigh the costs?

In the mix…Linkedin v. Facebook, online identities, and diversity in online communities

Friday, May 14th, 2010

1) Is Linkedin better than Facebook with privacy? I’m not sure this is the right question to ask. I’m also not sure the measures Cline uses to evaluate “better privacy” get to the heart of the problem.  The existence of a privacy seal of approval, the level of detail in the privacy policy, the employment of certified privacy professionals … none of these factors address what users are struggling to understand, that is, what’s happening to their information.  73% of adult Facebook users think they only share content with friends, but only 42% have customized their privacy settings.

Ultimately, Linkedin and Facebook are apples to oranges.  As Cline points out himself, people on Linkedin are in a purely professional setting.  People who share information on Linkedin do so for a specific, limited purpose — to promote themselves professionally.  In contrast, people on Facebook have to navigate being friends with parents, kids, co-workers, college buddies, and acquaintances.  Every decision to share information is much more complicated — who will see it, what will they think, how will it reflect on the user?  Facebook’s constant changes to how user information is shared make these decisions even more complicated — who can keep track?

In this sense, Linkedin is definitely easier to use.  If privacy is about control, then Linkedin is definitely easier to control.  But does this mean something like Facebook, where people share in a more generally social context, will always be impossible to navigate?

2) Mark Zuckerberg thinks everyone should have a single identity (via Michael Zimmer).  Well, that would certainly be one way to deal with it.

3) But most people, even the “tell-all” generation, don’t really want to go there.

4) In a not unrelated vein, Sunlight Labs has a new app that allows you to link data on campaign donations to people who email you through Gmail.  At least with regards to government transparency, Sunlight Labs seems to agree with Mark Zuckerberg.  I think information about who I’ve donated money to should be public (go ahead, look me up), but it does unnerve me a little to think that I could email someone on Craigslist about renting an apartment and have this information just pop up.  I don’t know, does the fact that it unnerves me mean that it’s wrong?  Maybe not.

5) Finally, a last bit on the diversity of online communities: it may be more necessary than I claimed, though with a slightly different slant on diversity.  A new study found that the healthiest communities are “diverse” in that new members are constantly being added.  Although they were looking at chat rooms, which to me seems like the loosest form of community, the finding makes a lot of sense to me.  A breast cancer survivors’ forum may not care whether they have a lot of men, but they do need to attract new participants to stay vibrant.

Building a community: Does a community have to be diverse to be successful?

Tuesday, May 11th, 2010

Last year, Wikipedia made headlines when a survey commissioned by the Wikimedia Foundation discovered only 13% of Wikipedia’s writers and editors are women.  Among people who read but don’t write or edit for Wikipedia, 69% are men and 31% are women.  The same survey found that Wikipedians were much more highly educated than the rest of the population, with 19% saying they have a Master’s degree and 4.4% saying they have a Ph.D.

Facebook and MySpace have similarly gotten press for news that the demographics of the social networks’ members vary across race, class, and education.

It shouldn’t surprise us that these sites, or any other sites, would be more popular among certain demographic groups.

All communities, online or off, tend to reflect their founders and the worlds they come from.

Mark Zuckerberg, Facebook

Facebook was founded by Mark Zuckerberg while he was at Harvard.  Facebook ended up more popular with Ivy League students.  Wikipedia was founded using wiki technology and principles from the open source software movement.  Wikipedians, not surprisingly, are “mostly male computer geeks,” as described by founder Jimmy Wales.  Yelp started in San Francisco, and the irreverent, young tone echoes the tone of many Silicon Valley start-ups, attracting irreverent, young people.  It’s not just that the sites’ founders attract people who are like them.  They set the tone, based on values they hold, that tend to be shared by people similar to them.

Jimmy Wales, Wikipedia

Even as sites grow and expand beyond the first adopters, communities can develop cultures that are more attractive to certain groups than others.

Women are allegedly more active than men on Facebook, whereas the opposite is true on Twitter. 

Although both sites involve sharing information, the mechanisms are quite different.

I’m not going to hazard a guess as to why women are more drawn to Facebook or why men are more drawn to Twitter.  I do think it’s funny when writers forget their personal preferences might not be universal.  This writer, this writer, and this writer, who agree Twitter is much better than Facebook — all men.

Martha Stewart

Here’s one prominent exception, Martha Stewart, who says,

First of all, you don’t have to spend any time on it, and, second of all, you reach a lot more people. And I don’t have to ‘befriend’ and do all that other dippy stuff that they do on Facebook.

Which sounds like a stereotypically male sort of thing to say.

But it is worth noting that certain ways of interacting are more appealing to some groups than others, even when sites are not being marketed specifically to one group or another.

Why does it matter?  A successful community is not necessarily a diverse one.

A forum for breast cancer patients won’t measure the health of its community by the number of men on it.

One of the most attractive things about the Internet is its ability to concentrate people with esoteric interests.

However, for communities with more universal goals, diversity is an important issue.

It makes sense that Wikipedia has publicly been working on making its community of writers and editors more diverse.  If Wikipedia’s goal is to create “a world in which every single human being can freely share in the sum of all knowledge,” it has to include the knowledge and perspective of people other than male computer geeks.  (I can’t say for sure, but I would bet there were a couple of male computer geeks involved in the writing of this rather literal exposition of the “sanitary napkin.”)

As part of that plan, Wikipedia is rolling out a redesign, which they hope will encourage more people to contribute their knowledge.

Whether or not the redesign drastically affects Wikipedia’s numbers, the plan will likely involve a delicate balancing act.

Wikipedia needs to attract new members without alienating the original members of its community.

If the old interface was intimidating to some people, it was probably equally attractive to others.  Those who didn’t find it intimidating could identify as part of a hard-core, committed group, an identity that can be crucial for energizing early members of a community.

It’s the problem of any community that wants to grow – how do you grow without destroying the sense of community that helped it start in the first place?

Large organizations have traditionally tried to maintain a sense of community with local chapters.

The Sierra Club and Habitat for Humanity International are both built on a network of local affiliates that have a certain amount of autonomy.  The Catholic Church and other religious organizations operate using a similar organizational structure, though with varying degrees of centralized control.

Online, the examples are fewer.

In fact, the only example of a community in our study that’s grown obviously beyond the boundaries of the original group is Facebook, and as I’ve discussed earlier, it’s an outlier.

It contains communities but is not actually a community in and of itself.  Despite the demographic differences between Facebook and MySpace, Facebook has arguably grown so big, those differences have become negligible.  It almost doesn’t matter if Facebook is somewhat more popular among certain groups when it has 400 million active users.  At the same time, though, each Facebook user’s experience of Facebook is filtered through his or her friends.  Even though the dilemma of whether to accept a friend request from a parent has become a common joke, most people on Facebook haven’t directly experienced how quickly Facebook has expanded.

This may be why Facebook has managed to transcend its origins as an online social network for Harvard students so quickly.  The feeling of intimacy and connection hasn’t changed for the average user.  It’s questionable whether Facebook can maintain that sense of localized community with the various changes it’s made to how user information is shared, but Facebook is gambling that it can.

In the mix…Everyone’s obsessed with Facebook

Friday, May 7th, 2010

UPDATE: One more Facebook-related bit, a great graphic illustrating how Facebook’s default sharing settings have changed over the past five years by Matt McKeon. Highly recommend that you click through and watch how the wheel changes.

1) I love when other people agree with me, especially on subjects like Facebook’s continuing clashes with privacy advocates. Says danah boyd,

Facebook started out with a strong promise of privacy…You had to be at a university or some network to sign up. That’s part of how it competed with other social networks, by being the anti-MySpace.

2) EFF has a striking post on the changes made to Facebook’s privacy policy over the last five years.

3) There’s a new app for people who are worried about Facebook having their data, but it means you have to hand it over to this company which also states, it “may use your info to serve up ads that target your interests.” Hmm.

4) Consumer Reports is worried that we’re oversharing, but if we followed all its tips on how to be safe, what would be the point of being on a social network? On its list of things we shouldn’t do:

  • Posting a child’s name in a caption
  • Mentioning being away from home
  • Letting yourself be found by a search engine

What’s the fun of Facebook if you can’t brag about the pina colada you’re drinking on the beach right at that moment? I’m joking, but this list just underscores that we can’t expect to control safety issues solely through consumer choices. Another thing we shouldn’t do is put our full birthdate on display, though given how many people put details about their education, it wouldn’t necessarily be hard to guess which year someone was born. Consumer Reports is clearly focusing on its job, warning consumers, but it’s increasingly obvious privacy is not just a matter of personal responsibility.

5) In a related vein, there’s an interesting Wall St. Journal article on whether the Internet is increasing public humiliation. One WSJ reader, Paul Cooper, had this to say:

The simple rule here is that one should always assume that everything one does will someday be made public. Behave accordingly. Don’t do or say things you don’t want reported or repeated. At least not where anyone can see or hear you doing it. Ask yourself whether you trust the person who wants to take nude pictures of you before you let them take the pictures. It is not society’s job to protect your reputation; it’s your job. If you choose to act like a buffoon, chances are someone is going to notice.

Like I said above, in a world where “public” means really, really public, forever and ever, and “private” means whatever you manage to keep hidden from everyone you know, protecting privacy isn’t only a matter of personal responsibility. The Internet easily takes actions that are appropriate in certain contexts and republishes them in other contexts. People change, which is part of the fun of being human. Even if you’re not ashamed of your past, you may not want it following you around in persistent web form.

Perhaps on the bright side, we’ll get to a point where we can all agree everyone has done things that are embarrassing at some point and no one can walk around in self-righteous indignation. We’ve seen norms change elsewhere. When Bill Clinton was running for president, he felt compelled to say that he had smoked marijuana but had never inhaled. When Barack Obama ran for president 16 years later, he could say, “I inhaled–that was the point,” and no one blinked.

6) The draft of a federal online privacy bill has been released. In its comments, Truste notes, “The current draft language positions the traditional privacy policy as the go to standard for ‘notice’ — this is both a good and bad thing.” If nothing else, the “How to Read a Privacy Policy” report we published last year had a similar conclusion, that privacy policies are not going to save us.

Building a community: the implications of Facebook’s new features for privacy and community

Thursday, May 6th, 2010

As I described in my last post, the differences between MySpace and Facebook are so stark, they don’t feel like natural competitors to me.  One isn’t necessarily better than the other.  Rather, one is catering to people who are looking for more of a public, party atmosphere, and the other is catering to people who want to feel like they can go to parties that are more exclusive and/or more intimate, even when they have 1000 friends.

But this difference doesn’t mean that one’s personal information on Facebook is necessarily more “private” than on MySpace.  MySpace can feel more public.  There is no visible wall between the site and the rest of the Internet-browsing community.  But Facebook’s desire to make more of its users’ information public is no secret.  For Facebook to maintain its brand, though, it can’t just make all information public by default.  This is a company that grew by promising Harvard students a network just for them, then Ivy League students a network just for them, and even now, it promises a network just for you and the people you want to connect with.

Facebook needs to remain a space where people feel like they can define their connections, rather than be open to anyone and everyone, even as more information is being shared.

And just in time for this post, Facebook rolled out new features that demonstrate how it is trying to do just that.

Facebook’s new system of Connections, for example, links information from people’s personal profiles to community pages, so that everyone who went to, say, Yale Law School can link to that page. Although you could see other “Fans” of the school on the school’s own page before, the Community page puts every status update that mentions the school in one place, so that you’re encouraged to interact with others who mention the school.  The Community Pages make your presence on Facebook visible in new ways, but primarily to people who went to the same school as you, who grew up in the same town, who have the same interests.

Thus, even as information is shared beyond current friends, Facebook is trying to reassure you that mini-communities still exist.  You are not being thrown into the open.

Social plug-ins similarly “personalize” a Facebook user’s experience by accessing the user’s friends.  If you go to CNN.com, you’ll see which stories your friends have recommended.  If you “Like” a story on that site, it will appear as an item in your Facebook newsfeed.  The information that is being shared thus maps onto your existing connections.

The “Personalization” feature is a little different in that it’s not so much about your interactions with other Facebook users, but about your interaction with other websites.  Facebook shares the public information on your profile with certain partners.  For example, if you are logged into Facebook and you go to the music site Pandora, Pandora will access public information on your profile and play music based on your “Likes.”

This experience is significantly different from the way people explore music on MySpace.  MySpace has taken off as a place for bands to promote themselves because people’s musical preferences are public.  MySpace users actively request to be added to their favorite bands’ pages, they click on music their friends like, and thus browse through new music.  All of these actions are overt.

Pandora, on the other hand, recommends new music to you based on music you’ve already indicated you “Like” on your profile.   But it’s not through any obvious activity on your part.  You may have noted publicly that you “Like” Alicia Keys on your Facebook profile page, but you didn’t decide to actively plug that information into Pandora.  Facebook has done it for you.

Depending on how you feel about Facebook, you may think that’s wonderfully convenient or frighteningly intrusive.

And this is ultimately why Facebook’s changes feel so troubling for many people.

Facebook isn’t ripping down the walls of its convention center and declaring an open party. As Farhad Manjoo at Slate says, Facebook is not tearing down its walls but “expanding them.”

Facebook is making peepholes in certain walls, or letting some people (though not everyone) into the parties users thought were private.

This reinforces the feeling that mini-communities continue to exist within Facebook, something the company should try to do as it’s a major draw for many of its users.

Yet the multiplication of controls on Facebook for adjusting your privacy settings makes clear how difficult it is to share information and maintain this sense of mini-communities.  There are some who suspect Facebook is purposefully making it difficult to opt-out.  But even if we give Facebook the benefit of the doubt, it’s undeniable that the controls as they were, plus the controls that now exist for all the new features, are bewildering.  Just because users have choices doesn’t mean they feel confident about exercising them.

On MySpace, the prevailing ethos of being more public has its own pitfalls.  A teenager posting suggestive photos of herself may not fully appreciate what she’s doing.  At the least, though, she knows her profile is public to the world.

On Facebook, users are increasingly unsure of what information is public and to whom.  That arguably is more unsettling than total disclosure.

