1) Slate had an interesting take on the bullying story in Massachusetts and the prosecutor’s anger at Facebook for not providing information, i.e., evidence of the bullying. Apparently, Facebook provided basic subscriber information but resisted providing more without a search warrant. Emily Bazelon points out that this area of law is murky, and references the coalition forming around reforming the Electronic Communications Privacy Act, but her larger point is an extra-legal one. The evidence of bullying the DA was looking for was at one point public, even if eventually deleted. Kids and parents could take screenshots and preserve that evidence themselves, she notes, though people who are upset may not have the presence of mind to do so.
The case raises a lot of interesting questions about anonymity, privacy, and the values we have online. Anonymity on the Internet has been a rallying cry for so many people, but I wonder: if something is illegal in the offline world, should it effectively become legal online simply because anonymity lets you avoid prosecution? (Sexual harassment is a crime in the subway, too!) We now live in a world where many of us occupy space both online and offline. We used to think of them as completely separate spaces, and it’s true that the Internet gives us opportunities to do things, both good and bad, that we wouldn’t have offline. But it’s increasingly obvious that we need to transfer some of the rules we have about the offline world into the online one. For disability rights advocates, that includes pushing the definition of “public accommodation” to include online stores like Target, and suing them if their sites are not accessible to the blind using screen readers. For privacy advocates, that includes acknowledging that people have an expectation of privacy in their emails as well as their snail mail. Free speech in the offline world doesn’t mean you can say anything you want anywhere you want. Maybe it’s time to be more nuanced about how we protect free speech online as well.
2) It turns out Twitter is pretty good at predicting box office returns — what else might it predict?
3) Cases like this amaze me, because the parties are litigating a question that seems like a no-brainer. A New Jersey court recently held that an employee had an expectation of privacy in her personal Yahoo account, even if she accessed it on a company computer. Would we ever litigate whether an employee had an expectation of privacy in a piece of personal mail she brought to the office and decided to read at her desk?
4) The New York Times is acknowledging its readers’ online comments in separate articles, namely, this one describing readers’ reactions to federal mortgage aid. It’s a smart way to give online readers a sense that their comments are being read, and I wonder if this is where the “Letters to the Editor” page is going. I’ve been wondering who these readers are who are so happy to be the 136th comment on an article. But then, the people who write letters to the editor have always been people with extra time and energy. In a way, online comments expand the world of people willing to write a letter to the editor.
5) Would we feel differently about government data mining if the government were better at it? Mimi and I went to a talk at the NYU Colloquium on Information Technology and Society where Joel Reidenberg, a law professor at Fordham, argued that the transparency of personal information online is eroding the rule of law. One of his arguments against government data mining was that it doesn’t work, citing airport security, its inability to stop the underwear bomber, and its terribly inaccurate no-fly lists. Well, the Obama administration just announced a new system of airport security checks that uses intelligence-based data mining meant to be more targeted. It’s hard to know yet whether the new system will be better and smarter, but it raises a point that those opposed to data mining don’t seem to consider: what if the government were better at it? Could data mining be so precise that it avoids racial profiling? Are there other dangers to consider, and can they be warded off without shutting down data mining altogether?