Posts Tagged ‘Research’

Time for my second cup of coffee

Thursday, January 29th, 2009

Being a serious coffee drinker, I love studies that show drinking coffee is good for you.  This one is particularly gratifying—drinking three to five cups a day seems to be linked with a markedly decreased likelihood of developing dementia!

These are the kinds of studies that get a lot of press.  I guess all the other coffee addicts out there also want to hear their habit is good for them (note its Number One status among the most-emailed articles in the New York Times, Jan. 26, 2009).  It wasn’t too long ago, though, that caffeine was the devil incarnate of hot beverages and anyone who cared about her health felt pressured to quit her coffee addiction.  So what’s going on?

The researchers in this study are careful to point out that their findings only hint at a link between coffee and decreased risk of dementia.  No conclusions can be drawn; no recommendations can be made.

But they did feel that the study was unusual in the kind and amount of data available to them.  Of the original 2,000 subjects selected 21 years ago, 70% were still available for examination.  And because the subjects had reported their coffee consumption at the beginning of the study, there was less risk of people inaccurately recalling how much they drank.

It’s surely a rare thing to have a good longitudinal group of subjects, but ultimately, this finding still comes from a group of 2,000 people × 70% = 1,400 people.  And as the researchers pointed out, all self-reported data is subject to inaccuracies.  So, potential inaccuracies in a sample of 1,400 people—hmm, maybe I can’t congratulate myself on my coffee consumption after all.

When I talk to people about the research potential of online data collection through the Common Data Project, they almost always ask, “But can any conclusions from online data collection be accurate?”  For me, the question should be, “Could conclusions from online data collection be more accurate than what’s available now?”  How sure are we that the conclusions we’re drawing now are accurate?  Longitudinal studies in particular, which rely on self-reporting anyway, could greatly benefit from online data collection tools that reduce the costs of collecting, monitoring, and updating information on thousands, maybe even tens of thousands, of people.

I’m looking forward to seeing what future longitudinal studies will say about the health benefits of my coffee addiction.

A nonprofit wants to share its mailing list with some economists–would that bother you?

Thursday, March 13th, 2008

There’s a fascinating article in the New York Times Sunday Magazine on a study of what makes people donate, conducted by an interesting liberal-conservative pair of economists, Dean Karlan and John List. They wanted to do an empirical study of fundraising strategies, to find out what kinds of solicitations are the most successful. As the article points out, lab experiments on economic choices aren’t particularly realistic: “If you put a college sophomore in a room, gave her $20 to spend and presented her with a series of pitches from hypothetical charities, she might behave very differently than when sitting on her sofa sorting through letters from actual organizations.”

So Karlan and List found an opportunity for a field experiment, a partnership with an actual, unnamed nonprofit that allowed them to try different solicitation strategies and map the outcomes. They wrote solicitation letters that were similar, except some didn’t mention a matching gift, some mentioned a 1-to-1 match, some a 2-to-1, and some a 3-to-1. In the end, if a matching gift was mentioned, it increased the likelihood of a donation, but the size of the matching gift did not. As the author, David Leonhardt, notes, their findings and the findings of other economists in this area are significant to many people, from the nonprofits trying to be better fundraisers to economists studying human behavior, even to those who want to make tax policy more effective and efficient.

The article, however, didn’t mention whether the donors to the nonprofit had consented to their responses being shared with anyone other than the nonprofit. I’m not particularly concerned that donors’ privacy was egregiously violated. (I’m also not sure what’s required of nonprofits in this area.) I’m just curious to know: if they had been given the choice, would they have agreed to their information being shared with the economists? Obviously, the study wouldn’t have worked if potential donors had been told they would be sent different solicitation letters to measure their responses. But I think if most people on a nonprofit’s mailing list were asked whether they would explicitly allow their information to be used in academic studies, they would consent. They might want assurances that their individual identities would be protected—that no one would know Mr. So-And-So had given zero dollars to a cause he publicly champions. But they might very well be willing to help the nonprofit figure out how to be more effective and be part of an academic study that could shape public policy. They might even be curious to know how their giving compares to that of other donors in their income brackets or geographic areas.

Most people, myself included, have a knee-jerk antipathy to having their personal information shared with anybody other than the organization or company they give it to. But maybe we would feel differently if we were actually given some choices, if our personal identities could be protected, if sharing information could lead to more than just targeted advertising or more junk mail.

CDTF’s Presentation at the Workshop on Data Privacy

Friday, February 22nd, 2008

The Common Datatrust Foundation recently attended and made a short presentation at the Workshop on Data Privacy, hosted by Rutgers University’s Center for Discrete Mathematics & Theoretical Computer Science (DIMACS).

There were spirited conversations across disciplines as statisticians, mathematicians, computer scientists, and media experts discussed how to balance the public’s interest in both privacy and information sharing. The presentations ranged from tutorials on new security and privacy technology to the management of existing databases of personal information, such as the U.S. Census, as well as thought-provoking presentations on more abstract but highly relevant questions, such as what we mean when we say we want to protect “privacy.” As Professor Helen Nissenbaum from NYU Law School pointed out, certain kinds of information flow are appropriate for certain situations; there is no uniform way to understand privacy protection.

We were excited to see how our presentation provoked questions and conversations as well. Alex Selkirk introduced the concept of a “datatrust,” a secure, structured data storage system where each record in each dataset has a set of rules defining who may use it, what it may be used for, and with what level of anonymity it may be disclosed. The presentation focused primarily on one example of the current limits of data disclosure: the subprime mortgage crisis. Although there is a great deal of data held by banks and mortgage companies on subprime loans, investigators and researchers are unable to analyze the data because the data holders are bound by confidentiality agreements to individual borrowers. CDTF proposed that a datatrust, as a third party, could use new technology to anonymize and aggregate the data in a way that would allow researchers to query the loan data without forcing the disclosure of identifying details about the borrowers. Such data-sharing would further CDTF’s mission to both protect individual privacy and encourage the sharing of information for the public good.
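To make the datatrust idea more concrete, here is a minimal sketch of what per-record usage rules and an aggregation-gated query might look like. Everything below—the class names, the rule fields, the minimum-group-size threshold—is our own illustrative assumption, not the design Alex presented:

```python
# Illustrative sketch only: class names, fields, and the group-size
# threshold are assumptions, not CDTF's actual datatrust design.
from dataclasses import dataclass


@dataclass
class UsageRules:
    allowed_users: set      # who may query this record
    allowed_purposes: set   # what it may be used for
    min_group_size: int     # record may only appear in aggregates this large


@dataclass
class Record:
    data: dict
    rules: UsageRules


def average_interest_rate(records, user, purpose):
    """Average a field only over records whose rules permit this user and
    purpose, refusing to answer when the permitted group is too small to
    hide any individual borrower."""
    usable = [r for r in records
              if user in r.rules.allowed_users
              and purpose in r.rules.allowed_purposes]
    if not usable:
        return None
    # Honor the strictest aggregation requirement among the usable records.
    required = max(r.rules.min_group_size for r in usable)
    if len(usable) < required:
        return None  # too few records: a result could identify someone
    return sum(r.data["interest_rate"] for r in usable) / len(usable)
```

The point of the sketch is that the rules travel with each record, so a researcher can ask aggregate questions about subprime loans while the system itself refuses any query that would expose an individual borrower.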

We hope that the conversation we began at DIMACS will continue to engage conference participants and others in the coming months.
