Posts Tagged ‘Demography’

P.S. 8 in Brooklyn Heights gets an F. Really?!

Friday, September 19th, 2008

Is this a case of “question the data when it tells you something you don’t want to hear”?

Or is there something genuinely broken about NYC’s grading system? How could a school in a neighborhood where the median home sale price for June-August of this year was $2.775 million receive a failing grade? For that matter, what is an F going to do to the median home sales price? (But that’s neither here nor there.)

For what it’s worth, I think the grading system sounds reasonably well thought out: 5% is based on attendance, 10% on parent surveys (which actually gives P.S. 8’s parents an opportunity to influence the grade by expressing their satisfaction with the school’s various extra-curricular activities).

The lion’s share of the grade is based on year-over-year improvement (not for a particular grade level, as in this year’s 4th graders versus last year’s 4th graders, but for the same class of students: this year’s 4th graders versus last year’s 3rd graders). It’s a subtle but important difference.

In effect, what’s being measured is year-over-year improvement of the students, not the teachers. The remaining 25% is based on median scores (calculated against other schools and against schools with similar demographics).
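
To make the arithmetic concrete, here is a minimal sketch of that weighting in Python. The weights come from the breakdown above (taking the “lion’s share” to be the remaining 60%); the 0–100 scaling of each component, the function name, and the example numbers are my own assumptions, not the DOE’s actual formula.

    # Minimal sketch of the progress-report weighting described above.
    # The 0-100 scaling and the example inputs are assumptions, not the
    # Department of Education's actual methodology.
    def composite_score(attendance, parent_survey, cohort_progress, peer_median):
        """Combine four components, each assumed to be on a 0-100 scale."""
        return (0.05 * attendance          # attendance
                + 0.10 * parent_survey     # parent surveys
                + 0.60 * cohort_progress   # year-over-year progress of the same cohort
                + 0.25 * peer_median)      # median scores vs. peer/similar schools

    # Even strong attendance and happy parents can't offset weak cohort progress:
    print(composite_score(attendance=95, parent_survey=85,
                          cohort_progress=30, peer_median=50))  # -> 43.75

Whatever the exact cutoffs for each letter grade, the progress component dominates; attendance and parent satisfaction together can only nudge the result.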

Still, F seems a bit harsh. And the consequences of receiving an F sound harsh too.

Mr. Klein has said that schools that receive a D or an F two years in a row could be closed or the principal could be removed.

So, is NYC really better off without P.S. 8 than with it?

Perhaps there is something not quite right about the algorithm. Unless I’m completely misunderstanding this third-hand account of how the algorithm works, recent demographic changes in the P.S. 8 district may be the reason why the school received an F, as opposed to a D or a C. (P.S. 8 received a C in 2007.)

A quarter of the students now qualify for free lunch, compared with 98 percent in 2002, and more than half the students are white or Asian-American, up from 11 percent in 2002. Most of these changes are happening among the youngest children, before tests begin in the third grade.

In short, P.S. 8 is now competing against some of the most privileged NYC schools. However, P.S. 8’s privileged (aka white) kids are, for the most part, still too young to be tested. As a result, the burden of competing with the city’s other well-off schools is carried mostly by P.S. 8’s less privileged middle-schoolers.
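
Here is a toy illustration of that mismatch. The school-wide figures come from the excerpt above; the tested-cohort figures are invented purely for illustration.

    # Toy illustration of the peer-group mismatch. School-wide figures come
    # from the NYT excerpt; the tested-cohort figures are hypothetical.
    school_wide   = {"free_lunch": 0.25, "white_or_asian": 0.55}  # used to pick peer schools
    tested_cohort = {"free_lunch": 0.70, "white_or_asian": 0.20}  # older grades that actually take the tests (assumed)

    print("Peer schools are chosen to match:", school_wide)
    print("But the test scores come from:   ", tested_cohort)
    # Result: the school is benchmarked against affluent peers while its scores
    # still reflect an older, less affluent cohort.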

However, this rather significant oversight in the city’s algorithm is not what P.S. 8’s parents (at least the ones who were quoted in the NYT article) are putting forth as evidence of the grading system’s inadequacy.

Several P.S. 8 parents suggested that the F said more about the grading system than the school. They cited events like the annual read-a-thon fund-raiser, an art program that culminated in student work’s being showcased at the Guggenheim, and the school’s recent selection as Brooklyn’s Rising Star Public Elementary School for 2008 by Manhattan Media, a publisher of weekly newspapers.

But, what does the Guggenheim have to do with anything? Not much, if you buy into the city’s reasoning for what the grading system is trying to measure.

“What we want with progress reports is to measure what schools add to kids, not what kids bring to the schools,” Mr. Liebman said.

So what’s the issue here? A fundamental disagreement about what should be graded? Dissatisfaction with how it’s graded? Sour grapes over the results of the grades?

It’s hard to imagine that anyone would have made a peep about the city’s algorithm if P.S. 8 had received an A. At the end of the day, we all lend too much credence to data that tells us what we want to hear and too readily discount data that surprises us with bad news. It’s a modern-day variant of “shooting the messenger.” Although in the case of P.S. 8 the messenger does seem to have munged up the message a bit, it wasn’t garbled nearly as badly as some would like to believe.

When Privacy Doesn’t Matter

Friday, November 3rd, 2006

Last Thursday MSR hosted Professor Sam Clark from the University of Washington for a talk entitled “Relational Databases in the Social and Health Sciences: The View from Demography.” As someone interested in using data to drive decision-making, I found it interesting to hear how empirical data can be used to model the impact of different policies on a societal problem.

My main take-aways from the talk were as follows:

  • Social scientists today rarely use relational database (RDBMS) technology, and when they do, they often use antique software. Much of the analysis is apparently done in statistical packages (I’m guessing SAS and the like), which lack the data management features that become indispensable when working with larger datasets. For social scientists in general, the potential of current database technologies is only just becoming apparent.
    • As I am not familiar with many of the alternatives, had I been physically at the talk on campus rather than watching online, I would have liked to ask what has changed to make the RDBMS more attractive than it was before. I can only surmise from the talk that large datasets have only recently become available to social scientists, and that earlier datasets were too small to warrant one.
    • Even now, Clark said, his colleagues would consider a dataset of around 500 megabytes to be “large.”
  • Many early attempts at moving demographic data into relational structures failed because the developers of those systems underestimated how much the schema design would affect the ways demographers actually use the data.
  • Breadth of data has significant value to the longitudinal studies social scientists are conducting. Yet, lack of agreement on how to collect and store data is hampering their ability to interrelate data sets. Therefore, developing and agreeing on a standard is very desirable. (Incidentally, this problem is not exclusive to demographic datasets.)
  • Clark has done several iterations on a standard schema, particularly for capturing “Event-Influence-State” type datasets, commonly used in demography (a rough, illustrative sketch of this kind of structure follows this list).
    • The Structured Population Event History Register (SPEHR).
    • One example he shared with us assessed “the impact of male circumcision as an HIV prevention strategy”. By using a longitudinal study (2 years, 3,000 people) to feed his simulation, he was able to demonstrate likely outcomes of the policy intervention in different phases of the epidemic: data to inform a real-world policy decision. 🙂
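
For readers who, like me, wanted something more concrete, here is a rough guess at what an event-history schema of this kind might look like, sketched with Python’s built-in sqlite3 module. It is not the actual SPEHR schema; the table and column names are my own, and it is only meant to illustrate the individuals / events / states structure described in the talk.

    # Rough, hypothetical sketch of an "Event-Influence-State" style schema.
    # NOT the actual SPEHR schema; names and structure are assumptions.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE individual (
            individual_id INTEGER PRIMARY KEY,
            sex           TEXT,
            birth_date    TEXT
        );
        CREATE TABLE event (
            event_id      INTEGER PRIMARY KEY,
            individual_id INTEGER REFERENCES individual(individual_id),
            event_type    TEXT,   -- e.g. 'marriage', 'HIV test', 'migration', 'death'
            event_date    TEXT
        );
        CREATE TABLE state (
            state_id      INTEGER PRIMARY KEY,
            individual_id INTEGER REFERENCES individual(individual_id),
            state_type    TEXT,   -- e.g. 'married', 'HIV positive'
            start_event   INTEGER REFERENCES event(event_id),
            end_event     INTEGER REFERENCES event(event_id)  -- NULL while the state is ongoing
        );
    """)

    # With the data in this shape, a longitudinal question such as "person-years
    # spent in each state" becomes a join and a GROUP BY rather than a one-off
    # script in a statistics package.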

He also shared lots of anecdotal statistics about the AIDS epidemic in Africa, massive infection rates and death rates, which I continue to find mind-boggling: What would day-to-day living look like in the U.S. if 20% of Americans were infected with HIV? Or if we suddenly had millions of “dual-orphans” (normally a rare phenomenon) to raise? How will Africa recover?

All in all a very interesting talk with food for thought on many fronts, but one issue was conspicuously missing: Privacy.

Up Next: How to evaluate a privacy statement when you’re dying of AIDS.

