So says an article in today's Science Daily, which reports on a recent Northwestern study of children from the Philippines. The study finds that the children from the Philippines have much lower levels of C-reactive protein (CRP) than their U.S. counterparts, which indicates better resistance to disease. Exposure to germs was much higher for the children in the Philippines.
So what's wrong with this study? It's a very tenuous association, and from what I can gather in the articles, no attempt was made to ensure the children in the U.S. who were compared to the children in the Philippines were similar in other ways. They might be different in CRP due to other environmental or hereditary factors. Perhaps it's the weather? The diet? One of any number of things could account for the difference.
In addition, the study appears to ignore the much higher infant mortality rate and much lower life expectancy in the Philippines (you can try www.indexmundi.com for life expectancy and other information by country). In other words, even if higher germ exposure does mean lower CRP, does it actually mean less disease and longer life? The broad indication is that it does not.
In order for the study to be valid, it needs to adjust for whatever inherent differences (in addition to germ exposure) exist between Filipino and U.S. children, and then see if CRP levels are still different. An even better way to do such a study would be to study children living in similar environments (same place, socio-economic situation, etc.) and determine if the ones exposed to more germs had lower levels of CRP when they reached adulthood.
I've seen articles (see this for example, but I can't find a more definitive one at this time) that indicate that children with early exposure to farm animals have fewer allergies, but nothing showing that exposure to more serious germs is good. And some of the germs that we are exposed to are more than just common germs--they are deadly. It might be that those who are exposed to these deadly germs early, and live, are much better off later in life, but that is no reason to expose them to those germs unnecessarily. Of course, you wouldn't give your child a deadly disease so that, if they survived, they'd be resistant to it later in life.
We live in a society that is sometimes alarmist concerning germs, and I have written about this. Yet this doesn't mean that, on the whole, a clean environment does not promote good health, and the article cited above seems to offer only the most tenuous of indications that it may not.
Wednesday, December 9, 2009
Thursday, October 29, 2009
Why Swine Flu is not a bunch of hogwash
This updates my previous blog post, "Why Swine Flu is a bunch of hogwash."
Things have changed a bit in the months since that blog, and the hysteria I cited has leveled off. President Obama did declare a swine flu emergency a couple days ago, but I think that was a good idea.
Here is what has changed:
1) Swine flu deaths have been at epidemic levels the last three weeks. The chart below (from the CDC) shows flu and pneumonia deaths as a percentage of all deaths. The upper black line indicates epidemic level, and the red line is the current level. The graph shows four years of weekly figures. While this graph doesn't look too serious, and 2008 levels were much further above the threshold at their peak, the scary thing here is that it is so early in the season. This graph serves as a reminder, too, that every year the flu kills thousands of people, and the flu vaccine could prevent a large number of those deaths.
2) Hospitals are already getting crowded. One of the big problems with a real epidemic is the overcrowding of hospitals. This means that the really sick people cannot get treatment, and that is part of the reason the emergency was declared. See this article in USA Today about overcrowding. OK, so it's USA Today, a paper that loves hyperbole, but, again, it's early in the season and any indication of overcrowding at this point is scary.
3) The vaccine is not yet fully available. The regular flu vaccine has been out for weeks. Unfortunately, almost none of the flu this year seems to be covered by that vaccine. The majority seems to be 2009 H1N1 (the swine flu). See this chart for a breakdown. Note that the orange/brown is 2009 H1N1 and that the yellow means the sample was not tested for sub-type, so almost all of the typed flu is swine flu.
That's why I am worried. The other concern is that, even when the vaccine does come out, people won't take it. See my brother's blog about why you should and the crazies who say you should not.
Thursday, October 15, 2009
Redskins are lucky to play bad teams, but how lucky?
A recent article in Yahoo Sports pointed out that the Washington Redskins are the first team in history to play six winless teams in a row. Here is their schedule so far (also according to the article cited above):
Week 1 -- at New York Giants (0-0)
Week 2 -- vs. St. Louis Rams (0-1)
Week 3 -- at Detroit Lions (0-2)
Week 4 -- vs. Tampa Bay Buccaneers (0-3)
Week 5 -- at Carolina Panthers (0-3)
Week 6 -- vs. Kansas City Chiefs (0-5)
The author of the article, Chris Chase (or, as he notes, his dad; let's call him Mr. Chase), calculates the odds of this as 1 in 32,768. This calculation is incorrect and far too high for several reasons, which I get to below. But first, let me explain how the calculation was likely performed.
The calculation assumes, plausibly, that the Redskins have the same chance of playing any given team (unlike in college, where some teams purposely make their schedules easy, this is not possible in the NFL).
The calculation also assumes, not plausibly, that teams that have thus far won no games have a 50-50 chance of winning each game. The implicit assumption there is that all NFL teams are evenly matched. The fact is that there are a few really good teams, a few really bad teams, and a bunch of teams in the middle. Thus, there are likely to be a bunch of winless teams after 5 games, and not, as the incorrect calculation below implies, only 1 winless team of 32 after 5 games.
Finally, the calculation, apparently in a careless error, assumes the chances of playing a winless team the first week are 50-50, when, of course, all teams are winless the first week.
So Mr. Chase's (incorrect) calculation is:
Week 1 chances: 50% (1 in 2)
Week 2 chances: 50% (1 in 2)
Week 3 chances: 50%*50%=25% (1 in 4)
Week 4 chances: 50%*50%*50%=12.5% (1 in 8)
Week 5 chances: 50%*50%*50%=12.5% (same as week 4 because the team they played had only played three games)
Week 6 chances: 50%*50%*50%*50%*50%=3.125% (1 in 32)
A law of probability is that the chance of two unrelated events happening is the product of their individual chances. Thus, if the chance of rain today is 50% and the chance of rain tomorrow is 50%, the chance of rain both days is 25%, if those chances are unrelated (which, by the way, they probably aren't). This is why the chances for multiple losses are multiplied together.
But back to the football schedule. To calculate the chances of 6 straight games against winless teams, Mr. Chase reasonably multiplied the 6 individual chances (again it assumed the 6 matchups were unrelated):
50% * 50% * 25% * 12.5% * 12.5% * 3.125% = .003%, or 1 in 32,768.
So, 1 in 32,768 is the number reported in the article.
The easy correction is that the chance of playing a winless team in the first game is 100%, so the calculation should be:
100% * 50% * 25% * 12.5% * 12.5% * 3.125% = .006%, or 1 in 16,384.
This error has been pointed out in comments on the article.
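To make the arithmetic concrete, here is a minimal Python sketch of both versions of the naive calculation (the variable names and the list of opponents' prior games are mine; the 50-50 assumption it encodes is still wrong, as discussed next):

```python
# Naive model: every opponent is a coin flip, so the chance that an opponent
# who has already played g games is still winless is 0.5**g.
prior_games = [0, 1, 2, 3, 3, 5]  # games each opponent had played (the week 5 opponent had played only three)

# Mr. Chase's version mistakenly treats the week 1 opponent as a 50-50 shot too.
chase = 1.0
for g in prior_games:
    chase *= 0.5 ** max(g, 1)

# Corrected week 1: every team is winless before the season starts.
corrected = 1.0
for g in prior_games:
    corrected *= 0.5 ** g

print(f"Mr. Chase's odds: 1 in {1 / chase:,.0f}")      # 1 in 32,768
print(f"Corrected odds:   1 in {1 / corrected:,.0f}")  # 1 in 16,384
```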
In addition, other comments point out the other major flaw: teams do not have equal probability of losing. Thus the chance that a team will be, say, 0-2 is not 25% (50%*50%) but something else, depending on the quality of the teams. At the extreme, half the teams lose every game and half win every game (this of course assumes losing teams only play against winning teams, but it is possible).
The reality is certainly not this extreme, which would imply a 50-50 chance each week of playing all losing teams (and thus a 1 in 32 chance of playing 6 in a row). So, how do we figure out the reality?
The easiest way is to look at, each week, the percentage of teams that are winless. If we assume the Redskins have an equal chance of playing each team, then we can compute the odds each week (click on the week to see the linked source). Note that everything is out of 31 teams instead of 32 because the Redskins can't play themselves.
Week 1: 31 out of 31 teams winless. Chances: 31/31=100%
Week 2: 15 out of 31 teams winless. Chances: 15/31=48% (I am assuming no byes the first week, and I know the Redskins lost their first game).
Week 3: 8 out of 31 teams winless. Chances: 8/31=26%
Week 4: 6 out of 31 teams winless. Chances: 6/31 = 19%
Week 5: 6 out of 31 teams winless. Chances: 6/31 = 19%
Week 6: 4 of 31 teams winless. Chances: 4/31 = 13%
So the actual chances, assuming the Redskins have an equal chance of playing each team each week and cannot play themselves, are: 100%*48%*26%*19%*19%*13% = 0.06%, or 1 in about 1,700. Much more likely than 1 in 32,000 but still pretty unlikely.
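Here is the same computation as a short Python sketch, using the weekly winless counts listed above:

```python
# Winless teams (out of the 31 possible opponents) entering each of the six weeks.
winless = [31, 15, 8, 6, 6, 4]

prob = 1.0
for count in winless:
    prob *= count / 31  # chance of drawing a winless opponent that week

print(f"Chance of six straight winless opponents: {prob:.4%}")  # about 0.06%
print(f"Roughly 1 in {round(1 / prob):,}")  # about 1 in 1,700 (1 in 1,657 with exact fractions)
```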
And after all these easy games, how are they doing? Unluckily for Redskins fans, not too well...they're 2-3 going into Sunday's game against the winless Chiefs.
Wednesday, August 12, 2009
Unemployment down but joblessness is up?
There was a bit of interesting news that came out Friday--the nation's unemployment rate actually declined, from 9.5% to 9.4%. This is true despite the fact that there was a net loss of 247,000 jobs (see the NY Times article). How could this happen?
Well, the unemployment rate is calculated by taking the number unemployed and dividing by the labor force: Unemployment Rate = Number Unemployed / Labor Force.
The numerator in the equation, Number Unemployed, is defined as the number of people not employed, excluding anyone who hasn't looked for a job in the last 4 weeks. The denominator of the equation, Labor Force, is defined as the Number Unemployed plus the number of people currently working (either full- or part-time).
Thus, if people give up (and giving up is defined as not looking for the last 4 weeks), they are no longer counted in either the numerator or denominator of the equation. And that is exactly what happened between June and July of this year. According to the BLS (bureau of labor statistics), 637,000 people left the labor force between June and July. Thus, even though the number of people employed fell (by a seasonally adjusted 155,000), the unemployment rate also fell, because the number of people looking for work fell also (267,000). The net result was a drop in unemployment even though fewer people were working and more people lost jobs than found jobs.
A note about the math. At first blush, you may wonder whether it matters, since the people not looking are removed both from the numerator (Number Unemployed) and denominator (Labor Force). But mathematically, it does matter. Suppose we have a ratio 2/10, which equals 20%. Subtract 1 from the numerator and 1 from the denominator and you have 1/9, which equals 11.1%. Thus we subtracted the same number from the numerator and denominator but we did not end up with the same 20%. Instead we ended up with far less (11.1%).
The general rule is that the ratio falls when subtracting the same number from the numerator and denominator as long as the ratio is less than 1. So, 2/10>1/9 but 20/10<19/9, for example. What this means for the unemployment rate (which is always less than 1 since 1 is 100% unemployment) is that when people leave the work force, the unemployment rate is somewhat artificially reduced. This is why we had more people losing their jobs but a decline in unemployment last month.
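A minimal sketch of the mechanics, using made-up round numbers rather than the actual BLS figures:

```python
def unemployment_rate(unemployed, employed):
    """Unemployment rate = unemployed / (unemployed + employed)."""
    return unemployed / (unemployed + employed)

# Hypothetical starting point: 15 million unemployed, 140 million employed.
before = unemployment_rate(15_000_000, 140_000_000)

# Suppose 500,000 unemployed people stop looking and drop out of the labor force,
# while employment stays the same: both numerator and denominator shrink.
after = unemployment_rate(14_500_000, 140_000_000)

print(f"Before: {before:.2%}")  # about 9.68%
print(f"After:  {after:.2%}")   # about 9.39% -- lower, even though no one found a job
```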
I would guess that labor force drop-offs are far higher during deeper recessions, when many despair of getting work or decide to take a break from their search. This guess is borne out by recent information on the BLS site, which cites the increase in discouraged workers over the last year: "Among the marginally attached, there were 796,000 discouraged workers in July, up by 335,000 over the past 12 months. (The data are not seasonally adjusted.) Discouraged workers are persons not currently looking for work because they believe no jobs are available for them."
This NY Times chart of unemployment uses a more reasonable definition and shows unemployment far higher than the official 9.4% rate. It includes all those who have looked for a job in the past year as well as part-time workers who want full-time work as part of the unemployed, and the unemployment rate is between 10 and 20%, depending on the state.
Friday, June 12, 2009
Riding a bike? Wear a helmet.
Now that the sun has finally come out in NYC today after what seems like weeks of rain and cold weather, it seems an appropriate time to talk about one of my favorite summer recreational activities--riding a bike.
Growing up in the 1970s, I don't think I ever saw a helmet, much less wore one. However, in the same way we've figured out that seatbelts (and airbags) save lives, we also now know that biking with a helmet makes you safer. The Consumer Product Safety Commission reported that wearing a helmet can decrease the risk of head injury by as much as 85%.
Sadly, there are still a lot of enthusiasts out there who have a take-no-prisoners attitude about wearing helmets, some even implying that helmets are less safe (see, for instance, the helmet section of this web page in bicycle universe). Yet I think anyone who understands the statistics will see that the "freedom" of riding without a helmet is far outweighed by the risk.
The Insurance Institute for Highway Safety (IIHS) has long been a great source for safety information. They've got the same goal that hopefully most of us do: reducing deaths and injuries. In a 2003 report, the IIHS reports that child bicycle deaths have declined by more than 50% since 1975 (despite increased biking, and presumably because most children wear helmets now). In addition, about 92% of all bicycle deaths were cyclists not wearing helmets (see this report). The same report also shows that while child bicycle deaths have declined precipitously (from 675 in 1975 to 106 in 2007), adult deaths have increased since 1975 (from 323 to 583).
Helmet usage is harder to figure out, but most sources put overall use around 50%, with children's use higher. This means that, given that 92% of deaths are cyclists not wearing helmets, you stand about 11 times the chance of getting killed if you don't wear a helmet. This number can be played with a little and whittled down if you assume, say, that cyclists not wearing helmets bike more dangerously, but there would have to be enormous differences for helmets to be shown to be ineffective. Moreover, all the major scientific studies show large positive effects from helmet usage (see this ANTI-helmet site for a summary of the case-control studies).
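As a rough sketch of where the "about 11 times" figure comes from, assuming the 50% usage and 92% figures cited above (the variable names and the back-of-the-envelope method are mine):

```python
helmet_use = 0.50        # roughly half of cyclists wear helmets
deaths_no_helmet = 0.92  # share of cyclist deaths among riders without helmets

# Risk is proportional to each group's share of deaths divided by its share of riders.
risk_without = deaths_no_helmet / (1 - helmet_use)
risk_with = (1 - deaths_no_helmet) / helmet_use

print(f"Relative risk without a helmet: {risk_without / risk_with:.1f}x")
# prints 11.5x -- the "about 11 times" figure above
```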
So why, when you search the internet for helmet effectiveness, or read through the literature of a number of pro-cycling organizations, do they cast aspersions upon helmet use? This one, for me, is an enigma. I understood why the auto industry was against airbags and seatbelts (they cost money) and why the cigarette and gun manufacturers are against regulation, but why do people care so much about us not wearing helmets? I can think of only a couple of things: a) cyclists want bike lanes and other safety measures without committing to anything on their own, and b) some are too lazy/cool to bother with a helmet. Of course, I'm a cyclist and clearly, I'm all for helmets (and yes, laws requiring them). I also think that if we want state and city governments to take us seriously about increasing cyclist safety through new bike lanes, changing traffic patterns, and building of greenways, we need to do our part, too.
Monday, May 18, 2009
Why Swine Flu is a bunch of hogwash.
I first thought of writing about this a couple of weeks ago, when the nationwide hysteria concerning swine flu was just beginning, but then, as quickly as it came, it went. Now, with the first death from swine flu in NY, the front pages of the major newspapers have returned to the topic. The New York Times article and headline were, as always, something close to languid. However, the NY Post's article and photos are, also as usual, a bit hysterical. My son's school, apparently readers of the Post, has covered all the water fountains with plastic bags, perhaps unaware that the CDC clearly states there seems to be little or no chance of infection through drinking water.
What's more, so far this has been a very minor flu, with about 5,000 documented cases and 6 deaths. The blog of record relays that the "regular" flu has already killed something like 13,000 people in the US this year (it's not clear whether this is derived from the CDC's annual estimate of 36,000). This amounts to about 100 people a day.
While one CDC scientist estimates that the number of people with the swine flu is 50,000 or so, this estimate assumes that under-reporting of swine flu is the same as under-reporting of flu in general. Given the focus on swine flu, I expect that under-reporting of it is far lower than of general flu, and thus, the true number with the swine flu is far fewer than 50,000. The CDC's current weekly flu report shows about one-third of the 1,286 new cases as swine flu (novel H1N1). The same report has a great graph, showing an irregular spike in flu diagnoses, just at the time when reported flu usually falls.
There are three pieces of good news, despite the scary spiked graph. First, with spring, flu cases quickly fall, because flu spreads less when people are further away from each other (i.e., outside instead of inside). Second, cases are already falling (though it's only two weeks of data). Third, diagnoses of all types of flu increased in the last two weeks (versus the several weeks leading up to May), implying that one of the reasons (perhaps the only reason) for the spike is that we are testing much more than usual, due to the swine flu outbreak.
Thus, swine flu has so far killed a documented 6 people in the U.S. out of more than 5,000 confirmed cases.
In conclusion, though our own hysteria may drive documented cases up some, and lead to my children having to bring a water bottle to school, the swine flu does not appear to be particularly dangerous or deadly.
Monday, April 27, 2009
Facebook and grades
I don't have a long post for today, but I want to briefly comment on the discussion of a study on Facebook and grades. It was the subject of the Wall Street Journal's Numbers Guy blog last week: http://blogs.wsj.com/numbersguy/ .
The basic question is under what conditions should we publicize results, and should we wait for peer review?
Here was my comment:
I think if the caveats were printed along with the study results, then the publication is reasonable. Otherwise, we are being a bit paternalistic by implying that the general public cannot understand the caveats but we researchers can.
Monday, March 23, 2009
How big a sample?
Suppose we want to figure out what percentage of BIGbank's 1,000,000 loans are bad. We also want to look at smallbank, with 100,000 loans. Many people seem to think you'd need to look at 10 times as many loans from BIGbank as you would for smallbank.
The fact is that you would use the same size sample, in almost all practical circumstances, for the two populations above. Ditto if the population were 100,000,000 or 1,000.
The reasons for this, and the concept behind it, go back to the early part of the 20th century, when modern experimental methods were developed by (Sir) Ronald A. Fisher. Though Wikipedia correctly cites Fisher in its entry on experimental design, the seminal book, Design of Experiments, is out of stock at Amazon (for $157.50, you can get a re-print of this and two other texts together in a single book). Luckily, for a mere $15.30, you can get David Salsburg's (no relation, and he spells my name wrong! ;-) ) A Lady Tasting Tea, which talks about Fisher's work. Maybe this is why no one knows this important fact about sample size--because we statisticians have bought up all the books that you would otherwise be breaking down the doors (or clogging the internet) to buy. Fisher developed the idea of using randomization to create a mathematical and probability framework around making inferences from data. In English? He figured out a great way to do experiments, and this idea of randomization is what allows us to make statistical inferences about all sorts of things (and the lack of randomization is what sometimes makes it very difficult to prove otherwise obvious things).
Why doesn't (population) size matter?
To answer this question, we have to use the concept of randomization, as developed by Fisher. First, let's think about the million loans we want to know about at BIGbank. Each of them is no doubt very different, and we could probably group them into thousands of different categories. Yet, let's ignore that and just look at the two categories we care about: 1) good loan or 2) bad loan. Now, with enough time studying a given loan, suppose we can reasonably make a determination about which category it falls into. Thus, if we had enough time, we could look at the million loans and figure out that G% are good and B% (100% - G%) are bad.
Now suppose that we took BIGbank's loan database (ok, we need to assume they know who they loaned money to), and randomly sampled 100 loans from it. Now, stop for a second. Take a deep breath. You have just entered probability bliss -- all with that one word, randomly. The beauty of what we've just done is that we've taken a million disparate loans and, with them, formed a set of 100 "good"s and "bad"s that are identical in their probability distribution. This means that each of the 100 sampled loans that we are about to draw has exactly a G% chance of being a good one and a B% chance of being a bad one, corresponding to the actual proportions in the population of 1,000,000.
If this makes sense so far, skip this paragraph. Otherwise, envision the million loans as quarters lying on a football field. Quarters heads up denote good loans and quarters tails up denote bad loans. We randomly select a single coin. What chance does it have of being heads up? G%, of course, because exactly G% of the million are heads up and we had an equal chance of selecting each one.
Now, once we actually select (and look at) one of the coins, the chances for the second selection change slightly, because where we had G% exactly, there is now one less quarter to choose from, so we have to adjust accordingly. However, that adjustment is very slight. Suppose G were 90%. Then, for the second selection, if the first were a good coin, we'd have an 899,999/999,999 chance of selecting another good one (that's an 89.99999% chance instead of a 90% chance). For smallbank, we'd be looking at a whopping reduction to an 89.9999% chance from a 90% chance. This gives an inkling of why population size, as long as it is much bigger than sample size, doesn't much matter.
So, now we have a sample set of 100 loans. We find that 80 are good and 20 are bad. Right off, we know that, whether dealing with the 100,000 population or the 1,000,000 population, that our best guess for the percentage of good loans, G, is 80%. That is because of how we selected our sample. It doesn't matter one bit how different the loans are. They are just quarters on a football field. It follows from the fact that we selected them randomly.
We also can calculate several other facts, based on this sample. For example, if the actual percentage of good loans were 90% (900,000 out of 1,000,000), we'd get 80 or fewer in our sample of 100 only 0.1977% of the time. The corresponding figure, if we had sampled from the population of 100,000 (and had 90,000 good loans), would be 0.1968%. What does this lead us to conclude? Very likely, the proportion of "good" loans is less than 90%. We can continue to do this calculation for different possible values of G:
If G were 89%: .586% of the time would you get 80 or fewer.
If G were 88%: 1.47% of the time would you get 80 or fewer.
If G were 87%: 3.12% of the time would you get 80 or fewer.
If G were 86.3%: 5.0% of the time would you get 80 or fewer.
If G were 86%: 6.14% of the time would you get 80 or fewer.
In each of the above cases, the difference between a population of 1,000,000 and 100,000 loans makes a difference only at the second decimal place, if that.
Such a process allows us to create something called a confidence interval. A confidence interval kind of turns this calculation on its head and says, "Hey, if we only get 80 or fewer in a sample 1.47% of the time when the population is 88% good, and I got only 80 good loans in my sample, it doesn't sound too likely that the population is 88% good." The question then becomes, at what percentage would you start to worry?
For absolutely no reason at all (and I mean that), people seem to like to limit this percent to 5%. Thus, in the example above, most would allow that, if we estimated G such that 5% (or more) of the time, 80 or fewer of 100 loans would be good (where 80 is the number of good loans in our sample), then they would feel comfortable. Thus, for the above, we would say, "with 95% confidence, 86.3% or fewer of the loans in the population are good." If we also want a lower bound on the percentage of good loans, we could calculate the value of G such that there is a 5% chance that 80 or more loans in a sample of 100 would be good. This percentage is 72.3%, and we could say that "with 95% confidence, 72.3% or more of the loans in the population are good."
We can combine these two 95% confidence intervals into a 90% confidence interval, since the percentages not included (5% in each of the two intervals) add to 10%. We can thus say: "with 90% confidence, between 72.3% and 86.3% of the loans in the population are good." We can calculate the highest and lowest percentage of good loans we estimate there to be in the population with any level of confidence between 0 and 100%. We could state the above in terms of 99% confidence or in terms of 50% confidence. The higher the confidence, the wider the interval; the lower the confidence, the narrower the interval.
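For anyone who wants to reproduce the 72.3% and 86.3% endpoints, here is a minimal Python sketch; it treats the sample as binomial (essentially exact when the population is this much larger than the sample) and assumes scipy is available:

```python
from scipy.optimize import brentq
from scipy.stats import binom

n, good = 100, 80  # sample size and number of good loans observed

# Upper bound: the G at which 80 or fewer good loans shows up only 5% of the time.
upper = brentq(lambda g: binom.cdf(good, n, g) - 0.05, 0.80, 0.999)

# Lower bound: the G at which 80 or more good loans shows up only 5% of the time.
lower = brentq(lambda g: binom.sf(good - 1, n, g) - 0.05, 0.50, 0.80)

print(f"90% confidence interval for G: {lower:.1%} to {upper:.1%}")
# roughly 72.3% and 86.3%, in line with the figures above
```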
Back to sample size versus population. As stated above, the population size, though 10 times bigger, doesn't make much of a difference. For a given probability above, we are using the hypergeometric distribution to calculate the exact figure (the mathematics behind it are discussed some in my earlier post).
Here are some of the chances associated with a G of 85% and a sample size of 100 that yields 80 good loans or fewer.
Population infinite : 10.65443%
Population 1,000,000: 10.65331%
Population 100,000 : 10.64%
Population 10,000 : 10.54%
Population 1,000 : 9.49%
Population 500 : 8.21%
This example follows the rule of thumb: you can ignore the population size unless the sample is at least 10% of the population.
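Here is a minimal Python sketch (again assuming scipy) that reproduces the table above, with the binomial as the infinite-population case:

```python
from scipy.stats import binom, hypergeom

G, n, observed = 0.85, 100, 80  # true share of good loans, sample size, good loans seen

# Infinite population: the binomial chance of seeing 80 or fewer good loans.
print(f"Population infinite : {binom.cdf(observed, n, G):.2%}")

# Finite populations: hypergeometric with N loans in total, G*N of them good.
for N in (1_000_000, 100_000, 10_000, 1_000, 500):
    p = hypergeom.cdf(observed, N, int(G * N), n)
    print(f"Population {N:>9,}: {p:.2%}")
```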
A note to the commenter regarding population size:
Anonymous, the reason that population size barely matters is that the statistical inferences are based on the random behavior of the sample, and this behavior does not depend on the population size. Suppose you randomly selected 20 people for a survey regarding preference for a black and blue dress versus a white and gold dress, and all 20 preferred black and blue. Whether I told you these people were randomly selected from the state of NY or from the whole US, in either case you would think that the preferences (state or national) clearly favor the black and blue. That intuitive feeling is because the inference you make in your mind is regarding the behavior of the sample. Statistics takes it a bit further and figures out, if the selection is truly random, how likely such an outcome would be under different scenarios. However, the key is the observed randomness in the sample and its size, not the population size. In other words, the sample size IS important, because that is where you make your observations, but the population size is not, as long as the sample is representative of it (and it will be, as long as the sample is random).
Wednesday, March 18, 2009
7 letter scrabble word redux
A recent article by the Wall Street Journal's "Numbers Guy" has re-surfaced one of my old posts regarding scrabble. In it I said that after the first turn, you must get an 8-letter word to use all your letters, because your seven letters need to connect to an existing word.
This, of course, is not correct, as was pointed out in comments to the Numbers Guy's blog (this was also pointed out by my sister). All you need to do to use all your letters with a 7-letter word is find a place to connect that is parallel to an existing word. For example, 'weather' could be connected parallel to a word ending in 'E', since 'we' is a word.
Maybe that's why my sister won so many scrabble games against me when I was a kid.
Wednesday, March 11, 2009
Are same-sex classes better?
Yesterday's New York Times had an article, "Boys and Girls Together, Taught Separately in Public School," about same-sex classes in New York City. In particular, the article focused on P.S. 140 in the Bronx. The article looks upon such classes favorably, despite the fact that there is, as far as I can tell, no evidence that such classes lead to better achievement.
In particular, the article states: "Students of both sexes in the co-ed fifth grade did better on last year’s state tests in math and English than their counterparts in the single-sex rooms, and this year’s co-ed class had the highest percentage of students passing the state social studies exam."
In other words, the City is continuing this program, even though the evidence indicates that not only are students in same-sex classes doing no better, they are doing worse! The principal, who has introduced some programs that have achieved material results, said: "We will do whatever works, however we can get there...we thought this would be another tool to try." This seems reasonable, but the article states, "...unlike other programs aimed at improving student performance, there is no extra cost." There may not be a monetary cost, but making these students laboratory rats in someone's education research project doesn't help them, and, apparently in this case, hurts them. Not to mention the opportunity cost of not exposing these children to other programs that might actually help.
To be fair, the scholarly literature is not consistent in its conclusions about whether same-sex classes improve achievement. However, many of the U.S. studies showed little or no improvement. See, for example:
Singh and Vaught's study
LePore and Warren
On the other hand, some English and Australian studies indicate that, at least for girls, same-sex classes or schools may result in higher achievement (see, for example, Gillibrand E.; Robinson P.; Brawn R.; Osborn A.) while others indicate that there are no differences (see Harker).
So the literature seems to be mixed, and I would imagine there are numerous confounding factors that make this something hard to measure--for example, typical single-sex classes in New York City consist of low-income minority students, where the boys are seen as being at-risk more than the girls. Contrast with the British and other foreign studies, where the girls are the greater concern for under-achievement.
Despite this, it's questionable how long it is ethical to continue a program, like the one at P.S. 140, where the current known outcome is that boys and girls are doing worse in same-sex classes.
Tuesday, February 3, 2009
The age-old NY subway question--unlimited or pay-per-ride?
Before you get onto the New York City subway these days, you have to purchase a metro card. For us daily commuters, the choice would appear to be obvious--purchase an unlimited card. With it, you can get on and off the subway as many times as you like within some period of time. Surely, the MTA prices it to make it worth the money.
However, when I do the calculation for my own behavior, I never seem to get my money's worth from the unlimited. This is because the unlimited card always costs more than a pay-per-ride card if you are only using the card to commute to work during the week. Even if you also use it for one round trip on the weekend, you would still pay less with a pay-per-ride card unless you buy a 30-day card.
The following table shows the cost of each unlimited-ride card, followed by the number of trips that the same amount of money would buy on a pay-per-ride card. Because the MTA gives a 15% bonus on all pay-per-ride purchases over $7, the nominal value column shows the value that will appear on your MetroCard if you put that money on a pay-per-ride card instead.
Unlimited days | Unlimited cost | Value if spent on pay-per-ride (15% bonus) | Trips that value buys (at $2.00/trip) | Trips used commuting only (5 round trips/week) | Trips lost vs. pay-per-ride (commute only) | Trips used with one weekend round trip added per week | Trips lost vs. pay-per-ride (commute plus weekend trip)
1 | $7.50 | $8.63 | 4.3 | na | na | na | na
7 | $25.00 | $28.75 | 14.4 | 10 | 4.4 | 12 | 2.4
14 | $47.00 | $54.05 | 27.0 | 20 | 7.0 | 24 | 3.0
30 | $81.00 | $93.15 | 46.6 | 44 | 2.6 | 52 | -5.4
The first line shows the one-day card, which can be purchased for $7.50. You can use that same $7.50 to purchase $8.63 in pay-per-ride value instead, which is good for 4 trips plus $0.63 left over. Thus, you'd only want the one-day unlimited if you were making more than 2 round trips that day (5 or more rides).
The 7 day unlimited costs $25. If you use that same $25 to instead purchase a pay-per-ride card, you get $28.75 of value, entitling you to 14 trips (plus $0.75 additional of stored value). If you go to work every weekday during the 7 day period, you'd use just 10 trips (5 round trips). If you also use the card for one round trip during the weekend, you are up to 12 trips, still 2.4 trips short of what you could have purchased with the $25 for a pay-per-ride.
As the table shows, you are always better off purchasing pay-per-ride cards instead of unlimited cards if you are just using your MetroCard for commuting. Even if you take one extra round trip each week on top of commuting, only the 30-day unlimited would be worth it, and even then only if you go to work every weekday during the period and ride once each weekend. Many people work at home from time to time, and there is typically a federal holiday each month, so the 30-day figures are optimistic.
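If you want to check the arithmetic against your own riding habits, here is a minimal Python sketch of the break-even calculation, assuming the early-2009 fares used in the table (a $2.00 base fare and a 15% pay-per-ride bonus); it is an illustration of the calculation above, not official MTA math.

# A minimal sketch of the break-even arithmetic above, assuming the
# early-2009 fares from the table (not official MTA figures).
BASE_FARE = 2.00   # cost of a single subway trip
BONUS = 0.15       # pay-per-ride bonus on purchases over $7

# Unlimited cards as (days, price) at the time of writing.
UNLIMITED_CARDS = [(1, 7.50), (7, 25.00), (14, 47.00), (30, 81.00)]

def trips_bought(dollars):
    """Trips the same money buys on a pay-per-ride card, bonus included."""
    return dollars * (1 + BONUS) / BASE_FARE

for days, price in UNLIMITED_CARDS:
    breakeven = trips_bought(price)
    print(f"{days:>2}-day unlimited (${price:.2f}): only worth it if you "
          f"swipe more than {breakeven:.1f} times in {days} days")

Running this reproduces the 4.3, 14.4, 27.0, and 46.6 break-even figures in the table; plug in your own weekly ride count to see which side of the line you fall on.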
The other issue with the unlimited is a psychological one: I get upset if I forget my unlimited card or end up not taking the subway for a couple of days when I could have used it. With the pay-per-ride, you only pay for what you use. Perhaps more annoyingly, the pay-per-ride cards display the amount left each time you enter the subway, but the unlimited cards do not tell you the number of days remaining. Thus, if you don't keep track of it yourself, you will get jammed in the legs by a locked turnstile at least once a month when your unlimited card expires.
I realize there are some who not only commute to work but also very frequently take subway trips to go out or run errands. For those, the unlimited cards may be worth it. For others, stick with the pay-per-ride.
Tuesday, January 20, 2009
Nutty about Peanuts.
Visiting South Carolina this weekend, I picked up an old Southern favorite from Publix: peanut butter cookies (I didn't see my real favorite, boiled peanuts). No sooner had I returned home than my Mom admonished me for buying them because they were unsafe and possibly tainted with Salmonella.
Sure enough, there was an article in The State confirming the outbreak. So far, roughly 500 people across the country have been sickened (and possibly 6 have died) by what is believed to be contaminated peanuts. USA Today confirms the continuing "epidemic" today. While these figures may seem high, 500 people sickened with food poisoning over a period of four months, across the entire U.S., is hardly a risk worth mentioning. According to wrongdiagnosis.com, there are about 200,000 incidents of food poisoning or food-borne sickness a day. OK, you might say, but Salmonella is pretty serious, and if you don't take antibiotics you might be laid up for several days. Fine, but the same site says that there are about 1.4 million cases of Salmonella annually, or about 3,835 a day (the CDC says about 40,000 cases are reported annually, but that there are many more unreported).
So why are we getting exercised about a mere 4 cases a day, as with the current outbreak? My best answer is that 1) it makes for interesting news, 2) any problem that affects so broadly a population, even with minuscule or infinitesimal risk, is seen by reporters as being important, and 3) people cannot easily assess their relative risk.
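For what it's worth, the comparison is easy to reproduce. Here is a quick back-of-the-envelope sketch in Python, using only the figures quoted above; treating the outbreak window as a round four months is my own approximation.

# Rough comparison of the outbreak rate to the everyday Salmonella rate,
# using the figures cited above; the four-month window is approximate.
outbreak_cases = 500
outbreak_days = 4 * 30                 # roughly four months

annual_salmonella_cases = 1_400_000    # wrongdiagnosis.com estimate

outbreak_per_day = outbreak_cases / outbreak_days        # about 4 a day
background_per_day = annual_salmonella_cases / 365       # about 3,835 a day

print(f"Outbreak: ~{outbreak_per_day:.1f} cases per day")
print(f"Everyday Salmonella: ~{background_per_day:,.0f} cases per day")
print(f"The outbreak adds roughly {outbreak_per_day / background_per_day:.1%} "
      f"to the usual daily count")

By this rough count, the outbreak adds about a tenth of a percent to the ordinary daily Salmonella tally.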
As for me, I explained to my Mom that I'm not too concerned, and quickly had a peanut butter cookie before she could run back to the store. After waiting a day or so to make sure I was Salmonella-free, the rest of the family followed. ;-)
Suppose instead this was a study linking domestic air travel through a particular city to a new and deadly virus (say, swine flu?). Then there might be more reason to be cautious (and paternalistic), because the cost of being wrong is very high. Still, there would be the counter-argument that not publishing could endanger people's lives. We always have this trade-off, I believe, between unintentionally misleading people into thinking a study is correct when it is not, and vice-versa.
In this open era, especially, I think the balance leans toward publishing, where the blogging and commenting public will quickly crucify poor research and find supporting evidence for good research.