A couple of years back, I wrote about the chances of various "splits" in bridge (and explained why bridge players care about this) in this post, which also covers the math behind the chances.

However, in that post I failed to include the possibility of 7 trumps being out, because it is fairly rare. Due to some poor bidding on my part, I found myself playing 4 spades last night, and my partner and I had only 6 trumps between us. Here are the chances of the different splits when the other two players hold 7 trumps between them.

4-3 split: 62.2%

5-2 split: 30.5%

6-1 split: 6.8%

7-0 split: 0.5%
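These figures follow from the hypergeometric distribution: the two unseen hands hold 26 cards, 13 each, so the chance that one particular opponent holds exactly a of the missing trumps is C(7,a)*C(19,13-a)/C(26,13), and an uneven split can land in either hand. A minimal sketch (the function name is mine):

```python
from math import comb

def split_chance(out, a):
    """Chance that `out` missing trumps split a/(out - a) between
    the two unseen 13-card hands (26 unseen cards in total)."""
    b = out - a
    # ways a specific opponent holds exactly `a` of the missing trumps
    p = comb(out, a) * comb(26 - out, 13 - a) / comb(26, 13)
    # an uneven split can occur in either opponent's hand
    return p if a == b else 2 * p

for a in range(4, 8):  # the 7-trump case from this post
    print(f"{a}-{7 - a} split: {split_chance(7, a):.1%}")
```

The same function reproduces the other counts in this post, e.g. `split_chance(6, 3)` gives 35.5% and `split_chance(2, 1)` gives 52%.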

For completeness, here are the splits for 6 or fewer trumps out (from the prior post).

For hands with 6 trumps out:

3-3 split: 35.5%

4-2 split: 48.4%

5-1 split: 14.5%

6-0 split: 1.5%

For hands with 5 trumps out, we get:

3-2 split: 67.8%

4-1 split: 28.3%

5-0 split: 3.9%

For hands with 4 trumps out:

2-2 split: 40.7%

3-1 split: 49.7%

4-0 split: 9.5%

For hands with 3 trumps out:

2-1 split: 78%

3-0 split: 22%

For hands with 2 trumps out:

1-1 split: 52%

2-0 split: 48%

It's worth mentioning that these probabilities are unconditional. Since the bidding that precedes playing any given hand gives some information, it is typically true that some splits can be ruled out or downplayed. For example, in the 4 spade hand I played last night, a 5-2 or (especially) worse split seemed unlikely, because there was no double from the other side, so I would've put the chances of a 4-3 split far higher than the unconditional 62%.

## Friday, September 4, 2015

### See my new posts on my web site

My newer posts (and some of the old ones) are now on my website:

[Salt Hill Blog](http://salthillstatistics.com/blog.php)

## Sunday, February 15, 2015

### Ultimate Frisbee: to Huck or not to Huck?

I play a lot of Ultimate Frisbee, a game akin to football in that there are end zones, but akin to soccer in that there is constant action until someone scores. In Ultimate, you can only advance by throwing the disc (so-called because we generally do not use Wham-O-branded discs, which are the ones called Frisbees). An incomplete pass or a pass out of bounds is a turnover, as is a "stall," where the offense holds the disc without throwing for more than 10 seconds.

In other words, in order for the offense to score, you need to complete passes until someone catches the disc in the end zone. The accepted method of doing this is to complete shorter, high-percentage passes. On a non-windy day, it seems fairly simple for at least one of your six teammates to get open and thus you can march down the field. Of course, one long pass, or "huck," can shortcut the process and give your team the quick score. Much like football, the huck is not typically done except in desperation (game almost over due to time or thrower almost stalled).

However, I am not at all sure this logic makes sense. Suppose you need six short passes to advance to a score. If your team completes short passes with a probability of 90%, you will score about 53% of the time (90% to the sixth power gives the chances of completing six passes in a row). In other words, as long as the chance of completing the huck is more than 53%, you would have a better chance of scoring with a huck.

Thus, the relative chances of scoring via the two methods depend on three things: 1) the chance of completing a short pass, 2) the chance of completing a huck, and 3) the number of short passes needed for a score. The graph below shows the threshold huck completion rate (the rate above which it makes more sense to huck) for different short-pass completion rates, always assuming six short passes are enough for a score and one huck is enough for a score.
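Under those assumptions the threshold is just the short-pass completion rate raised to the sixth power; a quick sketch (the function name is mine):

```python
def huck_threshold(p_short, n_passes=6):
    """Huck completion rate above which a single huck beats
    stringing together n_passes short passes."""
    return p_short ** n_passes

# threshold for several short-pass completion rates
for p in (0.80, 0.85, 0.90, 0.95):
    print(f"short-pass rate {p:.0%}: huck if completion rate > {huck_threshold(p):.1%}")
```

For a 90% short-pass rate this gives the 53% threshold mentioned above; at an 80% rate, a huck needs only about a 26% completion chance to come out ahead.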

Of course, this simple analysis assumes 6 throws equals a score, and it also leaves out a number of other factors. For example, an incomplete huck confers a field-position advantage to the hucking team, because the opposing team has to begin from the point of incompletion (as long as it was in bounds). On the other hand, it may not take long for the opposing team to figure out the hucking strategy and play a zone-style defense that will lower the hucking chances considerably.

## Tuesday, September 30, 2014

### What is a p value and why doesn't anyone understand it?

I feel like I've written this too many times, but here we go again.

There was a splendid article in the New York Times today concerning Bayesian statistics, except that, as usual, it had some errors.

Lest you think me overly pedantic, I will note that Andrew Gelman, the Columbia professor profiled in much of the article, has already posted his own blog entry highlighting a bunch of the errors (including the one I focus on) here.

Concerning p-values, the article states:

"accepting everything with a p-value of 5 percent means that one in 20 “statistically significant” results are nothing but random noise." This is nonsense. I found this nonsense particularly interesting because I recently read almost the same line in a work written by an MIT professor.

Before I get to explaining why the Times is wrong, I need to explain what a p-value is. A p-value is a probability calculation, first of all. Second of all, it has an inherent assumption behind it (technically speaking, it is a conditional probability calculation). Thus, it calculates a probability assuming a certain state of the world. If that state of the world does not exist, then the probability is inapplicable.

An example: I declare:"The probability you will drown if you fall into water is 99%." "Not true," you say, "I am a great swimmer." "I forgot to mention," I explain, "that you fall from a boat, which continues without you to the nearest land 25 miles away...and the water is 40 degrees." The p-value is a probability like that -- it is totally rigged.

The assumption behind the p-value is often called a Null Hypothesis. The p-value is the chance of obtaining your particular favorable research result, under the "Null Hypothesis" assumption that the research is garbage. It is the chances that, given your research is useless, you obtained a result at least as positive as the one you did. But, you say, "my research may not be totally useless!" The p-value doesn't care about that one bit.

Suppose we are trying to determine whether an SAT prep course results in a better score for the SAT. The Null Hypothesis would be characterized as follows:

H0 = Average change in score after the course is 0 points or even negative. In shorthand, we could call the average change in score D (for difference) and say H0: D <= 0. Of course, we are hoping the course results in a higher score, so there is also a research hypothesis: D > 0. For the purposes of this example, we will assume any change that occurs is wholly due to the course and not to other factors, such as the students becoming more mature with or without the course, the later test being easier, etc.

Now suppose we have an experiment where we randomly selected 100 students who took the SAT and gave them the course before they re-took the exam. We measure each student's change and thus calculate the average d for the sample (I am using a small d to denote the sample average, while the large D is the average if we were to measure it across the universe of all students who ever existed or will exist). Suppose that this average for the 100 students is a score increase of 40 points. We would like to know: given the average difference, d, in the sample, is the universe average D greater than 0? Classical statistics neither tells us the answer to this question nor even gives the probability that the answer is "yes."

Instead, classical statistics allows us only to calculate the p-value: P(d >= 40 | D <= 0). In words, the p-value for this example is the probability that the average difference in our sample is 40 or more, given that the universe average difference is 0 or less (the Null Hypothesis is true). If this probability is less than 5%, we usually conclude the Null Hypothesis is FALSE; if the Null Hypothesis were in fact true, we would be incorrectly concluding statistical significance. This incorrect conclusion is often called a false positive. The chance of a false positive can be written in shorthand as P(FP|H0), where FP is false positive, "|" means given, and H0 means Null Hypothesis. (Technically, but not important here, we calculate the probability at D=0 even though the Null Hypothesis covers values less than zero, because that gives the highest (most conservative) value.) If the p-value cutoff is set at 5% for statistical significance, that means P(FP|H0) = 5%.
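A simulation can make P(d >= 40 | D = 0) concrete. The post does not give the spread of individual score changes, so the standard deviation of 150 points below is purely an assumed value for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, observed_d = 100, 40
sd = 150  # ASSUMED spread of individual score changes; not from the post

# simulate many samples of 100 score changes under the Null (D = 0)
sample_means = rng.normal(0.0, sd, size=(100_000, n_students)).mean(axis=1)

# p-value: the fraction of null-world samples with d >= 40
p_value = (sample_means >= observed_d).mean()
print(f"approximate p-value: {p_value:.4f}")
```

With these assumed numbers the p-value comes out well under 5%, so the Null Hypothesis would be rejected; a larger assumed spread would raise it.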

A more general way of defining the p-value is that the p-value is the chance of obtaining a result at least as extreme as our sample result under the condition/assumption that the Null Hypothesis is true. If the Null Hypothesis is false (in our example if the universe difference is more than 0), the p-value is meaningless.

So why do we even use the p-value? The idea is that if the p-value is extremely small, it indicates that our underlying Null Hypothesis is false. In fact, it says either we got really lucky or we were just assuming the wrong thing. Thus, if it is low enough, we assume we couldn't have been that lucky and instead decide that the Null Hypothesis must have been false. BINGO--we then have a statistically significant result.

If we set the level for statistical significance at 5% (sometimes it is set at 1% or 10%), p-values at or below 5% result in rejection of the Null Hypothesis and a declaration of a statistically significant difference. This mode of analysis leads to four possibilities:

False Positive (FP), False Negative (FN), True Positive (TP), and True Negative(TN).

False Positives occur when the research is useless but we nonetheless get a result that leads us to conclude it is useful.

False Negatives occur when the research is useful but we nonetheless get a result that leads us to conclude that it is useless.

True Positives occur when the research is useful and we get a result that leads us to conclude that it is useful.

True Negatives occur when the research is useless and we get a result that leads us to conclude that it is useless.

We only know if the result was positive (statistically significant) or negative (not statistically significant)--we never know if the result was TRUE (correct) or FALSE (incorrect). The p-value limits the *chance* of a false positive to 5%. It does not explicitly deal with FN, TP, or TN.

Now, back to the quote in the article: "accepting everything with a p-value of 5 percent means that one in 20 “statistically significant” results are nothing but random noise."

Let's consider a journal that publishes 100 statistically significant results regarding SAT courses that improve scores, where statistical significance is based on p-values of 5% or below. In other words, this journal published 100 articles with research showing that 100 different courses were helpful. How many of these courses actually are helpful?

Given what we have just learned about the p-value, I hope your answer is 'we have no idea.' There is no way to answer this question without more information. It may be that all 100 courses are helpful and it may be that none of them are. Why? Because we do not know if these are all FPs or all TPs or something in-between--we only know that they are positive, statistically significant results.

To figure out the breakdown, let's do some math. First, create an equation, using some of the terminology from earlier in the post.

The number of statistically significant results = False positives (FP) plus True positives (TP). This is simple enough.

We can go one step further and define the probability of a false positive given the Null Hypothesis is true and the probability of a true positive given the alternative hypothesis is true -- P(FP|H0) and P(TP|HA). We know that P(FP|H0) is 5% -- we set this by only considering a result statistically significant when the p-value is 5% or less. However, we do not know P(TP|HA), the chance of getting a true positive when the alternative hypothesis is true. The absolute best-case scenario is that it is 100% -- that is, any time a course is useful, we get a statistically significant result.

Suppose that we know that a fraction B of courses are bad and a fraction (1-B) are helpful. Bad courses do not improve scores and helpful courses do. Further, let's suppose that N courses in total were studied in order to get the 100 with statistically significant results. In other words, a total of N studies were performed on courses, and those with statistically significant results were published by the journal. Let's further assume the extreme scenario above that ALL good courses will be found to be good (no False Negatives), so that P(TP|HA) = 100%. Now we have the components to figure out how many bad courses are among the 100 publications regarding helpful courses.

The number of statistically significant results is:

100 = B*N*P(FP|H0) + (1-B)*N*P(TP|HA)

The first term multiplies the (unknown) fraction of courses that are bad by the total number of studies performed by the chance that a bad course yields a false-positive result saying the course is good. The second term is analogous, but for good courses that achieve true-positive results. These reduce to:

100 = N(B*5% + (1-B)*100%) [because the FP chances are 5% and TP chances are 100% ]

= N(.05B +1 - B) [algebra]

= N(1-.95B) [more algebra]

==> B = (20/19)*(1- 100/N) [more algebra]

The garbage courses among the 100 published equal B*N*P(FP|H0), which in turn equals (1/19)*(N-100) [using more algebra].

If you skipped the algebra, what this comes down to is that the number of bad courses published depends on N, the total number of different courses that were researched.

If N were 100, then 0 of the publications were garbage and all 100 were useful.

If N were 1,000, then about 947 were garbage, about 47 of which were FPs and thus among the 100 publications. So 47 garbage courses were among the 100 published.

If the total courses reviewed were 500, then about 421 were garbage, about 21 of which were FPs and thus among the 100 publications.

You might notice that, given our assumptions, N cannot be below 100, the point at which none of the published studies are garbage.

Also, N cannot be above 2,000, the point at which all of the published studies are garbage.
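The bookkeeping above is easy to check with a short script (the function name and keyword defaults are mine, mirroring the post's assumptions of a 5% false-positive rate and a best-case 100% true-positive rate):

```python
def garbage_published(N, p_fp=0.05, p_tp=1.00, published=100):
    """Solve published = B*N*p_fp + (1-B)*N*p_tp for the bad
    fraction B, then return the bad (false-positive) courses
    among the published results."""
    B = (published / N - p_tp) / (p_fp - p_tp)  # = (20/19)*(1 - 100/N) here
    return B * N * p_fp

print(round(garbage_published(1000)))  # -> 47
print(round(garbage_published(500)))   # -> 21
print(round(garbage_published(100)))   # -> 0 (no garbage published)
print(round(garbage_published(2000)))  # -> 100 (all garbage)
```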

You might be thinking: we have no idea how many studies are done for each journal article accepted for publication, and thus knowing that 100 studies were published tells us nothing about how many are garbage--it could be anywhere from 0 to 100%! Correct. We need more information to crack this problem. However, 5% garbage may not be so terrible anyway.

While it might seem obvious that 0 FPs is the goal, such a stringent goal, even if possible, would almost certainly lead to many more FNs, meaning good and important research would be ignored because its statistical significance did not meet a more stringent standard. In other words, if standards were raised to 1% or 0.1%, then some TPs under the 5% standard would become FNs under the more stringent standard, important research--thought to be garbage--would be ignored, and scientific progress would be delayed.

## Monday, May 5, 2014

### Another perspective on the admissions game--early admission

One thing I failed to consider in my previous post is early admissions.

By admitting many or most of their students early, a college can appear to be very selective when, in fact, it is only selective for people who do not apply early. Applying early decision is the equivalent of ranking a school first, and schools know that admitting early applicants will improve their matriculation rate. Also, students who really wanted to attend a particular school will perhaps be better than students who may have chosen the school 2nd or 3rd or worse.

A summary of actual acceptance rates at Ivy League schools, early and otherwise, appears here. To understand what is happening, take Harvard, with the lowest overall acceptance rate of 5.8%. If you apply there through regular admissions, you have a 3.8% chance (less than 1 in 25) of being admitted. However, if you apply early decision, your chances increase to 18.4% (about 1 in 5 or 6). Of course, the quality of the students likely differs between the group that applies for regular admission and the group that applies early, so the difference in chances between two equally qualified students is likely smaller. However, it seems doubtful that the entire difference is due to the quality of the application pool.

At a recent presentation I heard from an admissions officer at a local college, he stated outright that the standards change between early and later admissions even for "rolling" admissions schools. Put simply, early applicants get priority and are more likely to be accepted.

So what's the strategy? Apply early, but you only have one shot at early decision (typically you can apply early to only one school). Therefore, apply to a top choice, but one you have a decent chance of getting into according to that school's average SATs, grades, etc. If you reach too high, you will be rejected and relegated to the regular application pool, where the chances of getting into top schools are far lower.

## Monday, April 28, 2014

### Getting into College

Now that I have a 9th-grader, I am starting to think about college admissions. The urban myth is: "If you were applying to college now, you'd never get into the (great) college you went to (in the 1980s or 1990s)."

This belief is driven by lower acceptance rates at many elite colleges, as well as by the parents and peers of those who went to elite schools. This Washington Post article debunks the myth. It refers to an article about a study at the Center for Public Education, which has more detail. On the other hand, this paper shows that while overall selectivity fell, the top schools have become more difficult to get into, at least as measured by SAT/ACT scores.

Here are some factors that could be at play:

1) Regression to the mean. People who went to great schools are, on average, high achievers compared with the general public. But some fraction of any admitted cohort got in partly by chance, scoring or performing better than their true ability in that particular year. Their children's results will regress toward the mean, so among families whose parents attended the most selective colleges, it will appear as if colleges have become more selective (among those who attended the least selective schools, the opposite effect appears), all else being equal of course. This is the same effect that makes the children of the tallest people shorter, on average, than their parents, even though they may still be taller than the average person.
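The regression-to-the-mean argument is easy to check with a toy simulation. The sketch below is in Python (the blog's other analyses use R), and every number in it is an assumption chosen for illustration: observed "score" is true ability plus luck, both standard normal, the top 5% by observed score are "admitted," and children inherit half their parent's ability plus fresh luck.

```python
import random

random.seed(1)

# Toy model (all parameters assumed): observed applicant score =
# true ability + luck. Admit the top 5% by observed score, then give
# each child half the parent's ability plus a fresh luck draw.
N = 100_000
ability = [random.gauss(0, 1) for _ in range(N)]
luck = [random.gauss(0, 1) for _ in range(N)]
order = sorted(range(N), key=lambda i: ability[i] + luck[i], reverse=True)
admitted = order[: N // 20]  # top 5% by observed score

parent_score = sum(ability[i] + luck[i] for i in admitted) / len(admitted)
child_score = sum(0.5 * ability[i] + random.gauss(0, 1)
                  for i in admitted) / len(admitted)

# Children of the admitted cohort still score above the population
# mean of 0, but well below their parents' observed scores.
print(round(parent_score, 2), round(child_score, 2))
```

The children land above average but noticeably below their parents' cohort, which is exactly the pattern that makes colleges look more selective to families of selective-school alumni.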

2) People apply to more schools. When the average applicant applies to 10 schools rather than 3, acceptance rates fall, creating the appearance of higher selectivity. This article shows that the number of people applying to four or more schools has more than doubled since the 1970s. The rise in applications also means that students who never would have applied to, say, Harvard are now applying. This is why a lower acceptance rate doesn't necessarily mean it is harder for any given student to be admitted, once you adjust for the quality of the student.
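The arithmetic behind this point is simple enough to sketch. All numbers below are hypothetical: a school admits a fixed number of students, but its applicant pool grows as each student files more applications, so the published acceptance rate falls even though the entering class (and any one student's chance of landing somewhere) need not change.

```python
# Hypothetical school with a fixed number of admits. As the applicant
# pool grows, the published acceptance rate falls mechanically.
admits = 2_000

for applicants in (10_000, 20_000, 35_000):
    rate = admits / applicants
    print(f"{applicants:>6} applicants -> acceptance rate {rate:.1%}")
```

With the same 2,000 seats, tripling the applications cuts the headline rate to a third, no change in actual selectivity required.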

3) A slight increase in actual selectivity at a few schools. The New York Times had an interesting article on changes in selectivity, which focused on the number of spots per 100,000 population (rather than the number admitted). By that measure, Harvard, with the greatest decline, dropped 27% over the last 20 years (the article considered only US students). While that might seem large, keep in mind that its acceptance rate fell by about two-thirds over the same period, from 18% to 6%, a much larger change.

4) Student quality improved. There is certainly room in the equation for a true increase in student quality. As the article above implies, the top schools did have moderate increases in test scores.

No matter whether college is the same or more difficult to get into, it certainly appears to be more stressful. One solution is the med-school solution (also used by NYC public high schools): a ranking and matching program. It is fairly simple: each student ranks the schools he or she applies to in order of preference, and each college ranks its applicants. Colleges are then matched with the students highest on their lists, beginning with students who ranked them first. Students are required to attend the college they match with, or enter a second, consolation round.
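The med-school match is built on a deferred-acceptance ("Gale-Shapley") algorithm, and a minimal student-proposing version is short enough to sketch. This is an illustration, not the actual NRMP algorithm (which handles couples and other complications); the student and college names and preference lists below are invented.

```python
from collections import deque

def match(student_prefs, college_prefs, capacity):
    """Student-proposing deferred acceptance, in the spirit of the
    medical-residency match. Colleges tentatively hold their best
    applicants and bump the weakest when over capacity."""
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in college_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in college_prefs}   # tentative admits per college
    free = deque(student_prefs)

    while free:
        s = free.popleft()
        if next_choice[s] >= len(student_prefs[s]):
            continue                        # exhausted list: unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        if s not in rank[c]:
            free.append(s)                  # college didn't rank s; try next
            continue
        held[c].append(s)
        held[c].sort(key=rank[c].get)       # best-ranked admits first
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())      # lowest-ranked admit loses seat

    return held

students = {"ann": ["X", "Y"], "bob": ["X", "Y"], "cal": ["X", "Y"]}
colleges = {"X": ["cal", "ann", "bob"], "Y": ["ann", "bob", "cal"]}
result = match(students, colleges, {"X": 1, "Y": 2})
print(result)  # X holds cal; ann and bob end up at Y
```

Everyone prefers X, but X has one seat and ranks cal first, so ann and bob cascade to Y; no student-college pair would rather be matched with each other than with their assignments, which is the stability property that makes the scheme less stressful.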

## Sunday, December 29, 2013

### CitiBike share--what are the chances?

I have been working with Joe Jansen on the Citibike data in the R language. Citibike is New York's bike-sharing program, which started in May and currently has more than 80,000 annual members. R is a freely available programming language for statistical computing, based on the S language originally developed at Bell Labs.

Joe has downloaded all the data and done an extensive analysis, which you can find here. I did a simpler analysis, predicting trips with a statistical regression model and graphing the results with the ggplot2 package in R. I found maximum temperature, humidity, wind, and amount of sunshine to be significant factors in predicting the number of trips taken on any given day. Rain was not a significant factor, but it is likely confounded with sunshine, so it may only drop out after accounting for sunshine. Keep in mind, too, that many days with rain, especially in the summer, are otherwise sunny days with an hour or two of rain or thunderstorms. Surprisingly, the day of the week was not an important factor influencing the number of trips. The R-squared, a standard measure of predictive power on a scale from 0 to 100%, was more than 70%.
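The model was fit in R, but the shape of the analysis is easy to sketch. The version below uses Python with synthetic data, since the actual Citibike dataset and fitted coefficients aren't reproduced in the post; the predictor names match the post, while every coefficient and range is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the daily Citibike data: weather features and
# trips per 1,000 members. All coefficients here are invented.
n = 200
max_temp = rng.uniform(30, 95, n)    # degrees F
humidity = rng.uniform(20, 100, n)   # percent
wind = rng.uniform(0, 25, n)         # mph
sunshine = rng.uniform(0, 1, n)      # fraction of possible sunshine
trips = (5 + 0.8 * max_temp - 0.2 * humidity
         - 0.5 * wind + 20 * sunshine + rng.normal(0, 5, n))

# Ordinary least squares: trips ~ temp + humidity + wind + sunshine
X = np.column_stack([np.ones(n), max_temp, humidity, wind, sunshine])
beta, *_ = np.linalg.lstsq(X, trips, rcond=None)

# R-squared: fraction of variance in trips explained by the model
resid = trips - X @ beta
r2 = 1 - resid.var() / trips.var()
print(beta.round(2), round(r2, 3))
```

The fitted coefficients recover the (made-up) weather effects, and the R-squared plays the same role as the 70%+ figure quoted above.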

Here is a graph of the results that shows the predicted number of trips per 1,000 members versus the actual number of trips. The day of the week is indicated by the color of the point.

I am an amateur with ggplot2, so the legend lists the days of the week in alphabetical order rather than Monday, Tuesday, etc. Help on that and other aspects of ggplot2 for this graph would be welcome (please comment accordingly).

If day of the week made a difference, then for any given point on the x-axis (predicted trips), one color would sit higher on the y-axis than the others. For example, if more trips occurred on weekends, more of the green points (Saturday and Sunday) would be on top. However, no such effect seems to exist. I guess people are enjoying Citibike every day of the week, or casual weekend riders roughly make up for weekday commuters.
