College Selectivity
Does college selectivity have an effect on graduates’ long-term outcomes? Assessing the payoff to attending an elite university is difficult because elite universities admit and attract students who would have high earnings capacity no matter where they went to college. In addition to observable admissions credentials, two sets of unobservable factors correlate with both long-term success and attendance at an elite school. The first set consists of factors that admissions committees can observe but researchers cannot (enthusiasm, extracurricular activities, excellent writing). The second consists of characteristics that are latent even to the admissions committee but nevertheless make a subject more likely to apply to and attend an elite college, and also to succeed in his or her career (ambition, a more accurate assessment of one’s own potential). Without controls for these factors, ordinary regression analyses attribute a greater effect to a college’s selectivity than is actually operating. The authors of the studies below address this selection bias using a range of methodologies and find that, contrary to the conclusions of the bulk of the existing literature on the subject, college eliteness makes little difference to long-term earnings potential. Neither paper thoroughly addresses potential mismatch effects. At first blush, the findings might seem to contradict mismatch: after all, if the prestige of an institution has no impact on earnings potential, it would follow that a college graduate’s earnings are related to his incoming ability, not to his incoming credentials relative to his peers. However, because neither paper used within-race matching techniques, the effect of a large disparity in incoming ability (as opposed to a disparity within the range typical of non-preference admits) has not been tested. The papers do, however, provide an important insight for education policy: we probably tend to exaggerate the true utility of attending an elite college.
Stacy Berg Dale and Alan B. Krueger
(1999)
Abstract
Dale and Krueger used the Mellon Foundation’s College and Beyond dataset to measure the effect of attending a selective university. The dependent variable was earnings for subjects who were working full-time at the time of the last follow-up survey (at roughly age 37). The independent variable of interest was the institutional SAT score, that is, the average SAT score of the school that the subject attended.
Dale and Krueger illustrate the selection bias problem in assessing the utility of attending a selective college by running a simulation with 4,000 generated “subjects” whose earnings correlated with observable and unobservable measures of ability, but not with the selectivity of their college. OLS regressions that did not account for the unobservables attributed a large effect to college selectivity, an effect as large as that of the subject’s own SAT score.
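A minimal sketch of the kind of simulation described above, assuming invented coefficients, noise levels, and variable names (this is not the authors’ code; only the idea and the sample size come from the paper):

```python
# Sketch: earnings depend on observed and unobserved ability but NOT on college
# selectivity, yet selective colleges admit on both kinds of ability. An OLS
# regression that omits the unobservable then loads heavily on institutional SAT.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 4000

observed_ability = rng.normal(0, 1, n)    # e.g., the subject's own SAT (standardized)
unobserved_ability = rng.normal(0, 1, n)  # ambition, writing ability, etc.

# Institutional SAT reflects both kinds of ability, so it is correlated with the unobservable.
institutional_sat = 0.5 * observed_ability + 0.5 * unobserved_ability + rng.normal(0, 0.5, n)

# True earnings process: selectivity has no causal effect at all.
log_earnings = observed_ability + unobserved_ability + rng.normal(0, 1, n)

# Naive regression (unobserved ability omitted): institutional SAT picks up a large,
# spurious coefficient even though it has no causal effect.
X_naive = sm.add_constant(np.column_stack([observed_ability, institutional_sat]))
print(sm.OLS(log_earnings, X_naive).fit().params)

# Regression that includes the unobservable: the institutional SAT coefficient is near zero.
X_full = sm.add_constant(np.column_stack([observed_ability, unobserved_ability, institutional_sat]))
print(sm.OLS(log_earnings, X_full).fit().params)
```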
To better control for unobserved pre-existing ability, Dale and Krueger developed two models (in addition to the usual OLS regression model). First, they used a matching model that matches students who were accepted and rejected by similar institutions. Subjects who had similar application behavior and similar admissions outcomes were grouped together, and dummy variables were introduced into the regression equation for each group. This model helped control for the set of factors that admissions officers were able to observe but researchers could not. The authors also used a “self-revelation” model that added to the standard regression equation a variable for the average institutional SAT score of the colleges to which the subject applied, as well as dummy variables tracking the number of applications the subject sent.
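A sketch of how the two specifications might be written down, using a synthetic dataset; every column name here (match_group, avg_applied_sat, n_apps, and so on) is our own placeholder rather than a variable from the College and Beyond data:

```python
# Sketch of the matched-applicant and self-revelation specifications on synthetic data.
# Only the shape of the regressions is meant to be informative, not the estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "own_sat": rng.normal(1100, 150, n),
    "match_group": rng.integers(0, 50, n),  # group of subjects with the same applications and outcomes
    "n_apps": rng.integers(1, 6, n),        # number of applications sent
})
df["avg_applied_sat"] = df["own_sat"] + rng.normal(0, 50, n)   # avg. institutional SAT of schools applied to
df["inst_sat"] = df["avg_applied_sat"] + rng.normal(0, 50, n)  # institutional SAT of school attended
df["log_earnings"] = 0.001 * df["own_sat"] + rng.normal(0, 0.5, n)

# Matched-applicant model: a dummy for each group with the same application
# behavior and the same admissions outcomes.
matched = smf.ols("log_earnings ~ inst_sat + own_sat + C(match_group)", data=df).fit()

# Self-revelation model: the average institutional SAT of the schools applied to,
# plus dummies for the number of applications sent.
self_rev = smf.ols("log_earnings ~ inst_sat + own_sat + avg_applied_sat + C(n_apps)", data=df).fit()

print(matched.params["inst_sat"], self_rev.params["inst_sat"])
```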
In both the matching and the self-revelation models, the coefficient for the institutional SAT score shrank to zero and lost significance (Tables 5a and 5b). By adding an interaction term for parental income and institutional SAT, the authors determined that the value of attending an elite college is smaller for wealthier applicants (or, stated positively, that the value of an elite education is greater for low-SES students). The authors also found that college-goers received real returns in exchange for paying higher tuition, suggesting that more expensive schools really do provide some premium. This tuition effect, like the institutional SAT effect, was greater for college-goers from lower-SES backgrounds (Table 9).
Dale and Krueger address mismatch or affirmative action only in passing. They limit the applicability of their findings to typical college-goers, whose range of perceived college options spans an average of 139 SAT points. They point out that nothing has been said about college-goers whose options span, say, three hundred SAT points. They do, however, mention “results not reported here” from models that include an interaction variable of the subject’s SAT score with the school’s institutional SAT score. The effect was significant and negative, even when a one-sided variable (which zeroes out subjects whose SAT scores exceed the average of their school) was included. The authors conclude, “Thus, there is no evidence in these data that students who score relatively low on the SAT exam do worse in the labor market by attending schools with relatively high average SAT.”
The authors repeat their regressions with the pool limited to black subjects in the College and Beyond sample. Though the results are not reported, they find that the standard OLS regression and the self-revelation models produce the same outcomes as for the general population.
SEAPHE Comments
The self-revelation model seems potentially problematic. While application behavior likely has a relationship to unobserved abilities, it probably also has a relationship to irrational optimism. It may also have a relationship to SES, since students from wealthier backgrounds do not find application costs or expected tuition to be as much of an impediment. This might explain why the significance of parents’ income is lower in the self-revelation models than in the other models. So while college-goers may have some information about themselves that researchers cannot observe, this information is probably less accurate and useful than the information received by admissions committees. The matching model seems to retain the advantages of the self-revelation model (by matching students on the set of schools to which they applied) while also taking advantage of the screening that admissions offices provide. Thus, it is not surprising that the matching model outperformed the self-revelation model in the authors’ simulations.
Though not the subject of the article, Dale and Krueger’s treatment of affirmative action was disappointingly shallow. Their models detect a significant and large effect for the black dummy variable, but the matching and self-revelation models might not be making appropriate comparisons, since black students’ admissions outcomes as well as their application behavior are affected not only by unobserved measures of ability but also by the longstanding practice of affirmative action. In other words, black subjects who are matched to white students who applied to the same colleges and had the same admissions outcomes might not actually be comparable in terms of the qualities that signal potential. This might explain, or at least partially explain, the large and negative coefficient for the black dummy variable: it could be serving as a correction for inappropriate matches. Presumably, the authors did not have enough black subjects in their sample to run the matching model on the pool of black subjects alone.
The brief mention of potential mismatch effects (the unreported models that use SAT-by-institutional-SAT interaction variables) is difficult to assess. First, we do not believe the interaction model the authors used can test whether attending a school with a high average SAT relative to one’s own has an effect on labor market performance, because the sign of the coefficient on the interaction variable has no clear interpretation. Suppose a subject had an SAT score of 1000 and attended a college with an institutional SAT score of 1200, so that the interaction term takes the value 1,200,000. If the subject’s SAT score were 200 points higher, the mismatch theory would predict better outcomes; because the interaction term rises (to 1,440,000), this prediction requires a coefficient greater than zero. If instead the subject’s SAT score stayed the same but his college’s institutional SAT score were 200 points lower, the mismatch theory would again predict better outcomes; because the interaction term falls (to 1,000,000), this prediction requires a coefficient less than zero. So the model prevents a meaningful test for mismatch effects.
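The same point can be put in equation form. The notation below is ours, and the unreported specification may differ, but it makes the sign problem explicit:

```latex
% Sketch of the sign problem; notation ours, not necessarily the authors' exact specification.
\[
  E[\log w] \;=\; \beta_0 \;+\; \beta_1\,\mathrm{SAT}
    \;+\; \beta_2\,\overline{\mathrm{SAT}}
    \;+\; \beta_3\,\bigl(\mathrm{SAT}\times\overline{\mathrm{SAT}}\bigr) \;+\; \cdots
\]
% Mismatch predicts better outcomes whenever the gap $\overline{\mathrm{SAT}}-\mathrm{SAT}$
% shrinks. Raising SAT (with $\overline{\mathrm{SAT}}$ fixed) shrinks the gap and raises the
% interaction term, which would require $\beta_3 > 0$; lowering $\overline{\mathrm{SAT}}$
% (with SAT fixed) also shrinks the gap but lowers the interaction term, which would require
% $\beta_3 < 0$. No single sign of $\beta_3$ can express the mismatch prediction.
```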
However, two of the article’s findings, when combined, suggest that a mismatch effect might be operating: (1) attending a more selective university causes a decrease in class rank; all three of the authors’ models found that the coefficient for the institutional SAT score was large and negative when percentile class rank was the outcome measure. (2) An increase of 7 percentile points in class rank causes a 3.2 percent increase in earnings. Therefore, a decrease in class rank depresses earnings potential.
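As a purely illustrative back-of-the-envelope combination of the two findings (the size of the class-rank drop below is hypothetical, not a figure from the paper):

```latex
% Back-of-the-envelope combination of findings (1) and (2); the 14-point drop is hypothetical.
\[
  \text{earnings change} \;\approx\; -\,\frac{\Delta_{\text{rank}}}{7}\times 3.2\%,
  \qquad \text{e.g. } \Delta_{\text{rank}} = 14 \;\Rightarrow\; \text{roughly } -6.4\%.
\]
```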
Jennie E. Brand and Charles N. Halaby
(2005)
Abstract
Brand and Halaby estimate how much benefit attending an elite college confers in terms of long-term educational, occupational, and socio-economic outcomes. The authors used the Wisconsin Longitudinal Study, which tracked over 10,000 graduates of the class of 1957 at Wisconsin high schools through thirty-five years of post-high school experience. By using both regression and matching techniques, the authors are able to minimize selection bias by comparing the long-term outcomes of graduates of elite colleges to very similar graduates of non-elite colleges.
In the matching models, the authors compare students with a similar propensity to attend an elite college (regardless of whether they actually did). The propensity was estimated from academic achievement (high school class rank, IQ scores, college track, semesters of math, type of school) and family background (SES as well as family structure, religious affiliation, and rural/urban setting). The authors then examined the average effect of the treatment (attending an elite institution) on students with the same propensity score. Outcome measures included college graduation, advanced degree attainment, and occupational and wage outcomes.
The authors found that the returns to an elite education are smaller, and lose statistical significance, when this very complete set of pre-admission information is controlled for. Most interestingly, the authors were able to use their matching model to estimate not only the effect that attending an elite college had on the people who actually attended one, but also what the treatment effect would have been on people who attended non-elite colleges. They found that the premiums for subjects who actually attended elite colleges were smaller than the premiums would have been for similar students who did not attend elite colleges. This pattern held for nearly every outcome (graduation and further education, socio-economic status, and wages).
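A sketch, on synthetic data with our own variable names, of how a propensity-score comparison can yield both an effect of treatment on the treated and an effect of treatment on the untreated; the simple nearest-neighbor matching used here is only one of several ways to implement the idea and is not Brand and Halaby’s code:

```python
# Sketch: estimate each subject's propensity to attend an elite college from
# pre-admission covariates, then compare outcomes of treated and untreated
# subjects with similar propensities. All data and names here are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "class_rank": rng.normal(0, 1, n),   # standardized pre-admission covariates
    "iq": rng.normal(0, 1, n),
    "family_ses": rng.normal(0, 1, n),
})
ability = 0.5 * df["class_rank"] + 0.5 * df["iq"]
df["elite"] = (ability + 0.3 * df["family_ses"] + rng.normal(0, 1, n) > 1).astype(int)
df["wage"] = ability + 0.5 * df["family_ses"] + rng.normal(0, 1, n)  # no true elite effect

# 1. Estimate the propensity to attend an elite college from pre-admission covariates.
X = df[["class_rank", "iq", "family_ses"]]
df["pscore"] = LogisticRegression().fit(X, df["elite"]).predict_proba(X)[:, 1]

treated, control = df[df["elite"] == 1], df[df["elite"] == 0]

# 2. Effect of treatment on the treated (ATT): match each elite attendee to the
#    non-attendee with the nearest propensity score.
nn_c = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
idx = nn_c.kneighbors(treated[["pscore"]], return_distance=False).ravel()
att = (treated["wage"].values - control["wage"].values[idx]).mean()

# 3. Effect of treatment on the untreated (ATU): match each non-attendee to the
#    elite attendee with the nearest propensity score.
nn_t = NearestNeighbors(n_neighbors=1).fit(treated[["pscore"]])
idx = nn_t.kneighbors(control[["pscore"]], return_distance=False).ravel()
atu = (treated["wage"].values[idx] - control["wage"].values).mean()

print(f"ATT ~ {att:.3f}, ATU ~ {atu:.3f}")  # both should be near zero here by construction
```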
SEAPHE Comments
The matching models here differ from those of Dale and Krueger. While Dale and Krueger used dummy variables for students matched by application behavior and admissions outcomes, Brand and Halaby believe that average treatment effects conditioned on propensity are a superior analytic approach. While regression models fit the variables to a pre-designated curve (usually linear or logarithmic), estimates of average treatment effects do not have to assume anything about the shape of the functional relationship between the inputs and the outcome variables. The results of the two papers are very complementary; Brand and Halaby’s estimated treatment effects for students who did not attend an elite college are reminiscent of Dale and Krueger’s finding that the benefits of an elite education increase as parental income decreases.
One thing we really like about the Brand and Halaby paper is that it investigates the topic using a variety of models and discusses the benefits and drawbacks of each. The consistency of the findings across regression and matching models reinforces the validity of the conclusions. At the same time, the slight differences in the outcomes of the models add something to the methodological literature.