The Wall Street Journal reports that legislation will be introduced to identify which universities provide the most bang for the buck in terms of students’ employment prospects. See more here. With college tuition debt surpassing $1 trillion and unemployment relatively high among recent graduates, this is a serious issue. It’s alarming that employment information is generally not available to prospective students when they select a university.
There are three serious challenges in identifying “valuable” universities or programs:
1. Identification of meaningful metrics.
2. Data collection and analysis to support the metrics in 1.
3. Communication of what is learned to prospective students.
I’m not going to directly discuss challenge #1 (identifying metrics) in this post, but it’s a serious issue. After all, bad rubrics can lead to an explosion in the majors that score well on those rubrics, justified or not. Let’s assume that we can agree on some basic metrics, e.g., student loan default rates.
Instead, I’ll discuss the challenges in data collection and communication, since these are core analytics problems. No progress can be made toward recognizing valuable universities and degree programs without analytics. How should a university collect data, analyze the data, and report the results to prospective students?
Data collection is a problem for most universities – the students who report back salary information are usually those who have jobs, which makes it almost impossible to infer unemployment rates. Some states, including my present home state of Virginia [Link], are collecting information about students on a large scale, so missing data will not be such a big problem. But there is still room for improvement:
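To see why self-reported salary surveys mislead, here is a minimal sketch with entirely made-up numbers: if only employed graduates respond, the survey produces a salary average while saying nothing about the graduates who have no salary to report.

```python
import statistics

# Hypothetical cohort of 10 graduates: a number is an annual salary,
# None means unemployed. All values are invented for illustration.
outcomes = [55000, 48000, 62000, None, 51000, None, 70000, None, 45000, 58000]

# Graduates with jobs are far more likely to report back, so a naive
# survey effectively sees only the employed respondents.
reported = [s for s in outcomes if s is not None]

naive_mean = statistics.mean(reported)

# The unemployment rate is invisible to the survey: 3 of 10 graduates
# in this made-up cohort never appear in the salary data at all.
employment_rate = len(reported) / len(outcomes)

print(f"naive mean salary: ${naive_mean:,.0f}")
print(f"employment rate hidden from the survey: {employment_rate:.0%}")
```

The survey happily reports a mean salary while the 30% of the cohort that matters most for the “is this degree worth it?” question is simply missing from the data.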
Last year, Virginia lawmakers began requiring the State Council of Higher Education for Virginia to produce annual reports on the wages of college graduates 18 months and five years after they receive their degrees. Beginning this year, the reports must also include average student loan debt.
The state data have shortcomings. Paychecks for the same job can vary widely by location. Salary data don’t reflect self-employed graduates or those who work for the U.S. government or move to another state.
More analytics questions: How should data be analyzed for graduates who have gone on to graduate school? Students who are self-employed? What if many graduates go on to live in a city with a high cost of living and are paid more than their peers who live in more affordable places?
Employment rates are a function of major as well as university (as well as other factors, of course). Assessing by major and university introduces new challenges. Small programs–like entomology and maybe operations research programs–are going to be hard to assess: they are sensitive to outliers that skew expected values, and to missing data. We may not be able to say much about entomology majors at a university due to too few data points. Can we infer whether such a major is a good investment based on other factors?
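The outlier sensitivity of small programs is easy to demonstrate. A sketch with invented salaries: one unusually high earner drags a six-person program’s mean up by tens of thousands of dollars, while the same outlier barely moves a sixty-person program.

```python
import statistics

# Hypothetical salaries, all made up: a small program with 6 graduates
# and a large program with 60, each containing one $250k outlier.
small_program = [42000, 45000, 40000, 43000, 41000, 250000]
large_program = [42000 + 500 * i for i in range(59)] + [250000]

small_mean = statistics.mean(small_program)      # dragged far up by one grad
small_median = statistics.median(small_program)  # robust to the outlier
large_mean = statistics.mean(large_program)      # outlier matters much less

print(f"small-program mean:   ${small_mean:,.0f}")
print(f"small-program median: ${small_median:,.0f}")
print(f"large-program mean:   ${large_mean:,.0f}")
```

In this toy example the small program’s mean lands near $77k while its median sits near $42k – a single graduate’s outcome effectively becomes the program’s published metric.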
All of the tools I looked at for evaluating universities reported a single metric that reeked of expected values. There are few attempts to report a range or the uncertainty around the single metric. I suspect that this can be improved. Online retailers have replaced the average rating (based on 1-5 stars) with a confidence level based on the total number of reviews submitted. The confidence level is still conveyed as a single scalar value, but it’s more meaningful. How confident are we about the few entomology majors?
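One standard way to build the retailers’ idea into a university metric is the lower bound of the Wilson score interval, applied here (as an assumed example, not the retailers’ actual formula) to an employment rate: the same observed rate earns a lower score when it rests on fewer graduates.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval for a proportion.

    Penalizes small samples: the same observed rate gets a lower
    score when it is backed by fewer observations.
    """
    if n == 0:
        return 0.0
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin) / denom

# Made-up example: 4 of 5 entomology grads employed vs. 160 of 200
# business grads. Both are an 80% rate, but the tiny sample deserves
# far less confidence.
small = wilson_lower_bound(4, 5)      # well below 0.8
large = wilson_lower_bound(160, 200)  # close to 0.8
print(f"entomology (n=5):   {small:.3f}")
print(f"business (n=200):   {large:.3f}")
```

A ranking built on this score would naturally push the five-student program toward the middle of the pack instead of letting a handful of lucky or unlucky graduates determine its placement.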
Another concern with a single metric is that it does not convey what has happened over time. I see room for analytics here to recommend, say, when it’s worthwhile to consider law school again after there has been a substantial decline in law school admissions [Link]. This is a big issue since loan default rates (one possible metric) have gone up everywhere over the last few years, but at different rates (at this college, default rates increased from 10% to 20% in three years; others are less troubling). Trends over time are important.
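A tiny sketch of why snapshots mislead, using hypothetical numbers that echo the 10%-to-20% example above: two schools can look similar on the latest default rate while their trajectories point in opposite directions.

```python
# Hypothetical three-year default-rate series (made-up numbers).
school_a = [0.10, 0.15, 0.20]  # rapidly deteriorating
school_b = [0.20, 0.18, 0.17]  # high, but improving

latest_a, latest_b = school_a[-1], school_b[-1]
trend_a = school_a[-1] - school_a[0]  # +10 percentage points
trend_b = school_b[-1] - school_b[0]  # -3 percentage points

# On the latest snapshot alone, B (17%) looks nearly as bad as A (20%),
# yet the trends tell opposite stories.
print(f"A: latest {latest_a:.0%}, change {trend_a:+.0%}")
print(f"B: latest {latest_b:.0%}, change {trend_b:+.0%}")
```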
In terms of conveying information, this is hard to do at a university-wide level. Having said that, I like the chart below of public 4-year universities in Illinois. Having grown up in Illinois and knowing quite a bit about the public universities, I would personally rank the public universities there in ascending order of their student loan default rates, regardless of major.
I’m less inclined to do so in Virginia, where some metrics such as average salary can be misleading. For example, George Mason University graduates can earn quite a lot because they often get jobs in DC, where cost of living is through the roof. They are not necessarily better off than students who get jobs at, say, Virginia Tech.
Conveying information at the university level may be too coarse. I’ve checked out quite a few online tools for assessing the quality of different universities, and the level of aggregation is sometimes alarming. This online tool gives very coarse ratings at the state level, which is meaningless: there are good and bad places to get a degree in every state, and they should not be lumped together. Still, some aggregation is necessary. This is an area where analytics can be useful: at what level should we report outcomes – the state level, university level, college level, department level, or something else (e.g., different regions or industries where graduates may get jobs)?
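A quick illustration, with invented default rates, of how a state-level number erases exactly the distinction a prospective student needs: a single state average can sit halfway between two clusters of very different schools.

```python
import statistics

# Hypothetical default rates for four universities in one state
# (all numbers made up for illustration).
default_rates = {
    "University A": 0.04,
    "University B": 0.06,
    "University C": 0.19,
    "University D": 0.21,
}

state_average = statistics.mean(default_rates.values())
spread = max(default_rates.values()) - min(default_rates.values())

# The state-level figure (12.5%) describes none of the four schools:
# two sit near 5% and two sit near 20%.
print(f"state-level rate:    {state_average:.1%}")
print(f"within-state spread: {spread:.1%}")
```

The 17-point spread inside the state dwarfs the differences a state-level tool could ever show between states, which is the argument for reporting at the university or department level instead.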
What role do you think analytics will – or should – play in evaluating universities?