
MICA, Payscale, and more about statistical sampling than you want to know

April 1, 2014
MICA’s media spokesman has been working hard to counter a Payscale.com survey, mentioned last week in The Atlantic, that found MICA to be one of the least cost-effective colleges in the country. [UPDATE, below] City Paper had some fun with it as well, comparing the Payscale-derived annual projected income of MICA grads to both high school grads in general and MICA adjuncts in particular, as adjuncts are gearing up for a union election.

Cedric Mobley lashed out at us in an email over the weekend, claiming Payscale’s numbers were not credible and its methodology “scientifically unsound.” He wrote much the same in this Baltimore Business Journal piece.

And he may have a better point than he even claims. A look at the Payscale methodology page finds a confusing claim: when estimating 30-year median pay, the “90 percent confidence interval for small liberal arts schools is plus or minus 10 percent.”

Other stats sites, including this handy sample size calculator, define the “confidence interval” in a lower-is-better way: if you were polling a group of 2,400 graduates and wanted a “90 percent confidence interval” at a 95 percent confidence level, you’d need to speak to only one student.

Now, if you wanted a confidence interval of 10 percent, you’d have to poll 92 of those 2,400 students.
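For readers who want to check our arithmetic: those two figures come out of the standard sample-size formula for a proportion, with a finite-population correction. This is a sketch of what calculators like the one linked above typically compute, assuming their usual defaults (95 percent confidence level, maximum variance), not Payscale’s actual method:

```python
import math

def sample_size(population, margin, z=1.96, p=0.5):
    """Respondents needed to estimate a proportion within +/- `margin`,
    at the confidence level implied by `z` (1.96 ~ 95 percent),
    with a finite-population correction for `population` people."""
    n0 = z**2 * p * (1 - p) / margin**2          # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))  # finite-population correction

print(sample_size(2400, 0.90))  # margin read as +/-90 percent -> 1
print(sample_size(2400, 0.10))  # margin read as +/-10 percent -> 92
```

The jump from 1 to 92 is the whole point: reading “90 percent confidence interval” as a margin of error implies an absurdly tiny sample, while reading it as ±10 percent implies a modest but real one.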

In other words, either Payscale doesn’t know what “confidence interval” means, we’re misunderstanding Payscale’s use of “confidence interval,” or the sample size is really, really, really tiny.

We’ve emailed and called Payscale to see which of these is the case and will update if or when we hear back.

That said, Mobley is in something of a bind. City Paper compared its adjunct instructors’ pay with that of high school graduates (we used National Center for Education Statistics figures for that). We found that MICA adjuncts make, on an hourly basis, maybe a dollar or so more than the typical high school grad.

As MICA adjuncts pretty much all have at least a master’s degree, and many have Ph.D.s, the comparison makes the adjunct pay look a bit worse than it would otherwise, and much worse even than the Payscale survey found for the “average” MICA grad.

++

UPDATE: Payscale spokesman Steven Gottlieb emailed us last night: turns out the company uses “confidence interval” in the opposite sense from our little online calculator. Their “90 percent” is the confidence level, paired with a margin of error of 5 to 10 percent. Which means they definitely heard from more than one MICA grad. Full email below:

The 90% confidence interval of 5-10% means we are 90% confident the true population median is +/- 5-10% of the reported median. This confidence interval, which represents the margin of error on the earnings figures, factors in both the sample size and spread in pay. The reason we do this is because you can have a school with a relatively small sample size (e.g., Harvey Mudd), but also a relatively small spread in pay since most graduates will earn a similar amount due to the fields of study offered there (STEM). You could also have a school that has a large sample, but a very wide spread in pay and thus your certainty about the median representing the true median is lower.

We have some comments based on the [BBJ] article linked:

  • Not sure where they got the sample size listed of 62 observations. We are assuming it is from our public-facing research center, and if so then this doesn’t accurately capture the sample utilized, as the research center only surfaces a portion of our data set.

  • The comments assume our overall data skews low, which is an inaccurate assumption. A large part of what we do is review our compensation data relative to other sources to look for these sorts of systematic biases, and this has not been observed.

  • It is correct that we don’t include self-employed workers, which they say represent over half of their graduates. This has been an issue with art schools in the past. Maybe those who are self-employed are more successful than those who aren’t, so maybe our data is low for them overall. It is too hard to say.
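Gottlieb’s point that the margin of error depends on both sample size and the spread in pay is easy to demonstrate. Here is a quick bootstrap sketch (our illustration with made-up salary numbers, not Payscale’s actual method): a small sample from a school where everyone earns about the same can pin down the median more tightly than a much larger sample from a school with wildly varying pay.

```python
import random
import statistics

def bootstrap_median_ci(data, level=0.90, reps=2000, seed=0):
    """Percentile-bootstrap confidence interval for the median:
    resample with replacement, take each resample's median, and
    report the middle `level` share of those medians."""
    rng = random.Random(seed)
    meds = sorted(
        statistics.median(rng.choices(data, k=len(data))) for _ in range(reps)
    )
    lo = meds[int(reps * (1 - level) / 2)]       # e.g. 5th percentile
    hi = meds[int(reps * (1 + level) / 2) - 1]   # e.g. 95th percentile
    return lo, hi

rng = random.Random(1)
# Small sample, tight pay spread (the "Harvey Mudd" case in the email)
tight = [rng.gauss(80_000, 4_000) for _ in range(40)]
# Ten times the sample, but a much wider pay spread
wide = [rng.gauss(80_000, 40_000) for _ in range(400)]

for label, data in (("n=40, tight spread", tight), ("n=400, wide spread", wide)):
    lo, hi = bootstrap_median_ci(data)
    print(f"{label}: 90% CI on median runs {lo:,.0f} to {hi:,.0f}")
```

With these invented numbers, the 40-person tight-spread sample yields a narrower interval than the 400-person wide-spread one, which is exactly the tradeoff the email describes.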


  • Joe Basile

    This STILL doesn’t get to the heart of the matter. PayScale did not survey graduates–it utilized data left by users of its website. That’s not a survey–it’s something else. If you want survey data, look at the data Cedric Mobley provided in the BBJ “rebuttal”. The SNAAP survey got responses from something like 1500 MICA alumni. Its findings are much more representative and simply don’t jibe with what PayScale is offering as “survey data”.
    PayScale says the number of MICA graduates who participated in their study, cited as 62 in some sources, is inaccurate. OK–what IS the number? There are almost 6000 MICA alumni–is it half that? A quarter? I bet it’s nowhere near a quarter, and is probably closer to 2%. Why won’t they say?

  • Rebecca

    “If you want survey data, look at the data Cedric Mobley provided in the BBJ “rebuttal” ”

    SNAAP, the organization MICA paid to produce the ‘data’ both Cedric Mobley and Joe Basile cite, and on whose board the current and future presidents of MICA serve, has been widely criticized. Ray Allen, MICA’s provost, along with Samuel Hoi, future president of MICA, are featured speakers at their upcoming conference. SNAAP is a marketing tool masquerading as a ‘survey’ and as disconnected from reality as the people who paid for it.

    Instead of killing the messenger or providing bogus data (in an outrageously arrogant way), how about dealing with the very real problems facing your school?

    I strongly suggest anyone seriously considering these rebuttals based on SNAAP read this succinct analysis:
    http://createquity.com/2013/01/strategic-national-arts-alumni-project-the-condensed-version.html

    The article concludes with this quote:
    “Ultimately, the SNAAP report is largely inconclusive, in large part due to the questions about its representativeness and because one of the few metrics that could potentially serve as a common yardstick with other fields of study is complicated by SNAAP’s alternative approach to measuring employment status.”

    References:

    The current president of MICA, Fred Lazarus IV, served on the SNAAP Advisory Board from 2008 – 2012.
    http://www.mica.edu/About_MICA/Fred_Lazarus_IV_President/Awards_and_Boards.html

    The incoming president of MICA, Samuel Hoi, currently serves on the National Advisory Board for SNAAP, http://snaap.indiana.edu/about/board/samuel-hoi.html

    The current provost, Ray Allen, along with the incoming president, Samuel Hoi, are prominently featured in SNAAP’s 3millionstories project/conference:
    http://snaap.indiana.edu/usingSNAAPData/3MillionStoriesSpeakers.cfm

    And there are many other issues, too numerous to list here but easily found on the link above.

  • theaegean

    “More about statistical sampling than you want to know,” huh? Based on the title, I was hoping that the author would address the choice of using a 90% confidence level, or maybe provide an explanation of how it’s below the generally accepted standard of 95%. Better yet, how about an example showing just how imprecise the findings are. Instead we have an entire article based on the inability to decipher statistician shorthand: that the +/-5 to 10% is the confidence *interval* corresponding to a 90% confidence *level*.

  • Sara

    These writers are most likely MICA administrators. The top tier always wants to keep its money and status and is not particularly concerned with “facts” or “data.”