After our thorough in-class discussion of the GRE earlier this semester, I found myself thinking about the utility of standardized admission tests. These tests are administered to a range of students: high school students typically take the SAT and/or the ACT for admission to an undergraduate program, while college students typically take the GRE for admission to a graduate program. There are other standardized tests, such as the LSAT (for entry into a law program) and the MCAT (for entry into medical school), but I will focus on the GRE, as it was the crux of Monday’s dialogue.
Some arguments I’ve read in support of the GRE (and standardized tests in general) claim that it “levels the playing field,” because curricula vary wildly from school to school. Simply put, there is too much variability across programs. Two baccalaureate engineering programs may both be ABET accredited, but that doesn’t guarantee all of the course content taught at one university was taught at the other. Comparing standardized test scores pits applicants directly against each other while setting those scholastic differences aside. However, a higher GRE score doesn’t mean one applicant is smarter than another who scored lower; the score is meant to predict a student’s performance in the first year of graduate school.
I find it interesting that ETS (the producer of the GRE) claims this test can predict student performance. The word “predict” is heavily rooted in statistics, and such a powerful claim must be substantiated by mountains of data. I’m not sure whether that data is publicly available, but I’d like to see exactly how ETS correlates GRE scores with graduate school performance. Commonly used metrics might include GPA and scientific productivity (number of publications, grants awarded, etc.). While such metrics are statistically valid to an extent, there are many hard-to-measure or not easily quantifiable factors to consider before obtaining a full measure of “success”: cognitive maturity, program rigor, mental health, social adaptability, creativity, and advisor agreeableness, to name a few. Each of these can affect a firmly quantitative measure of success (like GPA) that the GRE’s analysis may rely on. For example, moving to a new institution for graduate school may make it harder for a student to make friends or study effectively (social adaptability). I wonder how ETS incorporates these more qualitative factors into its analysis.
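To make the statistical idea concrete, here is a minimal sketch of what the simplest version of a predictive-validity check might look like: correlating applicants’ GRE scores with their first-year graduate GPAs. The data below is entirely invented for illustration, and this is in no way ETS’s actual methodology, which (as noted above) isn’t public here.

```python
# A toy predictive-validity check: how strongly do GRE scores
# correlate with first-year graduate GPA? (All data is hypothetical.)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical applicants: combined GRE score and first-year GPA.
gre_scores = [310, 325, 300, 330, 315, 305, 320, 335]
first_year_gpas = [3.4, 3.7, 3.1, 3.8, 3.5, 3.3, 3.6, 3.9]

r = pearson_r(gre_scores, first_year_gpas)
print(f"Pearson r between GRE score and first-year GPA: {r:.2f}")
```

Even a strong correlation in data like this would only capture the quantitative side; none of the qualitative factors listed above (social adaptability, advisor agreeableness, and so on) appear anywhere in such a calculation, which is exactly the limitation I’m curious about.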
In class, I asked Dean DePauw why finding GRE scores is significantly harder than finding SAT scores across institutions. For instance, Stanford’s undergraduate mid-50% SAT scores are easily found on the Incoming Freshman Profile, but the mid-50% GRE scores are not mentioned anywhere on the graduate admission page. Someone responded that GRE datasets are publicly available online through ETS (or some other site, I can’t exactly remember), but I still find it odd that such data isn’t readily available. This goes hand-in-hand with my question of whether ETS should be more transparent with the GRE, from developing questions to releasing scores to analyzing their data to improve the test.
One major obstacle to transparency is ETS’s business model. ETS still needs to profit, and giving away too much insight into its operations would certainly hurt the company. Since ETS is a major distributor of GRE preparation materials, divulging “test secrets” would not benefit its business model either, since competitors could buy those materials and copy the contents. It’s a fine line to walk, and I don’t believe there’s an easy solution, but perhaps being more transparent about question development would make studying for and taking the test less aggravating.
(Disclaimer: I have not taken the GRE, so my views in this article stem from what my classmates, colleagues, and friends have said about the test, as well as the hyperlinked articles in this post.)