Education, all levels

Have you ever taken any IRT / computer-adaptive test with a suicide time limit? If the answer is no, then I’m not sure you can fully appreciate what it’s testing or how much preparation is required to get a “good” score in 2023. There are other peculiarities that don’t really add up, like large within-subject variation across multiple takes. That should basically never happen on a validated IRT instrument, yet the $500/hr expert tutors seem to experience it frequently.

I took the GRE on a computer almost 25 years ago. (Christ I’m old). Pretty sure it was adaptive, and I assume there was a time limit. I really don’t remember much about it other than getting really drunk at the Jersey Shore after taking it.

I am sure I don’t appreciate how difficult it is to do well, but I continue to believe that it’s more accessible to potential applicants than most other application criteria, like whether you assisted in a research lab or competed in dressage.


Have you competed in dressage lately? :harold:


The NCLEX (national nursing licensure exam) is adaptive.

After you give an answer, you can’t later go back and change it. If you miss questions about a particular topic, it’ll ask you more questions about that same topic.

They also love the five-choice “select all that apply,” and it’s USUALLY three of them, but there’s a plausible fourth answer, and you have to decide. And if you get any part wrong, your answer is 100% wrong.
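If it helps to picture the “it keeps hammering your weak topics” behavior, here’s a toy sketch of that kind of item selection. It’s completely made up (invented topics, weights, and probabilities), not the actual NCLEX algorithm:

```python
# Toy adaptive item selection: topics you miss get weighted more heavily,
# so they come up more often. All numbers here are invented.
import random

random.seed(0)
TOPICS = ["pharmacology", "med-surg", "pediatrics", "mental health"]
asked = {t: 0 for t in TOPICS}
missed = {t: 0 for t in TOPICS}

for _ in range(30):
    # Weight each topic by 1 + number of misses so far on that topic.
    weights = [1 + missed[t] for t in TOPICS]
    topic = random.choices(TOPICS, weights=weights, k=1)[0]
    asked[topic] += 1
    if random.random() < 0.4:   # pretend the examinee misses 40% of items
        missed[topic] += 1

print("questions asked per topic:", asked)
```

Topics you miss early end up getting probed more and more, which is exactly how it feels while you’re sitting there.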

It was a very stressful test, but I passed!


YOU’RE old? I wrote it with pencil and paper. :grinning_face_with_smiling_eyes:

Pretty sure I was one of the last to not have a computer-based option.

I took that version of the GRE too, and it’s a way different test from the GMAT. I guess it depends on how you’re interpreting score ranges and differentials, though. Maybe I’m way off here, but I assume almost everyone applying to your program is scoring 600+?

I took the GRE almost 40 years ago, and I’m sure I used paper and pencil.

What was even weirder was taking the GRE Psychology subject test, which had a starting time of 1 pm. I had never before prepared for a standardized test by staying up late so I wouldn’t wake up too early.

There are one or two completely hopeless applicants every year with scores of like 450, but yes most of them are above 600.

Rhetorical question ldo

Never heard of this test

I’m highly skeptical that the scores are providing you with any useful information beyond weeding out those hopeless people. The official site says the mean score between Jan 2020 and Dec 2022 was 582, but notice that 580 only corresponds to the 40th percentile (!). That’s a significantly skewed distribution, with half of the submitted scores falling between 600 (47th percentile) and 740 (97th percentile). I’ll call that the relevant range of scores, since it runs from the minimum requirement at many “decent” schools on the low end to the average score at the top MBA programs on the high end. The stated measurement error on this test is 30 to 40 points, which is quite large relative to that range.

https://www.mba.com/exams/gmat-exam/scores/understanding-your-score
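For a sense of how mushy that is, here’s a quick back-of-the-envelope sketch. It’s my own toy model, just assuming normally distributed measurement error of about 35 points (the midpoint of the stated 30 to 40), nothing official:

```python
# How often does a given "true ability" produce a 700+ sitting, if a single
# sitting is true score plus ~35 points of normal measurement error?
# Toy model with an assumed SEM of 35.
from statistics import NormalDist

SEM = 35  # assumed standard error of measurement, in score points

for true_score in (620, 660, 700):
    p_700_plus = 1 - NormalDist(mu=true_score, sigma=SEM).cdf(700)
    print(f"true {true_score}: P(single sitting >= 700) = {p_700_plus:.0%}")
```

Under those assumptions, someone “really” sitting at 660 posts a 700+ about one time in eight, which is a lot of slop inside a 140-point range.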

The difference between 600 and 740 could easily be measurement error plus knowing how to game the algorithm, with no deeper understanding of the content. The optimal strategy for most people involves a significant amount of guessing (specifically on quant), since you only get two minutes per question but are penalized much more severely for answering easy and medium questions incorrectly than you are rewarded for answering difficult questions correctly. On top of that, there’s an additional massive penalty for not completing a section. More succinctly: you die if you miss the easy stuff or don’t finish, so don’t waste time on the hard stuff, because it doesn’t help much. Just guess on those and hope the dice are hot.
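Here’s a toy simulation of that logic. Every number in it is invented (point values, penalties, timings, question mix, and probabilities), so treat it as a sketch of the incentive structure, not the real scoring algorithm:

```python
# Toy model: missing easy/medium items or not finishing costs you far more
# than grinding out hard items ever earns you. All numbers invented.
import random

random.seed(1)
QUESTIONS = ["easy"] * 10 + ["medium"] * 10 + ["hard"] * 10
TIME_BUDGET = 60.0                                 # minutes, invented
UNANSWERED_PENALTY = 8                             # per question left blank
REWARD = {"easy": 1, "medium": 2, "hard": 4}       # points for a correct answer
PENALTY = {"easy": -8, "medium": -5, "hard": -1}   # points for a wrong answer

def attempt(q, grind_hard):
    """Return (minutes spent, points earned) for one question."""
    if q == "hard" and not grind_hard:
        # blind guess on a five-choice item: fast, ~20% chance of being right
        return 0.5, REWARD[q] if random.random() < 0.2 else PENALTY[q]
    minutes = {"easy": 1.0, "medium": 2.0, "hard": 4.0}[q]
    p_right = {"easy": 0.95, "medium": 0.80, "hard": 0.50}[q]
    return minutes, REWARD[q] if random.random() < p_right else PENALTY[q]

def expected_score(grind_hard, trials=20000):
    total = 0
    for _ in range(trials):
        t, score = 0.0, 0
        for i, q in enumerate(QUESTIONS):
            minutes, points = attempt(q, grind_hard)
            if t + minutes > TIME_BUDGET:
                score -= UNANSWERED_PENALTY * (len(QUESTIONS) - i)
                break
            t, score = t + minutes, score + points
        total += score
    return total / trials

print("grind on the hard ones:", round(expected_score(True), 1))
print("guess on the hard ones:", round(expected_score(False), 1))
```

In this toy version the guesser comes out well ahead, simply by never eating the unfinished-section penalty and never botching the cheap stuff.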

Now that you can cancel a score after seeing it, people are free to roll the dice multiple times and only submit their best take, which is why it’s a money-grab carnival game. There just don’t seem to be many people in the test prep community who are serious about it and only take it once. I would absolutely cancel any score below 700, and I assume many people are doing exactly that:

https://archive.ph/yGGY3
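And keeping only the best of several noisy sittings is worth real points all by itself. Same toy error model as above (assumed 35-point SEM, invented “true” score):

```python
# Expected value of the best of N sittings when each sitting is
# true score + normal(0, 35) measurement error. Toy model, invented numbers.
import random

random.seed(7)
SEM, TRUE_SCORE, TRIALS = 35, 660, 50000

for n_takes in (1, 2, 3, 5):
    avg_best = sum(
        max(random.gauss(TRUE_SCORE, SEM) for _ in range(n_takes))
        for _ in range(TRIALS)
    ) / TRIALS
    print(f"best of {n_takes} sitting(s): ~{avg_best:.0f}")
```

Under those assumptions it’s roughly 20 points “for free” on the second take and about 40 by the fifth, without learning anything new. That alone can carry someone across a 700 cutoff.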

You only need $275 to spin the wheel again (or $300 to have someone in India take it online for you), plus the add-on fees for detailed score reports and official prep materials, plus hundreds of dollars for third-party courses (optional), and thousands of dollars for private tutors (optional). It’s not uncommon at all to see people post/vlog about spending hundreds of hours and thousands of dollars on this stuff, sometimes with the desired outcome and sometimes not, all rat-racing for that 750+ to get into the Stanford MBA (Harvard safety) and begin the metamorphosis from Complete Fucking Loser into great business übermensch like Elon Musk.

The decline was even more dramatic during 2020 and 2021 when many schools announced changes to their application timelines and testing requirements in response to the COVID-19 pandemic. GMAC reported that only 66,626 unique GMAT examinees sent 301,107 score reports to programs around the world in 2021, down by 50% from the 133,345 GMAT test takers who sent 557,587 score reports (-46.0%) in 2017.

Edit: Oh weird, it switches to talking about total score reports. Who cares? Total number of takes is the only relevant thing. Lmao at one of the big-name private tutors claiming that encouraging multiple takes is a great development. Not exactly clear whom it’s great for. She’s a “content and curriculum lead” for MBA dotcom (the makers of the test) and also employed by a third-party test prep company. Every single aspect of this is designed to extract the maximum number of dollars from students.

you guys talkin’ about stuff?


It wouldn’t be fair to give the GMAT this much hate without pointing out that the GRE is equally terrible. None of the top mathematics PhD programs require the GRE general test, and several of them explicitly do not accept it. Among the top ten schools as ranked by US News, only Harvard and UCLA appear to require the math subject test. Beyond that, many of the better public R1 PhD programs that I checked also do not require any scores other than TOEFL, and Rackham (Univ of Michigan) has a detailed statement about their school-wide decision to no longer accept GRE scores for PhD programs:

https://rackham.umich.edu/about/strategic-vision/discontinuing-the-use-of-the-gre-in-rackham-phd-admissions-decisions/

Why has Rackham decided to discontinue the use of the GRE general test in its Ph.D. admissions decisions?

Financial costs of the GRE potentially deter well qualified applicants to Rackham Ph.D. programs, as GRE costs and application fees are out-of-pocket costs to applicants. Additionally, prospective applicants may perceive the need to engage test preparation services at significant cost—many for-pay test preparation firms guarantee outcomes on the GRE, including score increases equivalent to decades in percentile. In this way, using the scores can introduce inequity based on race, ethnicity, gender, first-generation status, and socioeconomic status into our admissions processes in a manner that is not well controlled.

The benefits of using the GRE in Ph.D. admissions have also not been demonstrated. The goals of Ph.D. education include success in research and scholarship, and the production of a dissertation. The research literature does not demonstrate that these long-term measures are predicted by GRE scores.

That PDF is the slide deck used for the presentation and includes the citations for the research on which the decision was based. Of those studies, Sealy et al. 2019 stood out to me. A potential problem with many of these studies is restriction of range: subjects who were admitted based on GRE scores tend to have uniformly high GRE scores, which shrinks any relationship you could hope to observe. One could argue, as I have in several posts above, that above a certain score threshold these tests don’t provide meaningful differentiation, but perhaps below that threshold they do. This paper doesn’t find that either, for GRE scores spanning a much wider range:

This study avoids the typical biases of most GRE investigations of performance where primarily high-achievers on the GRE were admitted. GRE scores, while collected at admission, were not used or consulted for admission decisions and comprise the full range of percentiles, from 1% to 91%.

We report on the 32 students recruited to the Vanderbilt IMSD from 2007–2011, of which 28 completed the PhD to date. While the data set is not large, the predictive trends between GRE and long-term graduate outcomes (publications, first author publications, time to degree, predoctoral fellowship awards, and faculty evaluations) are remarkably null and there is sufficient precision to rule out even mild relationships between GRE and these outcomes.
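On that range-restriction point, here’s a toy illustration (invented numbers, nothing to do with the Vanderbilt data) of how selecting admits on the predictor shrinks the correlation you can observe afterwards:

```python
# Simulate a world where a standardized score really does correlate ~0.4 with
# a later outcome, then compute the correlation among only high-scoring "admits".
# Toy model, invented parameters.
import random, statistics

random.seed(3)
R_TRUE = 0.4
scores, outcomes = [], []
for _ in range(100_000):
    s = random.gauss(0, 1)                                    # standardized score
    y = R_TRUE * s + random.gauss(0, (1 - R_TRUE**2) ** 0.5)  # outcome, true r = 0.4
    scores.append(s)
    outcomes.append(y)

admits = [(s, y) for s, y in zip(scores, outcomes) if s > 1.0]  # top ~16% only
adm_s, adm_y = zip(*admits)

print("full-range correlation :", round(statistics.correlation(scores, outcomes), 2))
print("admits-only correlation:", round(statistics.correlation(adm_s, adm_y), 2))
```

Even a genuine relationship looks weak once you only look at people who cleared a cutoff, which is why a cohort admitted without consulting the scores, like the one in this paper, is the more informative test.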


Computers for videogames in Kentucky’s new Esports Lounge.

Any chance you can c/p that? Very curious to read it but don’t have a WSJ sub

Archive.fo/baF4u

Well, I guess Kentucky is going on dlk9s jr’s college list.


Looks like archive.whacamole is inoperable without tracking cookies now.