When looking at quantitative research, remember teenaged boys in the 1950s and 1960s and their cars

Human Science

Inquiry

Quantitative

One of my longstanding suspicions about quantitative research (I remember mulling this while walking down a hallway on the east side of Meiklejohn Hall at California State University, East Bay) is that some of this work looks for all the world like teenage boys of the 1950s and 1960s tinkering with their cars, tweaking the engines to squeeze out a little more performance. The specific accusation is that researchers tweak their methods until they obtain the results they want.

I am not a statistician, so I knew I would never be able to prove this sort of research misconduct.

But it turns out I was right. Research fraud can be difficult to detect or prove, but Gary Smith offers some hints.[1] I discuss “big data” mining in my critique of artificial idiocy;[2] it appears here as “HARKing,” that is, “hypothesizing after the results are known.”[3]

In [Andrew] Gelman’s garden-of-forking-paths analogy, p-hacking occurs when a researcher seeks empirical support for a theory by trying several paths and reporting the path with the lowest p-value. Other times, a researcher might wander aimlessly through the garden and make up a theory after reaching a destination with a low p-value. This is hypothesizing after the results are known — HARKing.[4]
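It is easy to see numerically what wandering the garden does to the false-positive rate. The sketch below is mine, not Smith’s or Gelman’s; the sample size, the ten candidate “paths,” and the names are illustrative assumptions. It simulates studies on pure noise and compares an honest, pre-registered single test with a researcher who runs ten tests and reports only the smallest p-value.

```python
# A minimal sketch of p-hacking on pure noise: none of the candidate
# outcome variables has any real effect, so every "significant" result
# is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=30, n_paths=10):
    """One simulated study: n_paths candidate outcome variables, all noise."""
    group_a = rng.normal(size=(n_paths, n))
    group_b = rng.normal(size=(n_paths, n))
    p_values = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
    return p_values[0], min(p_values)  # pre-registered path vs. best of ten

honest, hacked = zip(*(one_study() for _ in range(2000)))
print("false positives, single pre-registered test:",
      np.mean(np.array(honest) < 0.05))
print("false positives, lowest of ten p-values:",
      np.mean(np.array(hacked) < 0.05))
```

On pure noise the honest rate stays near the nominal 5 percent, while the best-of-ten rate climbs toward 1 − 0.95^10, roughly 40 percent, even though there is nothing to find.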

The pressure scholars feel to publish, to maintain eligibility for tenure, and thus to escape the low wages and other abuses inflicted on adjuncts yields perverse results:

Some are tempted by an even easier strategy — simply make up whatever data are needed to support the desired conclusion. When Diederik Stapel, a prominent social psychologist, was exposed in 2011 for having made up data, it led to his firing and the eventual retraction of 58 papers. His explanation: “I was not able to withstand the pressure to score points, to publish, to always have to be better.” He continued: “I wanted too much, too fast.”

It is just a short hop, skip, and jump from making up data to making up entire papers. In 2005, three MIT graduate students created a prank program they called SCIgen that used randomly selected words to generate bogus computer-science papers. Their goal was to “maximize amusement, rather than coherence” and, also, to demonstrate that some academic conferences will accept almost anything.[5]
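SCIgen is known to work by expanding a hand-written context-free grammar with randomly chosen phrases. The toy sketch below is not SCIgen’s grammar, just a minimal illustration of the idea: a few fill-in-the-blank rules are enough to produce sentences that scan like computer-science prose while meaning nothing at all.

```python
import random

# A toy grammar, nothing to do with SCIgen's actual rules: uppercase
# tokens are placeholders expanded recursively, everything else is kept.
GRAMMAR = {
    "SENTENCE": ["We present NOUN which is a METHOD for TASK",
                 "NOUN relies on a METHOD for TASK"],
    "NOUN": ["Flobnar", "Quorx", "DeepMoss"],
    "METHOD": ["novel ADJ framework", "scalable ADJ heuristic"],
    "ADJ": ["probabilistic", "homogeneous", "event-driven"],
    "TASK": ["the refinement of red-black trees",
             "the synthesis of virtual machines"],
}

def expand(symbol):
    """Recursively expand a grammar symbol into a random string."""
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(token)
                    for token in random.choice(GRAMMAR[symbol]).split())

print(expand("SENTENCE") + ".")
```

The point of the prank, and of this toy, is that superficial fluency is cheap; noticing that a paper says nothing requires a reviewer to actually read it.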

SCIgen is apparently still available, and some are still using it. And yes, the implications for peer review are dire:[6]

Cyril Labbé, a computer scientist at Grenoble Alps University, wrote a program to detect hoax papers published in real journals. Working with Guillaume Cabanac, a computer scientist at the University of Toulouse, they found 243 bogus published papers written entirely or in part by SCIgen. A total of 19 publishers were involved, all reputable and all claiming that they publish only papers that pass rigorous peer review. One of the embarrassed publishers, Springer, subsequently announced that it was teaming with Labbé to develop a tool that would identify nonsense papers. The obvious question is why such a tool is needed. Is the peer-review system so broken that reviewers cannot recognize nonsense when they read it?[7]

And of course, none of this stuff can be replicated, leading to scientific reversals.[8] Some reversals are to be expected; scientific conclusions are always tentative. But we are likely seeing more of them than we should, and they combine with political interference, as with COVID-19, to undermine public trust in science.[9]

Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research





1. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
2. David Benfell, “Our new Satan: artificial idiocy and big data mining,” Not Housebroken, April 5, 2021, https://disunitedstates.org/2020/01/13/our-new-satan-artificial-idiocy-and-big-data-mining/
3. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
4. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
5. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
6. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
7. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
8. Gary Smith, “How Shoddy Data Becomes Sensational Research,” Chronicle of Higher Education, June 6, 2023, https://www.chronicle.com/article/how-shoddy-data-becomes-sensational-research
9. David Shaywitz, “Three Recent Reversals Highlight the Challenges of COVID Science,” Bulwark, June 22, 2020, https://www.thebulwark.com/three-recent-reversals-highlight-the-challenges-of-covid-science/
