Artificial intelligence idiocy


Fig. 1. Image credited to David Blaikie from Hampshire, UK, via Wikimedia Commons, CC BY 2.0.

I address statistical artificial idiocy in a post on my blog, Not Housebroken. Generative artificial idiocy apparently still depends on statistics,[1] and it is indeed hard to imagine how a computer could handle the massive amounts of data involved in “machine learning,” as Nabil Alouani describes it, without resorting to statistical methods.[2] But the picture gets worse when you look at what these models actually do:

[Generative artificial idiocy models are] really just sort of designed to predict the next word. And so there will be some rate at which the model does that inaccurately.[3]

This isn’t fixable. It’s inherent in the mismatch between the technology and the proposed use cases.[4]

When used to generate text, language models “are designed to make things up. That’s all they do,” [Emily] Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.

“But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”[5]
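To see why “designed to predict the next word” leaves no room for a truth check, consider a deliberately crude sketch: a toy bigram model in Python. It bears no resemblance in scale to a real large language model, and the corpus, function names, and sample output below are purely illustrative assumptions, but the procedure is the same in kind: tally which word statistically tends to follow which, then sample the next word from those tallies. Whether the result is true never enters the computation.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model" -- a deliberately crude stand-in for the
# statistical machinery described above. It only tallies which word tends
# to follow which and then samples the next word from those tallies.
corpus = (
    "the queen lives in windsor castle . "
    "the queen lives in a castle . "
    "the model lives in a data center . "
    "the model makes things up ."
).split()

# following[w] counts how often each word appears immediately after w.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Pick the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

def generate(start: str, length: int = 8) -> str:
    """Extrude `length` more words, one prediction at a time."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the"))
# Fluent-looking output such as "the queen lives in a data center . the"
# comes out of exactly the same procedure as a true sentence; nothing in
# the model distinguishes the two.
```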

Henry Williams, “Artificial Intelligence May Make Traffic Congestion a Thing of the Past,” Wall Street Journal, June 26, 2018, https://www.wsj.com/articles/artificial-intelligence-may-make-traffic-congestion-a-thing-of-the-past-1530043151

Louise Matsakis, “Tumblr’s Porn-Detecting AI Has One Job—and It’s Bad at It,” Wired, December 5, 2018, https://www.wired.com/story/tumblr-porn-ai-adult-content/

Jeremy Kahn, “Google’s ouster of a top A.I. researcher may have come down to this,” Fortune, December 9, 2020, https://fortune.com/2020/12/09/google-timnit-gebru-top-a-i-researcher-large-language-models/

Alex Hanna, [Twitter thread], Thread Reader App, February 18, 2021, https://threadreaderapp.com/thread/1362476196693303297.html

Mitchell Clark and Zoe Schiffer, “After firing a top AI ethicist, Google is changing its diversity and research policies,” Verge, February 19, 2021, https://www.theverge.com/2021/2/19/22291631/google-diversity-research-policy-changes-timnet-gebru-firing

Ina Fried, “Google tweaks diversity, research policies following inquiry,” Axios, February 19, 2021, https://www.axios.com/google-tweaks-diversity-research-policies-following-inquiry-8baa6346-d2a2-456f-9743-7912e4659ca2.html

Zoe Schiffer, “Google fires second AI ethics researcher following internal investigation,” Verge, February 19, 2021, https://www.theverge.com/2021/2/19/22292011/google-second-ethical-ai-researcher-fired

Anthony Levandowski, “The former Uber exec who was pardoned by Trump has closed his church that worshipped AI, donating its funds to the NAACP,” Business Insider, February 19, 2021, https://www.businessinsider.com/uber-google-ai-anthony-levandowski-trump-pardon-church-naacp-2021-2

Salil Tripathi, “Twitter is caught between politics and free speech. I was collateral damage,” Columbia Journalism Review, March 12, 2021, https://www.cjr.org/first_person/twitter-is-caught-between-politics-and-free-speech-i-was-collateral-damage.php

Alyse Stanley, “Twitter Banned Me for Saying the ‘M’ Word: Memphis,” Gizmodo, March 15, 2021, https://gizmodo.com/twitter-banned-me-for-saying-the-m-word-memphis-1846474378

Melissa Heikkilä, “Dutch scandal serves as a warning for Europe over risks of using algorithms,” Politico, March 29, 2022, https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

Tori Orr, “So you want to be a prompt engineer: Critical careers of the future,” VentureBeat, September 17, 2022, https://venturebeat.com/ai/so-you-want-to-be-a-prompt-engineer-critical-careers-of-the-future/

Peter Allen Clark, “AI’s rise generates new job title: Prompt engineer,” Axios, February 22, 2023, https://www.axios.com/2023/02/22/chatgpt-prompt-engineers-ai-job

Drew Harwell, “Tech’s hottest new job: AI whisperer. No coding required,” Washington Post, February 25, 2023, https://www.washingtonpost.com/technology/2023/02/25/prompt-engineers-techs-next-big-job/

Nabil Alouani, “ChatGPT Hype Is Proof Nobody Really Understands AI,” Medium, March 5, 2023, https://medium.com/geekculture/chatgpt-hype-is-proof-nobody-really-understands-ai-7ce7015f008b

James Vincent, “Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow,” Verge, March 22, 2023, https://www.theverge.com/2023/3/22/23651564/google-microsoft-bard-bing-chatbots-misinformation

Alyssa Lukpat, “AI Poses ‘Risk of Extinction’ on Par With Pandemics and Nuclear War, Tech Executives Warn,” Wall Street Journal, May 30, 2023, https://www.wsj.com/articles/ai-threat-is-on-par-with-pandemics-nuclear-war-tech-executives-warn-39105eeb

Sam Sabin, “Generative AI is making voice scams easier to believe,” Axios, June 13, 2023, https://www.axios.com/2023/06/13/generative-ai-voice-scams-easier-identity-fraud

Kevin Jiang, “Google’s new AI search function is revolutionary — but don’t believe everything it says, experts say,” Toronto Star, June 15, 2023, https://www.thestar.com/business/technology/2023/06/15/googles-new-ai-search-function-is-revolutionary-but-dont-believe-everything-it-says-experts-say.html

Madhumita Murgia and Anjli Raval, “AI in recruitment: the death knell of the CV?” Financial Times, June 18, 2023, https://www.ft.com/content/98e5f47a-7d0d-4e63-9a63-ff36d62782b8

Cordilia James, “The Best AI Apps to Try Now,” Wall Street Journal, June 19, 2023, https://www.wsj.com/articles/ai-apps-tools-214958d8

Margaret Heffernan, “The tech sector’s free pass must be cancelled,” Financial Times, June 28, 2023, https://www.ft.com/content/c668bf20-40ea-4eb9-8910-ab63d213a63b

Tamia Fowlkes and Julian Mark, “Elon Musk sets new daily Twitter limits for users,” Washington Post, July 1, 2023, https://www.washingtonpost.com/technology/2023/07/01/elon-musk-new-twitter-user-limits/

Thomas Germain, “Google Says It’ll Scrape Everything You Post Online for AI,” Gizmodo, July 3, 2023, https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486

Will Bolton, “AI girlfriend ‘told crossbow intruder to kill Queen Elizabeth II at Windsor Castle,’” Telegraph, July 6, 2023, https://www.telegraph.co.uk/news/2023/07/05/ai-windsor-intruder-queen-elizabeth-jaswant-singh-chail/

Deepa Seetharaman and Keach Hagey, “Outcry Against AI Companies Grows Over Who Controls Internet’s Content,” Wall Street Journal, July 30, 2023

Matt O’Brien, “Chatbots sometimes make things up. Is AI’s hallucination problem fixable?” Associated Press, August 1, 2023, https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4

Yuan Yang and Anna Gross, “Chinese AI scientists call for stronger regulation ahead of landmark summit,” Financial Times, November 1, 2023, https://www.ft.com/content/c7f8b6dc-e742-4094-9ee7-3178dd4b597f

Archie Bland, “Thursday briefing: What the meltdown at OpenAI means for the future of artificial intelligence,” Guardian, November 23, 2023, https://www.theguardian.com/world/2023/nov/23/first-edition-openai-sam-altman

Joanna Stern, “Talking to Chatbots Is Now a $200K Job. So I Applied,” Wall Street Journal, November 29, 2023, https://www.wsj.com/tech/ai/talking-to-chatbots-is-now-a-200k-job-so-i-applied-258bd5f0

Angela Palumbo, “Microsoft and OpenAI Are Sued by New York Times for Copyright Infringement,” Barron’s, December 27, 2023, https://www.barrons.com/articles/microsoft-openai-new-york-times-ai-lawsuit-a32dd304

Maxwell Zeff, “PennsylvaniaGPT Is Here to Hallucinate Over Cheesesteaks,” Gizmodo, January 9, 2024, https://gizmodo.com/pennsylvaniagpt-chatgpt-open-ai-governor-shapiro-1851153510

Michael Acton, “Apple cancels secretive electric car project in shift to focus on AI,” Financial Times, February 28, 2024, https://www.ft.com/content/78bc9f62-8450-45c0-8c59-c5d87a122825

Matthew Guariglia, “The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies,” Electronic Frontier Foundation, March 20, 2024, https://www.eff.org/deeplinks/2024/03/how-avoid-ai-apocalypse-one-easy-step

Henry Mance, “AI keeps going wrong. What if it can’t be fixed?” Financial Times, April 6, 2024, https://www.ft.com/content/648228e7-11eb-4e1a-b0d5-e65a638e6135

  1. Jeremy Kahn, “Google’s ouster of a top A.I. researcher may have come down to this,” Fortune, December 9, 2020, https://fortune.com/2020/12/09/google-timnit-gebru-top-a-i-researcher-large-language-models/
  2. Nabil Alouani, “ChatGPT Hype Is Proof Nobody Really Understands AI,” Medium, March 5, 2023, https://medium.com/geekculture/chatgpt-hype-is-proof-nobody-really-understands-ai-7ce7015f008b
  3. Daniela Amodei, quoted in Matt O’Brien, “Chatbots sometimes make things up. Is AI’s hallucination problem fixable?” Associated Press, August 1, 2023, https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4
  4. Emily Bender, quoted in Matt O’Brien, “Chatbots sometimes make things up. Is AI’s hallucination problem fixable?” Associated Press, August 1, 2023, https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4
  5. Matt O’Brien, “Chatbots sometimes make things up. Is AI’s hallucination problem fixable?” Associated Press, August 1, 2023, https://apnews.com/article/artificial-intelligence-hallucination-chatbots-chatgpt-falsehoods-ac4672c5b06e6f91050aa46ee731bcf4