Revolutions Don’t Need Ad Campaigns

AI companies are pouring money into advertising. Either they are nervous about the market and want to create a need where none exists, or they are nervous about their product and want to bluff their way to success.

If AI were as revolutionary as everyone claims, it wouldn’t need an ad campaign.

See also this story in The Economist about advertising in ChatGPT (paywalled) and Freddie deBoer’s essay about the actual technological impact of the digital revolution (spoiler: not much).

(both links via Alan Jacobs)

A New Addiction

There’s a lot that’s chilling about this post on AI by Matthew Milliner, but this quote in particular stuck out to me:

Recently I had a three-hour conversation with a woman on a plane who was struggling with a family issue, and we actually got somewhere. She thanked me for the time, confessing she had almost gone to AI the night before.

I don’t know whether those are Milliner’s words or the woman’s. Regardless, they are the words of an addict.

“I almost went to the bottle.”

“I almost went to porn.”

“I almost lit a cigarette.”

You Can See the Problem

When my soon-to-be brother-in-law first visited our new flat last year, he asked about the roller shutters we had installed: whether they were electrically operated and whether I could activate them remotely. I told him that the real estate developer had stuck with manual levers to keep the cost down as much as possible, but that we could, if we wanted, easily add a little motor on the side.

But I told him that I preferred this manual system anyway. If one day I can’t open or close the shutters, I will know where the problem comes from: a mechanical issue with the roller.

Nicolas Magand

An upcoming issue of Good Work is focused on “Machines,” and it strikes me that the ability to see the mechanics involved is part of a machine’s appeal. Modern devices—especially “smart” devices—tend to hide the machinery, either by design or because they’re so complex, which makes them impossible to tinker with. And, as Matthew Crawford taught us in Shop Class as Soulcraft, tinkering with stuff is a human instinct. “We want to feel that our world is intelligible,” he says, “so we can be responsible for it.”

In books and movies from the twentieth century, people are always fixing stuff themselves—cars, toasters, spaceships. Fifty years later, when tech has crept into even more aspects of our lives, tinkering, let alone fixing, feels almost impossible. You can’t see the problem, so you can’t understand the problem, so you don’t feel responsible for the problem. But lack of responsibility is uncomfortable. We want to feel responsible. Responsibility is good for us.

(In his post, Nicolas also links to a post called “My Coffee Maker Just Makes Coffee” by Bradley Taunt. Also worth a skim.)

Smart Poison

There are a lot of reasons to say no when your pre-teen asks for a smartphone. One of the most obvious reasons is that smartphones give you unfiltered and unlimited access to entertainment, which isn’t a good thing for anyone, least of all teenagers. It’s like carrying around a TV and video game console in your pocket. (I mean, that’s exactly what it is.)

Almost more detrimental than constant entertainment is social media. (I’d throw texting in there, too.) Jean Twenge has been studying teen mental health for ten years now and has documented some disturbing trends in her book iGen. Her hypothesis was that the spike in teenage depression was caused by smartphones and social media. Not everyone was convinced, and a lot of other explanations were proposed. In this recent newsletter, Twenge looks at thirteen alternative explanations for “the high levels of distress among teens,” including the economy, COVID, school shootings, and climate change. It should come as no surprise that she has good reasons for dismissing all of them.

Blog Museum

This reminds me of an idea I’d still like to see put into practice: a service that pulls posts from old blogs and collects them into a daily digest. I love reading blogs, especially old ones. Scrolling through a bunch of posts from the same author, sometimes written over the course of years, is like reading their journal. It gives you a picture of that writer’s personality and the ideas they’ve wrestled with over the years. It would be exciting to be reminded every so often that such an archive exists.

A Pencil Named Steve

The Turing Test is one which has baffled me since I first heard about it. Basically, when Turing was asking himself what it would mean for a computer to become sentient, he decided that he would count a computer as sentient if, in conversation with it, you were unable to tell whether or not it was a human being. 

This struck me as bizarre at the time, and more bizarre as we have seen the test be run thousands of times in people’s conversations with ChatGPT and its ilk. (For those who missed it, in June of last year, Blake Lemoine, one of Google’s engineers, was fired after he became convinced, “talking” with one of the prototype natural language simulations, called LaMDA, that the chatbot was in fact conscious). If there is one thing humans are good at, it’s believing things to be persons. It’s like… very easy for us, and very hard for us not to do. We are persons, and we anthropomorphize absolutely everything. I used to be scared of the curtains in my parents’ bedroom because in the dark it seemed like there were people standing behind them. The tendency to see faces in patterns is so pervasive that it has a name – pareidolia.

We are even prone to do this if we make the thing in question: if we paint a frowny face on a rock we kind of feel like the rock might be unhappy. If you give a child a stuffy, you know that child is going to immediately begin to ascribe personhood to it, even if it is a stuffed animal you made and she saw you making it. Of course humans would be able to make a computer program that would be able to fool other humans into thinking it was a person, and in Lemoine’s case, able to fool himself. We’re incredibly good at making personish things and incredibly good at then kind of thinking they are persons. 

Just to point out, this tendency, to make personish things, then get excessively impressed with and excited about them and ascribe agency and maybe power to them and ask them for help with things, is noted in the Old Testament a good bit; it is called idolatry.

Susannah Black Roberts

This sort of thing reminds me, as it always does, of Steve the Pencil:

Average is the New Zero

As a follow-up to the most recent Scriptnotes, John August wrote a blog post that included one listener’s response:

Language models are built on “training data,” which is the text you feed into a learning process to produce the output. For very sophisticated models, the training data is vast: for something like ChatGPT, it includes roughly all the text you can scrape off the last twenty years of the Internet.

But this means ChatGPT is about as smart as the average writer on the Internet has been over the past twenty years. And indeed, the models that make up GPT drag the results toward the average, not the extraordinary, because the average has much nicer statistical properties than the extraordinary for companies that seek to produce a marketable, scalable product from their models, one they can tweak, diagnose, and defend.

Ultimately what these models mean is that with the click of a button you can now be just as good as the average writer who posts content to the Internet, and so the old “average” is now the new “zero.” If you wrote at the average level of the Internet in 2022 you now write at the zero level.

Emphasis in original
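The listener’s point about the average having “much nicer statistical properties” has a simple textbook basis: under squared-error loss, the single best constant prediction for a dataset is its mean. A minimal sketch (my own illustration with made-up numbers, not from the post):

```python
# Hypothetical "writer quality" scores, mostly middling with one standout.
quality_scores = [2.0, 3.0, 3.0, 4.0, 9.0]

def loss(prediction, data):
    """Mean squared error of predicting one constant value for every sample."""
    return sum((x - prediction) ** 2 for x in data) / len(data)

mean = sum(quality_scores) / len(quality_scores)  # 4.2

# The mean beats any other constant guess, including the extraordinary 9.0:
assert loss(mean, quality_scores) < loss(9.0, quality_scores)
```

In other words, a system optimized against error over the whole dataset is pulled toward the middle of it; matching the extraordinary sample costs more, on average, than matching the average one.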