Some thoughts on LLMs

Here are some thoughts on large language models (LLMs), derived from a recent email exchange concerning the digital simulation of Daniel Dennett recently created by Eric Schwitzgebel, Anna Strasser, and Matthew Crosby.

Do LLMs such as GPT-3 perform speech acts? Is there a continuum between what they do and what we do when we speak? Or is there a sharp difference, with us performing genuine speech acts and GPT-3 merely giving the appearance of doing so?

There’s a case for thinking that GPT-3 can perform speech acts. It can make appropriate contributions to conversations within domains it has been trained on. If game-playing AIs such as AlphaZero can make genuine moves within a board game, why can’t GPT-3 make genuine moves within a language game?

I think it can. It can make conversational moves with words. The problem is that that’s all it can do. It can’t do any of the other things we do with words. It can’t inform, instruct, persuade, encourage, suggest, imply, deceive, and so on. It can’t perform any illocutionary acts. It can’t even assess its own conversational contributions and select the best.

It can’t do any of this, of course, because it lacks the raft of psychological and social attitudes and aptitudes that illocutionary acts require. (As Dennett has noted, Grice’s theory of communication is an attempt to elucidate this implicit background; see chapter 13 of his From Bacteria to Bach and Back.) Even a non-human animal with a simple call system can perform more types of speech act than GPT-3 can.

And here there is a big contrast with the game-playing AIs. AlphaZero can’t do much with go counters; it’s limited to making clever moves in games of go. But we don’t do much more with go counters ourselves. Go is a game, and making clever moves is the object. Of course, we can achieve other things through playing go, such as winning a bet, but we also play the game simply to pass the time.

Language, by contrast, is far more than a game. It’s a tool — a hugely complex Swiss army knife of a tool — and one that GPT-3 doesn’t have a clue how to use. If there’s a sharp divide here, it’s between systems that play language like a game and ones that use it as a tool.

It is true that humans sometimes treat language as a game. Think of a simultaneous translator whose only object is to reproduce the moves made in one language with parallel moves in another. But these are rare cases. (This example was suggested by a comment by Douglas Hofstadter.)

A final, broader point. One of the worries about AIs such as GPT-3 is that we may end up creating systems that are behaviourally but not psychologically identical to us — counterfeit humans, which would trick us into treating them as equals. I suspect this is a red herring. We have very little reason to create beings like ourselves, with all our limitations, weaknesses, and occasional concomitant sublimities. (And if we do, we can already produce them by biological means.)

What we will want are machines that aren’t like us — ones that can do things we can’t do or that make a better job of things we can. We will want artificial astronauts, explorers, builders, doctors, companions, inspirers, inventors — and some will want artificial cheaters, exploiters, and fighters. The danger is that we will give these beings a veneer of humanity — an ability to play social interaction like a game — which will lead us to both overestimate and underestimate their real abilities.


One Comment

  1. How do we know whether they are only playing? Is there a sharp divide preventing us from de-psychologizing all types of speech acts?
    By suggesting that there is a continuum between what LLMs do and what we do when we speak, you seem to suppose that there are types of speech acts for which consciousness (or concrete psychological states) is not a necessary condition. According to this gradualist view, making appropriate contributions to conversations (= conversational moves with words) within limited domains seems to be sufficient to qualify as a minimal speech act performer but not as a full-fledged one. However, when you describe what we as full-fledged speech act performers can do with words, you point to a sharp divide, claiming that only entities (living beings?) with psychological and social attitudes and aptitudes can succeed in performing all types of speech act, because they are not just playing rule-guided language games but are also able to use language as a tool.
    I would love to hear more about what the conditions are that enable entities to use language as a tool. In discussing such conditions, I suppose we will discover grey areas that non-living entities might enter. And then we can start to investigate whether non-living entities might be able to leave the playing ground and do more than just play games.
