Technology and the human minds

As some of you may know, I have my own particular take on the dual-process theory of reasoning, and I recently wrote a longish paper applying this take to issues surrounding AI and cognitive enhancement. The abstract of the article runs as follows:

According to dual-process theory, human cognition is supported by two distinct types of processing, one fast, automatic, and unconscious, the other slower, controlled, and conscious. These processes are sometimes said to constitute two minds—an intuitive old mind, which is evolutionarily ancient and composed of specialized subsystems, and a reflective new mind, which is distinctively human and the source of general intelligence. This theory has far-reaching consequences, and it means that research on enhancing and replicating human intelligence will need to take different paths, depending on whether it is the old mind or the new mind that is the target. This chapter examines these issues in depth. It argues first for a reinterpretation of dual-process theory, which pictures the new mind as a virtual system, formed by culturally transmitted habits of autostimulation. It then explores the implications of this reinterpreted dual-process theory for the projects of cognitive enhancement and artificial intelligence, including the creation of artificial general intelligence. The chapter concludes with a brief assessment of the risks of those projects as they appear in this new light.

The paper appeared in a 2021 collection titled The Mind-Technology Problem, edited by Robert W. Clowes, Klaus Gärtner, and Inês Hipólito. If you don’t have access to the volume, here is an eprint of the article.

In addition, Anna Strasser has prepared a fantastic PowerPoint presentation summarizing the main ideas of the article, which she has kindly given me permission to share.

Some thoughts on LLMs

Here are some thoughts on large language models (LLMs), derived from an email exchange concerning the digital simulation of Daniel Dennett recently created by Eric Schwitzgebel, Anna Strasser, and Matthew Crosby.

Do LLMs such as GPT-3 perform speech acts? Is there a continuum between what they do and what we do when we speak? Or is there a sharp difference — with us performing genuine speech acts and GPT-3 merely giving the appearance of doing so?

There’s a case for thinking that GPT-3 can perform speech acts. It can make appropriate contributions to conversations within domains it has been trained on. If game-playing AIs such as AlphaZero can make genuine moves within a board game, why can’t GPT-3 make genuine moves within a language game?

I think it can. It can make conversational moves with words. The problem is that that’s all it can do. It can’t do any of the other things we do with words. It can’t inform, instruct, persuade, encourage, suggest, imply, deceive, and so on. It can’t perform any illocutionary acts. It can’t even assess its own conversational contributions and select the best.

It can’t do any of this, of course, because it lacks the raft of psychological and social attitudes and aptitudes that illocutionary acts require. (As Dennett has noted, Grice’s theory of communication is an attempt to elucidate this implicit background; see chapter 13 of his From Bacteria to Bach and Back.) Even a non-human animal with a simple call system can perform more types of speech act than GPT-3 can.

And here there is a big contrast with the game-playing AIs. AlphaZero can’t do much with go counters; it’s limited to making clever moves in games of go. But we don’t do much more with go counters ourselves. Go is a game, and making clever moves is the object. Of course, we can achieve other things through playing go, such as winning a bet, but we also play the game simply to pass the time.

Language, by contrast, is far more than a game. It’s a tool — a hugely complex Swiss army knife of a tool — and one that GPT-3 doesn’t have a clue how to use. If there’s a sharp divide here, it’s between systems that play language like a game and ones that use it as a tool.

It is true that humans sometimes treat language as a game. Think of a simultaneous translator whose only object is to reproduce the moves made in one language with parallel moves in another. But such cases are rare. (This example was suggested by a comment by Douglas Hofstadter.)

A final, broader point. One of the worries about AIs such as GPT-3 is that we may end up creating more complex versions of them, which display a wide range of human-like behaviours but lack a rich human psychology — counterfeit humans, which would trick us into treating them as equals. I suspect this is a red herring. We have little reason to create machines that behave like us, with all our limitations, weaknesses, and occasional concomitant sublimities, especially as we can already produce new humans by biological means.

What we will want are machines that aren’t like us — ones that can do things we can’t do or that make a better job of things we can. We will want artificial astronauts, explorers, builders, doctors, companions, inspirers, inventors — and some will want artificial cheaters, exploiters, and fighters. The danger is that we will give these beings a veneer of humanity — an ability to play social interaction like a game — which will lead us to both overestimate and underestimate their real abilities.

Revised 7/12/22