Posted on Sat 12 October 2024

llms again, sorry

New paper: LLMs don’t do formal reasoning.

Well, of course.

LLMs don’t do informal reasoning, either.

Humans are great at pattern recognition. We even recognize faces in clouds and tree trunks. We recognize wheelbarrows, bears and crabs and archers in the stars. We make things that have meaning, and we communicate through speech and text and art.

LLMs are great pattern-generators. They are extremely well-tuned to make patterns that look like they might have meaning. A human trying to communicate may be bad at it, but they have an underlying model of the world that they are referencing and updating. An LLM is not trying to communicate anything. An LLM has a model of language, not a model of the world.

The map is not the territory. All models are wrong, but some are useful.

The situations in which it is reasonable to use an LLM are exactly the situations in which it is reasonable to roll some dice and use that to read the table of random monster encounters; to pull a card from an Oblique Strategies deck; to twirl the knobs on the synth and see if you can get a cool sound. In years past, you could type in a good list of keywords to Google and hit the I’m Feeling Lucky button.

Attempts to use LLMs for more than this fail. Often the failure is not obvious enough for the results to be discarded immediately, and that is where most of the danger resides.

-30-


