3 Comments

I really like this post, thank you. I wish I had more time to think about this stuff, but one thought I had recently, which I think fits together with what you have written here, is to think about the context window for LLMs. I think that sets the boundary of the length of any specific grammatical structure or simulation. And maybe, this is where I'm speculating, if we think about language as emergent from our interaction with the world, we might be able to think of these LLMs as containing huge sets of cassettes of functional grammatical mini-programs. Some of these operate well in the world, and some have less direct applicability, but it might give a hint as to why these things are so damn useful.


For dedicated story creation, Dramatron will be more interesting to study, and its latest test run has been analyzed here: https://deepai.org/publication/co-writing-screenplays-and-theatre-scripts-with-language-models-an-evaluation-by-industry-professionals

ChatGPT is easier to use for now, but it still needs careful prompting, and the time I have spent on it so far has never yielded more than basic, generic output. It will improve over time, no doubt.
