When the Seagulls Cry by Ryukishi07 is practically Online Algorithms: The Visual Novel, or even Infrabayesianism: The Visual Novel. I find it hard to recommend, though: while the concept is interesting and the mysteries are good, the writing is overlong and the worldbuilding is cringe. (Or maybe i just hate VNs as a medium.)
Minor spoilers (explaining the story's premise):
The main meta-story is a murder mystery game for two players: the “detective” and the “witch”, who presents clues and insults the detective's intelligence with elaborate “supernatural” non-explanations. (The last part is not in the rules, but it's traditional.) The main plot point is that as the witch player begins to lose, she starts cheating, retroactively changing the mystery's solution, constrained only by the facts already presented to the detective. Soon it becomes apparent that cheating is in fact expected, and the detective has to brute-force all solutions Absurdle-style (see the sketch below).
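For the algorithmically inclined, here's a minimal sketch of the witch's cheating as an adversarial online game in the style of Absurdle. The suspect names and the clue format (yes/no predicates over suspects) are my illustrative assumptions, not anything from the VN:

```python
# Minimal sketch of the witch's "cheating" as an adversarial online game,
# Absurdle-style. Names and clue format are illustrative, not from the VN.

def witch_answer(candidates, clue):
    """The witch never commits to a solution. Given a yes/no clue (a
    predicate on solutions), she picks whichever answer keeps MORE
    candidate solutions alive."""
    yes = {s for s in candidates if clue(s)}
    no = candidates - yes
    return (True, yes) if len(yes) >= len(no) else (False, no)

def detective(candidates, clues):
    """The detective wins only by cornering the witch: the game ends
    when a single candidate solution survives every answer."""
    for clue in clues:
        _, candidates = witch_answer(candidates, clue)
        if len(candidates) == 1:
            return candidates.pop()
    return None  # the witch escapes: multiple solutions remain consistent

# Usage: suspects as candidate solutions, clues as predicates.
suspects = {"butler", "maid", "cousin", "witch"}
clues = [
    lambda s: "t" in s,
    lambda s: s.endswith("r"),
]
print(detective(suspects, clues))  # -> "butler"
```

The design point: the witch maintains only the set of solutions consistent with her answers so far, which is exactly why the detective must argue over the whole solution space at once rather than testing one theory at a time.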
Major spoilers (for ongoing rationalist drama): Grokking the ontology presented in Seagulls led me to understand how people may become Zizians.
See also Gwern's more spoilerrific review.
Seconding Book of the New Sun. But note that Wolfe also writes in an abstruse high-literature style that might be off-putting to the typical ratfic reader; it made me drop BotNS on my first attempt. Consider reading some of his short stories first to calibrate your priors.
“Safe?” said Mr. Beaver. “Who said anything about safe? 'Course Aslan isn't safe. But he's good. He's the King, I tell you.”
I claim that if Dennett's Criterion justifies realism about physical macro-objects, then it must also justify realism about simulacra, so long as they satisfy analogous structural properties.
simulacra : GPT :: objects : physics
I propose the term anglophysics for the hypothetical field of study whose existence you're implying here.
What aspect of the real world do you think the model fails to understand?
No, seriously. Think for a minute and write down your answer before reading the rest.
You just wrote your objection using text. In the limit, an LLM that predicts text perfectly would output exactly what you wrote when prompted with my question (modulo some outside-text about two philosophers on LessWrong); therefore it's not a valid answer to “what part of the world does the LLM fail to model”.
The root of this paradox is that the human notion of ‘self’ can actually refer to at least two things: the observer and the agent. FDT and similar updateless decision theories only really concern themselves with the latter.
An FDT agent cannot really “die”, as it exists in all places in the universe at once. Whereas, as you have noticed, an observer can very much die in a finite thought experiment. In reality, there is no outside-text dictating what can happen, and you would just get quantum-immortaled into hell instead.
This is not really a scenario exposing a bug in FDT; rather, it's FDT exposing an absurdity in the scenario's prior: the stipulation that you should only be concerned with one copy of you (the one outside the simulation).
Thought experiments can arbitrarily decide that one copy of the agent does not correspond to an observer whose values should be catered to, and can therefore be assigned a probability/utility of 0. But you, the one who implements the algorithm, cannot differentiate between your copies based on whether they are simulated by Omega, only based on what each of them has observed.
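A toy illustration of that last point (a hypothetical setup of my own, not any canonical FDT formalization): every copy runs the same decision function on the same observations, so nothing inside the function can branch on which copy it is.

```python
# Toy sketch: two instantiations of one decision algorithm receive
# identical observations. The "real"/"simulated by Omega" labels live in
# the thought experiment's outside-text, not in the function's inputs,
# so the algorithm cannot condition on them.

def decide(observations: frozenset) -> str:
    """The agent's policy: a function of observations only.
    Note the absence of an 'am_i_simulated' parameter."""
    if "omega_offers_deal" in observations:  # Newcomb-flavored placeholder
        return "one-box"
    return "default"

obs = frozenset({"omega_offers_deal"})

real_copy = decide(obs)       # the copy labeled "outside the simulation"
simulated_copy = decide(obs)  # the copy Omega is running

assert real_copy == simulated_copy  # identical inputs, identical outputs
```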
"Why? I am an image of His image. Do we not share the same values?" said the woman.
Snakes can't talk.
A reminder of what C.S. Lewis said in The Magician's Nephew:
"Creatures, I give you yourselves," said the strong, happy voice of Aslan. "I give to you forever this land of Narnia. I give you the woods, the fruits, the rivers. I give you the stars and I give you myself. The Dumb Beasts whom I have not chosen are yours also. Treat them gently and cherish them but do not go back to their ways lest you cease to be Talking Beasts. For out of them you were taken and into them you can return. Do not so."
I think this is the first time i've seen an author portray some“one” losing “their” sapience timelessly, having been retrocursed into never having been hnau in the first place.
[The Talking Beasts actually were totally real. Walter Wangerin's Dun Cow saga is a good account of the tragedy of what happened to them.]
If your intuitions about the properties of qualia are the same as mine, you might appreciate this schizo theory pattern-matching them to known physics.
I have an objection to the Troll Proof you linked in the picture. Namely, line 2 does not follow from line 1. □C states that “C is provable within the base system”, but the Troll Proof has “base system + □C→C” as its assumptions, per line 3.
The strongest statement we can get at line 2 is □((□C→C)→C), by implication introduction.
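To spell that out (a sketch, assuming the usual deduction theorem and the necessitation rule of provability logic, where necessitation applies only to theorems of the base system, never to formulas standing under an open hypothesis):

```latex
% Sketch: what lines 1--3 would actually license, had they validly
% derived C from the hypothesis \Box C \to C.
\begin{align*}
  &\vdash (\Box C \to C) \to C                 && \text{(deduction theorem)} \\
  &\vdash \Box\bigl((\Box C \to C) \to C\bigr) && \text{(necessitation)}
\end{align*}
% A bare \Box C at line 2 would instead claim that C is provable in the
% base system alone -- precisely what the Troll Proof pretends to establish.
```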
I discovered John C. Wright's Golden Age trilogy thanks to one Eliezer Yudkowsky, who mentioned it multiple times in his notorious Sequences. By the end of the first book i was expecting something very much in the deception genre you've mentioned: a tragic psychological horror about an unreliable narrator being gaslit about the nature of reality. This is a genre i really enjoy, and i kind of hoped for a novel-length version of Scott Alexander's The Last Temptation of Christ.
I did not get that; the trilogy goes in a wildly different direction. But saying whether that's for better or for worse (heck, even saying whether it's a good or bad book) would constitute massive SPOILERS for anyone who discovered it through Yudkowsky. (If you know why he dropped it, it should be obvious.)
I kept hate-reading it.