That assumes the scenario is iterated; I'm saying it'd precommit to do so even in a one-off scenario. The rest of your argument was my point: that the same reasoning goes for anger.
Wow, people are still finding this occasionally. It fills me with Determination.
Um no. The specific sequence of muscle contractions is the action, and the thing they try to achieve is beautiful patterns of motion with certain kinds of rhythm and elegance, and/or/typically the perception of such in an observer.
This thing is still alive?! :D I really should get working on that updated version sometime.
Didn't think of it like that, but sort of I guess.
It has near maximal computational capacity, but that capacity isn't being "used" for anything in particular that is easy to determine.
This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outweigh the genuine positives, and include all the WORST outcomes (i.e., virtual hells) as well.
Well, that's quite obvious. Just imagine the blackmailer is a really stupid human with a big gun who'd fall for blackmail in a variety of awful ways, has a bad case of the typical mind fallacy, and, if anything goes against their expectations, gets angry and just shoots them before thinking through the consequences.
Another trick it could use is to run chatbots most of the time, swapping them out for real people only for the moments when you're actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year, you have a 10-hour intense discussion with Eliezer. That's not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.
Edit: another one; the chatbots might have some glaring failure modes if you say the wrong thing, unable to handle edge cases, but whenever you encounter one, the sim is restored from a backup 10 minutes earlier and the specific bug is manually patched. If this went on for long enough the chatbots would become real people, and also bloated and slow, but it hasn't happened yet. Or maybe the patches that don't come up for long enough get commented out.
Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.
First, I think you SHOULD take your best model literally if you live in a human brain, since due to its architecture it can never get completely stuck requiring infinite evidence, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.
Secondly, you seem to be going with fuzzy intuitions or direct sensory experience as the most fundamental. At my core is instead the fact that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on the premise that my decisions matter, and then I start updating on the input stream of a particular instance/implementation of myself. My working definition of "real" is "stuff I might care about".
My point wasn't that the physical systems can be modeled BY math, but that they themselves model math. Further, that if the math weren't True, then it wouldn't be able to model the physical systems.
With the math systems as well you seem to be coming from the opposite direction. Set theory is a formal system, arithmetic can model it using Gödel numbering, and you can't prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It's a package deal. Lambda calculus and register machines are also members of that list of mutual modeling. I think even basic geometry can be made sort of Turing-complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.
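To make the "arithmetic can model it using Gödel numbering" step concrete, here's a minimal toy sketch (the alphabet, symbol codes, and helper names are made up for illustration, not any standard numbering): a string of formal symbols is packed into a single natural number via prime exponents, so questions about the string become questions of ordinary arithmetic about that number.

```python
# Toy Godel numbering: encode a symbol string as one natural number.
# Symbol i of the string becomes the exponent of the i-th prime, so the
# whole formula is recoverable from the prime factorization alone.

SYMBOLS = {'0': 1, 'S': 2, '+': 3, '=': 4, '(': 5, ')': 6}  # toy alphabet

def nth_prime(n: int) -> int:
    """Return the n-th prime (1-indexed), by trial division; fine for a toy."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def godel_number(formula: str) -> int:
    """Encode a string over SYMBOLS as prod_i nth_prime(i) ** code(symbol_i)."""
    n = 1
    for i, ch in enumerate(formula, start=1):
        n *= nth_prime(i) ** SYMBOLS[ch]
    return n

def decode(n: int) -> str:
    """Recover the original string by reading off the prime exponents."""
    inverse = {v: k for k, v in SYMBOLS.items()}
    out, i = [], 1
    while n > 1:
        p, exponent = nth_prime(i), 0
        while n % p == 0:
            n //= p
            exponent += 1
        out.append(inverse[exponent])
        i += 1
    return ''.join(out)

if __name__ == '__main__':
    g = godel_number('S0+S0=SS0')   # "1 + 1 = 2" in a toy successor notation
    print(g)                        # a single (large) natural number
    print(decode(g))                # round-trips back to 'S0+S0=SS0'
```

The real construction does the same thing for the formulas and proofs of set theory, so that "this sequence is a valid proof" becomes a purely arithmetical predicate; that's the sense in which one system can't help but model the other.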
You can model an agent that doesn't need the concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.
The solution here might be that it does mainly tell you they have constructed a coherent story in their mind, but that having constructed a coherent story in their mind is still useful evidence of it being true, depending on what else you know about the person, and thus worth telling. If the tone of the book were different, it might say: