I think you're looking at later stages of development than I am. By the time Turing came around, the thousands-of-years-long effort to formalize computation was mostly over; single models get way too much credit because they herald the triumph at the end of the war. It took many thousands of years to get to the point of Church/Gödel/Turing. I think that regarding justification we haven't even had our Leibniz yet. If you look at Leibniz's work, he combined philosophy (monadology), engineering (expanding on Pascal's calculators), cognitive science (an alphabet of thought), and symbolic logic, all centered on computation, even though at that time there was no such thing as 'computation' as we know it (and now we know it so well that we can use it to listen to music or play chess). Archimedes is a much earlier example, but he was less focused. If you look at Darwin, he spent the majority of his time as a very good naturalist, paying close attention to lots of details. His model of evolution came later.
With morality we happen to be up quite a few levels of abstraction, where 'looking at lots of details' involves paying close attention to themes from evolutionary game theory, microeconomics, theoretical computer science, &c. Look at CFAI to see Eliezer drawing on evolution and evolutionary psychology to establish an extremely straightforward view of 'justification', e.g. "Story of a Blob". It's easy to stumble around in a haze and fall off a cliff if you don't have a ton of models like that and, more importantly, a very good sense of the ways in which they're unsatisfactory.
Those reasons aren't convincing by themselves, of course. It'd be nice to have a list of big abstract ideas whose formulation we can study on both the individual and memetic levels: e.g. natural selection and computation, and somewhat smaller, less-obviously-analogous ones like general relativity, temperature (there's a book about its invention), or economics. Unfortunately there are a lot of success-story selection effects, and even looking closely might not be enough to get accurate info. People don't really have introspective access to how they generate ideas.
Side question: how long do you think it would've taken the duo of Leibniz and Pascal to discover algorithmic probability theory if they'd been roommates for eternity?
> If so, what do you base your claim on (besides your intuition)?
I think my previous paragraph answered this with representative reasons. This is sort of an odd way to ask the question 'cuz it's mixing levels of abstraction. Intuition is something you get after looking at a lot of history or practicing a skill for a while or whatever. There are a lot of chess puzzles I can solve just using my intuition, but I wouldn't have those intuitions unless I'd spent some time on the object level practicing my tactics. So "besides your intuition" means something like "and please give a fine-grained answer," not literally "besides your intuition". Anyway, yeah: personal experience plus history of science. I think you can see it in Nesov's comments from back when, e.g. his looking at things like game semantics and abstract interpretation as sources of inspiration.
> I think you're looking at later stages of development than I am.
You're right, and perhaps I should better familiarize myself with earlier intellectual history. Do you have any books you can recommend, on Leibniz for example?
Anyone who does not believe mental states are ontologically fundamental - i.e. anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.
In a utility-maximizing AI, mental states can be reduced to smaller components. The AI will have goals, and those goals, upon closer examination, will be lines in a computer program.
But in the blue-minimizing robot, its "goal" isn't even a line in its program. There's nothing that looks remotely like a goal in its programming, and goals appear only when you make rough generalizations from its behavior in limited cases.
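To make the contrast concrete, here's a minimal sketch (hypothetical code of my own, not the robot's actual program): in the first agent the goal is literally a line you can point to, while in the second there is only a stimulus-response rule, and "minimizing blue" is a generalization an observer imposes from outside.

```python
def utility(world_state):
    # Explicit goal: this line IS the agent's preference.
    return -world_state["amount_of_blue"]

def utility_maximizer_step(world_state, actions, predict):
    # The agent consults its goal: pick the action whose predicted
    # outcome scores highest under the utility function above.
    return max(actions, key=lambda a: utility(predict(world_state, a)))

def blue_minimizing_robot_step(camera_pixel):
    # No goal anywhere in the program: just a hard-wired reflex.
    # "It wants to minimize blue" is our summary, not its code.
    if camera_pixel == "blue":
        return "fire_laser"
    return "do_nothing"
```

Deleting the `utility` function destroys the first agent's goal; there is no analogous line you could delete from the second.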
Philosophers are still very much arguing about whether this applies to humans; the two schools call themselves reductionists and eliminativists (with a third school of wishy-washy half-and-half people calling themselves revisionists). Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.
I took a similar tack answering ksvanhorn's question in yesterday's post - how can you get a more accurate picture of what your true preferences are? I said:
A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.
The problem is that not signing up for cryonics is also a "revealed preference". "You wouldn't sign up for cryonics, which means you don't really fear death so much, so why bother running from a burning building?" is an equally good argument, although no one except maybe Marcus Aurelius would take it seriously.
Both these arguments assume that somewhere, deep down, there's a utility function with a single term for "death" in it, and that every decision just calls upon this one level of death or anti-death preference.
A better explanation of the way people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior. People guess at their opinions about death by analyzing these behaviors, usually with a bit of signalling thrown in. If they desire consistency - and most people do - maybe they'll change some of their other behaviors to conform to their hypothesized opinion.
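A hypothetical sketch of the difference (my illustration, not anything from the post): the first model posits one deep parameter that every decision reads from; the second is just a mapping from situations to behaviors, with no global death-term anywhere.

```python
# Model 1: a single deep parameter that every decision consults.
# If this existed, fleeing fires and buying cryonics should both
# follow from the same number.
DEATH_AVERSION = 0.9

# Model 2: no unified preference, just situation-triggered behaviors.
behaviors = {
    "in_burning_building": "flee",             # strongly activated
    "contemplating_death_from_old_age": None,  # activates nothing
}

def respond(situation):
    # The response depends on the trigger, not on any global
    # death-preference that both situations share.
    return behaviors.get(situation)
```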
One more example. I've previously brought up the case of a rationalist who knows there's no such thing as ghosts, but is still uncomfortable in a haunted house. So does he believe in ghosts or not? If you insist on there being a variable somewhere in his head marked $belief_in_ghosts = (0,1) then it's going to be pretty mysterious when that variable looks like zero when he's talking to the Skeptics Association, and one when he's running away from a creaky staircase at midnight.
But it's not at all mysterious that the thought "I don't believe in ghosts" gets reinforced because it makes him feel intelligent and modern, and staying around a creaky staircase at midnight gets punished because it makes him afraid.
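As a rough illustration of that reinforcement story (hypothetical numbers and names), the same person can carry separately reinforced context-response strengths, with no single $belief_in_ghosts variable anywhere:

```python
# Each (context, response) pair has its own strength and is
# reinforced or punished independently of the others.
strength = {
    ("skeptics_meeting", "say 'I don't believe in ghosts'"): 0.5,
    ("creaky_staircase_at_midnight", "stay put"): 0.5,
}

def reinforce(context, response, reward, lr=0.1):
    # Reward (feeling intelligent and modern) strengthens a response;
    # punishment (fear) weakens it.
    strength[(context, response)] += lr * reward

reinforce("skeptics_meeting", "say 'I don't believe in ghosts'", +1)
reinforce("creaky_staircase_at_midnight", "stay put", -1)
# Both observations are explained without any unified belief variable.
```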
Behaviorism was one of the first and most successful eliminativist theories. I've so far ignored the most modern and exciting eliminativist theory, connectionism, because it involves a lot of math and is very hard to process on an intuitive level. In the next post, I want to try to explain the very basics of connectionism, why it's so exciting, and why it helps justify discussion of behaviorist principles.