I think you've pretty much stated the exact opposite of my own moral-epistemological worldview.
I don't like the analogy with physics. Physical theories get tested against external reality in a way that makes them fundamentally different from ethical theories.
If you want to analogize between ethics and science, I want to compare it to the foundations of mathematics. So utilitarianism isn't relativity, it's ZFC. Even though ZFC proves PA is a consistent and true theory of the natural numbers, it's a huge mistake for a human to base their trust in PA o...
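For concreteness, the standard facts this analogy leans on (textbook statements, nothing specific to this thread):

```latex
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}),
\qquad\text{yet by G\"odel's second incompleteness theorem,}\qquad
\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA}).
```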
I haven't. I'll see if I can show up for the next one.
This was also the part of Dalliard's critique I found most convincing. Shalizi's argument seems to be a refutation of a straw man.
One thing Dalliard mentions is that the 'g' factors derived from different studies are 'statistically indistinguishable'. What's the technical content of this statement?
Thanks for the link.
Not that I feel particularly qualified to judge, but I'd say Dalliard has a way better argument. I wonder if Shalizi has written a response.
Wow, that's a neat service.
It looks like we may have enough people interested in Probability Theory, though I doubt we all live in the same city. I live near DC.
Depending on how many people are interested/where they live, it might make sense to meet over video chat instead.
I'm 32.
So you're assuming it will want to prove the soundness of any successors? Even though it can't even prove the soundness of itself? But it can believe in its own soundness in a Bayesian sense without being able to prove it. There is not (as far as I know) any Gödelian obstacle to that. I guess that was your point in the first place.
OK, forget about F for a second. Isn't the huge difficulty finding the right deductions to make, not formalizing them and verifying them?
This is all nifty and interesting, as mathematics, but I feel like you are probably barking up the wrong tree when it comes to applying this stuff to AI. I say this for a couple of reasons:
First, ZFC itself is already comically overpowered. Have you read about reverse mathematics? Stephen Simpson edited a good book on the topic. Anyway, my point is that there's a whole spectrum of systems a lot weaker than ZFC that are sufficient for a large fraction of theorems, and probably all the reasoning that you would ever need to do physics or make real wo...
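For reference, the usual picture from Simpson's book: the "Big Five" subsystems of second-order arithmetic, in strictly increasing strength, all far below ZFC, with most classical theorems provable in (indeed, equivalent over RCA_0 to) one of them:

```latex
\mathrm{RCA}_0 \;\subsetneq\; \mathrm{WKL}_0 \;\subsetneq\; \mathrm{ACA}_0
\;\subsetneq\; \mathrm{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathrm{CA}_0
```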
The problem arises if your mathematical power has to go down each time you create a successor or, equivalently, self-modify. If PA could prove itself sound, that might well be enough for many purposes. The trouble is when you need a system that proves another system sound; in that case the system's strength has to be stepped down each time. That is the Löb obstacle.
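For reference, the standard statements behind the obstacle (again textbook material, not anything new to this thread):

```latex
% Godel II: a consistent r.e. theory T extending PA cannot prove its own consistency:
T \nvdash \mathrm{Con}(T)

% Lob's theorem: for any sentence P,
T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P

% hence trust must step down at every self-modification:
T_0 \vdash \mathrm{Con}(T_1), \quad T_1 \vdash \mathrm{Con}(T_2), \quad \ldots
```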
The result works for theories as simple as Peano arithmetic.
I don't think you've chosen your examples particularly well.
Abortion certainly can be a 'central' case of murder. Imagine aborting a fetus 10 minutes before it would have been born. It can also be totally 'noncentral': the morning-after pill. In between, abortion is a grey area of central-murder depending on how far the fetus's neural development has progressed.
Affirmative action really IS a central case of racism. It's bad for the same reason segregation was bad: it's not fair to judge people based on their race. The only difference is that it's not nearly AS bad. Segregation was brutal and oppressive, while affirmative action doesn't really affect most people enough for them to notice.
What do you think you're adding to the discussion by trotting out this sort of pedantic literalism?
Unless someone explicitly says they know something with absolute 100% mathematical certainty, why don't you just use your common sense and figure that when they say they "know" something, they mean they assign it a very high probability, and believe they have epistemologically sound reasons for doing so?
"Trust your intuitions, but don't waste too much time arguing for them"
This is an excellent point. Intuition plays an absolutely crucial role in human thought, but there's no point in debating an opinion when (almost by definition) you're incapable of verbalizing your reasons for holding it. Let me suggest another maxim:
Intuitions tell you where to look, not what you'll find.
Wait, so are you agreeing with me or disagreeing?
What makes you think Hitler didn't deliberately think about how to yell at crowds?
You're confusing "reason" with inappropriate confidence in models and formalism.
I vote for the meta-thread convention, or for any other mechanism that keeps meta off the front page.
I think the main problem with mormon2's submission was not where it was posted, but that it was pointless and uninformed.
I suggest you run an experiment. Go try to eat at a restaurant and explicitly state your intention not to tip. I predict the waiter will tell you to fuck off, and if the manager gets called out, he'll tell you to fuck off too.
I basically agree with you, though I'm not sure the legal distinction between "theft" and "breach of contract" is meaningful in this context. As far as I know there's no law that says you have to tip at all. So from a technical legal perspective, failing to tip is neither theft nor breach of contract nor any other offense.
It may not be legal theft, but it's still moral theft. You sat down and ate with the mutual understanding that you would tip. The only reason the waiter is bringing you food is the expectation that you will tip. If you announced your intention not to tip, he would not serve you; he would tell you to fuck off. The tip is a payment for a service, not a gift. That the agreement to pay is implicit, and that the precise amount is left partially unspecified, are mere technicalities; they do not change the basic fact that the tip is a payment, not a gift.
You don't tip in order to be altruistic, you tip because you informally agreed to tip by eating in a restaurant in the first place. If you don't tip (assuming the service was acceptable), you aren't being virtuous, you're being a thief.
Perhaps you should say the correct moral move is to tip exactly 15%.
I believe EY has already explained that he's trying to make more rationalists, so they can go and solve FAI.
If I think I know a more efficient way to make a widget, I still need to convince somebody to put up the capital for my new widget factory.
But if results depend on my ability to convince rich people, that's not a prediction market!
What!? Why not?
I guess it depends on how you define bullet-biting. Let me be more specific: voted up for accepting an ugly truth instead of rationalizing or making excuses.
Voted up for bullet-biting.
Arbitrage, in the broadest sense, means picking up free money - money that is free because of other people's preferences.
Except that finding exploitable inconsistencies in other people's preferences that haven't yet been destroyed by some other arbitrageur actually requires a fair bit of work and/or risk.
Do you vote?
Well, no.
Status is an informal, social concept. The legal system doesn't have much to do with "awarding" it.
In my experience, children are cruel, immoral, egotistical, and utterly selfish. The last thing they need is to have their inflated sense of self worth and entitlement stroked by the sort of parenting you seem to be advocating. Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless. They should indeed be grateful that anyone would take the trouble to feed and care for someone as stupid and useless as they, and repay the favor by becoming stronger.
Children ought to have fundamentally lower status, not just because they're children per se, but because they're stupid and useless.
So then the legal system should award status based on usefulness and intelligence, not age as in the present system.
Children are ignorant and powerless; that's not the same as stupid and useless.
Another example: Cox's theorem.
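For reference, the conclusion of Cox's theorem in its usual form: any real-valued plausibility assignment satisfying his consistency desiderata can be rescaled into a function p obeying the ordinary sum and product rules,

```latex
p(\neg A \mid C) = 1 - p(A \mid C),
\qquad
p(A \wedge B \mid C) = p(A \mid B \wedge C)\, p(B \mid C).
```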
"The truly fast way to produce a human-relative ideal moral agent is to create an AI with the interim goal of inferring the "human utility function" (but with a few safeguards built in, so it doesn't, e.g., kill off humanity while it solves that sub-problem),"
That is three-laws-of-robotics-ism, and it won't work. There's no such thing as a safe superintelligence that doesn't already share our values.
It's perfectly possible for one twin to get fat while the other doesn't. If it doesn't happen often, it's because features like willpower are more controlled by genes than we think, not because staying thin doesn't depend on willpower.
I figured it out! Roger Penrose is right about the nature of the brain!
Just kidding.
Yes, I think it will change the decision. You need a very large number of minuscule steps to go from specks to torture, and at each stage you need to decimate the number of people affected to justify inflicting the extra suffering on the few. It's probably fair to assume the universe can't support more than, say, 2^250 people, which doesn't seem nearly enough.
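A quick back-of-the-envelope sketch of that bookkeeping (the shrink factor and the population bound are my toy numbers, nothing canonical):

```python
import math

# Toy assumption: every step in the speck-to-torture chain must shrink the
# affected population by a factor of 10 to justify the extra suffering.
# A universe of at most 2^250 people then runs out of room after ~75 steps.
population = 2 ** 250   # assumed upper bound on how many people can exist
shrink_factor = 10      # "decimate" the affected group at each step

steps = int(math.log(population, shrink_factor))
print(f"{population:.3e}")  # about 1.809e+75 people
print(steps)                # 75 -- nowhere near "a very large number" of steps
```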
These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?
As with Torture vs. Specks, the point of this is to expose your decision procedure in a context where you don't have to compare remotely commensurable utilities. Learning about the behavior of your preferences at such an extreme can help illuminate the right thing to do in more plausible contexts. (Thinking through Torture vs. Dust Specks helped mold my thinking on public policy, where it's very tempting to weigh the salience of a large benefit to a few people against a small cost to everyone.)
EDIT: It's the same heuristic that mathematicians often use when we're pondering a conjecture: we try it in extreme or limiting cases to see if it breaks.
Seems to me that ESR is basically right, except I'm not sure Dennett would even disagree. Maybe he'll reply in a comment?
Yup. I get all that. I still want to go for the specks.
Perhaps it has to do with the fact that 3^^^3 is way more people than could possibly exist. Perhaps the specks v. torture hypothetical doesn't actually matter. I don't know. But I'm just not convinced.
Actually, I think you're right. The escalation argument has caught me in a contradiction. I wonder why I didn't see it last time around.
I still prefer the specks though. My prior in favor of the specks is strong enough that I have to conclude there's something wrong with the escalation argument that I'm not presently clever enough to find. It's a bit like reading a proof that 2+2=5. You know you've just read a proof, and you checked each step, but you still, justifiably, don't believe it. It's far more likely that the proof fooled you in some subtle way than that arithmetic is actually inconsistent.
The right answer is |U(3^^^3 + 1 dust specks) - U(3^^^3 dust specks)| < |U(1 dust speck) - U(0 dust specks)|, and U(any number of dust specks) < U(torture), reading U as disutility.
There is no additivity axiom for utility.
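One purely illustrative disutility function (my toy example, not from the thread) that satisfies both inequalities at once:

```latex
D(n \text{ specks}) = 1 - 2^{-n}, \qquad D(\text{torture}) = 2.
```

The marginal disutility D(n+1) - D(n) = 2^-(n+1) shrinks as n grows, and sup_n D(n specks) = 1 < 2 = D(torture), so no number of specks ever sums past torture.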
I don't think it's an exact quote of anything on OB or LW. If it is, then my subconscious has a much better memory than I do. I was just attempting to relate the Bourdain quote to OB/LW terminology.
Yeah, but then it wouldn't be a quote anymore!
"I don't, I've come to believe, have to agree with you to like you, or respect you."
--Anthony Bourdain.
Never forget that your opponents are not evil mutants. They are the heroes of their own stories, and if you can't fathom why they do what they do, or why they believe what they believe, that's your failing not theirs.
If anyone guesses above 0, anyone guessing 0 will be beaten by someone with a guess between 0 and the average.
If the average is less than 3/4, then the zeros will still win.
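A toy check of that claim, under assumptions the thread doesn't restate (so they're mine): the winner is whoever is closest to 2/3 of the mean, and guesses are integers.

```python
# Assumption: the game is "closest to 2/3 of the mean wins", integer guesses.
def winners(guesses, multiplier=2/3):
    """Return the guesses closest to multiplier * mean(guesses)."""
    target = multiplier * sum(guesses) / len(guesses)
    best = min(abs(g - target) for g in guesses)
    return sorted({g for g in guesses if abs(g - target) == best})

print(winners([0, 0, 1]))  # mean 1/3, target 2/9 -> [0]
print(winners([0, 1, 1]))  # mean 2/3 < 3/4, target 4/9 -> [0]: zeros still win
print(winners([0, 1, 2]))  # mean 1 > 3/4,   target 2/3 -> [1]: zeros lose
```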
You are confusing wanting "truth" with wanting the beliefs you consider to be true.
What a presumptuous, useless thing to say. Why don't you explain how you've deduced my confusion from that one sentence?
Apparently you think I've got a particular truth in mind and I'm accusing those who disagree with me of deprioritizing truth. Even if I was, why does that indicate confusion on my part? If I wanted to accuse them of being wrong because they were stupid, or of being wrong because they lacked the evidence, I would have said so. I'm accusing...
Thanks! I haven't seen that one before.
I'm working on a post on this topic, but I don't think I can adequately address what I don't like about how Jaynes presents the foundations of probability theory without presenting it myself the way I think it ought to be. And to do that I need to actually learn some things I don't know yet, so it's going to be a bit of a project.
Interestingly, those goals I described us in terms of -- wanting truth, wanting to avoid deluding ourselves -- are not really what separates "us" from "them".
I'm not sure that's true. Everyone says they want the truth, but people often reveal through their actions that it's pretty low on the priority list. Perhaps we should say that we want truth more than most people do. Or that we don't believe we can get away with deceiving ourselves without paying a terrible price.
What about all the angst people had over things like irrational numbers, infinitesimals, non-smooth functions, infinite cardinalities, and non-Euclidean geometries?
I think what you're saying about needing some way to change our minds is a good point though. And I certainly wouldn't say that every single object-level belief I hold is more secure than every meta belief. I'll even grant you that for certain decisions, like how to set public health policy, some sort of QALY-based shut up and calculate approach is the right way to go.
But I don't think that...