What issues does your best atheist theory have?
My biggest problem right now is all the stuff about zombies, and how it implies that, in the absence of some kind of soul, a computer program or other entity capable of the same reasoning processes as a person is morally equivalent to a person. I agree with every step of the logic (I think; it's been a while since I last read the sequence), but I end up applying it in the other direction. I don't think a computer program can have any moral value, therefore, without the presence of a soul, people ...
It's about how, if you're attacking somebody's argument, you should attack all of its weak points simultaneously, so that it doesn't look like you're attacking one and implicitly accepting the others. With any luck, it'll be up tonight.
Hi, I've been lurking on Less Wrong for a few months now, making a few comments here and there, but never got around to introducing myself. Since I'm planning out an actual post at the moment, I figured I should tell people where I'm coming from.
I'm a male 30-year-old optical engineer in Sydney, Australia. I grew up in a very scientific family and pretty much always assumed I had a scientific career ahead of me; after a couple of false starts, that's now happened, and I couldn't ask for a better job.
Like many people, I came to Less Wrong from TVTropes vi...
Assuming rational agents with a reasonable level of altruism (by which I mean incorporating the needs of other people and future generations into their own utility functions, to roughly the degree we expect of "decent people" today)...
If such a person figures that getting rid of the Nazis or the Daleks or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altruistic (I won't say "unselfish" for reasons discussed elsewhere in t...
Thinking about this in common-sense terms is misleading, because we can't imagine the difference between 8x utility and 16x utility.
I can't even imagine doubling my utility once, if we're only talking about selfish preferences. If I understand vNM utility correctly, then a doubling of my personal utility is a situation I'd be willing to accept a 50% chance of death in order to achieve (assuming my utility is scaled so that U(dead) = 0; without fixing that reference point, we can't talk about doubling utility at all). Given my life at the moment (apar...
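To spell out the arithmetic behind that (a rough sketch only, writing U for the utility of my life as it stands and using the U(dead) = 0 scaling above):

    E[gamble] = (1/2) x 2U + (1/2) x U(dead) = U + 0 = U

so I'd be exactly indifferent between keeping my current life and a coin flip between death and a doubled-utility life - which is part of why I can't picture what a genuine doubling would have to feel like.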
"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."
Sorry if this question has already been answered (I've read the comments but probably didn't catch all of it), but...
I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you...
Perfect decision-makers, with perfect information, should always be able to choose the optimal outcome in any situation. Likewise, perfect decision-makers with limited information should always be able to choose the option with the best expected payoff under strict Bayesian reasoning.
However, when the actor's decision-making process becomes part of the situation under consideration, as happens when Katemega scrutinises Joe's potential for leaving her in the future, then the perfect decision-maker is only able to choose the optimal outcome if he is also capa...
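As a toy illustration of that baseline (a sketch only - the function names here are made up, and this is exactly the part that breaks down once the decision procedure itself is under scrutiny):

```python
def best_action(actions, outcomes, prob, utility):
    """Naive Bayesian decision-making: pick the action whose expected
    utility, summed over the possible outcomes, is highest.

    prob(o, a)  -- probability of outcome o given action a (hypothetical)
    utility(o)  -- payoff of outcome o (hypothetical)
    """
    return max(actions,
               key=lambda a: sum(prob(o, a) * utility(o) for o in outcomes))
```

Katemega's scrutiny adds a twist this sketch can't capture: the probabilities themselves depend on what kind of decision-maker Joe is, not just on which action he picks.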
It's an interesting situation, and I can see the parallel to Newcomb's Problem. I'm not certain that it's possible for a person to self-modify to the extent that he will never leave his wife, ever, regardless of the very real (if small) doubts he has about the relationship right now. I don't think I could ever simultaneously sustain the thoughts "There's about a 10% chance that my marriage to my wife will make me very unhappy" and "I will never leave her no matter what". I could make the commitment financially - that, even if the marri...
Talking with people who do not agree with you as though they were people. That is, taking what they say seriously and trying to understand why they are saying what they say. Asking questions helps. Also, assume that they have reasons that seem rational to them for what they say or do, even if you disagree.
I think this is a very important point. If we can avoid seeing our political enemies as evil mutants, then hopefully we can avoid seeing our conversational opponents as irrational mutants. Even after discounting the possibility that you, personally, mi...
I don't know how to port this strategy over to verbal acuity for rationality.
Perhaps by vocalising simple logic? When you make a simple decision, such as "I'm going to walk to work today instead of catching the bus", go over your logic for the decision, even after you've started walking, as if you're explaining your decision to someone else. I often do this (not out loud, but as a mental conversation), just for something to pass the time, and I find that it actually helps me organise my thoughts and explain my logic to other real people.
Sexual Weirdtopia:
The government takes a substantial interest in people's sex lives. People are expected to register their sexual preferences with government agencies. A certain level of sexual education and satisfaction is presumed to be a basic right of humanity, along with health care and enough income to live on. Workers are entitled to five days' annual leave for seeking new romantic and sexual relationships or maintaining old ones, and if your lover leaves you because you're working too hard, you can sue your employer and are likely to win. Private prosti...
I think I've started to do this already for Disputing Definitions, as has my girlfriend, just from listening to me discussing that article without reading it herself. So that's a win for rationality right there.
To take an example that comes up in our household surprisingly often, I'll let the disputed definition be "steampunk". Statements of the form "X isn't really steampunk!" come up a lot on certain websites, and arguments over what does or doesn't count as steampunk can be pretty vicious. After reading "Disputing Definitions"...
A person who can kill another person might well want $5, for whatever reason. In contrast, a person who can use power from beyond the Matrix to torture 3^^^3 people already has IMMENSE power. Clearly such a person has all the money they want, and even more than that in the influence that money represents. They can probably create the money out of nothing. So already their claims don't make sense if taken at face value.
Ah, my mistake. You're arguing based on the intent of a legitimate mugger, rather than the fakes. Yes, that makes sense. If we let f(N) b...
This is a very good point - the higher the number chosen, the more likely it is that the mugger is lying - but I don't think it quite solves the problem.
The probability that a person, out to make some money, will attempt a Pascal's Mugging can be no greater than 1, so let's imagine that it is 1. Every time I step out of my front door, I get mobbed by Pascal's Muggers. My mail box is full of Pascal's Chain Letters. Whenever I go online, I get popups saying "Click this link or 3^^^^3 people will die!". Let's say I get one Pascal-style threat every ...
This problem sounds awfully similar to the halting problem to me. If we can't tell whether a Turing machine will eventually terminate without actually running it, how could we ever tell if a Turing machine will experience consciousness without running it?
Has anyone attempted to prove the statement "Consciousness of a Turing machine is undecidable"? The proof (if it's true) might look a lot like the proof that the halting problem is undecidable. Sadly, I don't quite understand how that proof works either, so I can't use it as a basis for the con...
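For what it's worth, my fuzzy picture of the halting-problem proof is the standard self-reference trick, roughly like this (a sketch only - halts() is the hypothetical decider we assume into existence purely to derive a contradiction):

```python
def halts(program, data):
    """Hypothetical perfect halting decider: returns True if program(data)
    eventually stops, False if it runs forever. Assumed to exist only for
    the sake of contradiction - no real implementation is possible."""
    raise NotImplementedError

def paradox(program):
    """Do the opposite of whatever halts() predicts program does when fed
    its own source."""
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Now ask whether paradox(paradox) halts:
#   if halts(paradox, paradox) is True, then paradox(paradox) loops forever;
#   if it is False, then paradox(paradox) halts.
# Either way halts() is wrong about at least one input, so no such decider
# can exist.
```

A proof about consciousness would presumably need an analogous self-referential construction, and that's the step I can't see how to set up.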
Do we even need the destination? When you consider "fun" as something that comes from a process, from the journey of approaching a goal, then wouldn't it make sense to disentangle the journey and the goal? We shouldn't need the destination in order to make the journey worthwhile. I mean, if the goal were actually important, then surely we'd just get our AI buddies to implement the goal, while I was off doing fun journey stuff.
For a more concrete example:
I like baking fruitcakes. (Something I don't do nearly often enough these days.) Mixing the ra...
What really struck me with this parable is that it's so well-written that I felt genuine horror and revulsion at the idea of an AI making heaps of size 8. Because, well... 2!
So, aside from the question of whether an AI would come to moral conclusions such as "heaps of size 8 are okay" or "the way to end human suffering is to end human life", the question I'm taking away from this parable is, are we any more enlightened than the Pebblesorters? Should we, in fact, be sending philosophers or missionaries to the Pebblesorter planet to explain to them that it's wrong to murder someone just because they built a heap of size 15?
If I actually trust the lottery officials, that means I have certain knowledge of the probabilities and utility payoffs for each of my choices. Thus, I guess I'd choose whichever option generated the most expected utility, and it wouldn't be a matter of "intuition" any more.
Applying that logic to the initial Mugger problem, if I calculated, and was certain of, there being at least a 1 in 3^^^^3 chance that the mugger was telling the truth, then I'd pay him. In fact, I could mentally reformulate the problem to have the mugger saying "If you don't g...
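In rough terms (my own notation, nothing from the original post): with p the probability the mugger is telling the truth, N the number of lives at stake, and v how much I value a stranger's life in dollars, the naive expected-value test is just

    pay the $5   if   p x N x v > 5.

With p = 1/3^^^^3 and N = 3^^^^3 the left-hand side is simply v, and since I value a stranger's life at far more than five dollars, paying comes out ahead - which is exactly why being certain of that probability changes the answer.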
I can see that I'm coming late to this discussion, but I wanted both to admire it and to share a very interesting point that it made clear for me (which might already be in a later post, I'm still going through the Metaethics sequence).
This is excellent. It confirms, and puts into much better words, an intuitive response I keep having to people who say things like, "You're just donating to charity because it makes you feel good." My response, which I could never really vocalise, has been, "Well, of course it does! If I couldn't make it feel ...
I have to say that the sequence on Quantum Mechanics has been awfully helpful so far, especially the stuff on entanglement and decoherence. Bell's Theorem makes a lot more sense now.
Perhaps one helpful way to get around the counterintuitive implications of entanglement would be to say that when one of the experimenters "measures the polarisation of photon A", they're really measuring the polarisation of both A and B? Because A and B are completely entangled, with polarisations that must be opposite no matter what, there's no such thing as "m...
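To put it in symbols (a sketch only - |H> and |V> are just my shorthand for horizontal and vertical polarisation, and the subscripts label the two photons), the joint state being described is something like

    ( |H>_A |V>_B  -  |V>_A |H>_B ) / sqrt(2)

There's no separate "state of photon A" anywhere in that expression; the amplitude is only defined over joint configurations of A and B, which fits the suggestion above that any measurement is really a measurement on the A-and-B system as a whole.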
It does seem that the probability of someone being able to bring about the deaths of N people should scale as 1/N, or at least as 1/f(N) for some monotonically increasing function f. 3^^^^3 may be a more simply specified number than 1697, but it seems "intuitively obvious" (as much as that means anything) that it's easier to kill 1697 people than 3^^^^3. Under this reasoning, the expected deaths caused by not giving the mugger $5 are something like N/f(N), which depends on what f is, but plausibly converges to zero as N increases.
It is a...
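Making that last step explicit (my own back-of-the-envelope version, nothing more): if P(someone can actually kill N people) <= 1/f(N), then

    expected deaths from refusing  <=  N x 1/f(N)  =  N/f(N),

which only goes to zero if f grows faster than linearly in N. If f(N) is merely proportional to N, the expected harm stays roughly constant no matter how absurd the threat gets, so the choice of f really does matter here.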
I agree, intuition is very difficult here. In this specific scenario, I'd lean towards saying yes - it's the same person with a physically different body and brain, so I'd like to think that there is some continuity of the "person" in that situation. My brain isn't made of the "same atoms" it was when I was born, after all. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn't 100% sure.
However, if the original brain and body weren't destroyed, and we now had two ap...