Comment author: [deleted] 05 January 2013 08:00:23PM *  0 points [-]

Interesting article! What do you mean, though? Are you saying that Knuth's triple up-arrow is uncomputable? (I don't see why that would be the case, but I could be wrong.)
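
[Editor's note: for reference, up-arrow notation is computable by definition. Here is a minimal recursive sketch, not from the thread; the function name `knuth` is purely illustrative.]

```python
def knuth(a, n, b):
    """Compute a ↑^n b in Knuth up-arrow notation. n = 1 is ordinary
    exponentiation; each extra arrow iterates the level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # a ↑^n 0 = 1 by convention
    return knuth(a, n - 1, knuth(a, n, b - 1))

# 3 ↑↑ 3 = 3^(3^3) = 7625597484987 -- already big, but easily computable:
assert knuth(3, 2, 3) == 7625597484987
# 3 ↑↑↑ 3 = knuth(3, 3, 3) is computable in principle too, just far too
# large to ever evaluate (or even write out in decimal) in practice.
```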

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: Solvent 06 January 2013 12:48:25AM 1 point [-]

Basically, the busy beaver function tells us the maximum number of steps that a halting Turing machine with a given number of states and symbols can run for. If we know the busy beaver of, for example, 5 states and 5 symbols, then we can tell you whether any given 5-state, 5-symbol Turing machine will eventually halt: run it for that many steps, and if it hasn't halted by then, it never will.
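
[Editor's note: a minimal sketch of that decision procedure, assuming we were somehow handed a valid bound `bb_bound`; obtaining such a bound is exactly what's impossible in general, as the next paragraph explains.]

```python
def halts(transitions, bb_bound, start_state="A", blank=0):
    """Decide halting for a Turing machine given as a dict
    transitions[(state, symbol)] = (write, move, next_state),
    with move = +1 or -1 and next_state possibly "HALT".
    Assumes bb_bound >= the busy beaver value for machines of this size."""
    tape, head, state = {}, 0, start_state
    for _ in range(bb_bound):
        if state == "HALT":
            return True
        write, move, state = transitions[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    # Ran longer than any halting machine of this size possibly can,
    # so it will never halt.
    return state == "HALT"

# Example: a 1-state machine that writes a 1 and halts immediately.
tm = {("A", 0): (1, +1, "HALT"), ("A", 1): (1, +1, "HALT")}
print(halts(tm, bb_bound=6))  # True; any bound >= BB(1, 2) = 1 works
```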

However, you can see why it's impossible in general to compute the busy beaver function: you'd have to know which Turing machines of a given size halt, which is in general impossible.

Comment author: [deleted] 05 January 2013 02:19:38AM *  0 points [-]

"This Turing machine won't halt in 3^^^3 steps" is a falsifiable prediction. Replace 3^^^3 with whatever number is enough to guarantee whatever result you need.

Edit: But you're right.

In response to comment by [deleted] on Second-Order Logic: The Controversy
Comment author: Solvent 05 January 2013 09:34:27AM *  5 points [-]

Are you aware of the busy beaver function? Read this.

Basically, it's impossible to write down numbers large enough for that to work: the busy beaver function grows faster than any computable function, so any number you can explicitly write down, 3^^^3 included, falls short for all sufficiently large machines.

In response to 2012: Year in Review
Comment author: Solvent 03 January 2013 09:49:23AM 10 points [-]

The most upvoted post of all time on LW is Holden's criticism of SI. How many pageviews has that gotten?

Comment author: Eugine_Nier 28 December 2012 11:58:55PM 1 point [-]

Well, even Eliezer's version of consequentialism isn't simple utilitarianism for starters.

Comment author: Solvent 29 December 2012 12:02:18AM 0 points [-]

It's a kind of utilitarianism. I'm including act utilitarianism, desire utilitarianism, preference utilitarianism, and so on under "utilitarianism".

Comment author: Eugine_Nier 28 December 2012 11:12:33PM 1 point [-]

I've always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey.

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view: that people don't deserve harm for their stupidity.

That is most definitely not the main point of that post.

Comment author: Solvent 28 December 2012 11:47:36PM 0 points [-]

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?

Comment author: ArisKatsaris 28 December 2012 01:06:48AM *  1 point [-]

But at least, you should know that most people on LW disagree with you on this intuition.

[citation needed]

Comment author: Solvent 28 December 2012 04:27:58AM 0 points [-]

I edited my comment to include a tiny bit more evidence.

Comment author: buybuydandavis 27 December 2012 11:14:48PM 1 point [-]

Thank you, that's a good start.

Yes, I had concluded that EY was anti retribution. Hadn't concluded that he had carried the day on that point.

Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

I don't think vengeance and retribution are "ideas" that people had to come up with - they're central moral motivations. "A social preference for which we punish violators" gets at 80% of what morality is about.

Some may disagree about the intuition, but I'd note that even EY had to "renounce" all hatred, which implies to me that he had the impulse for hatred (retribution, in this context) in the first place.

This seems like it has the makings of an interesting poll question.

Comment author: Solvent 28 December 2012 04:27:19AM 0 points [-]

This seems like it has the makings of an interesting poll question.

I agree. Let's do that. You're consequentialist, right?

I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."

How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."

I'll make a Discussion post about this once I get your refinement of the question.

Comment author: buybuydandavis 27 December 2012 08:09:48PM 0 points [-]

You think they'd prefer that the guy that caused everyone else in the universe to suffer didn't suffer himself?

Comment author: Solvent 27 December 2012 10:38:43PM *  2 points [-]

Here's an old Eliezer quote on this:

4.5.2: Doesn't that screw up the whole concept of moral responsibility?

Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even Adolf Hitler. Pain is bad; if it's ultimately meaningful, it's almost certainly as a negative goal. Nothing any human being can do will flip that sign from negative to positive.

So why do we throw people in jail? To discourage crime. Choosing evil doesn't make a person deserve anything wrong, but it makes ver targetable, so that if something bad has to happen to someone, it may as well happen to ver. Adolf Hitler, for example, is so targetable that we could shoot him on the off-chance that it would save someone a stubbed toe. There's never a point where we can morally take pleasure in someone else's pain. But human society doesn't require hatred to function - just law.

Besides which, my mind feels a lot cleaner now that I've totally renounced all hatred.

It's pretty hard to argue about this if our moral intuitions disagree. But at least, you should know that most people on LW disagree with you on this intuition.

EDIT: As ArisKatsaris points out, I don't actually have any source for the "most people on LW disagree with you" bit. I've always thought that not wanting harm to come to anyone as an instrumental value was a pretty obvious, standard part of utilitarianism, and 62% of LWers are consequentialist, according to the 2012 survey. The post "Policy Debates Should Not Appear One-Sided" is fairly highly regarded, and it espouses a related view: that people don't deserve harm for their stupidity.

Also, what those people would prefer isn't necessarily what our moral system should prefer: humans are petty and short-sighted.

Comment author: Qiaochu_Yuan 23 December 2012 02:29:45AM *  28 points [-]

Harry's failing pretty badly to update sufficiently on available evidence. He already knows that there are a lot of aspects of magic that seemed nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?) and so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.

In particular his estimate of the likelihood of a story like Flamel's is way off. Moreover, the value of additional relevant information seems extremely high to me, so he really should ask Dumbledore about it as soon as possible. Horcruxes too.

Edit: And then he learns that Dumbledore is keeping a Philosopher's Stone in Hogwarts without using it and promptly attempts a citizen's arrest on him for both child endangerment and genocide...

Comment author: Solvent 27 December 2012 12:10:56PM 2 points [-]

Harry's failing pretty badly to update sufficiently on available evidence. He already knows that there are a lot of aspects of magic that seemed nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?) and so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.

I disagree. It seems to me that individual spells and magical items work in the way neurotypical humans expect them to work. However, I don't think that we have any evidence that the process of creating new magic or making magical discoveries works in an intuitive way.

Consider by analogy the Internet. It's not surprising that there exist sites such as Facebook which are really well designed and easy to use for humans, rendering in pretty colors instead of being plain HTML. However, these websites were created painstakingly by experts dealing with irritating low-level stuff. It would be surprising if the same website also had a surpassingly brilliant data storage system and an ingenious algorithm for something else.

Comment author: buybuydandavis 27 December 2012 08:37:29AM *  1 point [-]

Nope. Even if one grants objective meaning to a unique interpersonal aggregate of suffering (and I don't), it's just wrong.

Sometimes you want people to suffer. For example, if one fellow caused all the suffering of the rest, moving him to less suffering than everyone else would be a move to a worse universe.

EDIT: I didn't mean "you" to indicate everyone. Sometimes I want people to suffer, and think that in my hypothetical, the majority of mankind would feel the same, and choose the same, if it were in their power.

Comment author: Solvent 27 December 2012 12:02:20PM -1 points [-]

Yeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.
