Comment author: pnrjulius 09 June 2012 02:40:08AM -1 points [-]

Why would being good make you unsafe?

Comment author: Ben_Welchner 09 June 2012 03:16:02AM *  0 points [-]

Caledonian hasn't posted anything since 2009, if you said that in hopes of him responding.

Comment author: katydee 07 June 2012 04:17:26PM 0 points [-]

Wrong input > no input? I'm not so sure.

Comment author: Ben_Welchner 07 June 2012 04:39:46PM 4 points [-]

Depends on if you're hallucinating everything or your vision has at least some bearing in the real world. I mean, I'd rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.

Comment author: ChristianKl 06 June 2012 04:17:44PM *  1 point [-]

Do you mean:

1) Because journals are really careful about proof-reading and there are no errors in journal articles?

2) Because journals are really careful about proof-reading, they delete every sentence where a scientist says that "I've been wrong in the past"?

3) Some other way in which careful proof-reading removes the possibility that "I've been wrong in the past" appears in a journal article?

Comment author: Ben_Welchner 06 June 2012 04:21:08PM 5 points [-]

It was grammar nitpicking. "The authors where wrong".

In response to comment by [deleted] on Rationality Quotes June 2012
Comment author: MarkusRamikin 04 June 2012 05:00:15PM *  -2 points [-]

Last I checked that was a fallacy...

I mean, what about the truth of the matter? Accuracy? Is there no difference between possible definitions in how well they carve reality, or how deep an understanding they reflect?

Or is it that anything goes, and we can define it however we please and might as well choose whatever is most beneficial?

Comment author: Ben_Welchner 05 June 2012 02:25:02PM 0 points [-]

Unless you expect some factual, objective truth to arise about how one should define oneself, it seems fair game for defining in the most beneficial way. It's physics all the way down, so I don't see a factual reason not to define yourself down to nothing, nor do I see a factual reason to do so.

Comment author: pnrjulius 03 April 2012 02:54:29AM -1 points [-]

I know I'll probably trigger a flamewar...

But I actually don't think cryonics is worth the cost. You could be using that money to cure diseases in the Third World, or investing in technology, or even friendly-AI research if that's your major concern, and you almost certainly will achieve more good according to what I assume is your own utility function (as long as it doesn't value a 1/1 billion chance of you being revived as exactly you over say the lives of 10,000 African children). Also, transhumans will presumably judge the same way, and decide that it's not worth it to research reviving you when they could be working on a Dyson Sphere or something.

Frankly, from what we know about cognitive science, most of the really useful information about your personality is going to disappear upon freezing anyway. You are a PROCESS, not a STATE; as such, freezing you will destroy you, unless we've somehow kept track of all the motions in your brain that would need to be restarted. (Assuming that Penrose is wrong and the important motions are not appreciably quantum. If quantum effects matter for consciousness, we're really screwed, because of the Uncertainty Principle and the no-cloning theorem.) Preserving a human consciousness is like trying to freeze a hurricane.

TLDR with some rhetoric: I've seen too many frozen strawberries to believe in cryonics.

Comment author: Ben_Welchner 03 April 2012 03:42:06AM *  5 points [-]

I know I'll probably trigger a flamewar...

Nitpick: LW doesn't actually have a large proportion of cryonicists, so you're not that likely to get angry opposition. As of the 2011 survey, 47 LWers (or 4.3% of respondents) claimed to have signed up. There were another 583 (53.5%) 'considering it', but comparing that to the current proportion makes me skeptical they'll sign up.

Comment author: Alex_Altair 17 March 2012 12:39:20AM 0 points [-]

I spent the first several seconds trying to figure out the tree diagram at the top. What does it represent?

Comment author: Ben_Welchner 17 March 2012 12:56:30AM *  0 points [-]

A game tree (the entirety of my game theory experience has been a few online videos, so I likely have the terminology wrong), with decision 1 at the top and the end outcomes at the bottom. The levels marked 'max' have the decider trying to pick the highest-value end outcome, and the levels marked 'min' have the decider trying to pick the lowest-value end outcome. The numbers in every row except the bottom propagate up depending on which option will be picked by whoever is currently doing the picking, so if Max and Min maximize and minimize properly, the tree's value is 6. I don't quite remember how the three branches being pruned off work.
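(The pruned branches are what's usually called alpha-beta pruning: a branch can be skipped once it's clear the other player would never let play reach it. A minimal sketch, using a hypothetical nested-list tree rather than the one from the post, since the post's tree isn't reproduced here:)

```python
# Minimax with alpha-beta pruning over a nested-list game tree.
# Leaves are numbers; internal nodes are lists of children.
# Max and Min alternate levels, with Max choosing at the root.

def minimax(node, maximizing=True, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):   # leaf: just return its value
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:        # Min would never allow this branch
                break                # prune the remaining children
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:        # Max would never allow this branch
                break
        return best

# Example tree (made up for illustration) whose minimax value is 6:
tree = [[6, 8], [3, [9, 2]]]
print(minimax(tree))  # 6
```

In the example, once Min's second subtree reveals the leaf 3, Max already has 6 guaranteed from the first subtree, so the `[9, 2]` branch is never evaluated.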

Comment author: [deleted] 12 March 2012 02:13:33AM 1 point [-]

Personally, I think that behavior should be rewarded.

Thank you, and I share that view. Why don't we see everyone doing it? Why, I would be overjoyed if everyone was so firmly trained in Rat101 that comments like these were not special.

But now I am deviating into a should-world + diff.

In response to comment by [deleted] on Rationally Irrational
Comment author: Ben_Welchner 12 March 2012 02:36:02AM *  1 point [-]

I'm pretty sure we do see everyone doing it. Randomly selecting a few posts: in The Fox and the Low-Hanging Grapes the vast majority of comments received at least one upvote, the Using degrees of freedom to change the past for fun and profit thread has slightly more than 50% upvoted comments, and the Rationally Irrational comments also have more upvoted than not.

It seems to me that most reasonably-novel insights are worth at least an upvote or two at the current value.

EDIT: Just in case this comes off as disparaging LW's upvote generosity or average comment quality, it's not.

Comment author: Ben_Welchner 06 March 2012 03:03:55AM 4 points [-]

He also notes that the experts who'd made failed predictions and employed strong defenses tended to update their confidence, while the experts who'd made failed predictions but didn't employ strong defenses did update.

I assume there's a 'not' missing in one of those.

Comment author: xxd 27 January 2012 06:07:11PM *  0 points [-]

This is a cliche, and may be false, but it's generally assumed true: "Power corrupts and absolute power corrupts absolutely".

I wouldn't want anybody to have absolute power, not even myself. The only use of absolute power I would want would be to stop any evil person from getting it.

To my mind evil = coercion and therefore any human who seeks any kind of coercion over others is evil.

My version of evil is the least evil I believe.

EDIT: Why did I get voted down for saying "power corrupts" - the corollary of which is that rejection of power is less corrupt - whereas Eliezer gets voted up for saying exactly the same thing? Someone who voted me down should respond with their reasoning.

Comment author: Ben_Welchner 28 January 2012 02:32:57AM 1 point [-]

Given humanity's complete lack of experience with absolute power, it seems like you can't even take that cliche as weak evidence. Having skimmed through the article and comments again, I also don't see where Eliezer said "rejection of power is less corrupt". The bit about Eliezer sighing and saying the null-actor did the right thing?

(No, I wasn't the one who downvoted)

Comment author: ahartell 27 January 2012 03:46:51AM 2 points [-]

Yeah, but maybe it would have been better as a footnote. And would newer readers know what "EY" meant?

Comment author: Ben_Welchner 27 January 2012 06:52:00AM 2 points [-]

And would newer readers know what "EY" meant?

Given it's right after an anecdote about someone whose name starts with "E", I think they could make an educated guess.
