Comment author: Grognor 26 November 2011 08:31:52AM 16 points

Re: this image

Fucking brilliant.

Comment author: Bongo 29 November 2011 11:30:28PM 0 points

It's also another far-mode picture.

Comment author: [deleted] 16 November 2011 05:51:12PM 1 point

I have ten tabs open right now.

<making excuses>This is probably because of my habit of opening almost all links in a new tab (because it's easier to get back to where I came from), and because I can't be bothered to close a tab unless I really have too many open.</making excuses>

Comment author: Bongo 22 November 2011 01:59:37AM 0 points

73 tabs, 4 windows.

In response to Existential Risk
Comment author: Gedusa 15 November 2011 04:04:01PM 22 points

Whilst I really, really like the last picture - it seems a little odd to include it in the article.

Isn't this meant to come across as a hard-nosed introduction to people outside the transhumanist/sci-fi crowd? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

In response to comment by Gedusa on Existential Risk
Comment author: Bongo 16 November 2011 12:45:46PM 3 points

Also, I'd say both of those pictures seem to have the effect of inducing far mode.

Comment author: Bongo 26 October 2011 11:38:16AM 4 points

Given any problem, one should look at it, and pick the course that maximises one's expectation. ... what if my utility is non-linear

You're confusing expected outcome and expected utility. Nobody thinks you should maximize the utility of the expected outcome; rather you should maximize the expected utility of the outcome.

Let's now take another example: I am on Deal or No Deal, and there are three boxes left: $100000, $25000 and $.01. The banker has just given me a deal of $20000 (no doubt to much audience booing). Should I take that? Expected gains maximisation says certainly not!

Yes, and expected gains maximization, which nobody advocates, is stupid, unlike expected utility maximization, which takes into account the fact that your utility function is probably not linear in money.
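
To make the contrast concrete, here is a minimal sketch (it assumes, per the example, that each remaining box is equally likely; the log utility is purely an illustrative choice, not anyone's actual preferences):

    import math

    boxes = [100000, 25000, 0.01]  # remaining prizes
    offer = 20000                  # the banker's deal

    # Expected gains maximization: compare the raw expected value to the offer.
    expected_value = sum(boxes) / len(boxes)  # ~$41,666.67 > $20,000, so "no deal"

    # Expected utility maximization with a concave (risk-averse) utility.
    def utility(x):
        return math.log(x)  # illustrative log utility (an assumption of this sketch)

    eu_play = sum(utility(x) for x in boxes) / len(boxes)  # ~5.68
    eu_deal = utility(offer)                               # ~9.90 > 5.68, so "deal"

Under raw expected value the offer looks bad, but under the assumed log utility the near-worthless $.01 outcome drags the expected utility of playing on far below that of a sure $20000.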

Comment author: Bongo 24 October 2011 06:23:37AM 2 points

Is there a video of the full lecture?

Comment author: timtyler 18 October 2011 01:45:48PM * 5 points

The paper gives what it describes as the “AGI Apocalypse Argument” - which ends with the following steps:

12. For almost any goals that the AGI would have, if those goals are pursued in a way that would yield an overwhelmingly large impact on the world, then this would result in a catastrophe for humans.

13. Therefore, if an AGI with almost any goals is invented, then there will be a catastrophe for humans.

14. If [humans will invent an AGI soon] and [if an AGI with almost any goals is invented, then there will be a catastrophe for humans], then there will be an AGI catastrophe soon.

15. Therefore, there will be an AGI catastrophe soon.
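
For what it's worth, the inference itself is valid modus ponens; here is a minimal propositional reconstruction (my own sketch, not the paper's formalism; the "humans will invent an AGI soon" premise is presumably supplied by the unquoted earlier steps):

    -- Propositional skeleton of steps 13-15 (a reconstruction; the names are mine).
    variable (Soon Invented Catastrophe CatastropheSoon : Prop)

    example
        (hSoon : Soon)                  -- presumably established by the unquoted steps
        (h13 : Invented → Catastrophe)  -- step 13
        (h14 : Soon ∧ (Invented → Catastrophe) → CatastropheSoon)  -- step 14
        : CatastropheSoon :=            -- step 15
      h14 ⟨hSoon, h13⟩

Any complaint, then, has to target the premises rather than the logical form.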

It is hard to tell whether anyone took this seriously - but it seems that an isomorphic argument 'proves' that computer programs will crash, since "almost any" computer program crashes. The “AGI Apocalypse Argument”, as stated, thus appears rather silly.

If the stated aim was: "to convince my students that all of us are going to be killed by an artificial intelligence" - why start with such a flawed argument?

Comment author: Bongo 19 October 2011 10:34:58PM * 3 points

it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes.

More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. What's missing is an additional argument: that non-gibberish books are very difficult to write, and that a naive attempt to write one will almost certainly fail on the first try. The analogous argument exists for AGI, of course, but is not given there.

Comment author: Gedusa 16 October 2011 12:55:33PM 1 point

Here maybe?

Comment author: Bongo 16 October 2011 09:35:33PM * 3 points

It was probably that, but note that that page is not concerned with minimizing killing but with minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)
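
For readers unfamiliar with that metric, here is a rough sketch of the kind of calculation involved (the function and every number below are hypothetical illustrations, not the linked page's actual data or weightings):

    # Hypothetical sketch of a "suffering-adjusted days per serving" figure.
    # All values are made-up placeholders, not the page's data.

    def suffering_adjusted_days(days_lived, suffering_weight, servings_yielded):
        # Days the animal lived, weighted by how bad its life was,
        # spread over the servings of food it produced.
        return days_lived * suffering_weight / servings_yielded

    chicken = suffering_adjusted_days(42, 1.0, 10)    # ~4.2 days per serving
    beef = suffering_adjusted_days(550, 0.4, 1500)    # ~0.15 days per serving

Even with fake numbers the point is visible: one slaughtered cow yields far more servings than one chicken, so minimizing killing and minimizing suffering-adjusted days can recommend quite different diets.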

Comment author: MichaelVassar 05 October 2011 02:23:17PM 14 points

Worse, you can simply let people catch you, then get angry with them and bully them into accepting your claims not to have lied out of a mix of imperfect certainty and conflict avoidance. By doing this you condition them to accept the radical form of dominance where they have the authority to tell you what you are morally entitled to believe.

Comment author: Bongo 07 October 2011 09:14:38AM * 2 points

By doing this you condition them to accept the radical form of dominance where they have the authority to tell you what you are morally entitled to believe.

*where you have the authority to tell them (?)

Comment author: wedrifid 05 October 2011 09:44:16AM 4 points

I am not convinced that the rationality level has perceptibly increased over the years, judging from the comment threads then and now

My estimation is a decrease, mostly due to dilution of the seed population from Eliezer-era OvercomingBias. This isn't necessarily a bad thing. Remaining a bastion of the already reasonably rational wouldn't have made lesswrong particularly useful. I'm willing to put up with a somewhat lower standard if it benefits others.

Comment author: Bongo 07 October 2011 09:11:00AM * 2 points

My impression is that the level went up and then down:

  • OB-era comment threads were bad.
  • During the first year of LW the posts were good.
  • Nowadays the posts are bad again.

Comment author: Jayson_Virissimo 29 September 2011 11:03:45AM 16 points

CARSON (turning to KEITH): Keith, would you like a cigarette? Here, this is a particularly rational brand.

KEITH (a bit bemused): "Rational...?" (A slight pause) Oh, I'm sorry, thank you. I don't smoke.

(Exclamations of disapproval from JONATHAN and GRETA.)

GRETA (lashing out): You don't smoke! Why not?

KEITH (taken aback): Well, uh... because I don't like to.

CARSON (in scarcely-controlled fury): You don't like to! You permit your mere subjective whims, your feelings (this word said with utmost contempt) to stand in the way of reason and reality?

-Mozart Was a Red: A Morality Play In One Act, by Murray Rothbard
