
gwern comments on Neil deGrasse Tyson on Cryonics - Less Wrong Discussion

6 Post author: bekkerd 09 May 2012 03:17PM




Comment author: gwern 09 May 2012 04:21:46PM 4 points

I wonder what we will hear from non-LW rationalists about the SIAI when it gains enough prominence. I think it's pretty easy to predict...

Do we need to predict? http://rationalwiki.org/wiki/LessWrong & http://rationalwiki.org/wiki/Thread:User_talk:WaitingforGodot/Criticisms_of_LessWrong - and keep in mind this is with David Gerard watering it down.

Comment author: David_Gerard 09 May 2012 04:27:11PM *  4 points

And Tetronian.

I'll note yet again that, in the general case, if you're worrying about your image on RationalWiki, then you're bottoming out the obscurity scale.

Comment author: gwern 09 May 2012 04:54:20PM 1 point

I'll note yet again that, in the general case, if you're worrying about your image on RationalWiki, then you're bottoming out the obscurity scale.

You're missing the point...

Comment author: David_Gerard 09 May 2012 05:51:46PM *  1 point

Well, it is a reaction. I'm cautioning against overtraining on a single datum.

Comment author: [deleted] 09 May 2012 04:37:29PM *  1 point

Agreed. Even on RationalWiki, there are no more than 5 people who care enough about LessWrong to talk about it regularly, excluding you and me.

Comment author: billswift 09 May 2012 05:34:08PM *  2 points

I don't feel like going looking for the original post (someone can move this there if they want), but responding to Godot's complaint about "cached thoughts": it is now apparent that they should more accurately be called "habitual thoughts", thoughts that automatically recur in response to a particular stimulus.

Copied, with some editing, to the Sequence Rerun of the Cached Thoughts post.

Comment author: David_Gerard 09 May 2012 05:54:39PM 1 point

It helps to keep in mind that the sequences are not polished works of brilliance, but first drafts for a book, written as part of a two-year blog-a-day marathon, and they will never be revised. So as long as that "sequences" link is up there, we're stuck with the unpolished bits.

Comment author: billswift 09 May 2012 05:58:24PM 0 points

Of course. That is why I wrote "now apparent"; it didn't occur to me until recently, largely as a result of some research I did on habits a few months ago.

Comment author: private_messaging 09 May 2012 06:10:49PM *  2 points

You haven't yet gained enough prominence...

By non-LW rationalists I mean, for instance, the people who promote science.

edit: On rationality: the issue, IMO, is that breaking the improvement down into two sub-improvements, 'having the most unbiased selection of propositions' and 'performing the most accurate Bayesian updates on them', simply doesn't produce the most win for computationally bounded agents, compared to the status quo of trying to generate more of the most useful hypotheses (at the expense of not generating less useful ones), and propagating certainty between hypotheses in such a way that the biases arising from a cherry-picked selection of hypotheses (a consequence of pruning) are not too harmful. I'd dare to guess that if you generate hypotheses as usual (with the usual pruning) and then do updates on them in a new way, you'll probably self-sabotage: you end up updating on N propositions that support or undermine proposition A, and then become superfluously confident in A or ~A, because N is a small, biased sample out of M>>>N hypotheses. The Roko incident looks like a rather amusing instance of this.
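A toy simulation can make this failure mode concrete. The sketch below is my own illustration, not from the comment; all numbers (likelihood ratios, sample sizes) are invented. It compares a Bayesian updater that sees a random sample of weak evidence about a proposition A with one that only sees the items that survived pruning in A's favor:

```python
import math
import random

random.seed(0)

def log_odds_update(prior_log_odds, likelihood_ratios):
    # Naive Bayes update: add the log-likelihood ratio of each item.
    return prior_log_odds + sum(math.log(lr) for lr in likelihood_ratios)

def posterior(likelihood_ratios):
    # Start from 1:1 prior odds and convert final log-odds to a probability.
    lo = log_odds_update(0.0, likelihood_ratios)
    return 1.0 / (1.0 + math.exp(-lo))

# Suppose A is actually false. Each piece of evidence is weak: it either
# favors A (likelihood ratio 2) or disfavors it (ratio 0.5). Since A is
# false, supporting items are the minority (40% here).
M = 1000
evidence = [2.0 if random.random() < 0.4 else 0.5 for _ in range(M)]

N = 20
# Unbiased updater: a random sample of N items out of M.
unbiased_sample = random.sample(evidence, N)
# Biased "pruned" updater: only items supporting A survive pruning.
biased_sample = [lr for lr in evidence if lr > 1.0][:N]

print(f"posterior from unbiased sample: {posterior(unbiased_sample):.4f}")
print(f"posterior from biased sample:   {posterior(biased_sample):.4f}")
```

The biased updater ends up nearly certain of A even though A is false and most of the evidence points the other way; the Bayesian machinery is fine, but the sample it updates on was cherry-picked by the pruning step.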