Mercurial comments on "Where do I most obviously still need to say 'oops'?" - Less Wrong

Post author: lukeprog 22 November 2011 01:48AM


Comment author: Mercurial 22 November 2011 04:10:13AM 6 points

Luke, I don't feel I know you well enough to help with your quest to locate any lingering wrongness in you. From what I've seen of your writing and heard from people who have met you, you're doing an amazing job of walking the rationalist talk. The fact that you even put this question to the community is a testament to how seriously you take this material and how actively you use it. I think I should be asking you this question!

But your asking this makes me think of something. If you, or Eliezer, or someone else of that calibre of rational competence pointed out to me an area where I need to say "Oops" (or otherwise direct rational attention), I'd like to think that I'd take that seriously. I suspect I'd take it even more seriously if there were some avenue for me to ask such people for that help the way you've asked the whole Less Wrong community here.

So I wonder: might it be a good move to set up something like that? We may not yet have a good metric for someone's degree of rationality, but if two or three black-belt Bayesians all agree that someone is wrong about something, that should count for something in the absence of a more objective measure. So if there were a channel where people could actively ask for that kind of feedback from recognized skilled rationalists (or from people those rationalists designate), I wonder if it would be useful. What do you think? Or would it just be redundant with the Rationality Dojos you mentioned are on the way?

Comment author: lukeprog 22 November 2011 05:38:00AM 6 points

If this could be arranged in the future, we'd want to involve top-level non-SIAI rationalists like Julia Galef to avoid results dictated by the SIAI memeplex rather than by rationality skills. (By "top-level" I don't mean "popular" but "seriously skilled in rationality".)

Comment author: XiXiDu 22 November 2011 09:42:15AM 3 points

The questions I really care about, the important ones, are either still unsolved or pit everyone against the "SIAI memeplex".

All of the following people would disagree with the Singularity Institute on some major issues: Douglas Hofstadter, Steven Pinker, Massimo Pigliucci, Greg Egan, Holden Karnofsky, Robin Hanson, John Baez, Katja Grace, Ben Goertzel, just to name a few. Even your top donor thinks that the Seasteading Institute deserves more money than the Singularity Institute. And in the end they would all either be ignored, downvoted, called stupid, or have their disagreement dismissed as motivated cognition.

Comment author: wedrifid 22 November 2011 11:43:08AM -1 points

Comment author: XiXiDu 22 November 2011 11:59:28AM 0 points

“...my downvotes of Ben Goertzel and suggestions that he was stupid are not based on motivated cognition...”

Never said that.

“The impressiveness of his name (that thing which causes you to refer to him as an authority)...”

WTF? I don't think that he has any authority. This essay pretty much shows that he is a dreamer. Two quotes from the essay:

“Of course, this faith placed in me and my team by strangers was flattering. But I felt it was largely justified. We really did have a better idea about how to make computers think. We really did know how to predict the markets using the news.”

or

“We AI folk were talking so enthusiastically, even the businesspeople in the company were starting to get excited. This AI engine that had been absorbing so much time and money, now it was about to bear fruit and burst forth upon the world!”

Pfft.

Comment author: Mercurial 23 November 2011 04:40:28AM 0 points

That would be awesome, for sure! But I'd also prefer not to see this get frozen in the planning stage just because there's a theoretical possibility of making it better. I'd still consider SIAI-biased advice vastly better than no advice at all.