Daniel_Burfoot comments on Closet survey #1 - Less Wrong

53 [deleted] 14 March 2009 07:51AM


Comment author: Daniel_Burfoot 14 March 2009 11:36:10AM *  11 points [-]

I disagree with Eliezer on the possibility of Oracular AI (he thinks it's impossible).

Other moderately iconoclastic statements:

  • The computer is a terrible metaphor for the brain.
  • In the ultimate theory of AI, logic and deduction will be almost irrelevant. AI will use large scale induction, statistics, and memorization.
  • In order to achieve AI, it is just as important to study the real world as it is to study algorithms. To succeed AI must become an empirical science.
  • AI is a pre-paradigm discipline.
  • Rodney Brooks is a great philosopher of AI (I have no comment regarding his technical contributions).
  • Large scale brain simulation will not succeed.
  • Evolutionary psychology, while interesting from the perspective of explaining human behavior, is irrelevant for AI.
  • Computer science, with its emphasis on logic, deduction, formal proof, and technical issues, is nearly the worst possible type of background from which to approach AI.
Comment author: MichaelHoward 14 March 2009 12:31:50PM 7 points [-]

I disagree with Eliezer on the possibility of Oracular AI (he thinks it's impossible).

I think it's more that he doesn't think it's a good solution to Friendliness.

Comment author: Vladimir_Nesov 14 March 2009 05:26:05PM 2 points [-]

Daniel, why do you consider these things crazy enough to qualify for the poll? I think many of them are quite reasonable and defensible.

Comment author: Vladimir_Golovin 14 March 2009 08:13:28PM *  2 points [-]

I think it would be a good idea to create a sister website on the same codebase as LW specifically for discussing this topic.

Comment author: AnnaSalamon 18 March 2009 07:33:33AM *  5 points [-]

Strikes me as an idea worth considering. If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could take OB and LW discussion as prerequisite material that commenters could be expected to have read, but not vice versa.

Comment author: CarlShulman 18 March 2009 08:22:58AM *  3 points [-]

I endorse this proposal.

Comment author: MichaelHoward 18 March 2009 07:48:18PM 1 point [-]

But then, in the still-censored site, we still wouldn't be able to mention AGI/singularity in a response, even if it would be highly relevant.

A possible solution could be to have click-settable topic flags on posts and comments, for topics that...

  • Are worth discussing
  • Are likely to come up fairly frequently
  • Many people would really rather weren't raised

...and readers can switch topics off in Options, boosting signal/noise ratio for the uninterested while allowing the interested to discuss freely. Comments would inherit parent's flags by default.

Possible flaggable topics:

  • Friendly AI/Singularitarianism
  • Libertarian politics
  • Simulism
  • Meta-discussion about possible LW changes
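The flag mechanism described above (comments inherit their parent's flags by default, and readers mute flagged topics in their options) could be sketched roughly as follows. This is a hypothetical illustration of the proposal, not actual LW code; all names are invented.

```python
class Comment:
    """A comment carrying topic flags, per MichaelHoward's proposal sketch."""

    def __init__(self, text, parent=None, flags=None):
        self.text = text
        self.parent = parent
        # Inherit the parent's flags by default; explicitly set flags are added.
        inherited = set(parent.flags) if parent else set()
        self.flags = inherited | set(flags or ())


def visible_to(comment, muted_topics):
    """A comment is shown unless it carries a topic the reader has muted."""
    return not (comment.flags & set(muted_topics))


# A flagged post, a reply (which inherits the flag), and an unrelated comment.
root = Comment("On the ethics of simulations", flags={"Simulism"})
reply = Comment("A follow-up point", parent=root)
offtopic = Comment("Unrelated remark")

muted = {"Simulism"}
print(visible_to(root, muted))      # hidden: carries the muted flag
print(visible_to(reply, muted))     # hidden: inherited the flag from its parent
print(visible_to(offtopic, muted))  # shown: no muted flags
```

The key design choice is that inheritance happens at creation time, so replies stay hidden from uninterested readers even when a reply's author forgets to flag it, which is what boosts the signal/noise ratio the comment describes.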
Comment author: Nick_Tarleton 18 March 2009 08:31:54PM 5 points [-]

Another idea, more generally applicable: the ability to reroot comment threads under a different post, leaving a link to the new location.

Comment author: CarlShulman 18 March 2009 07:58:32PM 5 points [-]

My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.

Comment author: Eliezer_Yudkowsky 18 March 2009 08:29:17PM 1 point [-]

I thought the same.

Comment author: MichaelHoward 18 March 2009 08:51:16PM *  2 points [-]

My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.

I thought that was what was planned already (after May). I was responding to AnnaSalamon:

If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could...

I took that to mean keeping LW separate from AGI/singularity discussion, or why say 'even after May'? Someone please explain if I misunderstood as I'm now most confused!

Comment author: CarlShulman 19 March 2009 03:18:18AM 2 points [-]

I think Anna wants to use the LW codebase to create a group blog to examine AGI/Singularity/FAI issues of concern to SIAI, even if they are not directly rationality-related. I think that's a good plan for SIAI.

Comment author: Vladimir_Nesov 19 March 2009 12:24:39AM 1 point [-]

Does the ban apply to Newcomb-like problems with simplifying Omegas?

Comment author: Eliezer_Yudkowsky 14 March 2009 06:20:08PM 1 point [-]

Thank you for stating your disagreement, but topics like these aren't supposed to be discussed until May. This thread should go no further, because people could list AI "disagreements" all day and really not come any closer to the spirit of the original post.

Comment author: timtyler 14 March 2009 07:03:50PM 3 points [-]

There's a "LessWrong" schedule?!?

Comment author: John_Maxwell_IV 14 March 2009 07:08:54PM 4 points [-]

I think that in this case, Eliezer specifically requested that everyone refrain from posting on AI after his AI-related Overcoming Bias posting spree.

Comment author: timtyler 14 March 2009 07:50:51PM 3 points [-]

I reread the "About page" and it currently contains:

"To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009: The Singularity, Artificial General Intelligence"

Forbidden topics!