Daniel_Burfoot comments on Closet survey #1 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (653)
I disagree with Eliezer on the possibility of Oracular AI (he thinks it's impossible).
Other moderately iconoclastic statements:
I think it's more that he doesn't think it's a good solution to Friendliness.
Daniel, why do you consider these things crazy enough to qualify for the poll? I think many of them are quite reasonable and defendable.
I think it would be a good idea to create a sister website on the same codebase as LW specifically for discussing this topic.
Strikes me as an idea worth considering. If we had a sister website where AGI/singularity could be talked about, we could keep a separate rationalist community even after May. The AGI/singularity-allowed sister site could take OB and LW discussion as prerequisite material that commenters could be expected to have read, but not vice versa.
I endorse this proposal.
But then, in the still-censored site, we still wouldn't be able to mention AGI/singularity in a response, even if it would be highly relevant.
A possible solution could be to have click-settable topic flags on posts and comments when bringing up topics that...
...and readers can switch topics off in Options, boosting signal/noise ratio for the uninterested while allowing the interested to discuss freely. Comments would inherit parent's flags by default.
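The flag-inheritance mechanism described above could be sketched roughly like this. A minimal model, assuming hypothetical names (`Comment`, `visible_to`, the flag strings) that are not part of the LW codebase:

```python
# Sketch of the proposed topic-flag mechanism: comments inherit their
# parent's flags by default, and readers who switch a topic off in
# Options don't see comments carrying that flag.
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class Comment:
    text: str
    parent: Optional["Comment"] = None
    own_flags: Set[str] = field(default_factory=set)

    @property
    def flags(self) -> Set[str]:
        # Effective flags = a comment's own flags plus everything
        # inherited up the parent chain (inherit-by-default).
        inherited = self.parent.flags if self.parent else set()
        return self.own_flags | inherited


def visible_to(comment: Comment, muted_topics: Set[str]) -> bool:
    # Hidden if any of the comment's effective flags is muted.
    return not (comment.flags & muted_topics)


# Usage: a post flagged "AGI"; a reply inherits the flag automatically.
root = Comment("Oracle AI discussion", own_flags={"AGI"})
reply = Comment("A follow-up point", parent=root)
print(visible_to(reply, {"AGI"}))      # reader who muted AGI: False
print(visible_to(reply, {"quantum"}))  # unrelated mute: True
```

Replies could still override the inherited default by setting their own flags explicitly.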
Possible flaggable topics:
Another idea, more generally applicable: the ability to reroot comment threads under a different post, leaving a link to the new location.
My conception of the proposal was that the LW ban could be relaxed enough to allow use of relevant examples for rationality discussions, but not non-rationality posts about AI and the like.
I thought the same.
I thought that was what was planned already (after May). I was responding to AnnaSalamon:
I took that to mean keeping LW separate from AGI/singularity discussion, or why say 'even after May'? Someone please explain if I misunderstood as I'm now most confused!
I think Anna wants to use the LW codebase to create a group blog to examine AGI/Singularity/FAI issues of concern to SIAI, even if they are not directly rationality-related. I think that's a good plan for SIAI.
Does the ban apply to Newcomb-like problems with simplifying Omegas?
Thank you for stating your disagreement, but topics like these aren't supposed to be discussed until May. This thread should go no further, because people could list AI "disagreements" all day and really not come any closer to the spirit of the original post.
There's a "LessWrong" schedule?!?
I think that in this case, Eliezer specifically requested that everyone refrain from posting on AI after his AI-related Overcoming Bias posting spree.
I reread the "About page" and it currently contains:
"To prevent topic drift while this community blog is being established, please avoid mention of the following topics on Less Wrong until the end of April 2009: The Singularity Artificial General Intelligence"
Forbidden topics!