Comment author: Jayson_Virissimo 17 January 2012 07:12:07AM 3 points [-]

So, did anyone actually save Roko's comments before the mass deletion?

Comment author: Humbug 17 January 2012 04:41:39PM 2 points [-]

So, did anyone actually save Roko's comments before the mass deletion?

Google Reader fetches every post and comment made on LessWrong. Editing or deleting won't remove them: everything that has ever been posted is still there, saved by Google. You just have to add the right RSS feeds to Google Reader.

Comment author: Humbug 29 October 2011 07:25:13PM *  4 points [-]

None of the simulation projects have gotten very far...this looks to me like it is a very long way out, probably hundreds of years.

Couldn't you say the same about AGI projects? It seems to me that one of the reasons some people are relatively optimistic about computable approximations to AIXI, compared to brain emulations, is that progress on EMs is easier to quantify.

Comment author: Vladimir_Nesov 15 September 2011 03:02:11PM *  -1 points [-]

(Since the linked article doesn't at a first glance talk about AI researchers, the title should be justified.)

Comment author: Humbug 15 September 2011 03:34:37PM 12 points [-]

In statements posted on the Internet, the ITS expresses particular hostility towards nano­technology and computer scientists. It claims that nanotechnology will lead to the downfall of mankind, and predicts that the world will become dominated by self-aware artificial-intelligence technology. Scientists who work to advance such technology, it says, are seeking to advance control over people by 'the system'.

Comment author: RichardKennaway 15 September 2011 03:00:24PM 6 points [-]

On the other hand, the mission of the SIAI is founded on the belief that if anyone succeeds at AGI without solving the Friendliness problem, they will destroy the world. Eliezer has said in an interview a year or two back that he does not think that anyone currently working on AGI has any chance of succeeding. But if not now, then some day the question will have to be faced:

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Comment author: Humbug 15 September 2011 03:30:06PM 13 points [-]

What do you do if you really believe that someone's research has a substantial chance of destroying the world?

Go batshit crazy.

Comment author: NancyLebovitz 12 July 2011 03:01:35PM 1 point [-]

Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

Comment author: Humbug 12 July 2011 03:21:24PM 1 point [-]

...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?

One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.

Comment author: Multiheaded 11 July 2011 07:45:26PM 0 points [-]

Nope, not like that at all. What he's talking about is knowledge that's objectively harmful for someone to have.

Comment author: Humbug 12 July 2011 02:11:51PM *  1 point [-]

What he's talking about is knowledge that's objectively harmful for someone to have.

Someone should make a list of knowledge that is objectively harmful. It could come in handy if you want to avoid running into such knowledge accidentally. Or we could just ban the medium used to spread it: in this case, natural language.

Comment author: Pavitra 11 July 2011 07:41:03PM -1 points [-]

You're being facetious. No one is seriously disputing where the boundary between basilisk and non-basilisk lies, only what to do with the things on the basilisk side of the line.

Comment author: Humbug 11 July 2011 08:00:41PM *  3 points [-]

No one is seriously disputing where the boundary between basilisk and non-basilisk lies...

This assumes that everyone knows where the boundary lies. The original post by Manfred either crossed the boundary or it didn't. In the case that it didn't, it only serves as a warning sign of where not to go. In the case that it did, how is your knowledge of the boundary not a case of hindsight bias?

Comment author: Pavitra 11 July 2011 06:46:24PM 2 points [-]

It would be more sensible to check with other people, rather than assuming it's safe, before exposing the public to something that you know that a lot of people believe to be dangerous.

Comment author: Humbug 11 July 2011 07:16:24PM 1 point [-]

...before exposing the public to something that you know that a lot of people believe to be dangerous.

The pieces of the puzzle that Manfred put together can all be found on lesswrong. What do you suggest, that research into game and decision theory be banned?

Comment author: Manfred 11 July 2011 06:31:30PM *  5 points [-]

I obviously think it's safe. Nobody's actually told me what they think though.

Comment author: Humbug 11 July 2011 06:56:43PM 8 points [-]

I obviously think it's safe.

Be careful about trusting Manfred; he is known to have destroyed the Earth on at least one previous occasion.
