
Comment author: spuckblase 28 September 2013 07:52:32PM 0 points [-]

(2) looks awfully hard, unless we can find a powerful IA technique that also, say, gives you a 10% chance of cancer. Then some EAs devoted to building FAI might just use the technique, and maybe the AI community in general doesn’t.

Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.

Comment author: spuckblase 18 September 2013 08:18:11AM 1 point [-]

Risky Machines: Artificial Intelligence as a Danger to Mankind

Comment author: gwern 15 July 2013 12:36:39AM *  6 points [-]

Unless you can do that with the raw poll data, but that just confused me.

Thankfully, the data is not quite that crippled! The data is reported in a... 'long' format, I think the term is, where each row is a single poll item response with a unique ID for the respondent. If you want to look at that sort of question, it's up to you to aggregate the data correctly (e.g. with grep). You can see this by looking at particular unique IDs, say that of Leonhart and anonymous respondent 11:

$ grep Leonhart poll.csv
"Leonhart","538","0","2013-07-14T21:05:29.027196"
"Leonhart","539","0","2013-07-14T21:05:29.118328"
"Leonhart","540","1","2013-07-14T21:05:29.292160"
"Leonhart","541","1","2013-07-14T21:05:29.244125"
"Leonhart","542","3","2013-07-14T21:05:29.178701"
$ grep \"11\" poll.csv
"11","538","0","2013-07-14T21:05:25.150240"
"11","539","2","2013-07-14T21:05:25.302881"
"11","540","0","2013-07-14T21:05:25.533486"
"11","541","1","2013-07-14T21:05:25.458408"
"11","542","2","2013-07-14T21:05:25.398273"

There are 5 entries for each, since there were 5 poll items, and each poll item has its own unique ID as well. So if you wanted to know the relationship between answers on poll items #538 and #541, you'd get a list of everyone who answered "0" on #538 and pull out their answers for #541 as well. That sort of thing.
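
For instance, a minimal sketch of that aggregation with the same kind of shell tools (assuming the column order is respondent ID, item ID, answer, timestamp, as in the rows above; the temporary file name is just illustrative):

$ # respondents who answered "0" on item #538 (first column of the matching rows)
$ grep '^"[^"]*","538","0",' poll.csv | cut -d, -f1 > zero-on-538.txt
$ # those respondents' answers on item #541 (third column of the matching rows)
$ grep '^"[^"]*","541",' poll.csv | grep -F -f zero-on-538.txt | cut -d, -f3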


(And now that I'm on the topic, I wonder where my own writings fall, and how I would even know if I were insufficiently writing like Eliezer/Luke/Yvain.)

Comment author: spuckblase 16 July 2013 08:50:52AM 2 points [-]

I like your non-fiction style a lot (don't know your fictional stuff). I often get the impression you're in total control of the material. Very thorough yet original, witty and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.

Comment author: Eliezer_Yudkowsky 13 September 2012 01:57:48PM 3 points [-]

That one's in progress, I think.

Also, to reply to a comment elsewhere in thread, obviously penalties are not going to be charged retrospectively if an ancestor later goes to -3. Nobody has proposed this. Navigating the LW rules is not intended to require precognition.

Comment author: spuckblase 13 September 2012 02:35:47PM 3 points [-]

Navigating the LW rules is not intended to require precognition.

Well, it was required when (negative) karma for Main articles increased tenfold.

Comment author: spuckblase 11 August 2012 12:33:16PM 1 point [-]

I'll be there!

Comment author: lessdazed 02 January 2012 06:05:47PM 1 point [-]

What I want to know is whether you are one of those who thinks no superintelligence could talk them out in two hours, or just no human. If not with a probability of literally zero (or perhaps one for the ability of a superintelligence to talk its way out), approximately what.

Regardless, let's do this some time this month. As far as betting is concerned, something similar to the original seems reasonable to me.

Comment author: spuckblase 13 January 2012 07:12:34PM 1 point [-]

Do you still want to do this?

Comment author: lessdazed 02 January 2012 06:05:47PM 1 point [-]

What I want to know is whether you are one of those who thinks no superintelligence could talk them out in two hours, or just no human. If not with a probability of literally zero (or perhaps one for the ability of a superintelligence to talk its way out), approximately what.

Regardless, let's do this some time this month. As far as betting is concerned, something similar to the original seems reasonable to me.

Comment author: spuckblase 05 January 2012 11:16:48AM 1 point [-]

To be more specific:

I live in Germany, so my timezone is GMT+1. My preferred time would be on a workday sometime after 8 pm (my time). Since I'm a native German speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.

Comment author: Jack 03 January 2012 11:30:08AM 6 points [-]

Disagree with the premise. New movies tend to have more plot holes, less characterization, and worse writing. Improved effects only rarely make up that margin. I also find the following story just as plausible as yours: "New movies are over-represented on the IMDB top 250 because they get bolstered by excited fans who just saw the film and haven't yet taken the time to digest the movie or see how it dates, and who, often, haven't seen the old movies on the list." The Return of the King is not better than Blade Runner.

/done with my silly arguing for the day.

Comment author: spuckblase 03 January 2012 01:17:25PM 0 points [-]

I agree in large parts, but it seems likely that value drift plays a role, too.

Comment author: lessdazed 02 January 2012 06:05:47PM 1 point [-]

What I want to know is whether you are one of those who thinks no superintelligence could talk them out in two hours, or just no human. If not with a probability of literally zero (or perhaps one for the ability of a superintelligence to talk its way out), approximately what.

Regardless, let's do this some time this month. As far as betting is concerned, something similar to the original seems reasonable to me.

Comment author: spuckblase 03 January 2012 07:28:40AM *  1 point [-]

Well, I'm somewhat sure (80%?) that no human could do it, but...let's find out! Original terms are fine.

Comment author: lessdazed 28 December 2011 02:31:51AM *  12 points [-]

there may be a way to constrain a superhuman AI such that it is useful but not dangerous... Can a superhuman AI be safely confined, and can humans manage to safely confine all superhuman AIs that are created?

Does anyone think that no AI of uncertain Friendliness could convince them to let it out of its box?

I'm looking for a Gatekeeper.

Why doesn't craigslist have a section for this in the personals? "AI seeking human for bondage roleplay." Seems like it would be a popular category...

Comment author: spuckblase 01 January 2012 11:32:43AM 1 point [-]

I'd bet up to fifty dollars!?
