Comment author: SimonF 05 November 2012 02:10:46PM 2 points [-]

I will do 20, too!

Comment author: IlyaShpitser 04 August 2012 10:36:10PM *  3 points [-]

Saying that you save the world by exploring many unusual and controversial ideas is like saying you save the world by eating ice cream and playing video games.

Comment author: SimonF 04 August 2012 11:19:05PM *  1 point [-]

Isn't "exploring many unusual and controversial ideas" what scientists usually do? (Ok, maybe sometimes good scientist do it...) Don't you think that science could contribute to saving the world?

Comment author: SimonF 16 July 2012 01:45:28PM *  0 points [-]

This is a basic strategy in (and may be practiced by playing) the game of Hex.

Comment author: MaoShan 16 May 2012 02:57:41AM *  4 points [-]

Just some minor text corrections for you:

From 3.1

The utility function picture of a rational agent maps perfectly onto the Orthogonality thesis: here have the goal structure, the utility fu...

...could be "here we have the...

From 3.2

Human minds remain our only real model of general intelligence, and this strongly direct and informs...

this strongly directs and informs...

From 4.1

“All human-designed rational beings would follow the same morality (or one of small sets of moralities)” sound plausible; in contract “All human-designed superefficient

I think it would be "sounds", since the subject is the argument, even though the argument contains plural subjects; and I think you meant "in contrast", but I may be mistaken.

Comment author: SimonF 16 May 2012 12:48:55PM *  4 points [-]

From 3.3

To do we would want to put the threatened agent

to do so(?) we would

From 3.4

an agent whose single goal is to stymie the plans and goals of single given agent

of a single given agent

From 4.1

then all self-improving or constructed superintelligence must fall prey to it, even if it were actively seeking to avoid it.

"every", or change the rest of the sentence ("superintelligences", "they were")

From 4.5

There are goals G, such that an entity an entity with goal G

such that an entity with goal G ("an entity" is duplicated)

a superintelligence will goal G can exist.

a superintelligence with goal G can exist

Comment author: gwern 05 May 2012 12:50:12AM *  18 points [-]

Ben:

but he gives no evidence for this assertion. Calculating the decimals of pi may be a fairly simple mathematical operation that doesn’t have any need for superintelligence, and thus may be a really unlikely goal for a superintelligence -- so that if you tried to build a superintelligence with this goal and connected it to the real world, it would very likely get its initial goal subverted and wind up pursuing some different, less idiotic goal.

Yes, it is fairly simple - a line of code. But in the real world, even humans who don't have pi mentioned anywhere in their utility function can happily spend their lives working on mathematics - like pi. Pi is endlessly interesting: finding sequences in it (or humorous ones), proving properties like transcendence (or dare I say, normality?), coming up with novel algorithms and proving convergence, golfing short pi-generating programs, testing your routines, building custom supercomputers to calculate it (and think of how many scientific fields you need to build supercomputers!), depicting it as a graphic (entailing the entire field of data visualization, since what property do you want to see?), devising heuristic algorithms (entailing much of statistics, since you might want optimal procedures for testing your heuristic pi-generating algorithms on subsequences of pi), writing books on all this, collaborating on all of the above, and silliness like Pi Day... I don't know how one could more conclusively prove that pi is a perfectly doable obsession - this isn't even plausible argumentation, it's just pointing out facts about existing humans.
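
To make "a line of code" concrete, here is a minimal sketch of a pi-digit generator - purely illustrative, with invented function names, using Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239) in fixed-point integer arithmetic:

    # Illustrative sketch only (function names invented for this example):
    # pi digits via Machin's formula, pi/4 = 4*arctan(1/5) - arctan(1/239),
    # done in fixed-point arithmetic with Python's built-in big integers.
    def arctan_inv(x, digits):
        # arctan(1/x) scaled by 10**(digits + 10); the ten guard digits
        # absorb the rounding error of the floor divisions below.
        scale = 10 ** (digits + 10)
        total = term = scale // x
        n, sign = 3, -1
        while term:
            term = scale // x ** n
            total += sign * (term // n)
            n, sign = n + 2, -sign
        return total

    def pi_digits(digits):
        scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
        return str(scaled)[:digits + 1]  # "3" followed by `digits` decimals

    print(pi_digits(30))  # 3141592653589793238462643383279

Anything fancier (spigot algorithms, Chudnovsky, BBP) works too; the point is only that the goal is trivial to state, not that pursuing it is trivial.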

To summarize: http://en.wikipedia.org/wiki/Pi is really long. If you want to try to make an intuition-pump argument from incredulity - 'oh, surely an AI or superintelligence would get bored!' - please pick something else, because pi is a horrible example.

"There are no uninteresting things, there are only uninterested people."

Comment author: SimonF 07 May 2012 07:23:32PM *  -1 points [-]

You're right, but isn't this a needless distraction from the more important point, i.e. that it doesn't matter whether we humans find what the (unfriendly) AI does interesting or valuable?

Comment author: gwern 14 March 2012 03:29:36PM 1 point [-]

The index wedrifid was alluding to, if anyone cares: http://shityudkowskysays.tumblr.com/

Comment author: SimonF 15 March 2012 02:04:49AM 0 points [-]

Thanks for making me find out what the Roko thing was about :(

Comment author: SimonF 29 February 2012 01:19:48PM 0 points [-]

Some very small things that caught my attention:

  • On page 6, you mention "Kryder's law" as support for the accelerator of "massive datasets". Clearly, larger disk space enables us to use larger datasets, but how will these datasets be created? Is it obvious that we can create useful, large datasets?

  • On page 10, you write (on editability as an AI advantage): "Of course, such possibilities raise ethical concerns." I'm not sure why this sentence is there. Is editability the only thing that raises these concerns? If so, what are these concerns, specifically?

  • On page 13, you cite "Muehlhauser 2011"; this should probably be "Muehlhauser 2012".

Comment author: XiXiDu 20 November 2011 10:35:00AM *  10 points [-]

I hadn’t noticed that my worldview already implied intelligence explosion.

I'd like to see a post on that worldview. The possibility of an intelligence explosion seems to be an extraordinary belief. What evidence justified a prior strong enough to be updated on a single paragraph, written in natural language, to the extent that you would afterwards devote your whole life to that possibility?

I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences.

How do you anticipate your beliefs to pay rent? What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?

Comment author: SimonF 20 November 2011 06:34:17PM 1 point [-]

The possibility of an intelligence explosion seems to be an extraordinary belief.

Extraordinary compared to what? We already know that most people are insane, so that belief not being shared by almost everybody doesn't make it unlikely a priori. In some ways the intelligence explosion is a straightforward extrapolation of what we know at the moment, so I don't think your criticism is valid here.

What evidence justified a prior strong enough to be updated on a single paragraph, written in natural language, to the extent that you would afterwards devote your whole life to that possibility?

I think one could tell a reasonably competent physicist 50 years prior to Schrödinger how to derive quantum mechanics in one paragraph of natural language. Human language can contain lots of information, especially if speaker and listener already share a lot of concepts.

I'm not sure why you've written your comment; are you just using the opportunity to bring up this old topic again? I find myself irritated by this, even though I probably agree with you :)

Comment author: spencerg 11 November 2011 03:19:51AM 1 point [-]

Thank you for pointing that out; it would have been better if I had spoken more carefully. I definitely don't think that uncertainty is in the territory. Please interpret "there is great uncertainty in X" as "our models of X produce very uncertain predictions."

Comment author: SimonF 11 November 2011 09:38:16AM 0 points [-]

Ok, I'm glad you interpreted my comment as constructive criticism. Thanks for your efforts!

Comment author: SimonF 10 November 2011 08:50:53PM 0 points [-]

I found it incredibly annoying that he seems to think that uncertainty is in the territory.
