Mark_Friedenbach comments on Six Plausible Meta-Ethical Alternatives - Less Wrong

Post author: Wei_Dai 06 August 2014 12:04AM


Comment author: [deleted] 06 August 2014 05:43:29PM 3 points

That's one reason. As an example, Goertzel seems to fall somewhat in (1) with his cosmist manifesto.

But more important, I think, are questions of hard-takeoff timeline and AGI design. The mainstream opinion, as I understand it, is that a hard takeoff would take years at minimum, leaving sufficient time both to recognize what is going on and to stop the experiment. Also, MIRI seems for some reason to threat-model its AGIs as perfectly rational alien utility-maximizers, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think. Given such a slow takeoff, projects like OpenCog intend to teach robot children in a preschool-like environment, thereby value-loading them the same way we value-load our children.

Comment author: torekp 08 August 2014 01:03:07AM 3 points

Also, MIRI seems for some reason to threat-model its AGIs as perfectly rational alien utility-maximizers, whereas real AGIs are implemented with all sorts of heuristic tricks that actually do a better job of emulating the quirky way humans think.

This is extremely important, and I hope you will write a post about it.

Comment author: Nectanebo 06 August 2014 07:21:29PM 1 point

Yeah, I was thinking of Goertzel as well.

So you don't think MIRI's work is all that useful? What probability would you assign to a hard takeoff happening at the speed they're worried about?

Comment author: [deleted] 07 August 2014 01:19:04AM 1 point

Indistinguishable from zero, at least at current levels of technology. The mind is an immensely complex machine, capable of processing information orders of magnitude faster than the largest HPC clusters. Why should we expect an early, dumb intelligence running on mediocre hardware to recursively self-improve so quickly? The burden of proof rests with MIRI, I believe. (And I'm still waiting.)