mej10 comments on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set - Less Wrong

31 Post author: malo 25 July 2012 07:28PM


Comments (231)


Comment author: mej10 26 July 2012 05:33:46PM *  15 points [-]

The "evangelical polyamory" seems like an example of where Rationalists aren't being particularly rational.

If you want widespread adoption of your main, more important ideas, it seems wise to keep your other, possibly alienating, ideas private.

Being the champion of a cause sometimes necessitates personal sacrifice beyond just hard work.

Comment author: Jack 26 July 2012 05:57:48PM 21 points [-]

Probably another example: calling themselves "Rationalists"

Comment author: private_messaging 28 July 2012 10:15:47AM *  -2 points [-]

Yeah.

Seriously, why should anyone think that SI is anything more than "narcissistic dilettantes who think they need to teach their awesome big picture ideas to the mere technicians that are creating the future", to paraphrase one of my friends?

This is pretty damn illuminating:

http://lesswrong.com/lw/9gy/thesingularityinstitutesarroganceproblem/5p6a

re: sex life, nothing wrong with it per se, but consider that there are things like the psychopathy checklist, where you score points for basically talking people into giving you money, for being admired beyond your accomplishments, and for sexual promiscuity as well. On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. I'm not saying that this is solid science, it isn't, just outlining how many people think.

Comment author: Risto_Saarelma 28 July 2012 12:14:33PM 1 point [-]

On top of that, most people will give you a fuzzy psychopathy point for believing the AI to be psychopathic, because of the typical mind fallacy. I'm not saying that this is solid science, it isn't, just outlining how many people think.

This doesn't seem to happen when people note that, when you look at corporations as intentional agents, they behave like human psychopaths. The reasoning is even pretty similar to the case for AIs: corporations exhibit basic rational behavior but mostly lack whatever special sauce individual humans have that makes them a bit more prosocial.

Comment author: private_messaging 28 July 2012 01:01:18PM -2 points [-]

Well, intelligence in general can be much more alien than this.

Consider an AI that, given any mathematical model of a system and some 'value' metric, finds optimal parameters for an object in the system. E.g., the system could be the Navier-Stokes equations and a wing; the wing shape would be the parameter, and some metric of the wing's drag and lift the value to maximize. The AI would do everything necessary, including figuring out how to simulate those equations efficiently.
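The kind of optimizer described here can be sketched in miniature. This is only an illustrative toy, not a flow solver: `toy_lift_drag` is an invented stand-in for the Navier-Stokes simulation, reducing the "wing shape" to a single camber parameter, and the search is plain random search rather than anything the comment specifies.

```python
import random

def toy_lift_drag(camber):
    """Hypothetical surrogate for a flow simulation (all numbers invented):
    lift peaks at moderate camber, drag grows with the square of camber."""
    lift = camber * (1.0 - camber)
    drag = 0.1 + camber ** 2
    return lift / drag  # the 'value' metric to maximize

def optimize(metric, lo=0.0, hi=1.0, iters=2000, seed=0):
    """Generic black-box optimizer: random search over one parameter.
    It knows nothing about wings; it only maximizes the given metric."""
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        v = metric(x)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

best_camber, best_value = optimize(toy_lift_drag)
```

The point the sketch makes concrete: the optimizer is indifferent to what the model represents. Swap in a different `metric` and it optimizes that instead, with no notion of the outside world at all.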

Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance the value to minimize.

That's the sort of thing that scientists tend to see as 'intelligent'.

'AI', however, has acquired plenty of connotations from science fiction, where it is very anthropomorphic.

Comment author: Risto_Saarelma 28 July 2012 02:05:38PM 1 point [-]

Those are narrow AIs. Their behavior doesn't involve acquiring resources from the outside world and autonomously developing better ways to do that. That's the part that might lead to psychopath-like behavior.

Comment author: private_messaging 28 July 2012 02:31:21PM *  1 point [-]

Specializing the algorithm to the outside world and to a particular philosophy of value does not make it broader or more intelligent, only more anthropomorphic (and less useful, if you don't believe in Friendliness).

Comment author: Risto_Saarelma 28 July 2012 02:48:02PM 2 points [-]

The end value is still doing the best possible optimization for the parameters of the mathematical system. There are many more resources to be used for that in the outside world than are probably available to the algorithm when it starts up. So an algorithm that can interact effectively with the outside world may be able to satisfy whatever alien goal it has much better than one that can't.

(I'm a bit confused if you want the Omohundro Basic AI Drives stuff explained to you here or if you want to be disagreeing with it.)

Comment author: private_messaging 28 July 2012 03:02:37PM *  1 point [-]

Having specific hardware that is computing an algorithm actually display the results of the computation within a specific time is outside the scope of a 'mathematical system'.

Furthermore, the decision theories are all built to attain real-world goals by means of the above-mentioned mathematics-solving intelligence, except that defining real-world goals proves immensely difficult. Edit: also, if the mathematics-solving intelligence were to have some basic extra drives, such as resisting being switched off (so that it could complete its computations), then an FAI relying on such a mathematics-solving subcomponent would be impossible. The decision theories presume the absence of any such drives inside their mathematics-processing component.

Omohundro Basic AI Drives stuff

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

Comment author: fubarobfusco 28 July 2012 06:28:50PM 1 point [-]

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

That sentence is itself magical thinking. You're equating the meaning of the word "magic" in Clarke's law with its meaning in the expression "magical thinking"; they do not refer to the same thing.

Comment author: Risto_Saarelma 28 July 2012 03:21:59PM 1 point [-]

If sufficiently advanced technology is indistinguishable from magic, then arguments about a "sufficiently advanced AI system", in the absence of an actual definition of what it is, are indistinguishable from magical thinking.

Ok, then, so the actual problem is that the people who worry about AIs behaving psychopathically define AI as something so capable that you consider them to be basically speaking nonsense?