Comments

Venu

I came to this post via a Google search (hence this late comment). The problem that Cyan is pointing out - the lack of calibration of Bayesian posteriors - is real, and in fact something I'm facing in my own research currently. Upvoted for raising an important, and under-discussed, issue.
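By calibration here I mean frequentist coverage of posterior credible intervals. The sketch below is my own toy illustration (a conjugate normal model with invented numbers, not anything from the original post): draw many true parameters from the prior and check whether the nominal 90% intervals cover them about 90% of the time.

```python
# Toy calibration check for Bayesian posteriors (illustrative sketch only).
# If the model is well specified, 90% credible intervals should cover the
# true parameter about 90% of the time when the truth is drawn from the prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_obs = 2000, 10
prior_mean, prior_sd, noise_sd = 0.0, 1.0, 1.0

covered = 0
for _ in range(n_trials):
    theta = rng.normal(prior_mean, prior_sd)              # true parameter drawn from the prior
    y = rng.normal(theta, noise_sd, size=n_obs)           # observed data
    # Conjugate normal-normal posterior for theta
    post_prec = 1 / prior_sd**2 + n_obs / noise_sd**2
    post_mean = (prior_mean / prior_sd**2 + y.sum() / noise_sd**2) / post_prec
    post_sd = post_prec ** -0.5
    lo, hi = stats.norm.interval(0.90, loc=post_mean, scale=post_sd)
    covered += (lo <= theta <= hi)

print(f"Empirical coverage of 90% credible intervals: {covered / n_trials:.3f}")
```

If the empirical coverage drifts far from 0.90, the posterior is miscalibrated in the sense discussed above.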

Venu

"The default case of FOOM is an unFriendly AI." Before this, we also have: "The default case of an AI is to not FOOM at all, even if it's self-modifying (like a self-optimizing compiler)." Why not anti-predict that no AIs will FOOM at all?

"This AI becomes able to improve itself in a haphazard way, makes various changes that are net improvements but may introduce value drift, and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)." Given the tiny minority of AIs that will FOOM at all, what is the probability that an AI which has been designed for a purpose other than FOOMing will instead FOOM?

Venu

@Don: Eliezer says in his AI risks paper, criticising Bill Hibbard, that one cannot use supervised learning to specify the goal system for an AI. And although he doesn't say this in the AI risks paper (contra what I said in my previous comment), I remember him saying somewhere (was it on a mailing list?) that supervised learning as such is not a reliable component to include in a Friendly AI. (I may be wrong in attributing this to him, however.) I feel this criticism is misguided, as any viable proposal for an AI (Friendly or not) will have to be built out of modules which are not themselves smart enough to be Friendly. And supervised learning sure seems like a handy module to have: it clusters highly variable lower-level sensory input into more stable higher-level objects, and its usefulness has been demonstrated by the heavy use Thrun's team made of it.
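To make "supervised learning as a perception module" concrete, here is a toy sketch (my own illustration with made-up synthetic data, not drawn from Hibbard's or Thrun's work): a classifier learns to map noisy low-level sensor features to stable high-level labels, which a larger system could then reason over.

```python
# Toy sketch: supervised learning as a perception module.
# The data, labels, and feature layout are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
labels = rng.integers(0, 3, size=n)                      # e.g. 0=road, 1=obstacle, 2=off-road
prototypes = np.array([[0.0, 1.0], [2.0, 0.5], [4.0, 2.0]])
features = prototypes[labels] + rng.normal(0.0, 0.4, size=(n, 2))  # noisy sensor readings

# Train the perception module on labelled examples ...
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(features, labels)

# ... then turn a fresh, noisy reading into a stable high-level label.
new_reading = np.array([[2.1, 0.6]])
print("predicted label:", clf.predict(new_reading)[0])
```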

Venu

I don't get this post. There is no big mystery to asynchronous communication: a process looks for messages whenever it is convenient for it to do so, very much like we check our mailboxes when it is convenient for us. Although it is not clear to me how asynchronous communication helps in building an AI, I don't see any underspecification here. And if people (including Brooks) have actually used the architecture to build robots, that at least must be clear proof that there is a real architecture here.
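A minimal sketch of that reading of asynchronous communication (my own illustration, not code from Brooks or from this post): the sender drops messages into a shared mailbox without waiting, and the receiver polls the mailbox only when it gets around to it.

```python
# Asynchronous message passing via a polled mailbox (illustrative sketch).
import queue
import threading
import time

mailbox: "queue.Queue[str]" = queue.Queue()

def sender():
    for i in range(3):
        mailbox.put(f"message {i}")     # non-blocking: the sender never waits for the receiver
        time.sleep(0.1)

def receiver():
    for _ in range(5):
        time.sleep(0.25)                # busy with its own work
        try:
            msg = mailbox.get_nowait()  # check the mailbox when convenient
            print("handled", msg)
        except queue.Empty:
            pass                        # nothing waiting; carry on

t = threading.Thread(target=sender)
t.start()
receiver()
t.join()
```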

Btw, from my understanding, Thrun's team made heavy use of supervised learning - the same paradigm that Eliezer knocked down as being unFriendly in his AI risks paper.

Venu

I am interested in what Scott Aaronson says to this.

I am unconvinced, and I agree with both of the commenters g and R above. I would say Eliezer is underestimating the number of problems where the environment gives you correlated data and where the correlation is essentially a distraction. Hash functions, for example, are widely used in everyday programming tasks and not just by cryptographers. Randomized algorithms are often based on non-trivial insights into the problem at hand. For example, the insight behind hashing and related approaches is that "two (different) objects are highly unlikely to give the exact same result when (the same) random function (from a certain class of functions) is applied to both of them, and hence the result of this function can be used to distinguish the two objects."
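A small sketch of that insight in practice (my own illustration; it uses a fixed cryptographic hash rather than a function drawn at random from a family, which is the usual engineering shortcut): compare short digests instead of whole objects, accepting a negligible chance of collision.

```python
# Using hashes to distinguish objects, e.g. for duplicate detection (illustrative sketch).
import hashlib

def digest(obj: bytes) -> str:
    # Two different inputs almost never share a digest, so the digest
    # can stand in for the object itself.
    return hashlib.sha256(obj).hexdigest()

documents = [b"the quick brown fox", b"lorem ipsum", b"the quick brown fox"]
seen = {}
for i, doc in enumerate(documents):
    h = digest(doc)
    if h in seen:
        print(f"document {i} is (almost certainly) a duplicate of document {seen[h]}")
    else:
        seen[h] = i
```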

Venu

This post seems to me to evade the hard question of morality: if my own welfare often comes into conflict with the welfare of others, then how much weight should I attach to my own utility in comparison to the utility of other humans? The post seems to say I should look into the mirror to get my answer, but that answer is too crude: I know I should care for others, but how much?

I think there is definitely a role for external influence here. My reading of OB for the last year or more has made me consciously think of myself as a rationalist, and this has pushed me to behave in a manner consistent with my self-labelling as a rationalist. In a similar fashion, if I start thinking of myself as an altruist (having come under some external influence), I am quite sure it will push me to behave in a manner more consistent with that labelling. It would be trivial, or simply wrong, to then say that this altruism was "latent" in me all along.

Venu

"Ayn Rand? Aleister Crowley? How exactly do you get there? What Rubicons do you cross? It's not the justifications I'm interested in, but the critical moments of thought."

My guess is that Ayn Rand at least applied a "reversed stupidity = intelligence" heuristic. She saw examples of ostensible altruists committing great evil, and from there generalized to the opposite extreme: since altruism leads to evil, the only good must come from selfishness.

(Just to be clear, I am not defending Rand here.)

Venu

"There are no-free-lunch theorems in computer science - in a maxentropy universe, no plan is better on average than any other. " I don't think this is correct - in this form, the theorem is of no value, since we know the universe is not max-entropy. No-free-lunch theorems say that no plan is better on average than any other, when we consider all utility functions. Hence, we cannot design an intelligence that will maximize all utility functions/moralities.

Venu

@billswift: I do not want to divert the thread onto the topic of animal rights. It was only an example in any case. See Paul Gowder's comment previous to mine for a more detailed (and different) example of how empirical knowledge can affect our moral judgements.

Venu

A few processes to explain moral progress (though probably not all of it):

a) Acquiring new knowledge (e.g. the knowledge that chimps and humans are, on an evolutionary scale, close relatives), which leads us to throw away moral judgements that rest on assumptions inconsistent with that knowledge.

b) Morality is only one of the many ends that we pursue, and as an end it becomes easier to pursue once you are amply fed, watered and clothed. In other words, improvements in material conditions enable improvements in morality.

c) Conquest of one culture by another means the morals of the conquerors get transferred to the conquered (to some extent). Similarly, migration and higher levels of general exposure between cultures mean that practices viewed as immoral by much of the rest of the world come under strong pressure to be abolished.
