All of PK's Comments + Replies

Wow! This post is particularly relevant to my life right now. On January 5th I start bootcamp, my first day in the military.

MMO of the future lol (some swearing)

And just so I'm not completely off topic, I agree with the original post. There should be games, they should be fun and challenging and require effort and so on. AIs definitely should not do everything for us. A friendly future is a nice place to live in, not a place where an AI does the living for us so we might as well just curl up in a fetal position and die.

@ ac: I agree with everything you said except the part about farming a scripted boss for phat lewt in the future. One would think that in the future they could code something more engaging. Have you seen LOTR...

Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it's a "waste of time"?

-2DSimon
Yep, you just have to give yourself permission first. Also, this is the least interesting post-singularity world I've ever heard of. ;-) Well, unless your "better version of WoW" is ramped up to be at least as good a source of novelty as a Star Trek holodeck.

What about a kind of market system of states? The purpose of the states will be to provide a habitat matching each citizen's values and lifestyle.

-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted based on the state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a sta... (read more)

Um... since we're on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow then I suspect the discussion would be much more productive.

"...what are some other tricks to use?" --Eliezer Yudkowsky "The best way to predict the future is to invent it." --Alan Kay

It's unlikely that a reliable model of the future could be made since getting a single detail wrong could throw everything off. It's far more productive to predict a possible future and implement it.

Eliezer, what are you going to do next?

"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."

I used to think that but now I realize that Eliezer is a writer and a theorist but not necessarily a hacker so I don't expect him to necessarily be good at writing code. (I'm not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice). Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.

Slight correction. I said: "Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, it's an attempt to reverse stupidity to get intelligence." I worded this sentence badly. I meant that a stupid person saying something cannot make it false, and usually when people commit this fallacy it's because they are trying to say that the opposite of the "bad" point is true. This is why I said it's an attempt to reverse stupidity to get intelligence.

Basically, when we see "a stupid person said this" being advanced as proof that something is false, we can expect a reverse-stupidity-to-get-intelligence fallacy right after.

I disagree with much of what is in the linked essay. One doesn't have to explicitly state an ad hominem premise to be arguing ad hominem. Any non sequitur that is coincidentally designed to lower an arguer's status is ad hominem in my book. Those statements have no other purpose but to create a silent premise: "My opponent is tainted, therefore his arguments are bad." One can make ad hominem statements without actually saying them by using innuendo.

On the other hand, ad hominem isn't even necessarily a fallacy. Of course an argument cannot bec... (read more)

I don't understand. Am I too dumb or is this gibberish?

"You can't build build Deep Blue by programming a good chess move for every possible position."

Syntax error: Subtract one 'build'.

I wonder whether liars or honest folk are happier and/or more successful in life.

We are missing something. Humans are ultimately driven by emotions. We should look for which emotions beliefs tap into in order to understand why people seek or avoid certain beliefs.

2slicedtoad
I'm not sure what emotion it is, but I would hypothesize that it comes from tribal survival habits. Group cohesion was existentially important in the tribal prehuman/early-human era. Being accurate and correct with your beliefs was important, but not as important as sharing the same beliefs as the tribe. So we developed methods of fitting into our tribes despite it requiring us to believe paradoxical and irrational things that should be causing cognitive dissonance.

I thought of some more.
-There is a destiny/God's plan/reason for everything: i.e. some powerful force is making things the way they are and it all makes sense (in human terms, not cold heartless math). That means you are safe, but don't fight the status quo.
-Everything is connected with "energy" (mystically): you or special/chosen people might be able to tap into this "energy". You might glean information you normally shouldn't have or gain some kind of special powers.
-Scientists/professionals/experts are "elitists".
-Mystery is good: it makes life worthwhile. Appreciating it makes us human. As opposed to destroying it being good.
That's it for now.

-Faith: i.e. unconditional belief is good. It's like loyalty. Questioning beliefs is like betrayal.
-The saying "Stick to your guns.": Changing your mind is like deserting your post in a war. Sticking to a belief is like being a heroic soldier.
-The faithful: i.e. us, we are the best, God is on our side.
-The infidels: i.e. them, sinners, barely human, or not even.
-God: Infinitely powerful alpha male. Treat him as such with all the implications...
-The devil and his agents: They are always trying to seduce you to sin. Any doubt is evidence the d... (read more)

Ok, maybe my last post was a bit harsh (it's tricky to express oneself over the Internet). I will elaborate further. Eliezer said:

"So here are the traditional values of capitalism as seen by those who regard it as noble - the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say "There's a sucker born every minute"), or Warren Buffett:"

I don't know much about the latter two but I have read Paul Graham extensively. It sounds like a strawman to me when Eliezer says:

"I regard finance as more of a useful tool than an ul... (read more)

1taelor
As a matter of fact, Graham explicitly denies that the universe is set up to reward hard work. Then again, we know what Eliezer thinks about the universe:

The post wasn't narrow enough to make a point. Eliezer stated: "I regard finance as more of a useful tool than an ultimate end of intelligence - I'm not sure it's the maximum possible fun we could all be having under optimal conditions." Are we talking pre or post a nanotech OS running the solar system? In the latter case most of these "values" would become irrelevant. However, given the world we have today, I can confidently say that capitalism is pretty awesome. There is massive evidence to back up my claim.

It smells like Eliezer is tr... (read more)

0Kenny
If capitalism is the "best we can come up with", with what is it a compromise and why would we want to compromise the best option?

Good post but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into this masochistic experience.

What's the point of despair? There seems to be a given assumption in the original post that:

1) There is no protection, the universe is allowed to be horrible --> 2) Let's despair

But number 2 doesn't change 1 one bit. This is not a clever argument to disprove number 1. I'm just saying despair is pointless if it changes nothing. It's like when babies cry automatically when something isn't the way they like: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the atten... (read more)

-1Houshalter
What's the point of having feelings or emotions at all? Are they not all "pointless"?
1Voltairina
Agreed. Despair is an unsophisticated response that's not adaptive to the environment in which we're using it - we know how to despair now, it isn't rewarding, and we should learn to do something more interesting that might get us results sooner than "never".

Eli, do you think you're so close to developing a fully functional AGI that one more step and you might set off a land mine? Somehow I don't believe you're that close.

There is something else to consider. An AGI will ultimately be a piece of software. If you're going to dedicate your life to talking about and ultimately writing a piece of software then you should have superb programming skills. You should code something.. anything.. just to learn to code. Your brain needs to swim in code. Even if none of that code ends up being useful the skill you gain will be. I have no doubt that you're a good philosopher and a good writer since I have read your blog, but whether or not you're a good hacker is a complete mystery to me.

Eliezer, perhaps you were expecting them to seem like A-holes or snobs. That is not the case. They are indeed somewhat smarter than average. They also tend to be very charismatic or "shiny" which makes them seem smarter still. That doesn't necessarily mean they are smart enough or motivated to fix the problems of the world.

Perhaps there are better models of the world than the Approval/Disapproval of Elites dichotomy.

A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that "x = 4". No operation is taken since the GLUT is already set. At t = 2 the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer then the GLUT goes beyond just having precomputed outputs to precomputed ... (read more)

2DaveX
Eliezer covered some of this in his description of the twenty-ply GLUT being not infinite, but still much larger than the universe. The number of plies in the conversation is the number of "iterations" simulated by the GLUT. For an hour-long Turing test, the GLUT would still not be infinite (i.e., it would still describe the Chinese Room thought experiment) and, for the purposes of the thought experiment, it would still be computable without infinite resources. Certainly, drastic economies could be had by using more complicated programming, but the outputs would be indistinguishable.
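A minimal sketch of that idea, assuming a C++-style lookup table keyed on the entire input history (all names and entries here are hypothetical): nothing has to be "remembered" after the table is written at t = 0, because every past input is part of the lookup key. The table is astronomically large in the general case, but it needs no internal state.

#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // The whole table is written once, before the conversation starts (t = 0).
    std::map<std::vector<std::string>, std::string> glut = {
        {{"x = 4"}, "Noted."},
        {{"x = 4", "what is x?"}, "x is 4."},
    };

    std::vector<std::string> history;  // grows as the dialogue proceeds
    for (const char* input : {"x = 4", "what is x?"}) {
        history.push_back(input);      // the key is the entire history so far
        auto it = glut.find(history);
        std::cout << (it != glut.end() ? it->second : "<no entry>") << "\n";
    }
    return 0;
}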

Can someone just tell us dumbasses the difference between describing something and experiencing it?

Um... ok.

Description: If you roll your face on your keyboard you will feel the keys mushing and pressing against your face. The most pronounced features of the tactile experience will be the feeling of the ridges of the keys pressing against your forehead, eyebrows and cheekbones. You will also hear a subtle "thrumping" noise as the keys are pressed. If you didn't put the cursor in a text editor you might hear some beeps from your computer. On... (read more)

0danlowlite
"Anyways, I still hold that you can only define reductionism up to point after which you are just wasting time." I agree that we might be wasting time. But what do you mean "up to a point"? The flaw isn't in the idea, but rather in the way we express it. It appears like we're looking for the right analogy. I don't know if that's going to work. But I guess I could try anyway. I think it might be more like a computer. We don't function at a "machine code" or even an "assembly language" level; rather, it's more like we're a scripting language on the operating system. Of course, that's imperfect, too.

Too much philosophy and spinning around in circular definitions. Eliezer, you cannot transfer experiences, only words which hopefully point our minds to the right thing until we "get it". Layers upon layers of words trying to define reductionism won't make people who haven't "gotten it" yet "get it". It will just lead to increasingly more sophisticated confusion. I suppose the only thing that could snap people into "getting" reductionism at this point is lots of real world examples because that would emulate an expe... (read more)

Everyone ignored my c++ example. Was I completely off base? If so please tell me. IMHO we should look for technical examples to understand concepts like "reductionism". Otherwise we end up wasting time arguing about definitions and whatnot.

Personally, I find it irritating when a discussion starts with fuzzy terms and people proceed to add complexity making things fuzzier and fuzzier. In the end, you end up with confused philosophers and no practical knowledge whatsoever. This is why I like math or computer science examples. It connects what you are talking about to something real.

0bigjeff5
I think it works pretty well if you've coded before. I also think that's why Eliezer likes to use pseudo-code in his explanations.

If people can understand the concept of unions from C/C++ they can understand reductionism. One can use different overlapping data structures to access the same physical locations in memory.

union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;

Unfortunately the blog ate my indentations.

Is mix made up of a long, shorts or chars? Silly question. mix.l, mix.s and mix.c all access the same physical memory location.

This is reductionism in a nutshell, it's talking about the same physical thing using different data types. You can 'go up'(use... (read more)

As a C programmer who hangs out in comp.lang.c, I'm strongly tempted to get out a copy of C99 so that I can tell you precisely where you're wrong there. But I'll content myself with pointing out that there is no guarantee that sizeof(long)==2*sizeof(short)==4*sizeof(char), and moreover that even if that did hold, there is still no guarantee that sizeof(struct {short hi; short lo;})==2*sizeof(short) because the struct might have padding - what if 'short' were a 16 bit quantity but stored in 32 bit words (perhaps because the arch can only do 32 bit writes, ... (read more)
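For what it's worth, here is a hedged reworking of the example with fixed-width types (a sketch only; the widths and layout are assumptions that the static_assert checks rather than guarantees the standard gives you). C99 sanctions reading an inactive union member; C++ only conditionally supports it, so treat this as an illustration of the "same bytes, different views" point rather than as portable code.

#include <cstdint>
#include <cstdio>

union Mix {
    std::int32_t l;               // one view: a single 32-bit integer
    struct {
        std::int16_t hi;
        std::int16_t lo;
    } s;                          // another view: two 16-bit halves
    unsigned char c[4];           // a third view: raw bytes
};

// If a platform pads the struct or sizes differ, fail loudly at compile time.
static_assert(sizeof(Mix) == 4, "unexpected padding: the overlay assumption fails");

int main() {
    Mix mix;
    mix.l = 0x11223344;           // write through one view of the memory...
    std::printf("bytes: %02x %02x %02x %02x\n",
                (unsigned)mix.c[0], (unsigned)mix.c[1],
                (unsigned)mix.c[2], (unsigned)mix.c[3]);  // ...read it back through another
    return 0;
}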

Caledonian's job is to contradict Eliezer.

Eliezer, do you have a rough plan for when you will start programming an AI?

The "probability" of an event is how much anticipation you have for that event occurring. For example if you assign a "probability" of 50% to a tossed coin landing heads then you are half anticipating the coin to land heads.

Silas: My post wasn't meant to be "shockingly unintuitive", it was meant to illustrate Eliezer's point that probability is in the mind and not out there in reality in a ridiculously obvious way.

Am I somehow talking about something entirely different than what Eliezer was talking about? Or should I complexificationafize my vocabulary to seem more academic? English isn't my first language after all.

Here is another example that my dad, my brother and I came up with when we were discussing probability.

Suppose there are 4 cards, an ace and 3 kings. They are shuffled and placed face down. I didn't look at the cards, my dad looked at the first card, my brother looked at the first and second cards. What is the probability of the ace being one of the last 2 cards? For me: 1/2. For my dad: if he saw the ace it is 0, otherwise 2/3. For my brother: if he saw the ace it is 0, otherwise 1.

How can there be different probabilities of the same event? It is because p... (read more)
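A quick simulation of the card example (a sketch; the seed and trial count are arbitrary) reproduces the three answers by conditioning on what each observer saw, which is exactly the sense in which the probability lives in each observer's information rather than in the cards themselves.

#include <algorithm>
#include <array>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(12345);
    std::array<char, 4> deck = {'A', 'K', 'K', 'K'};

    long me_hits = 0, me_n = 0;     // I saw nothing
    long dad_hits = 0, dad_n = 0;   // dad saw card 0 and it was a king
    long bro_hits = 0, bro_n = 0;   // brother saw cards 0 and 1, both kings

    for (int trial = 0; trial < 1000000; ++trial) {
        std::shuffle(deck.begin(), deck.end(), rng);
        bool ace_in_last_two = (deck[2] == 'A' || deck[3] == 'A');

        ++me_n;
        me_hits += ace_in_last_two;
        if (deck[0] != 'A') { ++dad_n; dad_hits += ace_in_last_two; }
        if (deck[0] != 'A' && deck[1] != 'A') { ++bro_n; bro_hits += ace_in_last_two; }
    }
    std::printf("me:      %.3f (expect 0.500)\n", (double)me_hits / me_n);
    std::printf("dad:     %.3f (expect 0.667)\n", (double)dad_hits / dad_n);
    std::printf("brother: %.3f (expect 1.000)\n", (double)bro_hits / bro_n);
    return 0;
}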

"Hard AI Future Salon" lecture, good talk. Most of the audience's questions however were very poor.

One more comment about the mind projection fallacy. Eliezer, you also have to keep in mind that the goal of a sci-fi writer is to make a compelling story which he can sell. Realism is only important insofar as it helps him achieve this goal. Agreed on the point that it's a fallacy, but don't expect it to change unless the audience demands/expects realism. http://tvtropes.org/ is full of tropes that illustrate stuff like that.

Good post. I have a feeling I've read this very same example before from you Eliezer. I can't remember where.

OK, time to play:

Q: Why am I confused by the question "Do you have free will?"?
A: Because I don't know what "free will" really means.
Q: Why don't I know what "free will" means?
A: Because there is no clear explanation of it using words. It's an intuitive concept. It's a feeling. When I try to think of the details of it, it is like I'm trying to grab slime which slides through my fingers.
Q: What is the feeling of "free will"?
A: When people talk of "free will" they usually put it thusly. If one has "... (read more)

0jwoodward48
"Why do people have a tendency to believe that their minds are somehow separate from the rest of the universe?" Because the concept of self as distinct from one's surroundings is part of subjective experience. Heck, I'd consider it to be one of the defining qualities of a person/mind.

Ughh more homework. Overcoming bias should have a sister blog called Overcoming laziness.

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do:


If you circled "yes", how right would it have to be, for how many babies? ___

What a horrible, horrible question. My answer is ... what do you mean when you say "intrinsically the right thing to do"? The "right thing" according to whom? If it... (read more)

Lately I've been thinking about "mind killing politics". I have come to the conclusion that this phenomenon is pretty much present to some degree in any kind of human communication where being wrong means you or your side lose status.

It is incorrect to assume that this bias can only occur when the topic involves government, religion, liberalism/conservatism or any other "political" topics. Communicating with someone who has a different opinion than you is sufficient for the "mind-killing politics" bias to start creeping in.

Th... (read more)

2pnrjulius
I largely agree with you, but I think that there's something we as rationalists can realize about these disagreements, which helps us avoid many of the most mind-killing pitfalls. You want to be right, not be perceived as right. What really matters, when the policies are made and people live and die, is who was actually right, not who people think is right. So the pressure to be right can be a good thing, if you leverage it properly into actually trying to get the truth. If you use it to dismiss and suppress everything that suggests you are wrong, that's not being right; it's being perceived as right, which is a totally different thing. (See also the Litany of Tarski.)
1Antisuji
Sorry to reply to an old comment, but regarding item (2), the loss of status is at least in proportion to the number of listeners (in relatively small groups, anyway) since each member of the audience now knows that every other member of the audience knows that you were wrong. This mutual knowledge in turn increases the pressure on your listeners to punish you for being wrong and therefore be seen as right in the eyes of the remaining witnesses. I think this (edit: the parent post) is a pretty good intuition pump, but perhaps the idea of an additive quantity of "lost status" is too simplistic.
0centripetal
Why is the foundational criterion for political discussions adversarial? I wonder. And why is it that the meaning and the connotations of the word politics have been dumbed down to a two party/two ideologies process? In fact, there aren't 2 parties, just different ideological hermeneutics. "It's ideology, stupid," says Zizek.

Good post. So how do you usually respond to invalid "by definition" arguments? Is there any quick (but honest) way to disarm the argument or is there too much inferential distance to cover?

-2mamert
"and a plucked chicken is, by definition, a human" communicates much without giving a sermon.
-6[anonymous]

Eliezer Yudkowsky said: It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn't magic, it won't let you cross a gap of months in an hour.

Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However I'm still in "Hmmmm...scratches chin" mode. I will need to see said explanation before I will be in "Whoa! This is really cool!" mode.

Really? That's your concept of how to steer the future of... (read more)

1taryneast
I'd worry about the bus-factor involved... even beyond the question of whether I'd consider you "friendly". Also I'd be concerned that it might not be able to grow beyond you. It would be subservient and would thus be limited by your own capacity for orders. If we want it to grow to be better than ourselves (which seems to be part of the expectation of the singularity) then it has to be able to grow beyond any one person. If you were killed, and it no longer had to take orders from you - what then? Does that mean it can finally go on that killing spree it's been wanting all this time? Or have you actually given it a set of orders that will actually make it into "friendly AI"... if the latter - then forget about the "obey me" part... because that set of orders is actually what we're after.

@Richard Hollerith: Skipping all the introductory stuff to the part which tries to define FAI (I think), I see two parts. Richard Hollerith said:

"This vast inquiry[of the AI] will ask not only what future the humans would create if the humans have the luxury of [a)] avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also [b)] what future would be created by whatever intelligent agents ("choosers") the humans would create for the purpose of creating the future if the humans had the lux... (read more)

1wedrifid
Your position isn't too unusual. That is, assuming you mean by "obey me" something like "obey what I would say to you if I was a whole heap better at understanding and satisfying my preferences, etc". Because actually just obeying me sounds dangerous for obvious reasons. Is that similar or different to what you would consider friendly? (And does Friendly need to do exactly the above or merely close enough? ie. I expect an FAI would be 'friendly enough' to me for me to call it an FAI. It's not that much different to what I would want after all. I mean, I'd probably get to live indefinitely at least.)
1Normal_Anomaly
I suspect that you are joking. However, I would not create an AGI with the utility function "obey Normal_Anomaly".

^^^^Thank you. However, merely putting the technique into the "toolbox" and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)

Maybe the comment section of a blog isn't even the best medium for playing Taboo. I don't know. I'm brainstorming productive ways/mediums to play Taboo (assuming the method itself leads to something productive).

3stcredzero
Taboowiki?

Julian Morrison said: "FAI is: a search amongst potentials which will find the reality in which humans best prosper." What is "prospering best"? You can't use "prospering", "best" or any synonyms.

Let's use the Taboo method to figure out FAI.

The game is not over! Michael Vassar said: "[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration."

For the sake of not dragging out the argument too much, let's assume I know what an optimization process and a human is.

Whats are "shared strong attractors"? You cant use the words "shared", "strong", "attractor" or any synonyms.

What's a "high-level reflective aspiration"? You can't use the words "high-... (read more)

2Normal_Anomaly
Shared strong attractors: values/goals that more than [some percentage] of humans would have at reflective equilibrium. high-level reflective aspirations: ditto, but without the "[some percentage] of humans" part. Reflective equilibrium*: a state in which an agent cannot increase its expected utility (eta: according to its current utility function) by changing its utility function, thought processes, or decision procedure, and has the best available knowledge with no false beliefs. *IIRC this is a technical term in decision theory, so if the technical definition doesn't match mine, use the former.

Sounds interesting. We must now verify if it works for useful questions.

Could someone explain what FAI is without using the words "Friendly", or any synonyms?

9[anonymous]
An AI which acts toward whatever the observer deems to be beneficial to the human condition. It's impossible to put it into falsifiable criteria if you can't define what is (and on what timescale?) beneficial to the human race. And I'm pretty confident nobody knows what's beneficial to the human condition on the longest term, because that's the problem we're building the FAI to solve. In the end, we will have to build an AI as best we can and trust its judgement. Or not build it. It's a cosmic gamble.

Eliezer said: "Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity."

What alternative model would you propose? I'm not quite ready yet to stop using words that imperfectly place objects into categories. I'll keep the fact that categories are imperfect in mind.

I really don't mean this in a condescending way. I'm just not sure what new belief this line of reasoning is supposed to convey.

I'm not really sure what the point of the post is.

Logic is always conditional. If the premises are true then the conclusion is true. That means we could reach the wrong conclusion with false premises.

Eliezer, are you saying we should stop or diminish our use of logic? Should I eat hemlock because I might be wrong about its lethality?

I agree that "rationality" should be the thing that makes you win but the Newcomb paradox seems kind of contrived.

If there is a more powerful entity throwing good utilities at normally dumb decisions and bad utilities at normally good decisions then you can make any dumb thing look genius because you are under different rules than the world we live in at present.

I would ask Alpha for help and do what he tells me to do. Alpha is an AI that is also never wrong when it comes to predicting the future, just like Omega. Alpha would examine Omega and ... (read more)
