MMO of the future lol (some swearing)
And just so I'm not completely off topic, I agree with the original post. There should be games, they should be fun and challenging and require effort and so on. AIs definitely should not do everything for us. A friendly future is a nice place to live in, not a place where an AI does the living for us so we might as well just curl up in a fetal position and die.
What about a kind of market system of states, where the purpose of the states is to provide a habitat matching each citizen's values and lifestyle?
-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted based on the state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a sta...
"...what are some other tricks to use?" --Eliezer Yudkowsky "The best way to predict the future is to invent it." --Alan Kay
It's unlikely that a reliable model of the future could be made since getting a single detail wrong could throw everything off. It's far more productive to predict a possible future and implement it.
"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."
I used to think that, but now I realize that Eliezer is a writer and a theorist, not necessarily a hacker, so I don't necessarily expect him to be good at writing code. (I'm not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice.) Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.
Slight correction. I said: "Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, it's an attempt to reverse stupidity to get intelligence." I worded that sentence badly. I meant that stupid people saying something cannot make it false, and that usually when people commit this fallacy it's because they are trying to say that the opposite of the "bad" point is true. This is why I said it's an attempt to reverse stupidity to get intelligence.
Basically, when we see "a stupid person said this" being advanced as proof that something is false, we can expect a reverse-stupidity-to-get-intelligence fallacy right after.
I disagree with much of what is in the linked essay. One doesn't have to explicitly state an ad hominem premise to be arguing ad hominem. Any non sequitur that just happens to be designed to lower an arguer's status is ad hominem in my book. Those statements have no other purpose but to create a silent premise: "My opponent is tainted, therefore his arguments are bad." One can make ad hominem statements without actually saying them by using innuendo.
On the other hand, ad hominem isn't even necessarily a fallacy. Of course an argument cannot bec...
I thought of some more.
-There is a destiny/God's plan/reason for everything: i.e. some powerful force is making things the way they are and it all makes sense (in human terms, not cold heartless math). That means you are safe, but don't fight the status quo.
-Everything is connected with "energy" (mystically): you or special/chosen people might be able to tap into this "energy". You might glean information you normally shouldn't have or gain some kind of special powers.
-Scientists/professionals/experts are "elitists".
-Mystery is good: it makes life worthwhile. Appreciating it makes us human. As opposed to destroying it being good.
That's it for now.
-Faith: i.e. unconditional belief is good. It's like loyalty. Questioning beliefs is like betrayal.
-The saying "Stick to your guns.": Changing your mind is like deserting your post in a war. Sticking to a belief is like being a heroic soldier.
-The faithful: i.e. us, we are the best, God is on our side.
-The infidels: i.e. them, sinners, barely human, or not even.
-God: infinitely powerful alpha male. Treat him as such, with all the implications...
-The devil and his agents: they are always trying to seduce you to sin. Any doubt is evidence the d...
Ok, maybe my last post was a bit harsh (it's tricky to express oneself over the Internet). I will elaborate further. Eliezer said:
"So here are the traditional values of capitalism as seen by those who regard it as noble - the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say "There's a sucker born every minute"), or Warren Buffett:"
I don't know much about the latter two but I have read Paul Graham extensively. It sounds like a strawman to me when Eliezer says:
"I regard finance as more of a useful tool than an ul...
The post wasn't narrow enough to make a point. Eliezer stated: "I regard finance as more of a useful tool than an ultimate end of intelligence - I'm not sure it's the maximum possible fun we could all be having under optimal conditions." Are we talking pre- or post- a nanotech OS running the solar system? In the latter case most of these "values" would become irrelevant. However, given the world we have today, I can confidently say that capitalism is pretty awesome. There is massive evidence to back up my claim.
It smells like Eliezer is tr...
What's the point of despair? There seems to be a given assumption in the original post that:
1) There is no protection, the universe is allowed to be horrible --> 2) let's despair
But number 2 doesn't change number 1 one bit. This is not meant as a clever argument to disprove number 1; I'm just saying despair is pointless if it changes nothing. It's like how babies automatically cry when something isn't the way they like it: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the atten...
Eli, do you think you're so close to developing a fully functional AGI that one more step and you might set off a land mine? Somehow I don't believe you're that close.
There is something else to consider. An AGI will ultimately be a piece of software. If you're going to dedicate your life to talking about and ultimately writing a piece of software, then you should have superb programming skills. You should code something... anything... just to learn to code. Your brain needs to swim in code. Even if none of that code ends up being useful, the skill you gain will be. I have no doubt that you're a good philosopher and a good writer since I have read your blog, but whether or not you're a good hacker is a complete mystery to me.
Eliezer, perhaps you were expecting them to seem like A-holes or snobs. That is not the case. They are indeed somewhat smarter than average. They also tend to be very charismatic or "shiny" which makes them seem smarter still. That doesn't necessarily mean they are smart enough or motivated to fix the problems of the world.
Perhaps there are better models of the world than the approval/disapproval-of-elites dichotomy.
A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that "x = 4". No operation is taken since the GLUT is already set. At t = 2 the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer then the GLUT goes beyond just having precomputed outputs to precomputed ...
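To make the working-memory point concrete, here is a throwaway C sketch of that exact scenario (the variable names and the table's contents are just my own illustration): the lookup table's answer to "what is x?" is frozen at t = 0, before "x = 4" ever arrives, while even a trivial system with one variable of working memory can absorb the new fact at t = 1 and answer correctly at t = 2.

#include <stdio.h>

int main(void) {
    /* t = 0: the GLUT is written once and for all. Its answer to
       "what is x?" was chosen before "x = 4" was ever supplied. */
    const char *glut_answer_to_what_is_x = "unknown";

    /* A stateful system, by contrast, has working memory it can update. */
    int x_in_memory = 0;
    int x_is_known = 0;

    /* t = 1: the information "x = 4" arrives. The stateful system stores it;
       the GLUT cannot change, since it was already written at t = 0. */
    x_in_memory = 4;
    x_is_known = 1;

    /* t = 2: the query "what is x?" arrives. */
    printf("GLUT says: x is %s\n", glut_answer_to_what_is_x);
    if (x_is_known)
        printf("Stateful system says: x is %d\n", x_in_memory);
    return 0;
}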
Can someone just tell us dumbasses the difference between describing something and experiencing it?
Um... ok.
Description: If you roll your face on your keyboard you will feel the keys mushing and pressing against your face. The most pronounced features of the tactile experience will be the feeling of the ridges of the keys pressing against your forehead, eyebrows and cheekbones. You will also hear a subtle "thrumping" noise as the keys are being pressed. If you didn't put the cursor in a text editor you might hear some beeps from your computer. On...
Too much philosophy and spinning around in circular definitions. Eliezer, you cannot transfer experiences, only words which hopefully point our minds to the right thing until we "get it". Layers upon layers of words trying to define reductionism won't make people who haven't "gotten it" yet "get it". It will just lead to increasingly more sophisticated confusion. I suppose the only thing that could snap people into "getting" reductionism at this point is lots of real world examples because that would emulate an expe...
Everyone ignored my C++ example. Was I completely off base? If so please tell me. IMHO we should look for technical examples to understand concepts like "reductionism". Otherwise we end up wasting time arguing about definitions and whatnot.
Personally, I find it irritating when a discussion starts with fuzzy terms and people proceed to add complexity making things fuzzier and fuzzier. In the end, you end up with confused philosophers and no practical knowledge whatsoever. This is why I like math or computer science examples. It connects what you are talking about to something real.
If people can understand the concept of unions from C/C++ they can understand reductionism. One can use different overlapping data structures to access the same physical locations in memory.
union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;
Is mix made up of a long, shorts or chars? Silly question. mix.l, mix.s and mix.c are accessing the same physical memory location.
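Here is a quick usage sketch (just an illustration; the specific value, the hex printing, and the assumption that the member sizes line up are mine, and as noted in the reply below nothing about those sizes is guaranteed): write through one member and read the very same bytes back through the others.

#include <stdio.h>

union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;

int main(void) {
    mix.l = 0x41424344L;   /* write through the "high level" view */

    /* read the same underlying bytes back through the other views */
    printf("as long  : 0x%lx\n", (unsigned long)mix.l);
    printf("as shorts: 0x%hx 0x%hx\n",
           (unsigned short)mix.s.hi, (unsigned short)mix.s.lo);
    printf("as chars : %02x %02x %02x %02x\n",
           (unsigned char)mix.c[0], (unsigned char)mix.c[1],
           (unsigned char)mix.c[2], (unsigned char)mix.c[3]);
    return 0;
}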
This is reductionism in a nutshell: it's talking about the same physical thing using different data types. You can 'go up' (use...
As a C programmer who hangs out in comp.lang.c, I'm strongly tempted to get out a copy of C99 so that I can tell you precisely where you're wrong there. But I'll content myself with pointing out that there is no guarantee that sizeof(long)==2*sizeof(short)==4*sizeof(char), and moreover that even if that did hold, there is still no guarantee that sizeof(struct {short hi; short lo;})==2*sizeof(short) because the struct might have padding - what if 'short' were a 16 bit quantity but stored in 32 bit words (perhaps because the arch can only do 32 bit writes, ...
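For what it's worth, those size assumptions are easy to check on a given machine with a few lines (again just a sketch; nothing here is guaranteed by the standard, it only reports what your particular platform does):

#include <stdio.h>

int main(void) {
    struct two_shorts { short hi; short lo; };

    printf("sizeof(char)              = %zu\n", sizeof(char));
    printf("sizeof(short)             = %zu\n", sizeof(short));
    printf("sizeof(long)              = %zu\n", sizeof(long));
    printf("sizeof(struct two_shorts) = %zu\n", sizeof(struct two_shorts));
    return 0;
}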
Silas: My post wasn't meant to be "shockingly unintuitive", it was meant to illustrate Eliezer's point that probability is in the mind and not out there in reality in a ridiculously obvious way.
Am I somehow talking about something entirely different than what Eliezer was talking about? Or should I complexificationafize my vocabulary to seem more academic? English isn't my first language after all.
Here is another example that my dad, my brother and I came up with when we were discussing probability.
Suppose there are 4 cards, an ace and 3 kings. They are shuffled and placed face down. I didn't look at the cards, my dad looked at the first card, and my brother looked at the first and second cards. What is the probability of the ace being one of the last 2 cards?
For me: 1/2.
For my dad: 0 if he saw the ace, otherwise 2/3.
For my brother: 0 if he saw the ace, otherwise 1.
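Those three numbers are easy to sanity-check with a throwaway C simulation (purely illustrative, the names are mine): deal the ace to a random position many times, condition on what each observer saw, and count how often the ace lands in the last two positions. The frequencies come out near 1/2, 2/3 and 1.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));
    long trials = 1000000;
    long me_hits = 0;                   /* ace in last 2, no cards seen            */
    long dad_trials = 0, dad_hits = 0;  /* conditioned on dad seeing a king        */
    long bro_trials = 0, bro_hits = 0;  /* conditioned on brother seeing two kings */

    for (long t = 0; t < trials; t++) {
        int ace_pos = rand() % 4;       /* position 0..3 of the ace after shuffle  */
        int in_last_two = (ace_pos >= 2);

        me_hits += in_last_two;         /* I looked at nothing                     */

        if (ace_pos != 0) {             /* dad looked at card 0 and saw a king     */
            dad_trials++;
            dad_hits += in_last_two;
        }
        if (ace_pos > 1) {              /* brother saw cards 0 and 1, both kings   */
            bro_trials++;
            bro_hits += in_last_two;
        }
    }

    printf("me:      %.3f (expect 0.500)\n", (double)me_hits / trials);
    printf("dad:     %.3f (expect 0.667)\n", (double)dad_hits / dad_trials);
    printf("brother: %.3f (expect 1.000)\n", (double)bro_hits / bro_trials);
    return 0;
}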
How can there be different probabilities of the same event? It is because p...
"Hard AI Future Salon" lecture, good talk. Most of the audience's questions however were very poor.
One more comment about the mind projection fallacy. Eliezer, you also have to keep in mind that the goal of a sci-fi writer is to make a compelling story which he can sell. Realism is only important insofar as it helps him achieve this goal. Agreed on the point that it's a fallacy, but don't expect it to change unless the audience demands/expects realism. http://tvtropes.org/ is full of tropes that illustrate stuff like that.
OK, time to play:
Q: Why am I confused by the question "Do you have free will?"?
A: Because I don't know what "free will" really means.
Q: Why don't I know what "free will" means?
A: Because there is no clear explanation of it using words. It's an intuitive concept. It's a feeling. When I try to think of the details of it, it is like I'm trying to grab slime which slides through my fingers.
Q: What is the feeling of "free will"?
A: When people talk of "free will" they usually put it thusly. If one has "...
This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:
Would you kill babies if it was intrinsically the right thing to do? Yes/No
If you circled "no", explain under what circumstances you would not do the right thing to do:
If you circled "yes", how right would it have to be, for how many babies? ___
What a horrible, horrible question. My answer is... what do you mean when you say "intrinsically the right thing to do"? The "right thing" according to whom? If it...
Lately I've been thinking about "mind-killing politics". I have come to the conclusion that this phenomenon is present to some degree in pretty much any kind of human communication where being wrong means you or your side loses status.
It is incorrect to assume that this bias can only occur when the topic involves government, religion, liberalism/conservatism or any other "political" topics. Communicating with someone who has a different opinion than you is sufficient for the "mind-killing politics" bias to start creeping in.
Th...
Eliezer Yudkowsky said: "It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn't magic, it won't let you cross a gap of months in an hour."
Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However, I'm still in "Hmmmm... scratches chin" mode. I will need to see said explanation before I will be in "Whoa! This is really cool!" mode.
Really? That's your concept of how to steer the future of...
@Richard Hollerith: Skipping all the introductory stuff to the part which tries to define FAI (I think), I see two parts. Richard Hollerith said:
"This vast inquiry[of the AI] will ask not only what future the humans would create if the humans have the luxury of [a)] avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also [b)] what future would be created by whatever intelligent agents ("choosers") the humans would create for the purpose of creating the future if the humans had the lux...
^^^^Thank you. However, merely putting the technique into the "toolbox" and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)
Maybe the comment section of a blog isn't even the best medium for playing Taboo. I don't know. I'm brainstorming productive ways/mediums to play Taboo (assuming the method itself leads to something productive).
The game is not over! Michael Vassar said: "[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration."
For the sake of not dragging out the argument too much, let's assume I know what an optimization process and a human is.
What are "shared strong attractors"? You can't use the words "shared", "strong", "attractor" or any synonyms.
What's a "high-level reflective aspiration"? You can't use the words "high-...
Eliezer said: "Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity."
What alternative model would you propose? I'm not quite ready yet to stop using words that imperfectly place objects into categories. I'll keep the fact that categories are imperfect in mind.
I really don't mean this in a condescending way. I'm just not sure what new belief this line of reasoning is supposed to convey.
I'm not really sure what the point of the post is.
Logic is always conditional. If the premises are true then the conclusion is true. That means we could reach the wrong conclusion with false premises.
Eliezer, are you saying we should stop or diminish our use of logic? Should I eat hemlock because I might be wrong about its lethality?
I agree that "rationality" should be the thing that makes you win but the Newcomb paradox seems kind of contrived.
If there is a more powerful entity throwing good utilities at normally dumb decisions and bad utilities at normally good decisions, then you can make any dumb thing look genius, because you are operating under different rules than the world we live in at present.
I would ask Alpha for help and do what he tells me to do. Alpha is an AI that is also never wrong when it comes to predicting the future, just like Omega. Alpha would examine Omega and ...
Wow! This post is particularly relevant to my life right now. On January 5th I start bootcamp, my first day in the military.