All of Jonii's Comments + Replies

Jonii00

Try as I might, I cannot find any reference to what the canonical way of building such counterfactual scenarios would be. The closest I could get was http://lesswrong.com/lw/179/counterfactual_mugging_and_logical_uncertainty/ , where Vladimir Nesov seems to simply reduce logical uncertainty to ordinary uncertainty, but this does not seem to have anything to do with building formal theories, proving actions, or any such thing.

To me, it seems largely arbitrary how an agent should act when faced with such a dilemma, all dependent on actually specifying what it means to tes... (read more)

0IlyaShpitser
I am not sure there is a clean story yet on logical counterfactuals. Speaking for myself only, I am not yet convinced logical counterfactuals are "the right approach."
Jonii00

I asked about these differences in my second post in this post tree, where I explained how I understood these counterfactuals to work. I explained as clearly as I could that, for example, calculators should work as they do in the real world. I explained this in the hope that someone would voice disagreement if I had misunderstood how these logical counterfactuals work.

However, modifying any calculator would mean that there cannot be, in principle, any AI or agent "smart" enough to detect that it was in a counterfactual. Our mental hardware that checks ... (read more)

2Vladimir_Nesov
Calculators are not modified, they are just interpreted differently, so that when trying to answer the question of what happens in a certain situation (containing certain calculators etc.) we get different answers depending on what the assumptions are. The situation is the same, but the (simplifying) assumptions about it are different, and so simplified inferences about it are different as well. In some cases simplification is unavoidable, so that dependence of conclusions on assumptions becomes an essential feature.
1cousin_it
My current understanding of logical counterfactuals is something like this: if the inconsistent formal theory PA+"the trillionth digit of pi is odd" has a short proof that the agent will take some action, which is much shorter than the proof in PA that the trillionth digit of pi is in fact even, then I say that the agent takes that action in that logical counterfactual. Note that this definition leads to only one possible counterfactual action, because two different counterfactual actions with short proofs would lead to a short proof by contradiction that the digit of pi is odd, which by assumption doesn't exist. Also note that the logical counterfactual affects all calculator-like things automatically, whether they are inside or outside the agent. That's an approximate definition that falls apart in edge cases, the post tries to make it slightly more exact.
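A rough sketch of that proof-length criterion in code, purely as illustration: the proof search below is faked with a made-up table of shortest-proof lengths, and all the names and numbers are hypothetical stand-ins, not part of the formalization the comment describes (which would enumerate actual proofs in PA).

    # Toy sketch, assuming a hypothetical table of shortest-proof lengths.
    SHORTEST_PROOF = {
        ("PA + 'digit is odd'", "agent() == 'pay'"): 1_000,           # made-up length
        ("PA", "the trillionth digit of pi is even"): 1_000_000_000,  # made-up length
    }

    def provable_within(theory, statement, bound):
        """Stand-in for a real proof enumerator: True if `theory` proves
        `statement` with a proof of length <= bound (lengths are invented)."""
        length = SHORTEST_PROOF.get((theory, statement))
        return length is not None and length <= bound

    def counterfactual_action(actions, bound=1_000_000):
        theory = "PA + 'digit is odd'"  # the inconsistent counterfactual theory
        short = [a for a in actions
                 if provable_within(theory, "agent() == %r" % a, bound)]
        # Two different actions with short proofs would give a short proof by
        # contradiction that the digit is odd, which was assumed not to exist,
        # so at most one action can survive.
        assert len(short) <= 1
        return short[0] if short else None

    print(counterfactual_action(["pay", "refuse"]))  # -> 'pay' in this toy setup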
Jonii00

Well, to be exact, your formulation of this problem has pretty much left this counterfactual entirely undefined. The naive approximation, that the world is just like ours and Omega simply lies in the counterfactual, would not contain such weird calculators that give you wrong answers. If you want to complicate the problem by saying that some specific class of agents has a special class of calculators that one would usually expect to work in a certain way, but which actually work differently, well, so be it. That is, however, just a free-floating parameter you have left unspecified and which, unless stated otherwise, should be assumed not to be the case.

0cousin_it
Hmm, no, I assumed that Omega would be using logical counterfactuals, which are pretty much the topic of the post. In logical counterfactuals, all calculators behave differently ;-) But judging from the number of people asking questions similar to yours, maybe it wasn't a very transparent assumption...
Jonii00

Yes, those agents you termed "stupid" in your post, right?

2cousin_it
The smart ones too, I think. If you have a powerful calculator and you're in a counterfactual, the calculator will give you the wrong answer.
Jonii20

After asking about this on #LW irc channel, I take back my initial objection, but I still find this entire concept of logical uncertainty kinda suspicious.

Basically, if I'm understanding this correctly, Omega is simulating an alternate reality which is exactly like ours, where the only difference is that Omega says something like "I just checked if 0=0, and it turns out it's not. If it was, I would've given you moneyzzz (iff you would give me moneyzzz in this kind of situation), but now that 0!=0, I must ask you for $100." Then the agent notices,... (read more)

2cousin_it
Note that the agent is not necessarily able to detect that it's in a counterfactual, see Nesov's comment.
Jonii10

You lost me at the part:

In Counterfactual Mugging with a logical coin, a "stupid" agent that can't compute the outcome of the coinflip should agree to pay, and a "smart" agent that considers the coinflip as obvious as 1=1 should refuse to pay.

The problem is that I see no reason why the smart agent should refuse to pay. Both the stupid and the smart agent know with logical certainty that they just lost. There's no meaningful difference between being smart and stupid in this case, that I can see. Both, however, like to be offered such bets, where lo... (read more)

2cousin_it
Note that there's no prior over Omega saying that it's equally likely to designate 1=1 or 1≠1 as heads. There's only one Omega, and with that Omega you want to behave a certain way. And with the Omega that designates "the trillionth digit of pi is even" as heads, you want to behave differently.
Jonii30

This actually was one of the things inspiring me to write this post. I was wondering if I could make use of the LW community to run such tests, because it would be interesting to get to practice these skills with consent, but trying to devise such tests stumped me. It's surprisingly difficult to come up with a goal that is genuinely hard to achieve in any not-overtly-hostile social context. Laborious, maybe, but that's not the same thing. I just kind of generalized from this that it should actually be pretty easy to run with any consciously named goal and... (read more)

4A1987dM
Have you heard about the AI box experiment? Eliezer won it twice, but ISTR he said he felt terrible afterwards because he wasn't sure the other party's consent was informed enough.
Jonii-30

That's a nice heuristic, but unfortunately, it's easy to come up with cases where this heuristic is wrong. Say, people want to play a game; I'll use chess for availability, not because it best exemplifies this problem. If you want to have a fun game of chess, ideally you'd hope to have roughly equal matches. If 9 out of 10 players are pretty weak, just learning the rules, and want to play and have fun with it, then you, the 10th player, a strong club player, being an outlier, cannot partake because you are too good (with chess, you could maybe try giving yo... (read more)

4Decius
If your goal is to win at chess, then by all means dominate the noob chess league. If your goal is to play challenging games, find a group of people at your level or somewhat better than you. If your goal is to make friends, the chess is incidental.
drethelin230

In general, the very skilled player would have gotten that way by being smart AND smashing a ton of less skilled players. Trying to say "I can't go to chess club because I would just defeat everyone and it wouldn't be fair" is ridiculous, and even more so when you've never actually won a tournament. You never hear the story "I was a social butterfly, the most popular person in school, but then I decided that was abusing my powers and now I'm alone. Yay!" On the other hand, "I was alone and sad and nerdy, but then I practiced social skills and now I have a ton of friends and am the most popular person in school. Yay!" is, if not very common, a story that I've heard way more than once.

Jonii20

Oh, yes, that is basically my understanding: We do social manipulation to the extent it is deemed "fair", that is, to the point it doesn't result in retaliation. But at some point it starts to result in such retaliation, and we have this "fairness"-sensor that tells us when to retaliate or watch out for retaliation.

I don't particularly care about manipulation that results in obtaining salt shaker or a tennis partner. What I'm interested in is manipulation you can use to form alliances, make someone liable to help you with stuff you want... (read more)

0OrphanWilde
Manipulation of the kind you're talking about is going to involve flexibility of self - you have to be capable of being the person they would consider a friend, a lover, a confidant. This is significantly harder than it sounds, especially over long periods of time, and you run the very real risk of becoming the thing you only intended to pretend to be. This should be a matter of concern in serious matters - the necessity to be the person they need you to be means you are manipulated by them as a necessary element to manipulating them. There's a reason countries tend to monitor the mental health of their spies pretty closely.
Jonii-20

This I agree with completely. However, its sounding like a power fantasy doesn't mean it's wrong or mistaken.

6buybuydandavis
But one should question the truth of self-flattering rationalizations. I think, in general, it is just false that most of us currently possess such social skill. Have you, in fact, demonstrated such social skill on demand to test that theory?
Jonii-10

True. However, it's difficult to construct culturally neutral examples that are not obvious. The ones that pop into my mind are of the kind "it's wrong to be nice to an old, really simple-minded lady because that way you can make her rewrite her will to your benefit", or "It's all right to try to make your roommate do the dishes as many times as you possibly can, as long as you're both on an equal footing in this 'competition' of 'who can do the least dishes'".

I'm not sure how helpful that kind of example is.

Jonii10

This strikes me as massively confused.

Keeping track of cancelled values is not required as long as you're working with a group, that is, a set (like the reals) and an operation (like addition) that follow the kind of rules that addition on the integers and multiplication on the non-zero reals do. If you are working with a group, there's no sense in which those cancelled-out values are left dangling. Once you cancel them out, they are gone.

http://en.wikipedia.org/wiki/Group_%28mathematics%29 <- you can check group axioms here, I won't list them here.
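For concreteness, here is the standard cancellation derivation from the group axioms (added for illustration, not part of the original comment): if $ab = ac$ in a group with identity $e$, then

$$a^{-1}(ab) = a^{-1}(ac) \;\Rightarrow\; (a^{-1}a)\,b = (a^{-1}a)\,c \;\Rightarrow\; eb = ec \;\Rightarrow\; b = c,$$

so once the $a$'s are cancelled, no record of them needs to be kept.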

Then again,... (read more)

Jonii30

Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay, and it turns out he didn't have all that equipment for torture, you could just sue him and get that $5 back, since he defrauded you. If he starts making up rules about how you can never ever tell anyone else about this, or later check the validity of his claim, or else he'll kidnap you, you should, for game-theoretical reasons, not abide, since being the kind of agent that accepts those terms makes you a valid target for such frauds. The reasons for not abiding are the same as for single-boxing.

Jonii20

Actually, there is such a law. You cannot reasonably start, when you are born into this world, naked, without any sensory experiences, expecting that the next bit you experience is much more likely to be 1 than 0. If you encounter one hundred zillion bits and they are all 1, you still wouldn't assign a 1/3^^^3 probability to the next bit you see being 0, if you're rational enough.
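One standard way to make that claim quantitative (my gloss, using Laplace's rule of succession with a uniform prior on the unknown bit frequency, not an argument from the original comment): after observing $n$ ones and no zeros,

$$P(\text{next bit} = 0 \mid n \text{ ones}) = \frac{0 + 1}{n + 2} = \frac{1}{n + 2},$$

which shrinks only like $1/n$ and never gets anywhere near $1/3\uparrow\uparrow\uparrow 3$.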

Of course, this is muddied by the fact that you're not born into this world without priors and all kinds of stuff that weighs on your shoulders. Evolution has done billions of ye... (read more)

Jonii20

I don't think you need to change the domain name. For marketability, you might want to have the parts named so that stuff within your site becomes a brand in itself, so greatplay.net becomes associated with " utilitarianism", " design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff i won't work with: ". I can't remember the domain name, but I know that whenever I want to read about a nasty chemical, I google that phrase.

Jonii10

yes. yes. i remember thinking "x + 0 =". after that it gets a bit fuzzy.

Jonii70

Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point here a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It is an axiom of group theory that there are inverse elements, that is, for each a there is an x such that ax = 1. Our notation for x here would be 1/a, and it's easy to see why a · 1/a = 1. Division is defined by these inverse elements: a/b is calculated by a... (read more)
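In the reals, where addition and multiplication interact, the standard one-line argument for why $0$ can have no multiplicative inverse (a textbook step, not from the original comment) uses distributivity: for any candidate $x$,

$$0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x \;\Rightarrow\; 0 \cdot x = 0 \neq 1,$$

so no $x$ can play the role of $1/0$, and division by zero stays undefined.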

3[anonymous]
Excellent explanation, thank you. I've been telling everyone I know about your resolution to my worry. I believe in math again. Maybe you can solve my similarly dumb worry about ethics: If the best life is the life of ethical action (insofar as we do or ought to prefer to do the ethically right thing over any other comforts or pleasures), and if ethical action consists at least largely in providing and preserving the goods of life for our fellow human beings, then if someone inhabited the limit case of the best possible life (by permanently providing immortality, freedom, and happiness for all human beings), wouldn't they at the same time cut everyone else off from the best kind of life?
0Watercressed
I think you mean x + 0 = x
Jonii70

My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.

Jonii20

The Sequences contain a rational world view. Not a comprehensive one, but still, they give some idea of how to avoid thinking stupidly and how to communicate with other people who are also trying to find out what's true and what's not. They give you words by which you can refer to problems in your world view, meta-standards to evaluate whether whatever you're doing is working, etc. I think of them as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out for yourself what works, without reading manuals, but reading a manual before you go makes you better prepared.

8David_Gerard
That's asserting the thing that the original question asked to examine: how do we know that this is a genuinely useful manual, rather than something that reads like the manual and makes you think "gosh, this is the manual!" but following it doesn't actually get you anywhere much? What would the world look like if it was? What would the world look like if it wasn't? Note that there are plenty of books (particularly in the self-help field) that have been selected by the market for looking like the manual to life, at the expense of actually being the manual to life. This whole thread is about reading something and going "that's brilliant!" but actually it doesn't do much good.
0HBDfan
[struck]
Jonii10

The interaction of this simulated TDT and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.

4cousin_it
Maybe I'm missing something, but the formalization looks easy enough to me...

    def tdt_utility():
        if tdt(tdt_utility) == 1:
            box1 = 1000
            box2 = 1000000
        else:
            box1 = 1000
            box2 = 0
        if tdt(tdt_utility) == 1:
            return box2
        else:
            return box1 + box2

    def your_utility():
        if tdt(tdt_utility) == 1:
            box1 = 1000
            box2 = 1000000
        else:
            box1 = 1000
            box2 = 0
        if you(your_utility) == 1:
            return box2
        else:
            return box1 + box2

The functions tdt() and you() accept the source code of a function as an argument, and try to maximize its return value. The implementation of tdt() could be any of our formalizations that enumerate proofs successively, which all return 1 if given the source code to tdt_utility. The implementation of you() could be simply "return 2".
Jonii20

I got similar results when I tried the more nondescript "focus on your breathing, if you get lost in your thoughts, go back to breathing, try to observe what happens in your mind" style of meditation. Also, I got an intense feeling of euphoria on my third try, and felt like I was almost passing out under the storm of weird thoughts flowing in and out. That made me a bit scared of meditation, but this post series managed to scare me a whole lot more.

Jonii20

This probably doesn't interest many of you, but I'd be curious to hear any suggestions for inspiring works of fiction with hypercompetent characters in them. I watched the Bourne trilogy in the middle of reading this post, and now I want more! :)

My own ideas:

Live:
- James Bond (Casino Royale / Quantum of Solace / Skyfall)
- House MD
- Sherlock

Anime:
- Death Note
- Golden Boy

4arundelo
The following all happen to be about hypercompetent thinkers. How inspirational they are varies.

* Limitless. If you like the Bourne movies you'll like this. My favorite scene is when Eddie, the main character, is on the phone with his girlfriend while she is being pursued by a bad guy. It is a fun little dramatization of brains being mightier than brawn. (For me the main defect of the movie was that despite his chemically enhanced hyperintelligence Eddie does some stupid things in order to keep the plot wheels turning.)
* Understand by Ted Chiang -- available in its entirety online! This novelette is kind of a takeoff on Flowers for Algernon. Unlike in Limitless, the protagonist doesn't do anything stupid, yet the story manages to be interesting.
* R. Scott Bakker's Prince of Nothing trilogy. I started this on Yvain's recommendation but somewhere in the second book my interest flagged or I got distracted by other books or whatever. I'd still like to finish it sometime. From what I've read of it, Kellhus (a super-smart rationalist who is also basically a ninja) is kind of an antihero, or at least morally ambiguous. He's very good at achieving his goals, but I don't know whether his goals are worth achieving.

Edit -- here are a couple of other things:

* The main character in Frank Conroy's Body and Soul is a musician with a lot of native talent (who also puts in the hours). I recently typed out a favorite passage.
* A bit different from the above stuff, but Wodehouse's Jeeves stories are laugh-out-loud funny and feature a hypercompetent valet. (I know these from the stories rather than the TV adaptations, but the latter feature Hugh Laurie and Stephen Fry.)
Jonii20

I do think it is good to have some inspirational posts here that don't rely that much on actual argumentation but rather paint an example picture of where you could be when using rationality, of what rationality could look like. There are dangers to that, but still, I like these.

Jonii00

I guess the subject is a bit touchy now.

Jonii10

I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.

Upvoted

Jonii110

Is there any data supporting the idea that dvorak/colemak/some other new keyboard layout is actually better than qwerty? Like, actual data collected by doing research on actual people who type stuff, on how their layout of choice affects their health and typing speed. I do know that you get figures like "on average your fingers travel twice the distance if you type on qwerty compared to some other layout", but is there actual data from actual typists?

Jonii40

I've been practicing dvorak for about a month. Not much since I got above 10 wpm (1 hour a day for a week), but I've used it whenever there has been typing to be done. I've gotten to 40 wpm, and I started with a 70 wpm qwerty speed. Incidentally, I've also forgotten how to type with qwerty.

I'd suggest you find a week when you are free to use about an hour of your time every day to practice dvorak and don't really need to type anything, and then maybe another week when you are not under any stress about your typing speed. After that, you should be able to type well... (read more)

Jonii00

Welcome, it's fun to have you here.

So, the next thing: I think you should avoid this religion topic here. I mean, you are allowed to continue with it, but I fear you are going to wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change their opinions. Learning new things is refreshing; discussions about religion rarely are.

Admittedly, I think that there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone... (read more)

1TimS
Hmm
Jonii00

"Ylioppilasaukio 5"? I can't find Cafe Picnic at an address like that

Jonii00

I'm interested, and most likely I'll be there.

Jonii00

If you make a copy, then inform both the original and the copy of their states ("You're the original", "You're the first copy"), and then proceed to make a new copy of the original, information equivalence exists only between copy number 2 and the original, bringing it back to 1/2, 1/4, 1/4.

0cousin_it
Yes, I know :-)
Jonii70

Even if a majority of readers participated in these meetups every time, it wouldn't matter. Quoting the about post: ""Promoted" posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance."

Meetup posts do not contain new, important, argumentative content. They are meta-level discussion, meta that is bit by bit trying to take over the whole of LW. I don't want an LW that exists for posts about LW. Meetup posts are not the only thing driving LW towards uselessness, but as far as I can tell, having those posts on the front page is by far the most visible and obvious warning sign.

6bisserlis
I disagree. Meetups bring out lurkers and infrequent posters, such as myself, and make LessWrong more than just some intellectual exercise online. At the last Berkeley meetup I was in a discussion that touched on the following two issues with the same group of people.

* Most lurkers admit to not reading the discussion section, but number at least half of any meetup group. (Maybe, but doubtfully, this is Berkeley/SF Bay specific.)
* We were generally interested in some recent comments that had been made about the NYC meetups incorporating an instrumental rationality/support aspect, and some present desired more frequent and more specifically social meetups.

I admit that a rather large and disproportionate amount of front page posts are about meetups now, so here's a solution that I think may please everyone. Some designated meetup Super Organizer collects info on meetups planned for the next month, then publishes a monthly front page promoted post with info on all upcoming global meetups. Individual meetups could still be posted in the Discussion section so as to have their own threads, and obviate the need for a designated Super Organizer, instead being handled something like Open Threads where one is posted anew as necessary.

Edit: And as an added bonus, no one has to hack any code.
Jonii00

So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of punishments you're about to receive? I'm not sure that's good.

3TheOtherDave
Can you say more about why you don't think it's good? I can think of several different reasons, some more valid than others, and the context up to this point doesn't quite constrain them.
Jonii10

Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.

Jonii50

I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard. I'm still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems just plain silly.

I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care about it, and it seems that my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony for reasons no one is allowed to tell.

1Jonii
Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.
Jonii30

Yes, but that incomplete one means that his power can't override powers others have. Even if he could, after paying attention to Allirea, understand her power, it doesn't follow from what we know of his powers so far that he could pay attention to her any more than any other person there could. Even some sort of power-detection field would fail to reveal more than "there is a vampire that diverts attention paid to it in that general direction", if we assume it overrides her ability, which would make Eleazar severely handicapped in a fight anyway.

Yeah, and I wanted to say that you're treating the characters you create in an awful and cruel way. Stop that. They should be happy at least once in a while :p

4Alicorn
Oh yes it does. Everything Bella blocks, she blocks completely, unconsciously, whether or not she knows there's anything to block, one hundred percent of the time - except Eleazar. In Allirea's case, she seems to Eleazar like the least important person there, and would probably compare unfavorably with a squirrel if one should uncharacteristically wander by. But he can notice her, can remember that she is present, and can take actions dependent on that knowledge. And one of the things he can remember about her is what she does, which gives him enough reason to mistrust this evaluation of her that he can clobber her in a fight. (Vampire v. half-vampire = no contest, just no contest, unless the half-vampire is Allirea and her power is in full effect against the vampire, even if the vampire is not very good at fighting.) Society for the Prevention of Cruelty to Fictional Characters, are we now? Sorry, I don't write that way. Happy endings aren't off the menu, necessarily, but happy middles are not my bag.
Jonii10

Chapter 11:

Is the Allirea + Eleazar thing canon? It sure doesn't seem to follow from what we've seen before, unless Eleazar lied to Bella.

2Alicorn
Although Nahuel has sisters in canon, their details are made up, including Allirea's power and therefore how Allirea interacts with Demetri, Eleazar, et al. Note that Eleazar did get a reading off Bella, albeit a brief and incomplete one.
Jonii20

Mind explaining why? I don't see any reason it's any more true than it is false.

0shokwave
Hmm. I was going to say "assign it the value of true, and it returns true. Assign it the value of false, and it returns a contradiction", but on reflection that's not the case. If you assign it the value of false, then the claim becomes ¬(A is true), so it returns false. So I was wrong - the proposition is a null proposition, it simply returns the truth value you assign to it. I don't know if ambiguous is the best way to describe it, but 'true' certainly isn't. edit: perhaps cata's 'trivial' is a good word for it.
0wedrifid
Interesting. If I infer correctly... Tordmor messed up and wrote "This proposition is true" when he probably would have wanted to have referred to "This proposition is false". Shokwave correctly notes that "This proposition is true" isn't ambiguous at all, it essentially returns the value True. Jonii also correctly observes that the person speaking the claim "This proposition is true" could be lying or mistaken (to the extent that the statement has bearing on facts external to the phrase). Apparent disagreement with Shokwave is likely to be due to ambiguity in the casual English representations of logical dereferencing.
Jonii00

This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values bivalent functions return so they shouldn't be values in our logic.

So the sentence "The sentence 'Everything written on the board in Room 33 is either false or meaningless.' is meaningless" is not true?

0Jack
Sure it's true. That's just Meaningless(Sliar())... I guess I don't see why the selected portion would imply otherwise.
Jonii00

Yes, humans performing outstandingly well on this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are kind of complex machines, so if we start with a perfectly rational LW reader and a paperclip maximizer in a one-shot PD with a randomized payoff matrix, what is the smallest set of handicaps we need to give them to reach this super-optimal solution? At first, I thought we could even remove the randomization altogether, but I think that makes the whole problem more ambiguous.

Jonii-30

Becoming a person doesn't seem like something that you can do free of cost. There seems to be a lot of complexity hidden in that "Become a person" part.

Jonii00

Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You'd need to find it in order to grasp what it means to have a being that lacks moral value, and "both ideas" refers to the distinct ways of explaining what sort of paperclip maximizer we're talking about.

0Vladimir_Nesov
This I guessed. Why? "No moral value" has a clear decision-theoretic meaning, and referring to particular patterns that have moral value doesn't improve on that understanding. Also, the examples of things that have moral value are easy to imagine. This I still don't understand. You'd need to name two ideas. My intuition at grasping the intended meaning fails me often. One relevant idea that I see is that the paperclip maximizer lacks moral value. What's the other, and how is it relevant?
Jonii00

But I'd think that if I only said "it doesn't have moral value in itself", you'd still have to go back through similar steps to find the property cluster to which we assign value. I tried to convey both ideas by using the word soul and claiming a lack of moral value.

0Vladimir_Nesov
What property cluster/why I'd need to find it/which both ideas?
Jonii00

It requires us to know what sort of utility function the other player has, at the very least, and even then the result might be, at best, mutual defection or, against superrational players, mutual cooperation.

4Snowyowl
Cooperation against superrational players is only optimal if you are superrational too, or if they know how you are going to play. If you know they are superrational but they don't know you aren't, you should defect.
Jonii00

And? If you have multiple contradictory wishes about what to do next, some of them are bound to go unfulfilled. CEV and negotiation are just ways to decide which ones.

3Perplexed
Yes, and until someone explains how CEV works, I will prefer negotiation. I understand it, I think it generates the best, fairest results, etc. With AI assistance, some of the communication barriers can be lowered and negotiation will become an even better tool. CEV, on the other hand, is a complete mystery to me.
Jonii00

Why do you think I lose?

Because there are a lot more of those with values totally different from yours, which made the CEV optimize for a future that you didn't like at all. If you're negotiating with all those people, why would they give in to you any more than CEV would optimize for you?

2Perplexed
Hmmm. That is not the scenario I was talking about. I was imagining that there would be a large number of people who would feel disenfranchised because their values were considered incoherent (or they were worried that their values might be thought incoherent). This coalition would seize political control of the CEV creation bureaucracy, change "coherent extrapolated" to "collective expressed" and then begin the negotiation process.
Jonii00

So you're bound to end up losing in this game, anyway, right? Negotiation in itself won't bring you any additional power over the coherent extrapolated volition of humanity to change the future of the universe. If others think very much unlike you, you need to overpower them to bring your values back to the game or perish in the attempt.

2Perplexed
I don't understand your thinking here at all. Divergent values are not a barrier to negotiation. They are the raw material of negotiation. The barrier to negotiation is communication difficulty and misunderstanding. Why do you think I lose?
Jonii10

The above is a caricature of 'coherence' as presented in the May 2004 document. If someone else can provide a better interpretation, that would be welcome.

It seemed accurate to me. Also, I didn't find any problems with it that seemed frightening. Was it supposed to be problematic in some way?

2Perplexed
You mean other than being politically naive and likely to get a lot of people killed? You are asking what I have against it personally, if it should somehow come to pass? Well, to be honest, I'm not sure. I usually try to base my important opinions on some kind of facts, on 'official' explanations. But we don't have those here. So I am guessing. But I do strongly suspect that my fundamental values are very different than those of the author of CEV. Because I am not laboring under the delusion that everyone else is just like me, ... only stupider. I know that human values are diverse, and that any kind of collective set of values must be negotiated, rather than somehow 'extrapolated'.
Jonii40

Just an attempt to make it clear that we're dealing with something like an intelligent calculator here, with nothing in it that we'd find interesting or valuable in itself. Setting this up as the true PD.

0JoshuaZ
Is that even well-defined? If I assert that I am a philosophical zombie in every sense of the term (lacking a soul, qualia, and whatever other features you find relevant), does that mean you don't care about my losses? Observers aren't ontologically fundamental entities, which is where you may be running into trouble.