
Comment author: cousin_it 16 September 2013 08:19:53PM *  1 point [-]

My current understanding of logical counterfactuals is something like this: if the inconsistent formal theory PA+"the trillionth digit of pi is odd" has a short proof that the agent will take some action, much shorter than the proof in PA that the trillionth digit of pi is in fact even, then I say that the agent takes that action in that logical counterfactual.

Note that this definition leads to only one possible counterfactual action, because two different counterfactual actions with short proofs would combine into a short proof by contradiction that the digit of pi is even, which by assumption doesn't exist. Also note that the logical counterfactual affects all calculator-like things automatically, whether they are inside or outside the agent.

That's an approximate definition that falls apart in edge cases; the post tries to make it slightly more exact.
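A minimal sketch of that criterion, assuming a hypothetical proof_length oracle (the length of the shortest proof of a statement in a given theory); nothing here is implementable as written, it just makes the comparison in the definition concrete:

```python
# Illustrative sketch only.  proof_length(theory, statement) is a hypothetical
# oracle returning the length of the shortest proof of `statement` in `theory`
# (float("inf") if there is none); it does not correspond to any real prover.

def counterfactual_action(actions, proof_length):
    # Shortest PA proof that the trillionth digit of pi is in fact even.
    true_proof_len = proof_length("PA", "the trillionth digit of pi is even")

    candidates = []
    for a in actions:
        # Shortest proof, in the inconsistent theory PA + "digit is odd",
        # that the agent takes action a.
        counterfactual_len = proof_length(
            'PA + "the trillionth digit of pi is odd"',
            "the agent takes action " + str(a),
        )
        # "Much shorter" in the definition above, simplified to a strict comparison.
        if counterfactual_len < true_proof_len:
            candidates.append(a)

    # Uniqueness argument: two distinct candidates would combine into a short
    # proof that the digit is even, which is assumed not to exist, so at most
    # one action survives.
    assert len(candidates) <= 1
    return candidates[0] if candidates else None
```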

Comment author: Jonii 16 September 2013 08:41:19PM 0 points [-]

Try as I might, I cannot find any reference to what the canonical way of building such counterfactual scenarios is. The closest I could get was http://lesswrong.com/lw/179/counterfactual_mugging_and_logical_uncertainty/, where Vladimir Nesov seems to simply reduce logical uncertainty to ordinary uncertainty, but that does not seem to have anything to do with building formal theories and proving actions or any such thing.

To me, it seems largely arbitrary how an agent should act when faced with such a dilemma; it all depends on actually specifying what it means to test a logical counterfactual. If you don't specify what it means, anything could happen as a result.

Comment author: cousin_it 16 September 2013 07:51:59PM *  0 points [-]

Hmm, no, I assumed that Omega would be using logical counterfactuals, which are pretty much the topic of the post. In logical counterfactuals, all calculators behave differently ;-) But judging from the number of people asking questions similar to yours, maybe it wasn't a very transparent assumption...

Comment author: Jonii 16 September 2013 08:08:34PM *  0 points [-]

I asked about these differences in my second post in this post tree, where I explained how I understood these counterfactuals to work. I explained as clearly as I could that, for example, calculators should work as they do in the real world. I did this in the hope that someone would voice disagreement if I had misunderstood how these logical counterfactuals work.

However, modifying every calculator would mean that there cannot, in principle, be any AI or agent "smart" enough to detect that it was in a counterfactual. Our mental hardware that checks whether the logical coin should have come up heads or tails is just as much a calculator as any computer, and again, there does not seem to be any reason to assume Omega leaves some calculators unchanged while changing the results of others.

Unless this is just assumed to happen, with some silently assumed cutoff point at which calculators become so internal that they are left unmodified.

Comment author: cousin_it 16 September 2013 07:16:26PM *  1 point [-]

The smart ones too, I think. If you have a powerful calculator and you're in a counterfactual, the calculator will give you the wrong answer.

Comment author: Jonii 16 September 2013 07:45:40PM 0 points [-]

Well, to be exact, your formulation of this problem has pretty much left this counterfactual entirely undefined. The naive approximation, that the world is just like ours and Omega simply lies in the counterfactual, would not contain such weird calculators that give you wrong answers. If you want to complicate the problem by saying that some specific class of agents has a special class of calculators that one would usually expect to work in a certain way, but which actually work in a different way, well, so be it. That is, however, just a free-floating parameter you have left unspecified and that, unless stated otherwise, should be assumed not to be the case.

Comment author: cousin_it 16 September 2013 07:06:22PM 1 point [-]

Note that the agent is not necessarily able to detect that it's in a counterfactual; see Nesov's comment.

Comment author: Jonii 16 September 2013 07:10:07PM 0 points [-]

Yes, those agents you termed "stupid" in your post, right?

Comment author: cousin_it 16 September 2013 11:46:26AM *  1 point [-]

Note that there's no prior over Omega saying that it's equally likely to designate 1=1 or 1≠1 as heads. There's only one Omega, and with that Omega you want to behave a certain way. And with the Omega that designates "the trillionth digit of pi is even" as heads, you want to behave differently.

Comment author: Jonii 16 September 2013 06:15:47PM 1 point [-]

After asking about this on the #LW IRC channel, I take back my initial objection, but I still find this entire concept of logical uncertainty kinda suspicious.

Basically, if I'm understanding this correctly, Omega is simulating an alternate reality which is exactly like ours, where the only difference is that Omega says something like "I just checked whether 0=0, and it turns out it's not. If it was, I would've given you moneyzzz (iff you would give me moneyzzz in this kind of situation), but now that 0!=0, I must ask you for $100." Then the agent notices, in that hypothetical situation, that actually 0=0, so Omega is lying, so he is in the hypothetical, and thus he can freely give moneyzzz away to help the real you. Then, because some agents can't tell for all possible logical coins whether they are being lied to, they might have to pay real moneyzzz, while sufficiently intelligent agents might be able to cheat the system if they are able to notice when they are lied to about the state of the logical coin.
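A toy sketch of that detection step (purely illustrative; compute_truth is a hypothetical stand-in for whatever logical facts a given agent can actually evaluate, not a model of a real agent):

```python
# Rough sketch of the "can I catch Omega lying?" check described above.
# compute_truth is a hypothetical stand-in for whatever logical facts this
# particular agent can actually evaluate.

def in_counterfactual(omega_claim, statement, compute_truth):
    """Return True if the agent can tell that Omega's claim about the
    logical coin is false, i.e. that it is inside the simulated hypothetical."""
    try:
        actual = compute_truth(statement)   # e.g. evaluates "0 == 0"
    except NotImplementedError:
        # "Stupid" agent: the statement is beyond its reach, so it cannot
        # distinguish the hypothetical from reality.
        return False
    return actual != omega_claim

# A "smart" agent whose compute_truth can evaluate the statement catches the
# lie: in_counterfactual(omega_claim=False, statement="0 == 0",
# compute_truth=eval) returns True, so (per the story above) it knows the
# moneyzzz it hands over isn't real.
```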

I still don't understand why a stupid agent would want to make a smart AI that did pay. Also, there are many complications that restrict the decisions of both smart and stupid agents: given the argument I've given here, stupid agents might still prefer not paying, and smart agents might prefer paying, if they gain some kind of insight into how Omega chose these logical coins. Also, this logical-coin problem seems to me like a not-too-special special class of Omega problems where some group of agents is able to detect whether they are in counterfactuals.

Comment author: Jonii 16 September 2013 08:02:45AM 1 point [-]

You lost me at this part:

In Counterfactual Mugging with a logical coin, a "stupid" agent that can't compute the outcome of the coinflip should agree to pay, and a "smart" agent that considers the coinflip as obvious as 1=1 should refuse to pay.

The problem is that I see no reason why the smart agent should refuse to pay. Both the stupid and the smart agent know with logical certainty that they just lost. There's no meaningful difference between being smart and stupid in this case, that I can see. Both, however, like to be offered such bets, where a logical coin is flipped, so they pay.

I mean, we all agree that a "smart" agent that refused to pay here would receive $0 if Omega flipped a logical coin asking whether the 1st digit of pi was an odd number, while the "stupid" agent would get $1,000,000.
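For concreteness, here is the expected-value arithmetic behind "both like to be offered such bets", using the payoffs mentioned in this thread and treating the unknown logical fact as a fair coin (which is exactly the assumption under dispute):

```python
# Toy expected-value calculation for committing to pay vs. refusing,
# using the $100 / $1,000,000 figures from this thread.  Treating the
# logical coin as 50/50 is the contested assumption, not an established fact.

PAY_COST = 100          # what Omega asks for when the coin goes against you
REWARD = 1_000_000      # what Omega gives paying agents when it goes your way
P_WIN = 0.5             # the unknown logical fact treated like a fair coin

ev_committed_to_pay = P_WIN * REWARD + (1 - P_WIN) * (-PAY_COST)   # 499950.0
ev_refusing = 0.0

print(ev_committed_to_pay, ev_refusing)
```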

Comment author: Michelle_Z 17 June 2013 02:05:33AM 18 points [-]

Test it, then. Run an experiment. Find a group of people (don't use the excuse that finding groups of people is hard), and attempt to do just what you said. If it works, congratulations. You're the next dark lord. If it doesn't work, you're probably wrong. (And don't use the excuse that the people just happened to all be immune to your powers.)

While reading the above, if your brain attempted either of those excuses, you're probably suffering from belief in belief.

Comment author: Jonii 17 June 2013 07:05:03AM 2 points [-]

This actually was one of the things inspiring me to write this post. I was wondering if I could make use of the LW community to run such tests, because it would be interesting to get to practice these skills with consent, but trying to devise such tests stumped me. It's actually pretty difficult to come up with a goal that is genuinely hard to achieve in any not-overtly-hostile social context. Laborious, maybe, but that's not the same thing. I just kind of generalized from this that it should actually be pretty easy to pick any consciously named goal and achieve it, but that there must be some social inhibition against doing so.

The set of things that inspired me was wide and varied. That may be reflected in how the essay turned out... not as coherent as I'd have hoped.

Comment author: Xachariah 16 June 2013 09:55:33PM *  26 points [-]

Bullshit.

Never use "I'm too good at something to win" or "I only lose because other people are so bad". Those sort of explanations are never true. Not ever.

I don't know if there's some kind of word for this fallacy (maybe a relative of the Dunning-Kruger effect), but if your mind ever uses it in the future then you need to give your logic center a curbstomp in the balls. This sort of logic is ego protection bullshit. Hearing this explanation is the number one indicator that a person will never improve in a skillset.

How could they possibly get better if they think they already have the answer and it doesn't involve any work on their part?

Here's my alternate hypothesis. Manipulating people is hard and takes tons of practice. You haven't put in your 10,000 hours.

Edit: Also, you aren't getting downvoted because this belongs in the Open Thread. The downvotes are because you're wrapped in one of the most dangerous self-delusions that exist. It's even more insidious than religion in some ways because it can snake its way into any thought about any skillset. The good news is that you've given it voice and you can fight it. And I hope you do.

Comment author: Jonii 17 June 2013 12:38:29AM -2 points [-]

That's a nice heuristic, but unfortunately it's easy to come up with cases where it's wrong. Say people want to play a game; I'll use chess for availability, not because it best exemplifies this problem. If you want to have a fun game of chess, ideally you'd hope for roughly equal matches. If 9 out of 10 players are pretty weak, just learning the rules, and want to play and have fun with it, then you, the 10th player, a strong club player and an outlier, cannot partake because you are too good (with chess, you could maybe give up your queen as a handicap, or take a time handicap, to make games more interesting, but generally I feel that sort of trick still makes it less fun for all parties).

While there might be obvious reasons to suspect bias is at play, unless you want to ban ever discussing topics that might involve bias, the best way around it that I know of is to actually focus on the topic. The point that "whoa, you're probably biased if you think thoughts like this" is something I did take into consideration. I was still curious to hear LW's thoughts on the actual topic, though, not on whether LW thinks it's a bias-inducing topic. If you want me to add some disclaimer for other people, I'm open to suggestions. I was going to include one myself, basically saying "Failing socially in the way described here would at best be very, very weak evidence of you being socially gifted, intelligent, or whatever. The reasoning presented here is not peer-reviewed and may well contain errors." I did not, because I didn't want to add yet another shiny distraction from the actual point presented. I didn't think it would be needed, either.

Comment author: tim 16 June 2013 10:01:54PM *  1 point [-]

My issue with this argument is that you are implicitly claiming that social interaction --> manipulation. On the face of it this is probably more or less true. Most social interactions do involve (mild) manipulations such as suggesting an activity, asking someone to pass the [object], or telling a story to elicit sympathy/respect. However, you then claim that these types of manipulations are ones intelligent people "feel iffy about."

I'm certainly willing to accept that there are types of manipulation that make the manipulator feel guilty and could possibly cause social awkwardness. But I very much doubt the claim that most social interactions consist of these types of manipulation and that this is what leads to the social clumsiness some smart people exhibit.

Also, the evo-psych justification that "evolution has programmed us to have repulsion towards unfairly manipulating others" seems like a big stretch. I would actually expect the opposite to be true to the extent that your manipulations weren't blatant enough to trigger retaliation.

Comment author: Jonii 16 June 2013 10:32:43PM 2 points [-]

Oh, yes, that is basically my understanding: We do social manipulation to the extent it is deemed "fair", that is, to the point it doesn't result in retaliation. But at some point it starts to result in such retaliation, and we have this "fairness"-sensor that tells us when to retaliate or watch out for retaliation.

I don't particularly care about manipulation that results in obtaining a salt shaker or a tennis partner. What I'm interested in is manipulation you can use to form alliances, make someone liable to help you with stuff you want, make them like you, make them think of you as their friend or "senpai" for lack of a better term, or make them fall in love with you. What also works is getting them to have sex with you, to reveal something embarrassing about themselves, or otherwise become part of something they hold sacred. Pretending to be a god would fall into this category. I'm struggling to explain why I think manipulation in those cases is iffy; I think it has to do with that kind of interaction assuming that there are processes involved beyond self-regulation. With manipulation, you could bypass that, and in effect you would be lying about your alliance.

It is true many social interactions are not about anything deeper than getting the salt shaker. I kind of just didn't think of them while writing this post. I might need to clarify that point.

Comment author: Kawoomba 16 June 2013 07:01:30PM 11 points [-]

This seems more like some power-fantasy, along the lines of a kid standing lonely on the sidelines of a party and telling himself "I could control them all, but I don't, because I need to keep my power under control". There are plenty of intelligent people who socialize just great, and use those relationships to their benefit.

Comment author: Jonii 16 June 2013 07:27:06PM -1 points [-]

This I agree with completely. However, its sounding like a power fantasy doesn't mean it's wrong or mistaken.
