
In response to comment by Nornagest on Jokes Thread
Comment author: Alsadius 28 July 2014 03:25:26PM 0 points [-]

Pretty much. I've actually gotten in a debate with a Randian on Facebook about what constitutes evidence. He doesn't seem to like Bayes' Theorem very much - he's busy talking about how we shouldn't refer to something as possible unless we have physical evidence of its possibility, because of epistemology.

In response to comment by Alsadius on Jokes Thread
Comment author: PrometheanFaun 01 August 2014 10:33:34PM *  1 point [-]

That's contrary to my experience of epistemology. It's just a word, so define it however you want, but in both epistemic logic and pragmatics-stripped conventional usage, possibility is nothing more than a lack of disproof.

Comment author: ygert 21 January 2014 11:50:16AM *  2 points [-]

Agreed. But here is what I think Hofstadter was saying: The assumption that is used can be weaker than the assumption that the two players have an identical method. Rather, it just needs to be that they are both "smart". And this is almost as strong a result as the true zero knowledge scenario, because most agents will do their best to be smart.

Why is he saying that "smart" agents will cooperate? Because they know that the other agent is the same as them in that respect. (In being smart, and also in knowing what being smart means.)

Now, there are some obvious holes in this, but it does hold a certain grain of truth, and is a fairly powerful result in any case. (TDT is, in a sense, a generalization of exactly this idea.)

Comment author: PrometheanFaun 21 January 2014 10:11:37PM *  0 points [-]

Have you seen this explored in mathematical language? Because it's all so weird that there's no way I can agree with Hofstadter to that extent. As yet, I don't really know what "smart" means.

Comment author: jimmy 18 January 2014 12:13:22AM *  6 points [-]

I do stand by my point that regardless of your definition of "terminal goal", I can construct a game in which the optimal move is to change them. I readily admit that under certain definitions of "terminal goal" such games are uncommon.

Agreed.

If it's the branding that's annoying you, see this comment -- it seems my idea of what qualifies as "dark arts" may differ from the consensus.

Although you do explicitly define "dark arts" differently, that doesn't really change my issues with the branding. I hope the next part of this comment will explain why (well, that and the objections other people have raised).

I don't know. I haven't pinpointed the higher order differences that you're trying to articulate.[...]I'm not entirely sure what you mean by getting the same effects without the "darkness". I am quite confident that there are mental states you can only access via first-order self deception, and that it is instrumentally rational to do so. Michael Blume provides another crisp example of this. I am skeptical that there are ways to attain these gains without self-deception.

That link goes to your previous comment instead of the Michael Blume example. Perhaps you mean his Othello role?

I don't think he did anything sketchy there. Since the explicit goal is to pretend to be someone he's not in a well defined context, this is a fairly perverse game which makes it nice and easy to cleanly compartmentalize. In fact, everything said in character could be prefaced with "Iago would say" and it wouldn't even be lying. I'd put this one in the "not really lying because every part of him knows what he's doing" category. There isn't a resisting "but it's not real!" because that's kinda the point. While it's obviously an actual situation he was in, I think most cases aren't this easy.

The other application he mentioned (acting confident for picking up an attractive woman) is more representative of the typical case and trickier to do right. Say you read a couple posts on LW about how it's okay to deceive the parts of your monkey brain that are getting in your way, and confidence with women is explicitly mentioned as a good time to do it. So you self-deceive into thinking that you're super attractive and whatnot, without thinking too much about the risks.

Now, what if "confidence" isn't your only problem? If you were lacking social intelligence/skills before, you're still lacking them when you're playing "confident man" - only now you're going to ignore the rational uncertainty over how a certain social move will be received. This means you end up doing things that are socially miscalibrated and you end up being the creepy guy. And since "I'm creeping this girl out" is incongruent with "I'm the attractive guy that all women want", you either keep plowing ahead or dismiss the rejection as "her loss!". Either way your behavior is not good, and furthermore you're giving up the chance to analyze your feedback and actually develop your social skills.

And of course, that would be stupid. People like MBlume know better than to disappear down this rabbit hole. But plenty of people actually do fall down that hole (hence the stink around "PUA").

It doesn't have to be that blatant though. Even if you know to snap out of it and analyze the feedback when you get a "back off, creep", there are going to be more subtle signs that you don't pick up on because you're playing confident. Heck, there are plenty of subtle signs that people miss just because they're subtle. I've seen a therapist miss these signs badly and go on to advertise the demo on YouTube as a successful provocative therapy session, and this is a guy who trains people in provocative therapy! I don't want to make it any harder for myself to notice when I'm screwing up.

To give a real life example that actually happened to me/someone I know, I taught self hypnosis to a friend and she ended up spraining her ankle. Since she doesn't have the heuristic to be very very cautious with dark arts, she used self hypnosis to numb the pain. I consider that to be equivalent to compartmentalizing the belief "My ankle isn't sprained!" because the end state is the same. Once it didn't hurt anymore, she brilliantly decided to keep running on it... aaaand she ended up regretting that decision.

Since I do have the heuristic to be very, very hesitant to use dark arts, when I sprained my foot... okay, to be honest, I kept running on it too, because I'm a stubborn idiot, but I did it despite the pain, and if it had hurt more I would have stopped. When I decided to do something about the pain I was in, I wanted to take the "clean" and "not dark" approach, so I did my thing where I (again, to give crude and insufficient English pointers) "listen to what the pain has to say". It completely got rid of the suffering. (I could still feel the pain sensations, but they weren't bothersome in the least and didn't demand attention. Quite trippy, actually.)

But the method I used comes with some caveats. The pain said "Are you sure you weren't doing something you shouldn't have been?", and after thinking about it I was able to decide that I wasn't. The pain wanted to make sure I took care of myself, and once I agreed to that, there was no more reason to suffer. It wouldn't have worked if I had tried to avoid realizing that I shouldn't have been taking that risk in the first place. It would cease to work the minute I tried running on it again. These are nice features :)

The basic idea behind the cleaner way is that all your fears and motivations and the like are the result of nonverbal implicit beliefs. These implicit beliefs may or may not agree with your explicit beliefs, and you may or may not be aware of them. (Empirically, they often have useful information that your explicit beliefs don't, btw.) So what you do is find out where your implicit beliefs are causing issues, what the beliefs actually say, and whether they're right or not. If they're right, figure out what you want to do about it. If they're wrong, change them. This is basically coherence therapy.

If you were to take a clean approach in the "confidence with women" situation, you'd probably find that some of the things you were too afraid to do you really shouldn't be doing, while others are easily worth the risk. Fear in the former category feels right, like a fear of picking a fight with Mike Tyson: you just don't do it and everything is cool. In the latter category it'll turn to excitement (which you can change cleanly if it's an issue). Since you're aware that it might not go well and you've accepted that possibility, you don't have to fear it. Awareness without fear allows you to look hard for things you're doing wrong without coming off as "not confident".

The other downside of the dark approach is that if you have incomplete compartmentalization (which can be good to avoid the first problem), you can have this nagging "but I'm lying to myself!" thought which can be distracting. And if reality smacks you in the face, you're forced to drop your lie and you're stuck with the maladaptive behaviors you were trying to avoid. When done cleanly you're already prepared for things to go poorly so you can respond effectively.

Comment author: PrometheanFaun 21 January 2014 09:34:27PM 0 points [-]

I've never recognised a more effective psychonaut than you. You've probably seen further than I, so I'd appreciate your opinion on a hypo I've been nursing.

You see the way pain reacts to your thoughts. If you respect its qualia and find a way to embrace them, that big semi-cognisant iceberg of You, the Subconscious, will take notice, and it will get out of your way: afford you a little more self-control, a little less carrot and stick, a little less confusion, a little closer to some rarely attained level of adulthood.

I suspect that every part of the subconscious can be made to yield in the same way. I think introspective gains are self-accelerating: you don't just get insights and articulations, you get general introspection skills. I seem to have lost hold of it for now, but I once had what seemed to be an ability to take any vague emotional percept and unravel it into an effective semantic ordinance. It was awesome. I wish I'd been more opportunistic with it.

I get the impression you don't share my enthusiasm for the prospect of developing a culture supportive of deep subconscious integration, or illumination, or whatever you want to call it. What have you seen? Found a hard developmental limit? Or, and this is fairly cryptic, so do tell me if it makes no sense: are you hostile to the idea of letting your shadow take you by the hand and ferry you over the is-ought divide? I suspect that the place it would take you is not so bad. I think any alternative you might claim to have is bound to turn out to be nothing but a twisted reflection of its territories.

Comment author: satt 18 January 2014 04:35:43PM 1 point [-]

Reading the ensuing disagreement, this seems like a good occasion to ask whether this is a policy suggestion, and if so what it is. [...] it seems like the scope of the disagreement is (at most) pretty much how one should use the phrase 'Prisoner's Dilemma'

Pretty much. Cashing out my disagreement as a policy recommendation: don't call a situation a true PD if that situation's feasible set doesn't include (C, D) & (D, C). Otherwise one might deduce that cooperation is the rational outcome for the one-shot, vanilla PD. It isn't, even if believing it is puts one in good company.
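For concreteness, here's a quick sketch of why that matters: with the conventional one-shot payoffs (T=5, R=3, P=1, S=0, my illustrative numbers, not satt's), defection strictly dominates, which is exactly why cooperation is not the rational outcome in the vanilla game.

```python
# Standard one-shot Prisoner's Dilemma payoff table, (row, column) payoffs.
# T=5, R=3, P=1, S=0 are conventional illustrative values.
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Defection strictly dominates: whatever the opponent plays,
# the row player does better by playing D than by playing C.
for their_move in ("C", "D"):
    assert payoff[("D", their_move)][0] > payoff[("C", their_move)][0]
```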

Comment author: PrometheanFaun 21 January 2014 10:11:31AM *  0 points [-]

As I understand it, Hofstadter's advocacy of cooperation was limited to games with some sense of source-code sharing. Basically, both agents were able to assume their co-players had an identical method of deciding on the optimal move, and that that method was optimal. That assumption allows a rather bizarre little proof that cooperation is the result said method arrives at.

And think about it: how could a mathematician actually advocate cooperation in the pure, zero-knowledge, vanilla PD? That just doesn't make any sense as a model of an intelligent human being's opinions.
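A toy version of that "bizarre little proof" (my own reconstruction, not Hofstadter's formalism): if both players provably run the same deterministic procedure, only the symmetric outcomes (C, C) and (D, D) are reachable, so the shared procedure just picks the better symmetric payoff.

```python
# Only symmetric outcomes are reachable when both players provably
# run the identical decision procedure. Payoffs: mutual cooperation
# beats mutual defection (illustrative values).
symmetric_payoff = {("C", "C"): (3, 3), ("D", "D"): (1, 1)}

def shared_method():
    # Choose the move whose symmetric outcome pays more.
    return max("CD", key=lambda move: symmetric_payoff[(move, move)][0])

# Both players, running the same method, arrive at cooperation.
assert (shared_method(), shared_method()) == ("C", "C")
```

Note how the whole argument lives in the assumption that the co-player's output is guaranteed to match yours; drop that, and the dominance argument for defection comes straight back.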

Comment author: Nisan 04 January 2012 03:54:07AM 2 points [-]

If that is really how you feel, maybe you should do just that. Find a small egalitarian church or pagan group or occult circle or metal scene or art scene and start guiding people through rituals. After you level up your shaman skills, you can take your most impressionable, emotionally vulnerable acolytes and instill them with the virtues of critical thinking and emotional self-reliance. If you're successful, people will love your rituals celebrating sane decision-making, and you can share the fruits of your labor with the global Less Wrong community.

(I'm assuming there are no LWers in Gothenburg.)

Comment author: PrometheanFaun 25 December 2013 10:25:24PM *  -1 points [-]

Sometimes I will stand and look at the church and wonder if today is the day I get desperate enough to go full sociopath, pretend to join the flock, and use the network to start a deviant Christianity offshoot.

Comment author: roystgnr 26 September 2011 03:48:05PM 8 points [-]

It's possible to limit the level of time sink involved in playing video games; you just have to pick the right game and the right play schedule.

My best example: Civilization IV multiplayer. Since there's a limited number of things you can do on any given turn, and since the multiplayer can be done asynchronously, everyone can simply agree to a rule like "we play one turn each morning and one each night", and then (after a single synchronous night getting past all the ultra-short initial turns) there's no way to spend more than 30 minutes or so a day on the game. Granted, each game takes a few months...

And Civ IV is at least as interesting and instructive as the Prisoners' Dilemma simulations investigated here recently. It's been years since I played, and I still have fond memories of teaching a friend that "Let's all gang up on the guy in first place" is not a safe strategy to share with someone who's in second place but who's thinking more than one step ahead.

I'd also say that multiplayer games are a good way to socialize, which is important, but there is the caveat that you have to learn to separate your impressions of someone as a person from your impressions of them as a player. I'm far more trustworthy in real life than as a player in games where "betrayal" is a possible strategy, for example, and I've known people for whom the opposite was tragically true.

Comment author: PrometheanFaun 19 December 2013 09:21:59AM 0 points [-]

I don't know Civ, but for practising the kind of strategizing you're describing I'd recommend Neptune's Pride.

and I've known people for whom the opposite was tragically true.

Heh. I'm one of those people. I practically fell in love with my first ally. I'm lucky they were really nice when they broke my lines, essentially throwing me a sword and telling me to defend myself before starting the invasion. I'd have been heartbroken otherwise. I guess to an extent I thought they were damning us both to death by zombie bot rush by breaking our alliance, but their judgement was apt: after crippling me they proceeded to conquer the galaxy, barely the worse for wear.

It was from this game that I learned the reason I have an intermittent habit of falling head over heels in love with friends probably has more to do with diplomacy than anything else. I can rapidly build unreasonably strong alliances from nothing this way, at the cost of forming a few confusing, inconvenient bonds when I hit the wrong target. It's always nice to learn that the quirks of your mechanism serve a purpose.

Comment author: MarkusRamikin 29 September 2011 03:50:55PM *  1 point [-]

Hm... so who else plays Go and is familiar with the KGS server?

I've not played in a while, and I never was good, but I wouldn't mind meeting some Lesswrongians and having them trounce me (and maybe teach me something useful).

Also, is there some place Lesswrongians go for real-time chat?

Comment author: PrometheanFaun 19 December 2013 08:48:16AM 0 points [-]

Also, is there some place Lesswrongians go for real-time chat?

IRC channel, #lesswrong on irc.freenode.net

Comment author: Larks 29 July 2013 12:59:35PM 1 point [-]

I have not given a reason why torturing babies or racism is bad or wrong. I'm hoping that the vast majority of people will share that intuition/value of mine, that they want to be the sort of person who would have been amongst those challenging racist or sexist prejudices, had they lived in the past.

In the past, the arguments against sexism and racism were things like "they're human too", "they can write poetry too", "God made all men equal" and "look how good they are at being governesses". None of these apply to animals; they're not human, they don't write poetry, God made them to serve us, and they're not very good governesses. Indeed, you seem to think all these are irrelevant criteria.

Speaking as a 21st century person in a liberal, western country, I believe sexism and racism are wrong basically because other people told me they were, who believed that because ... who believed that because they were convinced by argumentum ad governess. But now I've just discovered that argumentum ad governess is invalid. Should I not withdraw my belief that sexism and racism are wrong, which apparently I have in some sense been fooled into, and adopt the traditional, time-honoured view that they are not?

Comment author: PrometheanFaun 05 November 2013 04:59:41AM -2 points [-]

But now I've just discovered that argumentum ad governess is invalid

Where was the argument for that? Non-humans attaining rights by a different path does not erase all other paths.

Comment author: Xodarap 03 November 2013 01:48:27PM 1 point [-]

But you've stated that the lack of an edge from A to B says nothing about whether A < B, now you're talking like if the premises don't conclude that A < B they must conclude ¬(A < B), which is kinda affirming the consequent.

This is a good point; what I was trying to say is slightly different. Basically, we know that (A < B) ==> (f(A) < f(B)), where f is our order embedding. So it is indeed true that f(A) > f(B) ==> ¬(A < B), by modus tollens.

just as the premises support (A < B) ⇒ (utility(A) < utility(B)), they also support (A < B) ⇒ (normalizedU(A) < normalizedU(B)), where normalizedU(World) = sum(log(utility(life)) for life in World)

Yeah, that's a pretty clever way to get around the constraint. I think my claim "If the inequitable society has greater total utility, it must be at least as good as the equitable one" would still hold though, no?

Comment author: PrometheanFaun 05 November 2013 01:17:48AM 0 points [-]

"If the inequitable society has greater total utility, it must be at least as good as the equitable one" would still hold though, no?

Well... yeah, technically. But consider the model (worlds = {A, B}, f(W) = sum(log(felicity(e)) for e in population(W))), with world A = (2,2,2,2) and world B = (1,1,1,9). Then f(A) ≥ f(B), i.e. ¬(f(A) < f(B)), so ¬(A < B); i.e., the equitable society is also at least as good as the inequitable, higher-total-utility one. So if you want to support all embeddings built by summing an increasing function of the units' QoL... I'd be surprised if those embeddings had anything in common aside from what the premises required. I suspect anything that agreed with all of them would have to treat all worlds the original premises don't relate as equal, i.e. ¬(A < B) ∧ ¬(B < A).
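The arithmetic in that example checks out; here it is as runnable Python, using the same two worlds and the same log-sum embedding:

```python
from math import log

def f(world):
    # Log-sum embedding: sum of log-felicities over the population.
    return sum(log(felicity) for felicity in world)

A = (2, 2, 2, 2)   # equitable world, total utility 8
B = (1, 1, 1, 9)   # inequitable world, total utility 12

# B has the greater total utility...
assert sum(B) > sum(A)
# ...but the log-sum embedding ranks A higher:
# f(A) = 4*log(2) ≈ 2.77, f(B) = log(9) ≈ 2.20.
assert f(A) > f(B)
```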

... looking back, I'm opposed to your implicit definition of a "baseline": the original population partial-ordering premises are the baseline here, not total utilitarianism.

Comment author: Desrtopa 10 September 2012 04:57:45AM 2 points [-]

Well, if it's for-profit venture, then the point isn't to get them an education, the point is to prepare them for lucrative careers, in which case social capital is of high importance.

Comment author: PrometheanFaun 01 November 2013 09:44:00PM 0 points [-]

I propose a new term for what we're trying to do here, not for-profit, nor not-for-profit, but for-results.
