Comment author: shminux 16 June 2013 06:14:07PM 0 points [-]

A couple of concrete examples on either side of the acceptability boundary would be useful: one where "manipulation" is "Treating other people as pawns in your plan", and another where "manipulation" (now termed as "influence" to avoid negative connotations) is perfectly "fair".

Comment author: Jonii 16 June 2013 06:31:16PM 0 points [-]

True. However, it's difficult to construct culturally neutral examples that are not obvious. The ones that pop to my mind are of the kind "it's wrong to be nice to an old, really simple-minded lady so that you can make her rewrite her will in your favor", or "it's all right to try to make your roommate do the dishes as many times as you possibly can, as long as you're both on an equal footing in this 'competition' of 'who can do the fewest dishes'".

I'm not sure how helpful those kinds of examples are.

On manipulating others

-4 Jonii 16 June 2013 05:44PM

I recently had a discussion with a friend of mine on the topic of reading others socially: what they want, what they think, where they are going, etc. During this discussion, I verbalized my intuition on the topic of manipulating others into acting how you think they should, and what I said left me puzzled for the next few days. So, after much thinking, I came to a conclusion, but I want to see what LW thinks of my pondering.

Basically, the idea is that the social clumsiness many very intelligent people face is actually very much self-imposed: a handicap they place upon themselves because they feel iffy about consciously manipulating others as pawns in their grander schemes.

Basically, my reasoning was this: Treating other people as pawns in your plan, rather than as actual people, is wrong. You should not strip others of their power to decide for themselves. But say you are more intelligent than others, and could, with planning, lead them to do the things you want. This power over others presents you with an unfair advantage, and this unfair advantage presents you with an iffy ethical dilemma. If you can get other people to do what you will, regardless of their initial disposition, aren't you treating them as pawns rather than as autonomous human beings? If you strip them of the power to have their initial disposition affect their decisions, aren't you doing wrong? Of course, it's usually very difficult to get people to do what you want. When two equals are discussing, both may try this, both may fail, and even if one succeeds, it's still considered "fair game" by all parties. But the more easily this manipulation happens, the more of your brain you need to shut down to keep the discussion "fair". At some point, expressing any opinion, or leading other people at all, starts to seem risky and iffy.

So how do people cope? My theory is this: They stop interacting. Voicing their own opinion, asking other people for things, or even having any goal other than following directions laid out by others becomes off-limits. Doing any of that opens an ugly ethical can of worms of the shape "Should I make them do this?"

So basically, my hypothesis is that the reason intelligent people are so often socially clumsy is that it's a facade, a self-imposed handicap they keep up because evolution has programmed us to feel repulsion towards unfairly manipulating others. Because they could make others do anything, they choose to do nothing. This manifests as being easily led, a kind of "doormat", even as lacking a will or ego of their own.

It's simplistic, and there are complications I can readily see, but this stripped-down dynamic of being more intelligent forcing you to feign helplessness is what I'm interested in, so that's what I presented. Is there any reason to think a mechanism like this actually exists? Is it widespread? Has this mechanism already been studied?

There are plenty of interesting-looking areas of study if this dynamic is actually a real thing. Say, PUA could look a bit different when aimed at doormat-style people. Aesthetically, it would provide a more interesting explanation for why smart people are not too social, and it also leads to advice that differs a lot from advice given from the standpoint of "you need to learn this". It makes several "is it okay to manipulate others"-type questions relevant for practical ethics. Of course, it most likely is not a real thing.

 

Edit: Also, I was a bit hesitant about whether I should post this under Discussion or wait for the Open Thread to pop up. It's quite lengthy, so I felt a Discussion post could be appropriate, but I could, and maybe should, take this down and wait for the Open Thread.

Comment author: Yvain 06 May 2013 09:25:27PM *  14 points [-]

Imagine someone makes the following claims:

  • I've invented an immortality drug
  • I've invented a near-light-speed spaceship
  • The spaceship has really good life support/recycling
  • The spaceship is self-repairing and draws power from interstellar hydrogen
  • I've discovered the Universe will last at least another 3^^^3 years

Then they threaten, unless you give them $5, to kidnap you, give you the immortality drug, stick you in the spaceship, launch it at near-light speed, and have you stuck (presumably bound in an uncomfortable position) in the spaceship for the 3^^^3 years the universe will last.

(okay, there are lots of contingent features of the universe that will make this not work, but imagine something better. Pocket dimension, maybe?)

If their claims are true, then their threat seems credible even though it involves a large amount of suffering. Can you explain what you mean by life-centuries being instantiated by causal nodes, and how that makes the madman's threat less credible?

Comment author: Jonii 15 May 2013 02:11:06PM *  2 points [-]

Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay, and it turns out he didn't have all that torture equipment, you could just sue him and get the $5 back, since he defrauded you. If he starts making up rules about how you can never tell anyone else about this or later check the validity of his claim, or else he'll kidnap you, you should, for game-theoretic reasons, not abide: being the kind of agent that accepts those terms makes you a valid target for such frauds. The reasons for not abiding are the same as the reasons for one-boxing.

Comment author: Benja 06 May 2013 03:53:26PM *  9 points [-]

I don't at all think that this is central to the problem, but I do think you're equating "bits" of sensory data with "bits" of evidence far too easily. There is no law of probability theory that forbids you from assigning probability 1/3^^^3 to the next bit in your input stream being a zero -- so as far as probability theory is concerned, there is nothing wrong with receiving only one input bit and as a result ending up believing a hypothesis that you assigned probability 1/3^^^3 before.

Similarly, probability theory allows you to assign prior probability 1/3^^^3 to seeing the blue hole in the sky, and therefore believing the mugger after seeing it happen anyway. This may not be a good thing to do on other principles, but probability theory does not forbid it. ETA: In particular, if you feel between a rock and a bad place in terms of possible solutions to Pascal's Muggle, then you can at least consider assigning probabilities this way even if it doesn't normally seem like a good idea.
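Benja's point, that probability theory itself permits a single observation to vindicate an astronomically improbable hypothesis, can be sketched numerically. A hedged illustration only: the value below is a stand-in for 1/3^^^3, which has far too many digits to write down, and the setup (an "honest mugger" hypothesis under which the blue hole is certain) is invented for the example.

```python
from fractions import Fraction

# Stand-in for 1/3^^^3: a tiny but representable prior.
tiny = Fraction(1, 10**30)

prior_H = tiny   # prior that the mugger's story is true
p_hole_given_H = Fraction(1)      # under H, the blue hole is certain
p_hole_given_not_H = tiny         # under not-H, the agent assigned it ~1/3^^^3

# Bayes: P(H | hole) = P(hole | H) P(H) / P(hole)
p_hole = p_hole_given_H * prior_H + p_hole_given_not_H * (1 - prior_H)
posterior_H = p_hole_given_H * prior_H / p_hole

# One observation has lifted the hypothesis from ~10^-30 to above 1/2.
assert posterior_H > Fraction(1, 2)
```

The arithmetic is exact (no floating point), and nothing in it violates the probability axioms: the enormous update is driven entirely by the enormous likelihood ratio the agent chose to assign, which is exactly what Benja says probability theory allows.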

Comment author: Jonii 15 May 2013 01:56:40PM 1 point [-]

Actually, there is such a law. When you are born into this world, naked, without any sensory experiences, you cannot reasonably start out expecting the next bit you experience to be much more likely a 1 than a 0. And if you then encounter one hundred zillion bits that are all 1s, you still wouldn't assign a probability of 1/3^^^3 to the next bit being 0, if you're rational enough.

Of course, this is muddied by the fact that you're not born into this world without priors, and all kinds of other stuff weighs on your shoulders. Evolution has done billions of years' worth of R&D on your priors to get them straight. However, consider the gap these evolution-set priors would have to cross to get even close to that absurd 1/3^^^3: it's a theoretical possibility that is by no stretch a realistic one.

Comment author: peter_hurford 22 April 2013 12:00:30AM *  4 points [-]

A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I've heard from some people that they discovered my site much later than they otherwise would have, because the name of the site didn't communicate what it was about and sounded unprofessional.

Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I could mold the site as I grew.

Now after >2 years of blogging about basically the same thing, I think my blog will always be about utilitarianism (both practical and philosophical), lifestyle design (my quest to make myself more productive and frugal, mainly so I can be a better utilitarian), political commentary (from a utilitarian perspective), and psychology (of morality and community and that which basically underlies practical utilitarianism).

I probably would want to talk about religion/atheism from time to time, which used to be my biggest interest, but I can already tell it's moderately unpopular with my current readership (yawnnn... we really have to go over why the Bible has errors again?) and I'm personally getting increasingly bored with it, so I could do away with discussing atheism if I needed to keep to a "topic"-focused blog.

Basically, at this point, I think I stand to gain more by making my blog and domain name more descriptive than I stand to lose by risking my interests shifting away from utilitarianism (or at least the public discussion thereof). But the big question... what should I name my blog?

Option #1: Keep with Greatplay.net: There will be costs with shifting to a new domain name. The monetary cost is mostly insignificant (<$20/yr for a new domain name), but it will take a moderate amount of time to move all the archives over and make sure all the new hyperlinks on the site work. Also, there will be confusion among the readership, and everyone who was linking to my site externally would now be linking to dead stuff. So, if I've misestimated the benefits of moving, I might want to stick with the current name and not incur the costs.

Option #2: Go to PeterHurford.com: I already use this site as an online résumé of sorts, so I wouldn't need to get the domain. This also seems the most descriptive of what the site would be about (a personal blog, about me) and fits in with what the cool kids are doing. However, some of my opinions are controversial relative to the mainstream and I don't know what I'll be doing in my future. Keeping my real name hidden from my website might be an asset (so I don't lose opportunities because of association with unpopular mainstream opinions), though it might also be a drawback (I think I have gotten some recognition and opportunity from those who share my unpopular mainstream opinions).

Option #3: A new name: If Option #1 and #2 don't work, I'd want to just rename the blog to something descriptive of a blog about utilitarianism. Some ideas I've come up with:

  • A Shallow Pond
  • The Everyday Utilitarian
  • Everyday Utilitarianism
  • Commonsense Utilitarianism
  • A Utilful Mind (credit to palladias)

Though feel free to suggest your own!

Comment author: Jonii 25 April 2013 08:08:42PM 2 points [-]

I don't think you need to change the domain name. For marketability, you might wanna name the parts of your site so that the stuff within it becomes a brand in itself, so that greatplay.net becomes associated with "&lt;brand name&gt; utilitarianism", "&lt;brand name&gt; design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff I won't work with: &lt;stuff name&gt;". I can't remember the domain name, but I know that whenever I want to read about a nasty chemical, I google that phrase.

Comment author: Watercressed 18 April 2013 10:22:39PM 0 points [-]

x + 0 = 0

I think you mean x + 0 = x

Comment author: Jonii 19 April 2013 11:39:44AM 1 point [-]

Yes. Yes. I remember thinking "x + 0 =". After that it gets a bit fuzzy.

Comment author: [deleted] 16 April 2013 03:11:36AM 0 points [-]

Thanks, that's helpful. But I guess my point is that it seems to me to be a problem for a system of mathematics that one can do operations which, as you say, delete the data. In other words, isn't it a problem that it's even possible to use basic arithmetical operations to render my data meaningless? If this were possible in a system of logic, we would throw the system out without further ado.

And while I can construct a proof that 2=1 (what I called a contradiction, namely that a number be equal to its successor) if you allow me to divide by zero, I cannot do so with multiplication alone. So the cases are at least somewhat different.

In response to comment by [deleted] on Open Thread, April 15-30, 2013
Comment author: Jonii 18 April 2013 08:06:51PM *  6 points [-]

Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point here a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It's an axiom of group theory that inverse elements exist: for each a, there is an x such that a*x = 1. Our notation for this x is 1/a, and it's easy to see why a * (1/a) = 1. Division is defined through these inverse elements: a/b is calculated as a * (1/b), where 1/b is the inverse of b.

But if you have both multiplication and addition, there is one interesting thing. Suppose addition is the group operation for all numbers (and we use "0" to signify the additive neutral element you get from adding together an element and its additive inverse, that is, a + (-a) = 0), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds). Then something interesting happens.

Now, the neutral element 0 is such that x + 0 = x; this is the definition of the neutral element. Now watch the magic happen:

0*x = (0 + 0)*x = 0*x + 0*x

So 0*x = 0*x + 0*x. We subtract 0*x from both sides, leaving us with 0*x = 0.

It doesn't matter what you multiply 0 by, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring 0 = 1, and it's the only element of the entire ring), there can be no number x such that 0*x = 1. Lacking an inverse element, there's no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret dividing by zero, in which case, go for it. However, that is separate from the division defined for other numbers.

And if you end up dividing by zero because you somewhere assumed that there actually was a number x such that 0*x = 1, well, that's just your own clumsiness.
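The derivation above can be checked mechanically in a small finite ring. A minimal sketch (the modulus 7 is an arbitrary choice for the example; the argument works in any ring where 0 and 1 differ):

```python
n = 7  # arbitrary modulus; any ring with 0 != 1 behaves the same way

# 0 * x = 0 for every element x, exactly as the derivation shows
assert all((0 * x) % n == 0 for x in range(n))

# hence no element x satisfies 0 * x == 1, i.e. zero has no multiplicative inverse
inverses_of_zero = [x for x in range(n) if (0 * x) % n == 1]
assert inverses_of_zero == []

# ordinary nonzero elements do have inverses, e.g. 3 * 5 = 15 = 1 (mod 7)
assert (3 * 5) % n == 1
```

Since no x inverts 0, the expression a/0 = a * (1/0) simply has nothing to refer to, which is the whole point of the comment.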

Also, you can "prove" 1 = 2 if you are allowed to multiply both sides by zero: start from 1 = 2, multiply both sides by zero, and you get 1*0 = 2*0, i.e. 0 = 0, which is true. Multiplication by zero and division by zero work in opposite directions: multiplying by zero gets you from an unequal pair to an equal one, so dividing by zero would get you from an equal pair to an unequal one.

Comment author: Jonii 13 February 2013 11:53:52PM 6 points [-]

My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.

Comment author: David_Gerard 04 August 2012 07:26:33AM 5 points [-]

OK ... If someone asked you "So, there's a million words of these Sequences that you think I should read. What do I get out of reading them?" then what's the answer to that? You seem to be saying "we don't think they actually do anything much." Surely that's not the case.

Comment author: Jonii 04 August 2012 04:10:31PM 2 points [-]

The Sequences contain a rational worldview. Not a comprehensive one, but still, they give some idea of how to avoid thinking stupidly and how to communicate with other people who are also trying to find out what's true and what's not. They give you words by which you can refer to problems in your worldview, meta-standards to evaluate whether whatever you're doing is working, etc. I think of them as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out for yourself what works, without reading manuals, but reading a manual before you go makes you better prepared.

Comment author: Jonii 23 May 2012 09:32:40PM 1 point [-]

The interaction of this simulated TDT agent and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.
