Comment author: TheOtherDave 01 March 2014 09:30:38PM 2 points [-]

I understand this to mean that the only value you see to non-brevity is its higher success at manipulation.

Is that in fact what you meant?

Comment author: alicey 14 March 2014 11:49:31PM *  0 points [-]

-

Comment author: ThrustVectoring 02 March 2014 10:57:28AM 1 point [-]

I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.

Comment author: alicey 14 March 2014 11:42:11PM 0 points [-]

revisiting this, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

Comment author: jamesf 02 March 2014 03:27:05AM *  0 points [-]

What does brevity offer you that makes it worthwhile, even when it impedes communication?

Predicting how communication will fail is generally Really Hard, but it's a good opportunity to refine your models of specific people and groups of people.

Comment author: alicey 14 March 2014 11:29:40PM 0 points [-]

improving signal to noise, holding the signal constant, is brevity

when brevity impedes communication, but only with a subset of people, then the reduced signal is because they're not good at understanding brief things, so it is worth not being brief with them, but it's not fun

Comment author: TheOtherDave 01 March 2014 04:40:02PM 2 points [-]

Well, you describe the problem as terseness.
If that's true, it suggests that one set of improvements might involve explaining your ideas more fully and providing more of your reasons for considering those ideas true and relevant and important.

Have you tried that?
If so, what has the result been?

Comment author: alicey 01 March 2014 05:58:28PM *  0 points [-]

-

Comment author: 7EE1D988 01 March 2014 10:58:35AM 12 points [-]

I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.

Some people are just bad at explaining their ideas correctly (too hasty, didn't reread their own writing, verbal SAT score not high enough, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).

I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.

Comment author: alicey 01 March 2014 04:28:32PM *  4 points [-]

i tend to express ideas tersely, which counts as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me

i have mostly stopped posting or commenting on lesswrong and stackexchange because of this

like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."

revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is

for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)

Comment author: Kaj_Sotala 18 February 2014 07:06:38AM *  3 points [-]

naturalized!Cai

I'm not sure that using this notation is a good idea, given that at least some of the readers unfamiliar with it are likely to initially parse it as "naturalized not-Cai". Even I did for a brief moment, because I was parsing the writing using my logic!brain rather than my fanfiction!brain.

Comment author: alicey 19 February 2014 05:18:38AM *  0 points [-]

this is why i like ¬

script your keyboard! make it so that the chords ~1 and 1~ output a '¬'! or any other chord, really

if this actually sounds interesting and you use windows you can grab my script at https://github.com/alice0meta/userscripts/tree/master/ahk
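for example (an untested sketch in AutoHotkey v1 hotstring syntax, not taken from the linked repo, which has the real version), something like this fires as soon as you type ~1 or 1~:

```autohotkey
; untested sketch: the * option means no ending character (space, enter)
; is needed before the replacement triggers
:*:~1::¬
:*:1~::¬
```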

Comment author: jpaulson 16 February 2014 07:09:20PM 1 point [-]

Most of your post is not arguments against curing death.

People being risk-averse has nothing to do with anti-aging research and everything to do with individuals not wanting to die...which has always been true (and becomes more true as life expectancy rises and the "average life" becomes more valuable). The same is true for "we should risk more lives for science".

I agree that people adapt OK to death, but I think you're poking a strawman; the reason death is bad is because it kills you, not because it makes your friends sad.

I think "death increases diversity" is a good argument. On the other hand, most people who present that argument are thrilled that life expectancy has increased to ~70 from ~30 in ancient history. Why stop at 70?

Comment author: alicey 16 February 2014 10:48:03PM *  2 points [-]

note: "life expectancy used to be ~30" is a common misconception (the figure is skewed by infant mortality) (life expectancy has gone up a lot, just not that much)

(as far as i know. i've been told that it's a common misconception that this is a common misconception, but they refused to cite sources)
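the skew is simple arithmetic; with made-up illustrative numbers (not historical data):

```python
# Illustrative numbers only: high infant mortality drags the mean down
# even when surviving adults routinely live much longer.
infant_mortality = 0.3   # assumed fraction dying around age 1
adult_lifespan = 55      # assumed typical age at death for survivors

life_expectancy_at_birth = (
    infant_mortality * 1 + (1 - infant_mortality) * adult_lifespan
)
# 0.3*1 + 0.7*55 = 38.8 -- far below the typical adult's actual lifespan
```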

Comment author: alicey 16 February 2014 03:08:36PM *  -1 points [-]

short response is "yeah, sure, sorta ... but only if you're a stupid group. we can do better."

edit: http://lesswrong.com/lw/jop/a_defense_of_senexism_deathism/akk3 is the longer version of this response

Comment author: dspeyer 24 January 2014 04:51:11AM 0 points [-]

One small (hopefully not too obvious) addition: the cluster-nature of thing-space is dependent on the distance function, and there is no single obviously correct one. Is a penguin more like an eagle or a salmon? Depends on what you mean by "more like". It's perfectly reasonable to say "right now, the most useful concept of 'more like' is 'last common ancestor', so penguins are more like eagles and 'birds' is a cluster" and then, as your needs change, to say "right now, the most useful concept of 'more like' is similarity of habitat, so penguins are more like salmon and 'sealife' is a cluster."

Comment author: alicey 24 January 2014 07:16:29AM 0 points [-]

why yes

clusters can overlap, and the phrase "more like" picks out different clusters of clusters depending on context

Comment author: Viliam_Bur 16 January 2014 11:44:58AM *  34 points [-]

If you find yourself on a playing field where everyone else is a TrollBot (players who cooperate with you if and only if you cooperate with DefectBot) then you should cooperate with DefectBots and defect against TrollBots.
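The payoff logic can be sketched in Python (the payoff values are the standard Prisoner's Dilemma T=5, R=3, P=1, S=0, assumed here rather than given in the comment):

```python
# (my_move, their_move) -> my payoff, standard PD values (assumed)
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def total_payoff(coop_with_defectbot, n_trollbots):
    """My total score against one DefectBot plus n TrollBots.

    TrollBots cooperate with me iff I cooperated with the DefectBot;
    against them I defect either way, since defection dominates once
    their move is fixed by my behavior toward the DefectBot.
    """
    my_move = "C" if coop_with_defectbot else "D"
    score = PAYOFF[(my_move, "D")]  # the DefectBot always defects
    troll_move = "C" if coop_with_defectbot else "D"
    score += n_trollbots * PAYOFF[("D", troll_move)]
    return score

# With three TrollBots watching, cooperating with the DefectBot wins:
# total_payoff(True, 3) = 0 + 3*5 = 15, vs total_payoff(False, 3) = 1 + 3*1 = 4
```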

An example from real life: DefectBot = God, TrollBots = your religious neighbors. God does not reward you for your prayers, but your neighbors may punish you socially for lack of trying. You defect against your neighbors by secretly being a member of an atheist community, and generally by not punishing other nonbelievers.

I wonder what techniques we could use to make the compartmentalization stronger and easy to turn off when it's no longer needed. Clear boundaries. A possible solution would be to use a different set of beliefs only while wearing a silly hat. Not literally silly, because I might want to use it in public without handicapping myself. But some environmental reminder. An amulet, perhaps?

Comment author: alicey 19 January 2014 12:20:14PM *  11 points [-]

she who wears the magic bracelet of future-self delegation http://i.imgur.com/5Bfq4we.png prefers to do as she is ordered
