
Comment author: ChristianKl 16 June 2015 08:38:20AM 0 points [-]

…and she didn’t. She wasn’t a vegetarian. Duhhh... I knew that. We’d made some ground beef together the day before.

So what was I thinking? Why did I say “I’ll go vegetarian” as an attempt to appeal to her values?

(I’ll invite you to take a moment to come up with your own model of why that happened. You don't have to, but it can be helpful for evading hindsight bias of obviousness.)

Maybe you tried to strawman her argument?

Comment author: JenniferRM 17 June 2015 05:22:25PM 0 points [-]

Plausibly, something like pattern botching is "where straw man tactics come from" or at least is causally related?

System 1 is pretty good at leaping to conclusions based on sparse data (often to a conclusion that implies a known solution). That is sort of what System 1 is for. But if it does so improperly (and the error is detected) it is likely to be named as an error. I think Malcolm is calling out the subset of such errors where not only is System 1 making a detectable error, but System 2 can trivially patch the problem.

A "straw man argument" is mostly referring to error in the course of debate where a weak argument is presumed by one's interlocutor. Why do they happen?

Maybe sometimes you're genuinely ignorant about your interlocutor's position and wrongly assume it is a dumb position. People normally argue "X, Y, Z" (where Z is a faulty conclusion) but you know Q, which suggests (P & ~Z). So someone might say "X", and you say "but ~Z because Q!" and they say "of course Q, and also Y and P, where did Z come from, I haven't even said Z". And then you either admit to your mistaken assumption or get defensive and start accusing them of secretly believing Z.

The initial jump there might have had Bayesian justification, because "X, Y, Z" could be a very common verbal trope. The general "guessing what people are about to say" process probably also tends to make most conversations more efficient. However, it wouldn't be pattern botching until you get defensive and insist on sticking to your System 1 prediction even after you have better knowledge.
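A minimal Bayesian sketch (illustrative made-up numbers, not anything from the post) of how hearing just the opening claim "X" can rationally shift probability toward the common "X, Y, Z" trope:

```python
# Hypothetical priors over which argument the speaker is actually making.
priors = {"X, Y, Z (common trope)": 0.6,
          "X, Y, P, therefore not-Z (rarer argument)": 0.4}

# Hypothetical probability of opening with "X" under each argument.
likelihood_of_opening_with_X = {"X, Y, Z (common trope)": 0.9,
                                "X, Y, P, therefore not-Z (rarer argument)": 0.6}

evidence = sum(priors[h] * likelihood_of_opening_with_X[h] for h in priors)
posteriors = {h: priors[h] * likelihood_of_opening_with_X[h] / evidence for h in priors}
print(posteriors)  # the common trope gains probability mass after hearing "X"
```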

"Sticking with system 1 when system 2 knows better" (ie pattern botching) might itself have a variety of situation dependent causes.

Maybe one of the causes is involved in the specific process of sticking with theories about one's interlocutor that are false and easy to defeat? Like maybe it is a social instinct, or maybe it works often enough in real discourse environments that it gets intermittent positive reinforcement?

If pattern botching happens at random (including sometimes in arguments about what the other person is trying to say) then it would be reasonable to treat pattern botching as a possible root cause.

If situation-dependent factors cause pattern botching on people's arguments a lot more than in some other subject area, it would be reasonable to say that pattern botching is not the root cause. In that case, the key thing to focus on is probably the situation-dependent causal factors. However, pattern botching might still be an intermediate cause, and the root causes might be very hard to fix while pattern botching can be treated easily, so attributing the problem to pattern botching might be functional from a pragmatic diagnostic perspective, depending on what your repair tools are like.

Personally, my suspicion is that dealing with people and ideas makes pattern botching much more likely.

Comment author: gwern 11 June 2015 01:43:13AM *  5 points [-]

Lots of things are simple. If the world is not simple, inference is impossible. Many things turn out to be straightforward; as complex and intricate a phenomenon as AIDS is, 'HIV causes AIDS' is much more accurate than 'AIDS is not determined by any one factor but by a combination of genetic, hormonal, and environmental influences; in recent years, biologically-based theories have been favored by experts...' In statistical modeling, it's far from surprising to discover that a few variables have most of the predictive value and that it's only the last few percent which require extreme complexity to predict or model.
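A minimal simulation (an illustrative sketch, not anything from the comment) of the claim that a few variables often carry most of the predictive value: three strong predictors plus a long tail of tiny ones, compared by R^2:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 100
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, 1.5, 1.0]                     # a few variables do most of the work
beta[3:] = rng.normal(scale=0.05, size=p - 3)  # a long tail of tiny effects
y = X @ beta + rng.normal(size=n)

def r_squared(X_sub, y):
    coef, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    resid = y - X_sub @ coef
    return 1 - resid.var() / y.var()

print("R^2 using only the top 3 variables:", round(r_squared(X[:, :3], y), 3))
print("R^2 using all 100 variables:       ", round(r_squared(X, y), 3))
```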

Comment author: JenniferRM 11 June 2015 10:53:30AM *  5 points [-]

If the world is not simple, inference is impossible.

You have explained why inference is hard in biology :-)

A technical term for the "problem" is pleiotropy.

Many small-scale biological features are re-used over and over; if they break, many things can break a bit. Primary ciliary dyskinesia is an example of this. In the meantime, many complex adaptive structures (like "the ability to hear") are caused by more than one subcomponent, so any of several different subcomponents breaking can produce a symptomatically similar disruption of the complex structure.

Biological causes and biological outcomes are in a many-to-many relationship, with lots of "best effort" failover systems as backups. The amount of effort to put into fixing up a structure is itself something that most of the animal kingdom has optimized a bit, for example via the poorly named "heat shock proteins" that suppress mutational expression in good times but reveal the mutations in bad times.
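A toy sketch (illustrative only, not from the comment) of the many-to-many picture with "best effort" failover: each outcome falls back on its best surviving component, so breaking one widely re-used component degrades many outcomes a little:

```python
import numpy as np

rng = np.random.default_rng(1)
n_components, n_outcomes = 6, 4
# contribution[i, j]: how well component i alone can support outcome j
contribution = rng.uniform(0.3, 0.8, size=(n_components, n_outcomes))
contribution[0] = 0.95  # a widely re-used component (cilia-like): the best option everywhere

def outcome_quality(broken=()):
    working = np.ones(n_components, dtype=bool)
    working[list(broken)] = False
    # best-effort failover: each outcome uses its best surviving component
    return contribution[working].max(axis=0)

print("all components intact:", outcome_quality().round(2))
print("component 0 broken:   ", outcome_quality([0]).round(2))  # every outcome degrades a bit
print("component 3 broken:   ", outcome_quality([3]).round(2))  # little or no visible change
```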

In the case of male homosexuality, one cause that I recall hearing debate about was that a male child causes a mother's body to change (current best guess is something immunological), such that later male fetuses appear to have their sexual development mildly disrupted. If I recall correctly, the process looks probabilistically cumulative, so that there's something like a 1/3 chance of homosexuality by the time you get to the fifth or sixth male child from the same mother. Again, if I recall correctly, with modern demography this effect might be enough to account for ~20% of gay men? This is somewhat controversial, but le wik has some of the debate.

Comment author: JenniferRM 22 May 2015 07:31:40AM 5 points [-]

If you'll permit a restatement... it sounds like you surveyed the verbal output of the big names in the transhumanist/singularity space and classified them in terms of seeming basically "correct" or "mistaken".

Two distinguishing features seemed to you to be associated with being mistaken: (1) a reliance on philosophy-like thought experiments rather than empiricism and (2) relatedness to the LW/MIRI cultural subspace.

Then you inferred the existence of an essential tendency toward "thought-experiments over empiricism" as a difficult-to-change hidden variable that accounts for many intellectual surface traits.

Then you inferred that this essence was (1) culturally transmissible, (2) sourced in the texts of LW's founding (which you have recently been reading very attentively), and (3) an active cause of ongoing mistakenness.

Based on this, you decided to avoid the continued influence of this hypothetical pernicious cultural transmission and therefore you're going to start avoiding LW and stop reading the founding texts.

Also, if the causal model here is accurate... you presumably consider it a public service to point out what is going on and help others avoid the same pernicious influence.

My first question: Am I summarizing accurately?

My second question assumes a yes and seeks information relevant to repair: Can you spell out the mechanisms by which you think the mistake-causing reliance on thought experiment is promoted and/or transmitted? Is it an explicit doctrine? Is it via social copying of examples? Is it something else?

Comment author: Mark_Friedenbach 22 May 2015 06:12:35AM 4 points [-]

Unfounded.

Comment author: JenniferRM 22 May 2015 07:06:33AM 3 points [-]

Comical.

In response to The ganch gamble
Comment author: JenniferRM 19 May 2015 05:55:49AM *  3 points [-]

I don't think cooperate/defect are good action labels because it doesn't seem much like a standard prisoner's dilemma. It is not quite a game of Chicken either, but it is closer to Chicken than PD.

The mirror/mirror outcome in Ganch is like getting in a car crash in Chicken: it is the worst outcome for everyone, anyone who swerves unilaterally makes both players happier with the result, and that outcome can function as a threat to use against the other player to get your way if a conflict comes up.

Ganch is unlike Chicken in that Chicken's swerve/swerve outcome gives an honorable tie, and the other person playing a different move causes you to lose outright, which is something you'd like to prevent if you can.

Contrast this with the carve/carve outcome in Ganch, which is worse than having the carving role in a carve/mirror outcome: playing carve against mirror at least leaves the room looking OK, but playing carve against carve leaves you with an artistic mess and both of you looking bad. So in Ganch (unlike Chicken) the players have the same worst and second-worst outcomes, and those outcomes are in a diagonal relationship to each other.

Basically, Ganch is a kind of coordination game and the closest "famously named" coordination game I know to Ganch is the 2x2 Battle Of The Sexes.

This name goes back at least as far as Luce and Raiffa's 1957 book "Games and Decisions: Introduction and Critical Survey", reviewed here.

The basic dynamic in BotS is that you are playing a coordination game, but not a coordination game with perfect alignment of goals. It is generally silly to play coordination games without talking first, but in this variant the conversations are more delicate than otherwise because there are self-interest and meta-game-fairness issues that come up when picking which of the two "basically acceptable" Nash equilibria to aim for.
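A minimal sketch (with hypothetical payoff numbers, not taken from the post) of why a Battle-of-the-Sexes-style structure gives exactly two "basically acceptable" pure Nash equilibria, both on the coordination diagonal:

```python
# Enumerate pure-strategy Nash equilibria of a 2x2 game.
# payoffs[(row_move, col_move)] = (row_payoff, col_payoff); the numbers are hypothetical.

def pure_nash(payoffs, moves=("carve", "mirror")):
    equilibria = []
    for r in moves:
        for c in moves:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Hypothetical Battle-of-the-Sexes-like payoffs for Ganch: both players prefer any
# coordinated split of roles to either symmetric outcome, but each would rather
# land in the equilibrium that favors them.
ganch = {
    ("carve", "mirror"): (3, 2),
    ("mirror", "carve"): (2, 3),
    ("carve", "carve"): (1, 1),    # artistic mess: second-worst for both
    ("mirror", "mirror"): (0, 0),  # worst for both
}
print(pure_nash(ganch))  # [('carve', 'mirror'), ('mirror', 'carve')]
```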

Comment author: gjm 15 May 2015 04:08:14PM 3 points [-]

It's alleged that the number of people donating zero is large, and more generally I would expect people to round off their donation amounts when reporting. Ages are clearly also quantized. So there may be lots of points on top of one another. Is it easy to jitter them in the plot, or something like that, to avoid this source of visual confusion?

Comment author: JenniferRM 16 May 2015 02:35:02AM 2 points [-]

Just eyeballing the charts without any jitter, it kinda looks like Effective Altruists more often report precise donation quantities, while non-EAs round things off in standard ways, producing dramatic orange lines perpendicular to the Y axis and more of a cloud for the blues. Not sure what to make of this, even if true.
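A minimal matplotlib sketch of the jitter gjm suggests (the column names and values here are hypothetical, not from the survey): small random noise is added so stacked points stop hiding one another:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def jitter(values, scale):
    # add small Gaussian noise so overlapping points become visible
    return np.asarray(values, dtype=float) + rng.normal(scale=scale, size=len(values))

# Hypothetical survey columns: age (years), donation (rounded dollars), EA flag.
age = [25, 25, 30, 30, 30, 40]
donation = [0, 0, 100, 100, 500, 1000]
is_ea = np.array([False, True, True, False, True, False])

colors = np.where(is_ea, "tab:orange", "tab:blue")
plt.scatter(jitter(age, 0.3), jitter(donation, 10), c=colors, alpha=0.5)
plt.xlabel("age (jittered)")
plt.ylabel("reported donation, USD (jittered)")
plt.show()
```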

Comment author: JenniferRM 12 May 2015 05:21:54PM *  0 points [-]

Having thought about this for a while, I think a moderately safe thing to do (when it is actually possible to control the biological outcomes with at least a vague notion of the likely long-term phenotypic results) is to offer financial subsidies to help improve the future generations of those least well off in the current regime, especially for any of a large list of pro-social improvements (that parents get to choose between and will hopefully choose in different ways to prevent a monoculture).

Also, figuring out the phenotypic results is likely to involve mistakes that cost entire human lives in forgone potential, and there should probably be a way to protect against this downside in advance, like with bonds/insurance purchased prior to birth to make sure we have budgeted to properly care for people who might have been amazing but turned out to be slightly handicapped.

Comment author: JenniferRM 06 May 2015 07:53:35AM 13 points [-]

Upvoted! Not necessarily for the policy conclusions (which are controversial), but especially for the bibliography, attempt to engage different theories and scenarios, and the conversation it stirred up :-)

Also, this citation (which I found a PDF link for) was new to me, so thanks for that!

McClelland, J.L., Rumelhart, D.E. & Hinton, G.E. (1986) The appeal of parallel distributed processing. In D.E. Rumelhart, J.L. McClelland & G.E. Hinton and the PDP Research Group, “Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1.” MIT Press: Cambridge, MA.

In response to The Stamp Collector
Comment author: JenniferRM 06 May 2015 07:34:42AM *  3 points [-]

This is the first time I've seen anyone on LW make this point with quite this level of explicitness, and I appreciated reading it.

Part of why it might be a useful message around these parts is that it has interesting implications for simulationist ethics, depending on how you treat simulated beings.

Caring about "the outer world" in the context of simulation links naturally to thought experiments like Nozick's Experience Machine but depending on one's approach to decision theory, it also has implications for simulated torture threats.

The decision theoretic simulated torture scenario (where your subjective experience of making the decision has lots of simulation measure via your opponent having lots of CPU, with the non-complying answer causing torture in all simulated cases) has been kicking around since at least 2006 or so. My longstanding position (if I were being threatened with simulated torture) has always been to care, as a policy, about only "the substrate" universe.

In terms of making it emotionally plausible that I would stick to my guns on this policy, I find it helpful to think about all my copies (in the substrate and in the simulation) being in solidarity with each other on this point, in advance.

Thus, my expectation is that when I sometimes end up experiencing torture for refusing to comply with such a threat, I will get the minor satisfaction of a decisive signal that I'm in a simulation, and of "taking one for the team". Conversely, when I sometimes end up not experiencing simulated torture, it increments my belief that I'm "really real" (or at least in a sim where the Demon is playing a longer and more complex game) and should really keep my eye on the reality ball so that my sisters in the sims aren't suffering in vain.

The only strong argument against doing this is for the special case where I'm being simmed with relatively high authenticity for the sake of modeling what the real me is likely to do in a hostile situation in the substrate... like a wargame sort of thing... and in that case, it could be argued that acting "normally" so the simulation is very useful is a traitorous act to the version of me that "really matters" (who all my reality-focused-copies, in the sim and in the real world, would presumably prefer to be less predictable).

For the most part I discount the wargame possibility in practice, because it is such a weirdly paranoid setup that it seems to deserve to be very discounted. (Also, it would be ironic if telling me that I might be in an enemy run wargame sim makes the me that counts the most act erratically in case she might be in the sim!)

I feel like the insight that "the outer world matters" has almost entirely healthy implications. Applying the insight to simulationist issues is fun but probably not that pragmatically productive except possibly if one is prone to schizophrenia or some such... and this seems like more of an empirical question that could be settled by psychiatrists than by philosophers ;-)

However, the fact that reality-focused value systems and related decision theories are somewhat determinative for the kinds of simulations that are worth running (and hence somewhat determinative of which simulations are likely to have measure as embeddings within larger systems) seems like a neat trick. Normally the metaphysicians claim to be studying "the most philosophically fundamental thing", but this perspective gives reason to think that the most fundamental thing (even before metaphysics?) might be how decisions about values work :-)

Comment author: JenniferRM 15 April 2015 06:06:09AM *  4 points [-]

It feels like there's a never-directly-stated but oft-implied claim lurking in this essay.

The claim goes: the reason we can't consciously control our perception of the color of the sky is because if we could then human partisanship would ruin it.

The sane response, upon realizing that internal color-of-the-sky is determined not by the sky-sensors but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then directly re-attach the world-model-generator to reality as quickly as possible.

If you squint, and treat partisanship as an ontologically basic thing that could exert evolutionary pressure, it almost seems plausible that avoidance of partisanship failure modes might actually be the cause of the wiring of the occipital cortex :-)

However, I don't personally think that "avoiding the ability of partisanship to ruin vision" is the reason human vision is wired up so that we can't see whatever we consciously choose to see.

Part of the reason I don't believe this is that the second half of the implication is simply not universally true. I know people who report having the ability to modify their visual sensorium at will, so for them, it seems to actually be the case that they could choose to do all sorts of things to their visual world model if they put some creativity and effort into it. Also: synesthesia is a thing, and can probably be cultivated...

But even if you skip over such issues as non-central outliers...

It makes conceptual sense to me that there is probably something like a common cortical algorithm (though maybe not exactly like the precise algorithmic sketch being discussed under that name) that actually happens in the brain. Coarsely: it probably has to do with neuron metabolism and how neurons measure and affect each other. Separately from this, there are lots of processes for controlling which neurons are "near" to which other neurons.

My personal guess is that in actual brains, the process mixes sparse/Bayesian/pooling/etc perception with negative feedback control... and of course "maybe other stuff too". But fundamentally I think we start with "all the computing elements potentially measuring and controlling their neighbors" and then, when that causes terrible outcomes (like nearly instantaneous subconscious wireheading by organisms with 3 neurons), evolution prunes that particular failure mode out, and then iterates.
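A toy sketch (illustrative only, not from the comment) of why wireheading is such an easy attractor once an element can control its own measurement: a controller that is allowed to bias its sensor will drive its perceived error to zero while the underlying state stays unfixed:

```python
# Toy wireheading demo: a controller minimizes perceived error.
# If it can only act on the world, it fixes the world; if it can also
# bias its own sensor, editing the measurement is the cheaper "solution".

def run(steps, can_bias_sensor):
    state, sensor_bias, setpoint = 0.0, 0.0, 10.0
    for _ in range(steps):
        perceived = state + sensor_bias
        error = setpoint - perceived
        if can_bias_sensor:
            sensor_bias += error          # wirehead: edit the measurement
        else:
            state += 0.1 * error          # act on the world (slow, costly)
    return state, state + sensor_bias

print("honest controller (true state, perceived):     ", run(100, can_bias_sensor=False))
print("wireheading controller (true state, perceived):", run(100, can_bias_sensor=True))
```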

However, sometimes top-down control of measurement is functional. It happens subconsciously in ancient and useful ways in our own brain, as when efferent cochlear innervation projects something like "expectations about what is to be heard" that makes the cochlea differentially sensitive to inputs, effectively increasing the dynamic range of what sounds can be neurologically distinguished.

This theory predicts new wireheading failures at every new level of evolved organization. Each time you make several attempts to build variations on a new kind of measuring and optimizing process/module/layer, some of those processes will use their control elements to manipulate their perception elements, and sometimes they will do poorly rather than well, with wireheading as a large and probably dysfunctional attractor.

"Human partisanship" does seem to be an example of often-broken agency in an evolutionarily recent context (ie the context of super-Dunbar socially/verbally coordinated herds of meme-infected humans) and human partisanship does seem pretty bad... but as far as I can see, partisanship is not conceptually central here. And it isn't even the conceptually central negative thing.

The central negative thing, in my opinion, is wireheading.
