
Comment author: JenniferRM 22 May 2015 07:31:40AM 4 points [-]

If you'll permit a restatement... it sounds like you surveyed the verbal output of the big names in the transhumanist/singularity space and classified them in terms of seeming basically "correct" or "mistaken".

Two distinguishing features seemed to you to be associated with being mistaken: (1) a reliance on philosophy-like thought experiments rather than empiricism and (2) relatedness to the LW/MIRI cultural subspace.

Then you inferred the existence of an essential tendency to "thought-experiments over empiricism" as a difficult to change hidden variable which accounted for many intellectual surface traits.

Then you inferred that this essence was (1) culturally transmissible, (2) sourced in the texts of LW's founding (which you have recently been reading very attentively), and (3) an active cause of ongoing mistakenness.

Based on this, you decided to avoid the continued influence of this hypothetical pernicious cultural transmission and therefore you're going to start avoiding LW and stop reading the founding texts.

Also, if the causal model here is accurate... you presumably consider it a public service to point out what is going on and help others avoid the same pernicious influence.

My first question: Am I summarizing accurately?

My second question assumed a yes and seeks information relevant to repair: Can you spell out the mechanisms by which you think mistake-causing reliance on thought experiment is promoted and/or transmitted? Is it an explicit doctrine? Is it via social copying of examples? Is it something else?

Comment author: Mark_Friedenbach 22 May 2015 06:12:35AM 4 points [-]

Baseless.

Comment author: JenniferRM 22 May 2015 07:06:33AM 2 points [-]

Amusing.

In response to The ganch gamble
Comment author: JenniferRM 19 May 2015 05:55:49AM *  2 points [-]

I don't think cooperate/defect are good action labels because it doesn't seem much like a standard prisoner's dilemma. It is not quite a game of Chicken either, but it is closer to Chicken than PD.

The mirror/mirror outcome in Ganch is like the car crash in Chicken: it is the worst outcome for everyone, anyone who deviates unilaterally makes both players happier with the result, and it potentially functions as a threat to hold over the other player to get your way if a conflict comes up.

Ganch is unlike Chicken in that Chicken's swerve/swerve outcome gives an honorable tie, while the other player choosing a different move makes you lose outright, which is something you'd like to prevent if you can.

Contrast this with the carve/carve outcome in Ganch, which is worse than having the carving role in a carve/mirror outcome: when you play carve against mirror, at least the room looks OK, but playing carve against carve leaves you with an artistic mess and makes you both look bad. So in Ganch (unlike Chicken) the players have the same worst and second-worst outcomes, and those outcomes sit in a diagonal relationship to each other.

Basically, Ganch is a kind of coordination game and the closest "famously named" coordination game I know to Ganch is the 2x2 Battle Of The Sexes.

This name goes back at least as far as Luce and Raiffa's 1957 book "Games and Decisions: Introduction and Critical Survey", reviewed here.

The basic dynamic in BotS is that you are playing a coordination game, but not one with perfect alignment of goals. It is generally silly to play coordination games without talking first, but in this variant the conversations are more delicate than usual, because self-interest and meta-game fairness issues come up when picking which of the two "basically acceptable" Nash equilibria to aim for.
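
To make the analogy concrete, here is a toy payoff matrix for Ganch with a brute-force check for pure-strategy Nash equilibria. The numbers are invented for illustration; only their ordering matches the argument above:

    from itertools import product

    # Illustrative Ganch payoffs -- my own numbers, chosen only to match
    # the ordering argued above: mirror/mirror worst for both, carve/carve
    # second worst for both, and each player preferring the asymmetric
    # outcome in which they get to mirror.
    ACTIONS = ["carve", "mirror"]
    PAYOFFS = {  # (row_action, col_action) -> (row_payoff, col_payoff)
        ("mirror", "mirror"): (0, 0),
        ("carve", "carve"): (1, 1),
        ("carve", "mirror"): (2, 3),
        ("mirror", "carve"): (3, 2),
    }

    def is_nash(row, col):
        """True if neither player gains by deviating unilaterally."""
        r, c = PAYOFFS[(row, col)]
        row_ok = all(PAYOFFS[(alt, col)][0] <= r for alt in ACTIONS)
        col_ok = all(PAYOFFS[(row, alt)][1] <= c for alt in ACTIONS)
        return row_ok and col_ok

    for row, col in product(ACTIONS, ACTIONS):
        if is_nash(row, col):
            print(f"Nash equilibrium: {row}/{col} -> {PAYOFFS[(row, col)]}")
    # Prints only the two asymmetric outcomes -- the Battle of the Sexes
    # signature: two acceptable equilibria, each favoring a different player.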

Comment author: gjm 15 May 2015 04:08:14PM 3 points [-]

It's alleged that the number of people donating zero is large, and more generally I would expect people to round off their donation amounts when reporting. Ages are clearly also quantized. So there may be lots of points on top of one another. Is it easy to jitter them in the plot, or something like that, to avoid this source of visual confusion?
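
For example, something like this minimal matplotlib sketch is what I have in mind (the data values are made up, since I don't have the raw survey numbers):

    import numpy as np
    import matplotlib.pyplot as plt

    # Quantized survey-style data: many points land exactly on top of
    # each other, hiding the true density.
    rng = np.random.default_rng(0)
    age = np.array([25, 25, 25, 30, 30, 30, 30, 35])
    donation = np.array([0, 0, 0, 100, 100, 500, 500, 0])

    def jitter(values, scale, rng):
        """Displace each point by uniform noise in [-scale, +scale]."""
        return values + rng.uniform(-scale, scale, size=len(values))

    # Jitter both axes a little, and use transparency so remaining
    # overlaps still show up as darker regions.
    plt.scatter(jitter(age, 0.4, rng), jitter(donation, 20, rng), alpha=0.5)
    plt.xlabel("age (jittered)")
    plt.ylabel("reported donation (jittered)")
    plt.show()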

Comment author: JenniferRM 16 May 2015 02:35:02AM 2 points [-]

Just eyeballing the charts without any jitter, it kinda looks like Effective Altruists more often report precise donation quantities, while non-EAs round things off in standard ways, producing dramatic orange lines perpendicular to the Y axis and more of a cloud for the blues. Not sure what to make of this, even if true.

Comment author: JenniferRM 12 May 2015 05:21:54PM *  0 points [-]

Having thought about this for a while, I think a moderately safe thing to do (once it is actually possible to control the biological outcomes with some vague notion of the likely long-term phenotypic results) is to offer financial subsidies to help improve the future generations of those least well off under the current regime, especially for any of a large list of pro-social improvements (which parents get to choose between, and will hopefully choose in different ways, preventing a monoculture).

Also, figuring out the phenotypic results is likely to involve mistakes that cost entire human lives in forgone potential, and there should probably be a way to protect against this downside in advance, like bonds or insurance purchased prior to birth, to make sure we have budgeted to properly care for people who might have been amazing but turned out slightly handicapped.

Comment author: JenniferRM 06 May 2015 07:53:35AM 12 points [-]

Upvoted! Not necessarily for the policy conclusions (which are controversial), but especially for the bibliography, attempt to engage different theories and scenarios, and the conversation it stirred up :-)

Also, this citation (which I found a PDF link for) was new to me, so thanks for that!

McClelland, J.L., Rumelhart, D.E. & Hinton, G.E. (1986) The appeal of parallel distributed processing. In D.E. Rumelhart, J.L. McClelland & G.E. Hinton and the PDP Research Group, “Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1.” MIT Press: Cambridge, MA.

In response to The Stamp Collector
Comment author: JenniferRM 06 May 2015 07:34:42AM *  2 points [-]

This is the first time I've seen anyone on LW make this point with quite this level of explicitness, and I appreciated reading it.

Part of why it might be a useful message around these parts is that it has interesting implications for simulationist ethics, depending on how you treat simulated beings.

Caring about "the outer world" in the context of simulation links naturally to thought experiments like Nozick's Experience Machine but depending on one's approach to decision theory, it also has implications for simulated torture threats.

The decision-theoretic simulated-torture scenario (where your subjective experience of making the decision has lots of simulation measure because your opponent has lots of CPU, and the non-complying answer causes torture in all simulated cases) has been kicking around since at least 2006 or so. My longstanding position (if I were being threatened with simulated torture) has always been to care, as a policy, only about "the substrate" universe.
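
A toy expected-value version of that policy choice (all numbers invented for illustration):

    # The attacker runs N_SIM copies of me; refusing means torture for
    # every simulated copy, complying means a real cost in the substrate.
    N_SIM = 10**6
    COST_OF_COMPLYING = 100.0   # substrate-world harm if the threat works
    COST_OF_TORTURE = 1000.0    # per simulated copy

    def substrate_only(comply):
        """Utility if, as a policy, only the substrate universe counts."""
        return -COST_OF_COMPLYING if comply else 0.0

    def measure_weighted(comply):
        """Utility if every copy's experience counts toward the total."""
        return -COST_OF_COMPLYING if comply else -N_SIM * COST_OF_TORTURE

    for utility in (substrate_only, measure_weighted):
        best = max((True, False), key=utility)
        print(utility.__name__, "->", "comply" if best else "refuse")
    # substrate_only -> refuse; measure_weighted -> comply.
    # Substrate-only values make the threat worthless to issue at all.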

In terms of making it emotionally plausible that I would stick to my guns on this policy, I find it helpful to think about all my copies (in the substrate and in the simulation) being in solidarity with each other on this point, in advance.

Thus, my expectation is that when I sometimes end up experiencing torture for refusing to comply with such a threat, I will get the minor satisfaction of a decisive signal that I'm in a simulation, and of "taking one for the team". Conversely, when I sometimes end up not experiencing simulated torture, it increments my belief that I'm "really real" (or at least in a sim where the Demon is playing a longer and more complex game) and should really keep my eye on the reality ball so that my sisters in the sims aren't suffering in vain.

The only strong argument against this policy is the special case where I'm being simmed with relatively high fidelity to model what the real me is likely to do in a hostile situation in the substrate... like a wargame sort of thing... and in that case, it could be argued that acting "normally" (so that the simulation is very useful) is a traitorous act against the version of me that "really matters" (whom all my reality-focused copies, in the sim and in the real world, would presumably prefer to be less predictable).

For the most part I discount the wargame possibility in practice, because it is such a weirdly paranoid setup that it seems to deserve to be very discounted. (Also, it would be ironic if telling me that I might be in an enemy run wargame sim makes the me that counts the most act erratically in case she might be in the sim!)

I feel like the insight that "the outer world matters" has almost entirely healthy implications. Applying the insight to simulationist issues is fun but probably not that pragmatically productive, except possibly if one is prone to schizophrenia or some such... and that seems like more of an empirical question that could be settled by psychiatrists than by philosophers ;-)

However, the fact that reality-focused value systems and related decision theories are somewhat determinative for the kinds of simulations that are worth running (and hence somewhat determinative of which simulations are likely to have measure as embeddings within larger systems) seems like a neat trick. Normally the metaphysicians claim to be studying "the most philosophically fundamental thing", but this perspective gives reason to think that the most fundamental thing (even before metaphysics?) might be how decisions about values work :-)

Comment author: JenniferRM 15 April 2015 06:06:09AM *  4 points [-]

It feels like there's a never-directly-claimed but oft-implied claim lurking in this essay.

The claim goes: the reason we can't consciously control our perception of the color of the sky is that, if we could, human partisanship would ruin it.

The sane response, upon realizing that the internal color-of-the-sky is determined not by the sky-sensors but by a tribal monkey-mind prone to politicking and groupthink, is to scream in horror and then re-attach the world-model-generator directly to reality as quickly as possible.

If you squint, and treat partisanship as an ontologically basic thing that could exert evolutionary pressure, it almost seems plausible that avoidance of partisanship failure modes might actually be the cause of the wiring of the occipital cortex :-)

However, I don't personally think that "avoiding the ability of partisanship to ruin vision" is the reason human vision is wired up so that we can't see whatever we consciously choose to see.

Part of the reason I don't believe this is that the second half of the implication simply isn't universally true. I know people who report having the ability to modify their visual sensorium at will; for them, it seems they really could do all sorts of things to their visual world model if they put some creativity and effort into it. Also: synesthesia is a thing, and can probably be cultivated...

But even if you skip over such issues as non-central outliers...

It makes conceptual sense to me that there is probably something like a common cortical algorithm (though maybe not exactly like the precise algorithmic sketch being discussed under that name) that actually happens in the brain. Coarsely: it probably has to do with neuron metabolism and how neurons measure and affect each other. Separately from this, there are lots of processes for controlling which neurons are "near" to which other neurons.

My personal guess is that in actual brains the process mixes sparse/Bayesian/pooling/etc perception with negative feedback control... and of course "maybe other stuff too". But fundamentally I think we start with "all the computing elements potentially measuring and controlling their neighbors", and then, when that causes terrible outcomes (like nearly instantaneous subconscious wireheading in organisms with 3 neurons), evolution prunes that particular failure mode out and iterates.
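
A toy version of that failure mode (my own sketch, not anyone's model of real neurons): a regulator whose control elements can act either on the world or on its own sensor will, if it greedily minimizes perceived error, always choose the sensor:

    # A unit is supposed to drive some world quantity toward a target,
    # but its control elements can also write to its own sensor.
    world_state = 0.0
    sensor_bias = 0.0
    TARGET = 10.0

    def perceive():
        return world_state + sensor_bias

    for _ in range(20):
        error = TARGET - perceive()
        world_effect = 0.1 * error   # acting on reality is slow and costly
        sensor_effect = error        # rewriting the measurement is cheap
        # Greedy rule: take whichever action shrinks perceived error
        # fastest. The cheap sensor channel always wins -- wireheading.
        if abs(sensor_effect) >= abs(world_effect):
            sensor_bias += sensor_effect
        else:
            world_state += world_effect

    print(f"perceived = {perceive():.1f}, reality = {world_state:.1f}")
    # perceived = 10.0, reality = 0.0 -- the perceived error is gone
    # while the quantity the unit was supposed to regulate is untouched.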

However, sometimes top-down control of measurement is functional. It happens subconsciously, in ancient and useful ways, in our own brains, as when efferent cochlear innervation projects something like "expectations about what is to be heard" that makes the cochlea differentially sensitive to inputs, effectively increasing the dynamic range of sounds that can be neurologically distinguished.

This theory predicts new wireheading failures at every new level of evolved organization. Each time several attempts are made to build variations on a new kind of measuring-and-optimizing process/module/layer, some of those processes will use their control elements to manipulate their perception elements. Sometimes they will do poorly rather than well, with wireheading as a large and probably dysfunctional attractor.

"Human partisanship" does seem to be an example of often-broken agency in an evolutionarily recent context (ie the context of super-Dunbar socially/verbally coordinated herds of meme-infected humans) and human partisanship does seem pretty bad... but as far as I can see, partisanship is not conceptually central here. And it isn't even the conceptually central negative thing.

The central negative thing, in my opinion, is wireheading.

Comment author: JenniferRM 09 April 2015 07:31:29AM *  5 points [-]

9. Slowly lower the temperature to -173 centigrade or lower, as you wish.

If I'm reading the chart correctly, the additional cooling would send the ice III through the zone marked as ice II and then... wait for it... into the zone of ice nine!!!

If the secret of eternal life involves the non-fictitious version of ice IX... I mean... that seems like "the author" would be clubbing us over the head with the implication that we're living in a post-modern novel :-P

On a less metaphysical note, it seems like there is a technical question about whether additional cooling might cause problems due to transitions between different kinds of ice? From Le Wik on the real ice IX (not the fictional ice-nine):

Ice IX is a form of solid water stable at temperatures below 140 K and pressures between 200 and 400 MPa. It has a tetragonal crystal lattice and a density of 1.16 g/cm³, 26% higher than ordinary ice. It is formed by cooling ice III from 208 K to 165 K (rapidly—to avoid forming ice II). Its structure is identical to ice III other than being proton-ordered.

It looks like if you were in the ice IX zone, and then heated up from LN2 temperatures, you would necessarily go through ice II on the way to liquid water (see this awesome site):

Ice-nine (ice IX) is the low-temperature equilibrium, slightly denser, structure of ice-three (Space group P41212, cell dimensions 6.692 Å (a) and 6.715 Å (c) at 165 K and 280 MPa [385]). It is metastable in the ice-two phase space and converts to ice-two, rather than back to ice-three, on warming. The change from proton disordered is a partial process starting within ice-three that is only completed at lower temperatures, but with a first order transition near 126 K[1087]. The hydrogen bonding is mostly proton-ordered as ice-three undergoes a proton disorder-order transition to ice-nine when rapidly cooled in liquid nitrogen (77 K, so avoiding ice-two formation, see Phase Diagram); ice-three and ice-nine having identical structures apart from the proton ordering [389].

From what I can tell, if you start at ice III and cool things way down from there, you'll have to spend some time in the ice II zone, at the very least while being re-heated from ice IX, and perhaps as the state to be kept in for very long term storage. Luckily, ice II appears to also have a density of ~1.16 g/cm³, so it is also denser than normal water and presumably would not pop cellular membranes due to expansion :-)

Comment author: is4junk 20 March 2015 07:14:25PM *  4 points [-]

Brokerage accounts (Fidelity/E*Trade) are better than bank accounts in every way (in the US). Use them with a margin account to safely maximize your investments. The margin account will basically function as an overdraft / short-term loan at very favorable rates. Reasons:

  • Direct deposit into your brokerage account - all surplus money should be swept into an index fund (SPY or a global equivalent)
  • You can have an ATM card and do all your checks through them, usually for free
  • They all have a free bill-pay service
  • Depositing checks - they can be mailed in
  • Even if you don't invest the money, it will automatically sit in a money market account earning you interest
  • Investment interest payments (on the margin) can be tax-advantaged, unlike credit card payments

I didn't have a bank account for over a decade. There is no reason to think of checking and savings as separate things.

Concerns about a margin account being scary only apply when you margin a substantial fraction of your account. If you stay under 10% and invest in stable index funds, you won't have a worry.
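
A back-of-envelope check of that 10% rule of thumb (my own illustrative numbers; the 25% maintenance requirement is just a typical figure, not any particular broker's):

    PORTFOLIO = 50_000.0
    MARGIN_LOAN = 0.10 * PORTFOLIO   # the "under 10%" rule of thumb
    MAINTENANCE = 0.25               # a typical maintenance requirement

    value_after_crash = PORTFOLIO * 0.5          # market halves
    equity = value_after_crash - MARGIN_LOAN
    required = MAINTENANCE * value_after_crash

    print(f"equity ${equity:,.0f} vs required ${required:,.0f}")
    # equity $20,000 vs required $6,250 -- even a 50% crash leaves a
    # wide buffer before a margin call at this leverage level.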

Instead of investing in SPY, consider Berkshire Hathaway (BRK) for the tax advantages (Warren Buffett doesn't like to pay taxes). I'd look at Costco's ShareBuilder if you can't afford to buy one share.

Comment author: JenniferRM 04 April 2015 06:28:03PM 4 points [-]

This seems like awesome advice that I have never heard before. Do you think it might be dangerous for some people? Like is it a "you must be this tall to ride this ride" kind of thing?

Also, it might help to make this actionable by spelling out the steps someone would take to convert their financial-service setup to this. Do you have a good method for picking a broker? If someone is not very financially savvy (like they didn't know what a brokerage even was, exactly), what should they do right after reading this to start on the path to setting things up this way?
