Keep in mind that people who apply serious life-changing ideas after reading about them in fiction are the exception rather than the norm. Most people who aren't exceptionally intellect-oriented need to personally encounter someone who "has something" that they themselves wish they had, and then have some reason to think that they can imitate them in that respect. Fiction just isn't it, except possibly in some indirect ways. For people who don't identify strongly with their intellectual characteristics, rationalist communities competing in the "real-world" arena, by living lives that other people want to and can emulate, are a far more effective angle.

It seems at best fairly confused to say that an L-zombie is wrong because of something it would do if it were run, when we're evaluating what it would say or do against the situation in which it wasn't run. Where you keep saying "is" and "concludes" and "being" you should be saying "would", "would conclude", and "would be", all of which is a gloss for "would X if it were run"; and in the (counterfactual) world where the L-zombie "would" do those things it "would be running" and therefore "would be right". Being careful with your tenses here will go a long way.

Nonetheless I think the concept of an L-zombie is useful, if only to point out that computation matters. I can write a simple program that encapsulates all possible L-zombies (or rather would express them all, if it were run), yet we wouldn't consider that program to be those consciousnesses--a point well worth remembering in numerous examinations of the topic.
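
Here's roughly what I mean, as a minimal Python sketch (the binary-string enumeration is just my illustrative choice): a generator that, if it were ever run, would produce the source text of every finite program, and hence of every L-zombie among them, without instantiating any of the computations those programs describe.

```python
import itertools

# Every L-zombie is (by hypothesis) a program that is never actually run.
# If executed, this generator would yield the source text of every finite
# program over a fixed alphabet, and therefore every L-zombie among them,
# without ever running any of the programs it writes down.
def all_possible_programs(alphabet="01"):
    for length in itertools.count(1):
        for chars in itertools.product(alphabet, repeat=length):
            yield "".join(chars)  # a program's source, merely written down

# Enumerating descriptions is not instantiating the computations they
# describe; the computation itself is what matters.
```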

Once you know about affective death spirals, you can use them in tricky ways. Consider, for example, that you got into an affective death spiral about capital-R Rationality which caused you to start entertaining delusions (like that teaching Rationality to your evil stepmother would finally make her love you, or whatever). If you know that this is an affective death spiral, you can do an "affective death spiral transfer" that helps you avoid the negative outcome without needing to go to war with your own positive feelings: in this case, realise that it's incredibly awesome that Rationality is so cool that it can even help you correct an affective death spiral about itself. Of course, you have to be careful to become actually good at this (but you get Rationality points for realising it, and triple Rationality points for actually achieving it. Awesome!!!). You also get huge Rationality boosts for realising failure modes in your pursuit of Rationality in general (because that's totally Rational too! See how that works?).

Affective death spirals are like anti-akrasia engines, so getting rid of them entirely might be substantially less advantageous than applying some clever munchkinry to them.

Ah well, I had to ask. I know religion is usually the "other team" for us, so I hope I didn't push any buttons by asking--definitely not my intention.

This article is awesome! I've been doing this kind of stuff for years with regard to motivation, attitudes, and even religious belief. I've used the terminology of "virtualisation" to talk about my thought-processes/thought-rituals in carefully defined compartments that give me access to emotions, attitudes, skills, and so on that I would otherwise find difficult to reach. I even have a mental framework I call "metaphor ascendence" for converting false beliefs into virtualised compartments so that they can be carefully dismantled without loss of existing utility. It's been nearly impossible to explain to other people how I do and think about this, though often you can show them how to do it without explaining. And for me the major in-road was totally the realisation that there exist tasks which are only possible if you believe they are--guess I'll have to check out The Phantom Tollbooth (I've never read it).

This might be a bit of a personal question (feel free to pm or ignore), but have you by any chance done this with religious beliefs? I felt like I got a hint of that between the lines, and it would be amazing to find someone else who does this. I've come across so many people in my life who threw away a lot of utility when they left religion, never realising how much of it they could have kept or converted without sacrificing their integrity. One friend even teasingly calls me the "atheist Jesus" because of how much utility I pumped back into his life just by leveraging his personal religious past. Religion has been under strong selective pressure for a long time, and has accumulated a crapload of algorithmic optimisations that can easily get tossed by its apostates just because they're described in terms of false beliefs. My line is always, "I would never exterminate a nuisance species without first sequencing its DNA." You just have to remember that asking the organism about its own DNA is a silly strategy.

Anyways, I could go on for a long time about this, but this article has given me the language to set up a new series along these lines that I've been trying to rework for Less Wrong, so I'd better get cracking. But the buzz of finding someone like-minded is an awesome bonus. Thank you so much for posting.

p.s. I have to agree with various other commenters that I wouldn't use the "dark arts" description myself--mind optimisation is at the heart of legit rationality. But I see how it definitely makes for useful marketing language, so I won't give you too much of a hard time for it.

You seem to be making a mistake in treating bridge rules/hypotheses as necessary--perhaps to set up a later article?

I, like Cai, tend to frame my hypotheses in terms of a world-out-there model combined with bridging rules to my actual sense experience; but this is merely an optimisation strategy to take advantage of all my brain's dedicated hardware for modelling specific world components, preprocessing of senses, and so on. The bridging rules certainly aren't logically required. In practice there is an infinite family of equivalent models over my mental experience which would be totally indistinguishable, regardless of how I choose to "format" that idea mentally. My choice of mental-model format is purely about efficiency considerations, not a claim about either my senses or the phenomena behind their behaviour. I'm just better at tic-tac-toe than JAM.

To see this, let's say Cai uses Python internally to describe zir hypotheses A and B in their entirety. Clearly, ze can write either program with or without bridging rules and still have it yield identical predictions in all possible circumstances. Cai's true hypothesis is the behaviour of the Python program as a whole, regardless of how ze actually structures it: the combined interaction of all of it together. Both hypotheses could be written purely in terms of predictions about how Cai's senses will change, thereby eliminating the "type error" issue. And if Cai is as heavily optimised for a particular structure of hypothesis as humans are, Cai can just use that--but for performance reasons, not because Cai has some magical way of knowing at what level of abstraction zir existence is implemented. Alternatively, Cai might use a particular hypothesis structure because of the programmer's arbitrary decision when writing zir. But the way the hypothesis is structured mentally isn't a claim about how the universe works. The "hard problem of consciousness" is a problem about human intuitions, not a math problem.
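
To make that concrete, here's a toy sketch in Python (an invented micro-world of my own, with hypothetical names, not anything from the article): the same hypothesis written once as a world model plus a bridging rule, and once purely as a mapping from past sense data to the next predicted sense datum. The two structures yield identical predictions in every circumstance, so the choice between them is an implementation detail, not a claim about the territory.

```python
# Toy "Cai" world (hypothetical): the only sense datum is a colour that
# alternates each tick between "GREEN" and "RED".

# Hypothesis A: an explicit world model plus a bridging rule.
def predict_with_bridge(sense_history):
    switch = len(sense_history) % 2           # world model: a hidden switch flips each tick
    return "GREEN" if switch == 0 else "RED"  # bridging rule: state 0 feels GREEN, 1 feels RED

# Hypothesis B: the same content stated purely over sense experience.
def predict_senses_only(sense_history):
    if not sense_history:
        return "GREEN"
    return "RED" if sense_history[-1] == "GREEN" else "GREEN"

# Identical predictions in all possible circumstances.
history = []
for _ in range(6):
    a, b = predict_with_bridge(history), predict_senses_only(history)
    assert a == b
    history.append(a)
print(history)  # ['GREEN', 'RED', 'GREEN', 'RED', 'GREEN', 'RED']
```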

I’ve got kind of a fun rationalist origin story because I was raised in a hyper-religious setting and pretty much invented rationalism for use in proselytisation. This placed me on a path of great transformation in my own personal beliefs, but one that has never been marked by a “loss of faith” scenario, which in my experience seems atypical. I’m happy to type it up if anyone’s interested, but so far the lack of action on comments I make to old posts has me thinking that could be a spectacularly wasted effort. Vote, comment, or pm to show interest.

Causal knowledge is required to ensure success, but not to stumble across it. Over time, noticing (or stumbling across, if you prefer) relationships among those stumbled-upon successes can quickly coalesce into a model of how to intervene. Isn't this essentially how we believe causal reasoning originated? In a sense, all DNA is information about how to intervene that, once stumbled across, persisted due to its efficacy.
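
As a toy illustration (made-up numbers and a hypothetical "plant" world, just a Python sketch): an agent with no causal model tries actions blindly, and merely tallying which actions tended to precede success coalesces into a workable rule for intervening.

```python
import random
from collections import defaultdict

random.seed(0)

# Hidden mechanism the agent knows nothing about: "water" makes the plant
# thrive 80% of the time, the other actions only 10% of the time.
def hidden_world(action):
    return random.random() < (0.8 if action == "water" else 0.1)

actions = ["water", "sing", "rotate"]
successes, trials = defaultdict(int), defaultdict(int)

# Blind stumbling: no causal knowledge is needed to hit occasional successes.
for _ in range(300):
    action = random.choice(actions)
    trials[action] += 1
    if hidden_world(action):
        successes[action] += 1

# Noticing which stumbles preceded success coalesces into a rule for intervening.
best = max(actions, key=lambda a: successes[a] / max(trials[a], 1))
print(best)  # almost certainly "water"
```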

All these conclusions seem to require simultaneity of causation. If earthquakes almost always caused recessions, but not until one year after the earthquake, and if recessions drastically increased the number of burglars, but not until one year after the recession, then drawing any of the conclusions you made from a survey taken at a single point in time would be entirely unwarranted. Doesn't that mean you're essentially measuring entailment rather than causation via a series of physical events which take time to occur?
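
A quick simulation of that worry (invented probabilities, purely illustrative Python): with one-year lags on each causal link, this year's earthquakes and this year's burglary rates co-occur at roughly chance level, while the real relationship only shows up two years later, which is exactly what a single-point-in-time survey would miss.

```python
import random
random.seed(1)

YEARS = 10_000
earthquake = [random.random() < 0.2 for _ in range(YEARS)]
recession = [False] * YEARS
burglars_high = [False] * YEARS

# Lagged causation: each effect follows its cause by one full year.
for t in range(1, YEARS):
    recession[t] = earthquake[t - 1] and random.random() < 0.9
    burglars_high[t] = recession[t - 1] and random.random() < 0.9

# A survey at a single point in time can only compare same-year variables.
same_year = sum(earthquake[t] and burglars_high[t] for t in range(YEARS))
two_year_lag = sum(earthquake[t] and burglars_high[t + 2] for t in range(YEARS - 2))
print(same_year, two_year_lag)  # same-year co-occurrence sits near chance; the lag-2 count is several times higher
```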

Also, the virtue theory of metabolism is so ridiculous that it seems only to be acting as a caricature here. Wouldn't the theory that “exercise normally metabolises fat and precursors of fat, reducing the amount of weight put on” result in a much more useful example? Or is there a subtext I'm missing here, like the excessive amount of fat-shaming done in many of the more developed nations?
