Comment author: ChristianKl 31 August 2014 08:27:06AM 0 points [-]

If you read it carefully, my first rephrasing actually says that you torture the original person for a week, and then you (almost) perfectly erase their memories of (and physical changes from) that week.

This depends very much on the definition of "original" and on notions of identity. You can't expect those to behave in a common-sense manner in such a thought experiment.

Comment author: bogdanb 31 August 2014 08:56:40AM 0 points [-]

Sure, but then why do you expect that memory and experience would behave in a common-sense manner? (At least, that's what I think you were doing in your first comment.)

I interpreted the OP as “I’m confused about memory and experience; let’s try a thought experiment about a very uncommon situation just to see what we think would happen”. And your first comment reads to me as “you picked a bad thought experiment, because you’re not describing a common situation”. That seems to completely miss the point: the whole purpose of the thought experiment was to investigate the consequences of something very distinct from situations where “common sense” has real experience to rely on.

The part about torturing children I don’t even get at all. Wondering about something seems to me almost the opposite of the philosophy of “doing something because you think you know the answer”. Should we never do thought experiments, because someone might act on mistaken assumptions about those ideas? Not thinking about something before doing it sounds to me like exactly the opposite of the correct strategy.

Comment author: bogdanb 31 August 2014 07:57:19AM 2 points [-]

Once AI is developed, it could "easily" colonise the universe.

I was wondering about that. I agree with the “could”, but is there any discussion of how likely it is that an AI would decide to do that?

Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes trying to create military AI, and intentional creation by “destroy the world” extremists or something.)

But if someone is trying to create an FAI, and there is an accident with early prototypes, it seems likely that most of those prototypes would be programmed with only planet-local goals. Similarly, it doesn’t seem likely that intentionally-created weapon-AI would be programmed to care about what happens outside the solar system, unless it’s created by a civilization that already does, or is at least attempting, interstellar travel. Creators that care about safety will probably try to limit the focus, even imperfectly, both to make reasoning easier and to limit damage, and weapons-manufacturers will try to limit the focus for efficiency.

Now, I realize that a badly done AI could decide to colonize the universe even if its creators didn’t program it for that initially, and that simple goals can have that as an unforeseen consequence (like the prototypical paperclip maximizer). But is there any discussion of how likely that is in a realistic setting? Perhaps the filter is that the vast majority of AIs limit themselves to their original solar system.

Comment author: peter_hurford 30 August 2014 02:34:27PM 2 points [-]

life, especially technological civilization, requires lots of heavy elements, which didn't exist too early in the universe, meaning only stars of about the same generation as the Sun have a chance to have it

Going off of this, what if life is somewhat common, but we're just among the first life in the universe? That doesn't seem like an "early filter", so even if this possibility is really unlikely, it would still break your dichotomy.

Comment author: bogdanb 31 August 2014 07:38:23AM *  2 points [-]

The problem with that is that life on Earth appeared about 4 billion years ago, while the Milky Way is more than 13 billion years old. If life were somewhat common, we wouldn’t expect to be the first: there was time for it to evolve several times in succession, and lots of solar systems where it could have done so.

A possible answer could be that there was a very strong early filter during the first part of the Milky Way’s existence, and that filter lessened in intensity in the last few billion years.

The only examples I can think of are elemental abundance (perhaps a young galaxy has far fewer systems with a diverse enough chemical composition) and supernova frequency (perhaps a young galaxy is sterilized by frequent, large supernovas much more often than an older one). But AFAIK both of those variations can be calculated well enough for a Fermi estimate from what we know, so I’d expect that someone who knows the subject much better than I do would already have made that point if these were plausible answers.
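(To illustrate the shape of the estimate I have in mind, here’s a toy sketch in Python; every number in it is a made-up placeholder, not a real astrophysical value, and the Poisson sterilization model is just an assumption for illustration.)

    import math

    def expected_biospheres(n_systems, metal_rich_fraction,
                            sterilization_rate, window_gyr):
        """Expected number of systems where life could arise and survive.

        sterilization_rate: sterilizing supernova events per system
        per Gyr, modeled as a Poisson process, so the probability of
        surviving the window is exp(-rate * time).
        """
        survival = math.exp(-sterilization_rate * window_gyr)
        return n_systems * metal_rich_fraction * survival

    # Early Milky Way (placeholder guesses): few metal-rich systems,
    # frequent nearby supernovas.
    early = expected_biospheres(1e11, metal_rich_fraction=0.001,
                                sterilization_rate=2.0, window_gyr=4.0)

    # Recent era (placeholder guesses): more metal-rich systems and a
    # calmer supernova environment.
    late = expected_biospheres(1e11, metal_rich_fraction=0.1,
                               sterilization_rate=0.2, window_gyr=4.0)

    # With these made-up numbers the recent era hosts ~10^5 times as
    # many surviving biospheres as the early one.
    print(f"early: {early:.2g}  late: {late:.2g}  ratio: {late / early:.2g}")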

Comment author: ChristianKl 25 August 2014 08:56:44PM 0 points [-]

Your position is like objecting to a physics thought experiment that assumes frictionless surfaces, while the same thought experiment also assumes mass-less objects.

If the goal of the thought experiment is to think about the notion of mass and how it affects friction, then that's indeed a bad thought experiment.

Your rephrasing essentially says that you torture an identical copy of a person for a week. That raises all sorts of issues around identity and copying, but it ceases to be an experiment about memory.

Comment author: bogdanb 31 August 2014 07:15:10AM 0 points [-]

Your rephrasing essentially says that you torture an identical copy of a person for a week.

If you read it carefully, my first rephrasing actually says that you torture the original person for a week, and then you (almost) perfectly erase their memories of (and physical changes from) that week.

This is not changing the nature of the thought experiment in the OP; it is exactly the same experiment, plus a hypothetical example of how it could be achieved technically, because you implied that the experiment in the OP is impossible to achieve and thus ill-posed.

Or, at least, that’s how I interpreted “Of course I'm fighting the hypothetical thought experiment. I think the notion of experience without being affected doesn't make any sense.” I just gave an example of how one can experience something and not be affected. It was a somewhat extreme example, but it seems appropriate when Omega is involved.

In response to comment by [deleted] on Memory is Everything
Comment author: ChristianKl 23 August 2014 01:36:28PM 0 points [-]

That isn't what I'm arguing. I'm arguing that his notion of experience is fundamentally flawed.

If you engage in thought experiments that are built on mistaken assumptions about human cognition, you likely won't move toward understanding the subject matter better. Instead, you propagate errors across your whole belief system.

There are much nicer real-world examples you can use when you want to talk about the trade-off between remembered experience and experience as felt in the moment: problems that actually matter for day-to-day actions.

Comment author: bogdanb 25 August 2014 08:29:05PM 0 points [-]

It seems rather silly to argue about that when the thought experiment starts with Omega and bets on the order of a billion dollars. That allows glossing over a lot of details. Your position is like objecting to a physics thought experiment that assumes frictionless surfaces, while the same thought experiment also assumes mass-less objects.

As a simple example: Omega might make a ridiculously precise scan of your entire body, subject you to the experiment (depending on which branch you chose), then restore each molecule to the same position and state it had during the initial scan, within the precision limits of that scan. Sure, there’ll be quantum uncertainty and such, but there’s no obvious reason why the differences would be greater than, say, those that appear while nodding off for a couple of minutes. Omega even has the option of anesthetizing and freezing you during the scan and restoration, to reduce errors. You’d remember that part of the procedure, but you still wouldn’t be affected by what happened in between.

(If you think about it, that’s very nearly equivalent to applying the conditions of the bet, with extremely high time acceleration, or while you’re suspended, to a very accurate simulation of yourself. The end effect is the same: an instance of you experiences torture/ultra-pampering for a week, and then an instance of you, which doesn’t remember the first part, experiences gaining/losing a billion dollars.)

Comment author: So8res 17 January 2014 01:46:43AM 1 point [-]

To address your postscript: "Dark Arts" was not supposed to mean "bad" or "irrational", it was supposed to mean "counter-intuitive, surface-level irrational, perhaps costly, but worth the price".

Strategically manipulating terminal goals and intentionally cultivating false beliefs (with cognitive dissonance as the price) seem to fall pretty squarely in this category. I'm honestly not sure what else people were expecting. Perhaps you could give me an idea of things that squarely qualify as "dark arts" under your definition?

(At a guess, I suppose heavily leveraging taboo tradeoffs and consequentialism may seem "darker" to the layman.)

Comment author: bogdanb 28 January 2014 07:30:28PM *  4 points [-]

perhaps costly, but worth the price

How about extending the metaphor and calling these techniques "Rituals" (they require a sacrifice, and even though it’s not as “permanent” as in HPMOR, it’s usually dangerous), reserving “Dark” for the arguably-immoral stuff?

Comment author: pianoforte611 19 January 2014 08:23:50PM 2 points [-]

Ah! So that's what I've been doing wrong. When I tried to go to the gym regularly with the goal of getting stronger/bigger/having more energy, the actual process of exercising was merely instrumental to me, so I couldn't motivate myself to do it consistently. Two of my friends who are more successful at exercising than I am have confirmed that for them exercising is both instrumental and a goal in and of itself.

But while I'm down with the idea of hacking terminal goals, I have no idea how to do that. Whereas compartmentalizing is easy (just ignore evidence against the position you want to believe), goal hacking sounds very difficult. Any suggestions/resources for learning how to do this?

Comment author: bogdanb 28 January 2014 07:04:37PM *  0 points [-]

The nice thing about hacking instrumental goals into terminal goals is that, while they’re still instrumental, you can easily change them.

In your case: You have the TG of becoming fit (BF), and you previously decided on the IG of going to the gym (GG). You’re asking about how to turn GG into a TG, which seems hard.

But notice that you picked GG as an instrument toward attaining BF before thinking about Terminal Goal Hacking (TGH), which suggests it’s not optimal for attaining BF via TGH. The better strategy would be to first ask yourself whether another IG would work better for the purpose. For example, you might try lots of different sports, especially ones that you instinctively find cool or, if you’re lucky, that you’re good at, which means you might actually adopt them as TGs more-or-less without trying.

(This is what happened to me, although in my case it was accidental. I tried bouldering and it stuck, even though no other sport I’ve tried in the previous 30 years did.)

Part of the trick is to find sports (or other IG/TG candidates) that are convenient (close to work or home, not requiring more participants than you have easy access to) and fun, to the point that when you get tired you force yourself to continue because you want to play some more, not because of how buff you want to get. In the case of sports, try everything, including variations, not just what’s popular or well known; you might be surprised.

(In my case, I don’t much like climbing tall walls: I get tired, bored, and frustrated, and want to give up when they’re too hard. One might expect bouldering to be the same (it’s basically the same thing, except with much shorter but harder walls), but the effect on me was completely different: if a problem is too hard, I get more motivated to figure out how to climb it. The point is not to try bouldering specifically, but to try variations of sports. E.g., don’t just try tennis and give up; try doubles and singles, try squash, try ping-pong, try real tennis, try badminton; one of those might work.)

Comment author: RichardKennaway 11 December 2013 09:03:11AM *  2 points [-]

I do not agree that this is an accurate map. Consider a random collection of dots: the human eye can find patterns in it.

Not enough to compress it substantially.

Random collection of dots: http://kennaway.org.uk/images/noise.jpg

Not a random collection of dots: http://kennaway.org.uk/images/notnoise.jpg

ETA: Well, those links weren't working, then they were working, currently they aren't. The actual URLs are http://kennaway.org.uk/images/noise.jpg and http://kennaway.org.uk/images/notnoise.jpg

ETA2: And now they're working, or not, at random.

Comment author: bogdanb 22 December 2013 08:53:40AM 0 points [-]

It doesn’t work if you just click the link, but if you copy the link address and paste it into the browser’s address bar, it works. (Clicking sends a Referer header from this site, which the server apparently blocks as a hotlink; a pasted URL sends no Referer header.)
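(For the curious: hotlink blocking along these lines usually amounts to a server-side check on the Referer header. Here’s a minimal sketch of the general idea in Python; the function and the host check are purely illustrative, and I have no idea how kennaway.org.uk’s server is actually configured.)

    from urllib.parse import urlparse

    SITE_HOST = "kennaway.org.uk"

    def allow_image_request(referer):
        """Allow requests with no Referer (e.g., a URL pasted into the
        address bar) or an on-site Referer; block third-party referers
        (hotlinks)."""
        if not referer:  # pasted URL: the browser sends no Referer
            return True
        return urlparse(referer).hostname == SITE_HOST

    # Clicking the link here sends a lesswrong.com Referer -> blocked:
    print(allow_image_request("http://lesswrong.com/some/page"))  # False
    # Pasting the URL sends no Referer -> allowed:
    print(allow_image_request(None))                              # True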

In response to comment by JoshuaFox on Lotteries & MWI
Comment author: [deleted] 19 November 2013 05:59:58PM 2 points [-]

There are some special cases. If someone thinks his life is worthless unless he has something that could be bought or done with $1,000,000, then the gamble could be justified: the thing he buys pumps up the utility so much that it's more than a thousand times the utility of $1,000. But this is probably a really rare case.

In response to comment by [deleted] on Lotteries & MWI
Comment author: bogdanb 25 November 2013 07:17:33AM 2 points [-]

Medical issues that make life miserable but can be fixed with ~$1M would be a (slightly more concrete) example. Relatively rare, as you said.

Comment author: fubarobfusco 02 November 2013 05:59:58AM 13 points [-]

I have a recurring memory glitch that tells me I used to be able to levitate or fly. According to this memory, I used to be able to float a few feet off the ground simply by jumping up and holding there, choosing not to come down. There's a specific sensation memory associated with this, a tugging or lifting feeling in my abdomen.

The inference that follows, since I can't do it now, is that I forgot how to do it, or lost the ability somehow. This is moderately disappointing until I tell myself that it's just a memory glitch and humans can't levitate.

I have a few hypotheses about this:

  • It's a memory of a dream, possibly a recurring dream. Dreams of flying are pretty common.
  • It's a distorted memory of being picked up and carried as a small child.
  • It's a distorted memory of a childhood habit of jumping off of things. (I did this frequently, sometimes getting in trouble in grade school for jumping off of things that were too high for an adult to jump off of safely, though I was never injured.)
Comment author: bogdanb 02 November 2013 07:08:58PM 3 points [-]

I have a rare but recurring dream that resembles very much what you describe.
