Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, August 28 - September 3, 2017

1 Post author: Thomas 28 August 2017 06:11AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (69)

Comment author: Elo 29 August 2017 11:27:05PM 8 points [-]

Hamming question: if your life were a movie and you were watching your life on screen, what would you be yelling at the main character? (example: don't go in the woods alone! Hurry up and see the quest guy! Just drop the sunk costs and do X) (optional - answer in public or private)

Comment author: Screwtape 30 August 2017 03:25:56PM 1 point [-]
Comment author: Error 28 August 2017 05:38:09PM 4 points [-]

I'm looking for an anecdote about sunk costs. Two executives were discussing some bad business situation, one of them asks "look, suppose the board were to fire us and bring new execs in. What would those guys do?" "Get us out of the X business" "Then what's to stop us from leaving the room, coming back in, and doing exactly that?"

...but all my google-fu can't turn up the original source. Does it sound familiar to anyone here?

Comment author: Unnamed 28 August 2017 06:42:43PM 6 points [-]

Intel, 1985.

Grove says he and Moore were in his cubicle, "sitting around ... looking out the window, very sad." Then Grove asked Moore a question.

"What would happen if somebody took us over, got rid of us — what would the new guy do?" he said.

"Get out of the memory business," Moore answered.

Grove agreed. And he suggested that they be the ones to get Intel out of the memory business.

Comment author: Error 28 August 2017 08:12:17PM 1 point [-]

Thanks, that's the one.

Comment author: cousin_it 31 August 2017 10:08:19AM *  2 points [-]

It seems to me that there's no difference in kind between moral intuitions and religious beliefs, except that the former are more deeply held. (I guess that makes me a kind of error theorist.)

If that's true, then FAI designers shouldn't work on approaches like "extrapolation" that can convert a religious person to an atheist, because the same procedure might convert you into a moral nihilist. The task of FAI designers is more subtle: devise an algorithm that, when applied to religious belief, would encode it "faithfully" as a utility function, despite the absence of God.

Does that sound right? I've never seen it spelled out as strongly, but logically it seems inevitable.

Comment author: Oscar_Cunningham 31 August 2017 12:06:49PM 0 points [-]

It seems to me that there's no difference in kind between moral intuitions and religious beliefs,

That just doesn't seem true to me. I agree that there's often a difference between religious beliefs and ordinary factual beliefs, but I don't think that religious beliefs are the same sort of thing as moral intuitions. They just feel different to me.

For one thing religious beliefs are often a "belief in belief" whereas I don't think moral beliefs are like that.

Also moral beliefs seem more instinctual, whereas religious beliefs are taught.

Comment author: entirelyuseless 01 September 2017 02:15:00AM 1 point [-]

For one thing religious beliefs are often a "belief in belief" whereas I don't think moral beliefs are like that.

I think moral beliefs are very often like that, at least for some people. See the comment here and JM's response.

Stephen Diamond makes a related argument, namely that people will not give up moral beliefs because it is obviously wicked to do so, according to those very same moral beliefs, in the same way that a religious person will not give up their religious beliefs because those beliefs say it would be wicked to do so.

Comment author: cousin_it 31 August 2017 12:28:23PM 1 point [-]

Every emotion connected with moral intuitions, e.g. recoiling from a bad act, can also happen due to religious beliefs.

Comment author: morganism 30 August 2017 02:57:42AM *  2 points [-]

Low-fat diet could kill you, major study shows (Lancet Canadian study of 135,000 adults )

http://www.telegraph.co.uk/news/2017/08/29/low-fat-diet-linked-higher-death-rates-major-lancet-study-finds/amp/

"those with low intake of saturated fat raised chances of early death by 13 per cent compared to those eating plenty.

And consuming high levels of all fats cut mortality by up to 23 per cent."

“Higher intake of fats, including saturated fats, are associated with lower risk of mortality.”

“Our data suggests that low fat diets put populations at increased risk for cardiovascular disease."

Comment author: halcyon 29 August 2017 08:23:17PM 1 point [-]

Integrals sum over infinitely small values. Is it possible to multiply infinitely small factors instead? For example, the integral of dx is a constant, since infinitely many infinitely small values can sum up to any constant. But can you do something along the lines of taking an infinitely large root of a constant, and get an infinitesimal differential that way? Multiplying those differentials will yield some constant again.

My off the cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move the integrations into the powers that other stuff is raised by. But if we somehow end up with complicated patterns of logarithms and exponentiations, like if that other stuff itself involves calculus and so on, then who knows? Is there a standard name for this operation?

Comment author: Manfred 29 August 2017 09:41:37PM 3 points [-]

What is the analogy of sum that you're thinking about? Ignoring how the little pieces are defined, what would be a cool way to combine them? For example, you can take the product of a series of numbers to get any number, that's pretty cool. And then you can convert a series to a continuous function by taking a limit, just like an integral, except rather than the limit going to really small pieces, the limit goes to pieces really close to 1.

You could also raise a base to a series of powers to get any number, then take that to a continuous limit to get an integral-analogue. Or do other operations in series, but I can't think of any really motivating ones right now.

Can you invert these to get derivative-analogues (wiki page)? For the product integral, the value of the corresponding derivative turns out to be the limit of more and more extreme inverse roots, as you bring the ratio of two points close to 1.

Are there any other interesting derivative-analogues? What if you took the inverse of the difference between points, but then took a larger and larger root? Hmm... You'd get something that was 1 almost everywhere for nice functions, except where the function's slope got super-polynomially flat or super-polynomially steep.
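One concrete derivative-analogue of the product integral is the geometric (multiplicative) derivative: the limit of (f(x+h)/f(x))^(1/h), which equals exp(f'(x)/f(x)). A minimal numerical sketch in Python (the test function and step size are my own choices, not from the thread):

```python
import math

def geometric_derivative(f, x, h=1e-6):
    """Limit of (f(x+h)/f(x)) ** (1/h) as h -> 0, i.e. exp(f'(x)/f(x))."""
    return (f(x + h) / f(x)) ** (1.0 / h)

# For f(x) = exp(x**2) we have f'/f = 2x, so the geometric derivative
# is exp(2x); at x = 1 that is e**2, about 7.389.
gd = geometric_derivative(lambda x: math.exp(x * x), 1.0)
print(gd)
```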

Comment author: halcyon 31 August 2017 12:33:21PM *  0 points [-]

Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause their exponentiated, etc. value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: Each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs.

(That is the context in which this came up. The specific situation is more technically convoluted.)

Comment author: Oscar_Cunningham 29 August 2017 10:57:17PM 2 points [-]

Good question!

The answer is called a Product integral. You basically just use the property

log(ab) = log(a) + log(b)

to turn your product integral into a normal integral

product integral of f(x) = e ^ [normal integral of log(f(x))]
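This identity is easy to check numerically. A minimal Python sketch (the example function f(x) = x on [1, 2] and the grid size are my own choices): the Type I product integral, the limit of the product of f(x_i)^dx, equals exp of the ordinary integral of log f.

```python
import math

def product_integral(f, a, b, n=100_000):
    """Type I (geometric) product integral: the limit of
    prod f(x_i) ** dx, computed as exp(integral of log f)."""
    dx = (b - a) / n
    log_sum = 0.0
    for i in range(n):
        x = a + (i + 0.5) * dx  # midpoint rule
        log_sum += math.log(f(x)) * dx
    return math.exp(log_sum)

# f(x) = x on [1, 2]: exp of integral of ln x = exp(2 ln 2 - 1) = 4/e
approx = product_integral(lambda x: x, 1.0, 2.0)
exact = 4.0 / math.e
print(approx, exact)
```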

Comment author: halcyon 31 August 2017 12:47:10PM *  0 points [-]

Thanks, product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration will move into the power term.

Comment author: Thomas 31 August 2017 09:42:32AM *  0 points [-]

I think that was not his question. He didn't ask about the product integral of f(x), but about the "product integral of x".

EDIT: And that for "small x". At least that's how I understood his question.

Comment author: halcyon 31 August 2017 12:58:35PM 0 points [-]

No, he's right. I didn't think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x).dx.

Comment author: Thomas 29 August 2017 09:39:32PM 0 points [-]

I am afraid that multiplying even countably many small numbers yields 0, let alone the product of more than that, which is what your integration-analogous operation would be.

You can get a nonzero product if the sum of the differences between 1 and your factors converges. Then and only then. But if all the factors are smaller than, say, 0.9, you get 0.

Unless you can find some creative way around that. Might be possible, I don't know.

Comment author: halcyon 31 August 2017 01:08:41PM 0 points [-]

Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that makes no change to the product is 1 rather than 0, "infinitely small" factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral with the formula pi(1 + f(x).dx). If f(x) = 1, then we get e^sigma(1.dx) = e^constant = constant, right?
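This point (factors infinitesimally above 1) can be illustrated directly. A small Python sketch, with my own choice of f(x) = 1 on [0, 1]: the Type II product of (1 + f(x)·dx) converges to exp of the integral of f, here e, rather than collapsing to 0.

```python
import math

def type2_product_integral(f, a, b, n=100_000):
    """Type II product integral: the limit of prod (1 + f(x_i) * dx).
    Each factor is only infinitesimally greater than 1, so the product
    converges to exp(integral of f) instead of collapsing to 0."""
    dx = (b - a) / n
    prod = 1.0
    for i in range(n):
        x = a + (i + 0.5) * dx
        prod *= 1.0 + f(x) * dx
    return prod

result = type2_product_integral(lambda x: 1.0, 0.0, 1.0)
print(result, math.e)  # the finite product sits just below e
```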

Comment author: Thomas 31 August 2017 01:31:47PM 1 point [-]

Right. Around 1, you can often actually multiply an infinite number of factors and get some finite result.

Comment author: Thomas 28 August 2017 06:12:31AM 1 point [-]
Comment author: cousin_it 28 August 2017 11:01:23AM 2 points [-]

The best strategy is to always say "it's the first time". (Or, equivalently, always say "it's the second time", etc.)

Comment author: Thomas 28 August 2017 11:16:27AM 0 points [-]

No. If that damn dungeon master hadn't tossed that fair coin himself first, then the best strategy would be to say "It's my first time here" - and you'd be free.

But it may very well be that he tossed heads before and put you right back to sleep with induced amnesia. In that case, you never get out.

Comment author: cousin_it 28 August 2017 11:34:27AM *  0 points [-]

My strategy gives probability 1/2 of escaping. Can you show some strategy that gives higher probability? Doesn't have to be the best.

Comment author: Thomas 28 August 2017 11:47:00AM 0 points [-]

If you always say "It's my first time" you will be freed with probability 1/2, yes.

I'll give the best strategy I know before the end of this week. Now, it would be a spoiler.

Comment author: Oscar_Cunningham 28 August 2017 05:20:01PM *  4 points [-]

Let p_n be the probability that I say n. Then the probability that I escape on exactly the nth round is at most p_n/2, since the coin has to come up on the correct side and then I have to say n. In fact the probability is normally less than that, since there is a possibility that I have already escaped. So the probability that I escape is at most the sum over n of p_n/2. Since p_n is a probability distribution it sums to 1, so this is at most 1/2. I'll escape with probability strictly less than this if I have any two p_n nonzero. So the optimal strategies are precisely to always say the same number, and this can be any number.
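A Monte Carlo check of this argument (a sketch under the reading used in this thread: each round an independent fair coin must land right and your memoryless guess must name the current round; the cap on rounds and trial counts are my own):

```python
import random

random.seed(0)

def simulate_escape(guess, trials=20_000, max_rounds=200):
    """Estimate the escape chance of a memoryless strategy. Each round,
    a fair coin must land right AND guess() must equal the round number;
    otherwise the game continues (truncated at max_rounds)."""
    escapes = 0
    for _ in range(trials):
        for n in range(1, max_rounds + 1):
            if random.random() < 0.5 and guess() == n:
                escapes += 1
                break
    return escapes / trials

p_fixed = simulate_escape(lambda: 1)                      # always say "1"
p_mixed = simulate_escape(lambda: random.choice([1, 2]))  # two nonzero p_n
print(p_fixed, p_mixed)  # close to 0.5 and 7/16, as the proof predicts
```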

Comment author: Unnamed 31 August 2017 10:14:09PM 2 points [-]

I got the same answer, with essentially the same reasoning.

Assuming that each guess is a draw from the same probability distribution over positive integers, the expected number of correct guesses is 0.5 if I keep guessing forever (rather than leaving after 1 correct guess), regardless of what distribution I choose.

So the probability of getting at least one correct guess (which is the win condition) is capped at 0.5. And the only way to hit that maximum is by removing all the scenarios where I guess correctly more than once, so that all of the expected value comes from the scenarios where I guess correctly exactly once.

Comment author: Thomas 31 August 2017 12:53:38PM 0 points [-]

Define flip values as H=0 and T=1. You have to flip this fair coin twice per step. You set x = x + value(flip 1), y = y + value(flip 2), and z = z + 1. If x > y you stop flipping and declare: it's the z-th round of the game.

For example, after TH, x=1, y=0 and z=1. You stop tossing and declare the 1st round. If it is HH, you continue tossing twice again.

No matter how late in the game you are, you have a nonzero probability to win. Chebyshev (and Chernoff) can help you improve the x>y condition a bit; I don't know by how much yet. Nor do I have a proof that the probability of exiting is > 1/2, but it is at least that much. Some Monte-Carloing seems to agree.
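The procedure above can be Monte-Carloed directly. A hedged sketch under the same independent-rounds reading used elsewhere in the thread (escape requires the coin landing right and the guess matching the round); the truncation limits are my own choices and bias the estimate slightly downward:

```python
import random

def walk_guess(limit):
    """The double-flip walk: track d = x - y (H=0, T=1 per flip),
    stop when d > 0 and guess the step count z. Once z exceeds `limit`
    the guess can no longer match any remaining round, so give up."""
    d = 0
    z = 0
    while z < limit:
        d += (random.random() < 0.5) - (random.random() < 0.5)
        z += 1
        if d > 0:
            return z
    return None

def thomas_escape_probability(trials=2_000, max_rounds=100):
    random.seed(1)
    escapes = 0
    for _ in range(trials):
        for n in range(1, max_rounds + 1):
            if random.random() < 0.5 and walk_guess(max_rounds) == n:
                escapes += 1
                break
    return escapes / trials

p = thomas_escape_probability()
print(p)  # stays below 1/2 in these runs
```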

Comment author: Dagon 31 August 2017 11:57:02PM 0 points [-]

Would you mind showing your work on the Monte Carlo for this? If you've tried more than a few runs and they all actually terminated, you have a bug.

You're describing a random walk that moves left 25% of the time, moves right 25% of the time, and stays put 50% of the time, counting steps until it reaches 1. There is no way this does better than a 50% chance to exit, since the chance of matching the wake-up round falls off like 0.50^n.

Comment author: Thomas 01 September 2017 07:57:32AM *  0 points [-]

round 1:          1 /           8 = 0.125
round 2:         15 /          64 = 0.234
round 3:        164 /         512 = 0.32
round 4:      1,585 /       4,096 = 0.387
round 5:     14,392 /      32,768 = 0.439
round 6:    126,070 /     262,144 = 0.481
round 7:  1,079,808 /   2,097,152 = 0.515
round 8:  9,111,813 /  16,777,216 = 0.543
round 9: 76,095,176 / 134,217,728 = 0.567

Comment author: Oscar_Cunningham 01 September 2017 09:57:53AM *  0 points [-]

I think you must just have an error in your code somewhere. Consider going round 3. Let the probability you say "3" be p_3. Then according to your numbers

164/512 = 15/64 + (1 - 15/64)*(1/2)*p_3

Since the probability of escaping by round 3 is the probability of escape by round 2, plus the probability you don't escape by round 2, multiplied by the probability the coin lands tails, multiplied by the probability you say "3".

But then p_3 = 11/49, and 49 is not a power of two!
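This consistency check can be reproduced with exact rational arithmetic (a small sketch of the same calculation, not code from the thread):

```python
from fractions import Fraction

# Solve 164/512 = 15/64 + (1 - 15/64) * (1/2) * p_3 for p_3.
escaped_by_2 = Fraction(15, 64)
escaped_by_3 = Fraction(164, 512)
p_3 = (escaped_by_3 - escaped_by_2) / ((1 - escaped_by_2) * Fraction(1, 2))
print(p_3)  # 11/49 -- the denominator is not a power of two
```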

Comment author: Oscar_Cunningham 31 August 2017 06:00:01PM 0 points [-]

Based on some heuristic calculations I did, it seems that the probability of escape with this plan is exactly 4/10.

Comment author: Thomas 31 August 2017 07:02:05PM *  0 points [-]

Interesting. Do you agree that every number is reached by the z function defined above an infinite number of times?

And yet, every single time, z != sleeping_round? In the 60 percent of these Sleeping Beauty imprisonments?

Even if the condition x>y is replaced by something like x>y+sqrt(y), or whatever formula, you can't go above 50%?

Extraordinary. Might be possible, though.

You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n)!=n for all n.

That would be easier if f(n)>>n almost always. But sometimes it is bigger, sometimes smaller.

Comment author: Oscar_Cunningham 31 August 2017 08:08:18PM 0 points [-]

Do you agree that every number is reached by the z function defined above an infinite number of times?

Yes, definitely.

Even if the condition x>y is replaced by something like x>y+sqrt(y) or whatever formula, you can't go above 50%?

Yes. I proved it.

You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n)!=n for all n.

Well, on average we have f(n)=n for one n, but there's a 50% chance the guy won't ask us on that round.

Comment author: Dagon 31 August 2017 04:12:44PM 0 points [-]

There are two pretty strong sketches above showing that this approaches 1/2 as you get closer to any static answer, but cannot beat 1/2.

The best answer is "ignore the coin, declare first". There is no better chance of escape (though there are many ties at 1/2), and this minimizes your time in purgatory in the case that you do escape.

Comment author: Thomas 31 August 2017 04:47:30PM 0 points [-]

So, you say, Sleeping Beauty is there forever with probability at least 1/2.

Then she has all the time in the world to exercise this function which outputs z. Do you agree that every natural number will eventually be reached by this algorithm, counting the double tossings, adding 0 or 1 to x and 0 or 1 to y and increasing z until x > y?

Agree?

Comment author: Dagon 31 August 2017 06:43:56PM 0 points [-]

She has all the time in the world, but only as much probability as she gave up by not saying "first".

Every natural number is reachable by your algorithm, but the probability that it's reached ON THE SAME ITERATION as the wake-up schedule converges to zero pretty quickly. Both iterations and her responses approach infinity, and the product of the probabilities approaches zero way faster than the probabilities themselves.

Really. Go to Wolfram and calculate "sum to infinity 0.5^n * 0.5^n". The chance that the current wake-up is n is clearly 0.5^n - 50% chance of T, 25% of HT, 12.5% of HHT, etc. If your distribution is different, replace the second 0.5^n with any formula such that "sum to infinity YOUR_FORMULA" is 1. It's 0.3333 that she'll EVER escape if she randomizes across infinite possibilities with the same distribution, and it gets closer to (but doesn't reach) 50% if she front-weights the distribution.

Comment author: Dagon 28 August 2017 06:47:18PM 0 points [-]

Simplest is always "this is wakening #1". 50% chance of escape, and soonest possible if it happens. Has the psychological disadvantage that if the first coin is tails, you're stuck forever with no future chance of escape. You have no memory of any of them, so that is irrelevant - all that matters is the probability of escape - but it feels bad to us as outside observers.

You can stretch it out by randomizing your answer with a declining chance of higher numbers. Say, flip YOUR coin until you get tails, then guess the number of flips you've made. HHHT you guess 4, for example. This gives you 25% to be released on day 1 (50% the DM's coin is tails times 50% your first flip is). And 6.25% to be released the second day (25% of HT on his coin and yours). Unfortunately, sum (i: 1->infinity) 0.25^i = 0.33333, so your overall chance of escaping is reduced. But you do always have some (very small) hope, unlike the simple answer.
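The infinite sum quoted here is straightforward to check (a one-liner; the truncation at 200 terms is mine, and the tail beyond it is negligible):

```python
# sum over i >= 1 of 0.25**i: a geometric series, 0.25 / (1 - 0.25) = 1/3
s = sum(0.25 ** i for i in range(1, 200))
print(s)  # approaches 0.33333...
```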

Randomization with weighting toward earlier numbers improves your early chance, but reduces your later chances, and it seems (from sampling and thinking, not proven) that it can approach 0.5 but not exceed it.

I think the best you can do is 50% unless you have some information when you wake up about how long this has been going on.

Comment author: WalterL 28 August 2017 01:32:45PM 0 points [-]

Is it possible to pass information between awakenings? Use coin to scratch floor or something?

Comment author: Thomas 28 August 2017 01:35:36PM 0 points [-]

No, that is not possible.

Comment author: WalterL 28 August 2017 01:43:07PM 0 points [-]

So you only get one choice, since you will make the same one every time. I guess for simplicity choose 'first', but any number has same chance.

Comment author: Thomas 28 August 2017 02:02:59PM 0 points [-]

Can you do worse than that?

Comment author: WalterL 28 August 2017 02:21:25PM 0 points [-]

Sure, you can guess zero or negative numbers or whatever.

Comment author: Thomas 28 August 2017 02:24:48PM 0 points [-]

Say, you must always give a positive number. Can you do worse than 1/2 then?

Comment author: WalterL 28 August 2017 03:30:42PM 0 points [-]

No. You will always say the same number each time, since you are identical each time.

As long as it isn't that number, you are going another round. Eventually it gets to that number, whereupon you go free if you get the luck of the coin, or go back under if you miss it.

Comment author: Thomas 28 August 2017 03:33:50PM *  0 points [-]

You will always say the same number each time, since you are identical each time.

That's why you get a fair coin. Like a program which gets the seed for its random number generator from the clock.

Comment author: WalterL 28 August 2017 08:31:09PM 0 points [-]

Coin doesn't help. Say I decide to pick 2 if it is heads, 1 if it is tails.

I've lowered my odds of escaping on try 1 to 1/4, which initially looks good, but the overall chance stays the same, since I get another 1/4 on the second round. If I do 2 flips and use the 4 outcomes there to get 1, 2, 3, or 4, then I have an eighth of a chance on each of rounds 1-4.

Similarly, if I raise the number of outcomes that point to one number, that round's chance goes up, but the others decline, so my overall chance stays pegged to 1/2. (I.e., if HH, HT, TH all make me say 1, then I have a 3/8 chance that round, but only a 1/8 chance of being awake on round 2 and getting TT.)

Comment author: cousin_it 28 August 2017 02:45:50PM 0 points [-]

If you say either 1 or 2 with probability 1/2 each, the probability of escaping is 7/16.
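This 7/16 can be reproduced with a small helper. It assumes the reading where each round independently requires the coin to land right and the guess to equal the round number (which is consistent with the 1/2 and 7/16 figures in this thread); the function name is my own:

```python
def escape_prob(p):
    """p[n-1] = probability of guessing round n on any awakening.
    Escaping on round n needs the fair coin (1/2) plus the right guess,
    so the chance of never escaping is the product of (1 - p_n/2)."""
    never = 1.0
    for p_n in p:
        never *= 1.0 - p_n / 2.0
    return 1.0 - never

print(escape_prob([1.0]))       # always say "1": 0.5
print(escape_prob([0.5, 0.5]))  # say 1 or 2 at random: 0.4375 = 7/16
```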

Comment author: Thomas 28 August 2017 02:57:36PM 0 points [-]

True. You can do worse than 1/2. Just toss a coin and if it lands heads up choose 1, otherwise choose 2.

You can link more numbers this way and it can get even worse.

Comment author: morganism 31 August 2017 08:24:45AM 0 points [-]

The Accidental Elitist- (academic jargon)

https://thebaffler.com/latest/accidental-elitism-alvarez

" there’s a huge difference between jargon as a necessarily difficult tool required for the academic work of tackling difficult concepts, and jargon as something used by tools simply to prove they’re academics."

"confirm your choice to be a so-called academic, to assume it not only as a profession, but an identity, and to wear on yourself the trappings that come with that identity without stopping to wonder how necessary they really are and whether they are actually killing your ability to be and do something better. "

Comment author: Bound_up 30 August 2017 03:31:06PM 0 points [-]

I'm trying to find Alicorn's post, or anywhere else, where it is mentioned that she "hacked herself bisexual."

Comment author: Strangeattractor 30 August 2017 07:47:41PM 1 point [-]

Do you mean where she hacked herself to become polyamorous? If so, you may be looking for this post http://lesswrong.com/lw/79x/polyhacking/

Comment author: jam_brand 31 August 2017 07:26:16AM 0 points [-]

Here's a post, though not from Alicorn, that has some info that may be of interest: http://lesswrong.com/lw/453/procedural_knowledge_gaps/3i49

Comment author: torekp 28 August 2017 10:06:17PM *  0 points [-]

Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we mean past macroscopic state), but can someone sketch the argument for me? Or give references?

For comparison, there are clear explanations available for why memory involves increasing entropy. I don't need anything that formal, but just an informal explanation of why different choices don't reliably correlate to different macroscopic events at lower-entropy (past) times.

Comment author: cousin_it 29 August 2017 11:15:57AM *  1 point [-]

It doesn't seem to be universally true. For example, a thermostat's action is correlated with past temperature. People are similar to thermostats in some ways, for example upon touching a hot stove you'll quickly withdraw your hand. But we also differ from thermostats in other ways, because small amounts of noise in the brain (or complicated sensitive computations) can lead to large differences in actions. Maybe Carroll is talking about that?

Comment author: torekp 29 August 2017 11:16:04PM *  1 point [-]

Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors.

Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?

Comment author: whpearson 28 August 2017 01:11:03PM 0 points [-]

A short story - titled "The end of meaning"

It is propaganda for my improving autonomy work. Not sure it is actually useful in that regard. But it was fun to write and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact she could blink, had eyes and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the starting of a new age for mankind, one not ruled by a cruel nature but by a benevolent AI.

Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, see supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce the suffering of anyone or increase their happiness, because by definition the AI would be maximising those anyway with its superintelligence and human-aligned utility maximisation. She must look inside herself for which actions to take.

She had long been a believer in self-perfection and self-improvement. There were many different ways she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of understanding her brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't make a decision between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.

blip

Tamara struggled awake. That was some nightmare she had had about the singularity. Luckily it hadn't occurred yet; she could still fix it and make the most meaningful contribution to the human race's history by stopping death, suffering and pain.

As she went about her day's business solving decision theory problems she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be to solve the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in a situation where she could be the most agenty and useful. Which would be just before the singularity. There would have to be enough pain and suffering within the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.

She should probably continue to try and save humanity, because of indexical uncertainty.

Although if she had this thought her life would be plagued by doubts about whether her life is meaningful or not, so she is probably not in a simulation as her utility is not being maximised. Probably...

Another thought gripped her, what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.

blip

A nightmare within a nightmare; that was the first time this had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem a long time ago, or else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI, requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals, and also help other people; they would still be agents of their own destiny.

Comment author: RowanE 28 August 2017 10:00:20PM 1 point [-]

Serves her right for making self-improvement a foremost terminal value even when she knows that's going to be rendered irrelevant, meanwhile the loop I'm stuck in is of the first six hours spent in my catgirl volcano lair.

Comment author: whpearson 29 August 2017 05:48:34PM 0 points [-]

Self-improvement wasn't her terminal value; that was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.

I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps presented as dreams were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).

Comment author: RowanE 02 September 2017 11:13:21PM 0 points [-]

That's the reason she liked those things in the past, but "achieving her goals" is redundant; she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons?

Hedonism isn't bad, orgasmium is bad because it reduces the complexity of fun to maximising a single number.

I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came, I'd settle for a one-room apartment with food and internet before that, which as a NEET I can tell you is a long way down from Reedspacer's Lower Bound.

Comment author: MattG2 29 August 2017 12:23:06AM 0 points [-]

Is it possible to make something a terminal value? If so, how?

Comment author: RowanE 29 August 2017 11:18:35AM 0 points [-]

By believing it's important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but "She had long been a believer in self-perfection and self-improvement" sounds like something one decides to care about.