If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Elo120

Hamming question: if your life were a movie and you were watching your life on screen, what would you be yelling at the main character? (example: don't go in the woods alone! Hurry up and see the quest guy! Just drop the sunk costs and do X) (optional - answer in public or private)

1Screwtape
https://xkcd.com/1768/

I'm looking for an anecdote about sunk costs. Two executives are discussing some bad business situation, and one of them asks: "Look, suppose the board were to fire us and bring new execs in. What would those guys do?" "Get us out of the X business." "Then what's to stop us from leaving the room, coming back in, and doing exactly that?"

...but all my google-fu can't turn up the original source. Does it sound familiar to anyone here?

Intel, 1985.

Grove says he and Moore were in his cubicle, "sitting around ... looking out the window, very sad." Then Grove asked Moore a question.

"What would happen if somebody took us over, got rid of us — what would the new guy do?" he said.

"Get out of the memory business," Moore answered.

Grove agreed. And he suggested that they be the ones to get Intel out of the memory business.

2Error
Thanks, that's the one.

It seems to me that there's no difference in kind between moral intuitions and religious beliefs, except that the former are more deeply held. (I guess that makes me a kind of error theorist.)

If that's true, that means FAI designers shouldn't work on approaches like "extrapolation" that can convert a religious person to an atheist, because the same procedure might convert you into a moral nihilist. The task of FAI designers is more subtle: devise an algorithm that, when applied to religious belief, would encode it "faithfully" as a util... (read more)

0Oscar_Cunningham
That just doesn't seem true to me. I agree that there's often a difference between religious beliefs and ordinary factual beliefs, but I don't think that religious beliefs are the same sort of thing as moral intuitions. They just feel different to me. For one thing, religious beliefs are often a "belief in belief", whereas I don't think moral beliefs are like that. Also, moral beliefs seem more instinctual, whereas religious beliefs are taught.
2entirelyuseless
I think moral beliefs are very often like that, at least for some people. See the comment here and JM's response. Stephen Diamond makes a related argument, namely that people will not give up moral beliefs because it is obviously wicked to do so, according to those very same moral beliefs, in the same way that a religious person will not give up their religious beliefs because those beliefs say it would be wicked to do so.
1cousin_it
Every emotion connected with moral intuitions, e.g. recoiling from a bad act, can also happen due to religious beliefs.

Low-fat diet could kill you, major study shows (Lancet Canadian study of 135,000 adults)

http://www.telegraph.co.uk/news/2017/08/29/low-fat-diet-linked-higher-death-rates-major-lancet-study-finds/amp/

"those with low intake of saturated fat raised chances of early death by 13 per cent compared to those eating plenty.

And consuming high levels of all fats cut mortality by up to 23 per cent."

“Higher intake of fats, including saturated fats, are associated with lower risk of mortality.”

“Our data suggests that low fat diets put populations at increas... (read more)

Integrals sum infinitely many infinitely small values. Is it possible to multiply infinitely small factors instead? For example, the integral of dx over some interval is a constant, since infinitely many infinitely small values can sum up to any constant. But can you do something along the lines of taking an infinitely large root of a constant, and get an infinitesimal differential that way? Multiplying those differentials would yield some constant again.

My off-the-cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move t... (read more)

5Manfred
What is the analogy of sum that you're thinking about? Ignoring how the little pieces are defined, what would be a cool way to combine them? For example, you can take the product of a series of numbers to get any number, that's pretty cool. And then you can convert a series to a continuous function by taking a limit, just like an integral, except rather than the limit going to really small pieces, the limit goes to pieces really close to 1. You could also raise a base to a series of powers to get any number, then take that to a continuous limit to get an integral-analogue. Or do other operations in series, but I can't think of any really motivating ones right now. Can you invert these to get derivative-analogues (wiki page)? For the product integral, the value of the corresponding derivative turns out to be the limit of more and more extreme inverse roots, as you bring the ratio of two points close to 1. Are there any other interesting derivative-analogues? What if you took the inverse of the difference between points, but then took a larger and larger root? Hmm... You'd get something that was 1 almost everywhere for nice functions, except where the function's slope got super-polynomially flat or super-polynomially steep.
0halcyon
Someone has probably thought of this already, but if we defined an integration analogue where larger and larger logarithmic sums cause their exponentiated, etc. value to approach 1 rather than infinity, then we could use it to define a really cool account of logical metaphysics: Each possible state of affairs has an infinitesimal probability, there are infinitely many of them, and their probabilities sum to 1. This probably won't be exhaustive in some absolute sense, since no formal system is both consistent and complete, but if we define states of affairs as formulas in some consistent language, then why not? We can then assign various differential formulas to different classes of states of affairs. (That is the context in which this came up. The specific situation is more technically convoluted.)
3Oscar_Cunningham
Good question! The answer is called a Product integral. You basically just take logarithms, using the property that the log of a product is the sum of the logs, to turn your product integral into a normal integral.
0halcyon
Thanks, product integral is what I was talking about. The exponentiated integral is what I meant when I said the integration will move into the power term.
0Thomas
I think that was not his question. He didn't ask about the product integral of f(x), but about the "product integral of x". EDIT: And that for "small x". At least, that's how I understood his question.
0halcyon
No, he's right. I didn't think to clarify that my infinitely small factors are infinitesimally larger than 1, not 0. See the Type II product integral formula on Wikipedia that uses 1 + f(x) dx.
0[anonymous]
sum : integral of f(x) :: product : exp(integral of log(f(x)))
0Thomas
I am afraid that multiplying even countably many small numbers yields 0, let alone a product of more factors than that, which is what your integration-analogous operation would be. You can get a nonzero product if and only if the sum of the differences between 1 and your factors converges. But if all the factors are smaller than, say, 0.9, you get 0. Unless you can find some creative way around that anyway; might be possible, I don't know.
0halcyon
Yeah, it might have helped to clarify that the infinitesimal factors I had in mind are not infinitely small as numbers from the standpoint of addition. Since the factor that makes no change to the product is 1 rather than 0, "infinitely small" factors must be infinitesimally greater than 1, not 0. In particular, I was talking about a Type II product integral with the formula ∏(1 + f(x) dx). If f(x) = 1, then we get e^(∫1 dx) = e^constant = constant, right?
1Thomas
Right. There around 1, you often can actually multiply an infinite number of factors and get some finite result.
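For the curious, here is a minimal numeric sketch (my own illustration in Python, not from the thread) of the Type II product integral discussed above: as the partition is refined, the finite product ∏(1 + f(x_k) Δx) approaches exp(∫ f(x) dx).

```python
import math

def product_integral(f, a, b, n):
    """Approximate the Type II product integral of f over [a, b]
    as the finite product of (1 + f(x_k) * dx) over n subintervals."""
    dx = (b - a) / n
    result = 1.0
    for k in range(n):
        x = a + (k + 0.5) * dx  # midpoint of the k-th subinterval
        result *= 1.0 + f(x) * dx
    return result

# Refining the partition converges to exp(integral of f).
f = lambda x: x  # exact answer over [0, 1] is exp(1/2) ~ 1.6487
for n in (10, 100, 10_000):
    print(n, product_integral(f, 0.0, 1.0, n))
print(math.exp(0.5))

# With f(x) = 1 on [0, 1], the product integral is e^(integral of dx) = e.
print(product_integral(lambda x: 1.0, 0.0, 1.0, 10_000))
```

The f(x) = 1 case matches the e^(∫1 dx) = e^constant observation above.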
3cousin_it
The best strategy is to always say "it's the first time". (Or, equivalently, always say "it's the second time", etc.)
0Thomas
No. If that damn dungeon master hadn't tossed that fair coin himself first, then the best strategy would be to say "It's my first time here" - and you'd be free. But it may very well be that he tossed heads before and put you right back to sleep with amnesia induced. In that case, you are never out.
0cousin_it
My strategy gives probability 1/2 of escaping. Can you show some strategy that gives higher probability? Doesn't have to be the best.
0Thomas
If you always say "It's my first time", you will be freed with probability 1/2, yes. I'll give the best strategy I know before the end of this week. Right now, it would be a spoiler.
8Oscar_Cunningham
Let p_n be the probability that I say n. Then the probability I escape on exactly the nth round is at most p_n/2, since the coin has to come up on the correct side, and then I have to say n. In fact the probability is normally less than that, since there is a possibility that I have already escaped. So the probability I escape is at most the sum over n of p_n/2. Since p_n is a probability distribution it sums to 1, so this is at most 1/2. I'll escape with probability strictly less than this if I have any two p_n nonzero. So the optimal strategies are precisely to always say the same number, and this can be any number.
4Unnamed
I got the same answer, with essentially the same reasoning. Assuming that each guess is a draw from the same probability distribution over positive integers, the expected number of correct guesses is 0.5 if I keep guessing forever (rather than leaving after 1 correct guess), regardless of what distribution I choose. So the probability of getting at least one correct guess (which is the win condition) is capped at 0.5. And the only way to hit that maximum is by removing all the scenarios where I guess correctly more than once, so that all of the expected value comes from the scenarios where I guess correctly exactly once.
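A small Monte Carlo sketch (my own, in Python) of the bound argued in the two comments above, under the rules as I read them from this thread: each round the DM flips a fair coin and the Beauty names a round number; she goes free iff the coin lands the right way and her guess equals the current round number. Amnesia makes her guesses i.i.d. draws.

```python
import random

def escape_probability(strategy, trials=100_000, max_rounds=50):
    """Estimate the Beauty's escape probability by simulation.

    Assumed rules (my reading of the thread): each round the DM flips
    a fair coin and the Beauty guesses the round number; she escapes
    iff the coin lands the right way AND the guess is correct.
    Rounds beyond max_rounds count as never escaping; the truncation
    error is negligible for these strategies.
    """
    escapes = 0
    for _ in range(trials):
        for n in range(1, max_rounds + 1):
            if random.random() < 0.5 and strategy() == n:
                escapes += 1
                break
    return escapes / trials

def geometric_guess():
    """Guess n with probability 0.5^n: flip until tails, count flips."""
    n = 1
    while random.random() < 0.5:
        n += 1
    return n

print(escape_probability(lambda: 1))                      # ~0.50, the optimum
print(escape_probability(lambda: random.choice((1, 2))))  # ~0.44 (= 7/16)
print(escape_probability(geometric_guess))                # ~0.42, still < 1/2
```

The 1-or-2 mixture reproduces the 7/16 figure that comes up further down the thread, and nothing I tried beats the constant guesser's 1/2.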
0Thomas
Define flip values as H=0 and T=1. You have to flip this fair coin twice per round. You increase x by the value of the first flip, y by the value of the second flip, and z by 1. If x>y you stop flipping and declare: it's the z-th round of the game. For example, after TH, x=1, y=0 and z=1. You stop tossing and declare the 1st round. If it is HH, you continue tossing twice again. No matter how late in the game you are, you have a nonzero probability to win. Chebyshev (and Chernoff) can help you improve the x>y condition a bit; I don't know how much yet. Nor do I have a proof that the probability of exiting is > 1/2. But it is at least that much. Some Monte-Carloing seems to agree.
0Dagon
Would you mind showing your work on the Monte Carlo for this? If you've tried more than a few runs and they all actually terminated, you have a bug. You're describing a random walk that moves left 25% of the time, moves right 25% of the time, and does not move 50% of the time, counting steps until the walk first reaches 1. There is no possibility that this ends up better than 50%: the chance to exit after n steps still falls off like 0.50^n.
0Thomas
Round: exits / total outcomes = cumulative exit ratio

Round 1: 1 / 8 = 0.125
Round 2: 15 / 64 = 0.234
Round 3: 164 / 512 = 0.320
Round 4: 1,585 / 4,096 = 0.387
Round 5: 14,392 / 32,768 = 0.439
Round 6: 126,070 / 262,144 = 0.481
Round 7: 1,079,808 / 2,097,152 = 0.515
Round 8: 9,111,813 / 16,777,216 = 0.543
Round 9: 76,095,176 / 134,217,728 = 0.567
0Oscar_Cunningham
I think you must just have an error in your code somewhere. Consider going round 3. Let the probability you say "3" be p_3. Then according to your numbers, 164/512 = 15/64 + (1 - 15/64) * (1/2) * p_3, since the probability of escaping by round 3 is the probability of escape by round 2, plus the probability you don't escape by round 2, multiplied by the probability the coin lands tails, multiplied by the probability you say "3". But then p_3 = 11/49, and 49 is not a power of two!
0Thomas
Say that SB has only 10 tries to escape. The DM (Dungeon Master) tosses his 10 coins and SB tosses her 20 coins, even before the game begins. There are 2^30 possible outputs, which is about a billion. More than half of them grant her freedom. We compute her exit as follows: at the earliest x>y condition in each output bit string, check whether the DM also has the freeing coin toss there.
0Oscar_Cunningham
Based on some heuristic calculations I did, it seems that the probability of escape with this plan is exactly 4/10.
0Thomas
Interesting. Do you agree that every number is reached by the z function defined above, an infinite number of times? And yet, every single time, z != sleeping_round? In the 60 percent of these Sleeping Beauty imprisonments? Even if the condition x>y is replaced by something like x>y+sqrt(y) or whatever formula, you can't go above 50%? Extraordinary. Might be possible, though. You clearly have a function N->N where eventually every natural number is a value of this function f, but f(n) != n for all n. That would be easier if f(n) >> n almost always. But sometimes it's bigger, sometimes smaller.
0Oscar_Cunningham
Yes, definitely. Yes. I proved it. Well, on average we have f(n)=n for one n, but there's a 50% chance the guy won't ask us on that round.
0Dagon
There are two pretty strong sketches above showing that this approaches 1/2 as you get closer to a static answer, but cannot beat 1/2. The best answer is "ignore the coin, declare 'first'". There is no better chance of escape (though there are many ties at 1/2), and this minimizes your time in purgatory in the case that you do escape.
0Thomas
So, you say the Sleeping Beauty is there forever with probability at least 1/2. Then she has all the time in the world to exercise this function which outputs z. Do you agree that every natural number will eventually be reached by this algorithm (counting the double tossings, adding 0 or 1 to x and 0 or 1 to y, and increasing z until x>y)? Agree?
0Dagon
She has all the time in the world, but only as much probability as she gave up by not saying "first". Every natural number is reachable by your algorithm, but the probability that it's reached ON THE SAME ITERATION as the wake-up schedule converges to zero pretty quickly. As both the iterations and her responses go to infinity, the product of the probabilities approaches zero way faster than the probabilities themselves. Really. Go to Wolfram and calculate "sum to infinity 0.5^n * 0.5^n". The chance that the current wake-up is N is clearly 0.5^N: a 50% chance of T, 25% of HT, 12.5% of HHT, etc. If your distribution is different, replace the second 0.5^n with any formula such that "sum to infinity YOUR_FORMULA" is 1. It's 0.3333 that she'll EVER escape if she randomizes across infinite possibilities with that same distribution, and it gets closer to (but doesn't reach) 50% if she front-weights the distribution.
0Thomas
Your basic idea is right here. But this product isn't that straightforward. Say it's the 100th session. There are a lot of ways for x to become greater than y exactly this time, especially if the formula is x>y+sqrt(y) or something similar from Chebyshev's arsenal. If the session is then the 101st, this new small probability isn't much smaller than it was in the 100th session. Still, you may be right that the product (1-p_n)*(1-p_(n+1))*... converges to 1/2 at most. Well, I doubt it.
0Dagon
Don't doubt it, do the math (http://www.wolframalpha.com/ helps a LOT with this). Provide any formula for the probability of guessing "Nth wakeup" such that it sums to 1 (or less) from 1 to infinity. Calculate the sum from 1 to infinity of the product of this and the 0.5^N chance that you're currently on the Nth wakeup. You will never find one that sums to better than 0.5. Your weirdness using X and Y is not helping: any algorithm you can state eventually comes out to "some probability for each N of guessing N". And when you view it that way, you'll see that the sum has to be less than 50%.
0Dagon
Simplest is always "this is wakening #1". 50% chance of escape, and soonest possible if it happens. It has the psychological disadvantage that if the first coin goes against you, you're stuck forever with no future chance of escape. You have no memory of any of the wakenings, so that is irrelevant (all that matters is the probability of escape), but it feels bad to us as outside observers. You can stretch it out by randomizing your answer with a declining chance of higher numbers. Say, flip YOUR coin until you get tails, then guess the number of flips you've made: after HHHT you guess 4, for example. This gives you 25% to be released on day 1 (50% the DM's coin is tails X 50% your first flip is). And 6.25% to be released the second day (25% of HT on his coin and 25% on yours). Unfortunately, sum (i: 1->infinity) 0.25^i = 0.33333, so your overall chance of escaping is reduced. But you do always have some (very small) hope, unlike with the simple answer. Randomization with weighting toward earlier numbers improves your early chance but reduces your later chances, and it seems (from sampling and thinking, not proven) that it can approach 0.5 but not exceed it. I think the best you can do is 50%, unless you have some information when you wake up about how long this has been going on.
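A quick symbolic check of these sums (my own sketch using sympy, following the reading above where the DM's coin fixes a single opportunity day N with probability 0.5^N and the Beauty escapes only if her guess that day equals N):

```python
from sympy import Rational, Sum, oo, symbols, simplify

n = symbols("n", integer=True, positive=True)
half = Rational(1, 2)

# Geometric guessing, P(guess n) = 0.5^n: escape chance is 1/3.
print(Sum(half**n * half**n, (n, 1, oo)).doit())  # 1/3

# Front-weighting: P(guess 1) = q, remaining mass spread geometrically.
q = symbols("q", positive=True)
rest = Sum(half**n * (1 - q) * half**(n - 1), (n, 2, oo)).doit()
escape = simplify(half * q + rest)
print(escape)                            # q/3 + 1/6
print(escape.subs(q, Rational(9, 10)))   # 7/15, still below 1/2
```

As q goes to 1, the escape probability q/3 + 1/6 climbs toward 1/2 but never exceeds it, which is exactly the behavior described above.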
0WalterL
Is it possible to pass information between awakenings? Use coin to scratch floor or something?
0Thomas
No, that is not possible.
0WalterL
So you only get one choice, since you will make the same one every time. I guess for simplicity choose "first", but any number has the same chance.
0Thomas
Can you do worse than that?
0WalterL
Sure, you can guess zero or negative numbers or whatever.
0Thomas
Say, you must always give a positive number. Can you do worse than 1/2 then?
0WalterL
No. You will always say the same number each time, since you are identical each time. As long as it isn't that number, you are going another round. Eventually it gets to that number, whereupon you go free if you get the luck of the coin, or go back under if you miss it.
0Thomas
That's why you get a fair coin: like a program which gets the seed for its random number generator from the clock.
0WalterL
Coin doesn't help. Say I decide to pick 2 if it is heads, 1 if it is tails. I've lowered my odds of escaping on try 1 to 1/4, which initially looks good, but the overall chance stays the same, since I get another 1/4 on the second round. If I do 2 flips, and use the 4-way spread there to get 1, 2, 3, or 4, then I have an eighth of a chance on each of rounds 1-4. Similarly, if I raise the number of outcomes that point to one number, that round's chance goes up, but the others decline, so my overall chance stays pegged to 1/2. (I.e., if HH, HT, TH all make me say 1, then I have a 3/8 chance that round, but only a 1/8 chance of being awake on round 2 and getting TT.)
0Thomas
The coin can at least lower your chances. Say that you will say 3 if it is heads and 4 if it is tails. You can win at round 3 with probability 1/4 and you can win at round 4 with probability 1/4. Is that right?
0WalterL
Oh, yeah, I see what you are saying. Having two 1/4 chances is, what, 7/16 of escape, so the coin does make it worse.
0Thomas
Sure. But not only to 7/16; to an infinite number of other values, too. You just have to play with it longer. The question now is: can the coin make it better, too? If not, why can it only make it worse?
0Gurkenglas
If you say two numbers with nonzero probability, you can improve your chances by shifting all the probability mass to one of them.
0cousin_it
If you say either 1 or 2 with probability 1/2 each, the probability of escaping is 7/16.
0Thomas
True. You can do worse than 1/2. Just toss a coin, and if it lands heads up choose 1; otherwise choose 2. You can link more numbers this way and it can be even worse.

The Accidental Elitist- (academic jargon)

https://thebaffler.com/latest/accidental-elitism-alvarez

" there’s a huge difference between jargon as a necessarily difficult tool required for the academic work of tackling difficult concepts, and jargon as something used by tools simply to prove they’re academics."

"confirm your choice to be a so-called academic, to assume it not only as a profession, but an identity, and to wear on yourself the trappings that come with that identity without stopping to wonder how necessary they really are and whether they are actually killing your ability to be and do something better. "

I'm trying to find Alicorn's post, or anywhere else, where it is mentioned that she "hacked herself bisexual."

2Strangeattractor
Do you mean where she hacked herself to become polyamorous? If so, you may be looking for this post http://lesswrong.com/lw/79x/polyhacking/
1jam_brand
Here's a post, though not from Alicorn, that has some info that may be of interest: http://lesswrong.com/lw/453/procedural_knowledge_gaps/3i49
Sean Carroll writes in The Big Picture, p. 380:

The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.

I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we ... (read more)

2cousin_it
It doesn't seem to be universally true. For example, a thermostat's action is correlated with past temperature. People are similar to thermostats in some ways, for example upon touching a hot stove you'll quickly withdraw your hand. But we also differ from thermostats in other ways, because small amounts of noise in the brain (or complicated sensitive computations) can lead to large differences in actions. Maybe Carroll is talking about that?
1torekp
Good point. But consider the nearest scenarios in which I don't withdraw my hand. Maybe I've made a high-stakes bet that I can stand the pain for a certain period. The brain differences between that me, and the actual me, are pretty subtle from a macroscopic perspective, and they don't change the hot stove, nor any other obvious macroscopic past fact. (Of course by CPT-symmetry they've got to change a whole slew of past microscopic facts, but never mind.) The bet could be written or oral, and against various bettors. Let's take a Pearl-style perspective on it. Given DO:Keep.hand.there, and keeping other present macroscopic facts fixed, what varies in the macroscopic past?

A short story - titled "The end of meaning"

It's propaganda for my improving-autonomy work. Not sure it is actually useful in that regard, but it was fun to write, and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact that she could blink, had eyes, and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one not ruled by a cruel nature... (read more)

2RowanE
Serves her right for making self-improvement a foremost terminal value even when she knows it's going to be rendered irrelevant. Meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.
1MattG2
Is it possible to make something a terminal value? If so, how?
0RowanE
By believing it's important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but "She had long been a believer in self-perfection and self-improvement" sounds like something one decides to care about.
0whpearson
Self-improvement wasn't her terminal value; that was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals. I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps, as dreams, were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).
0RowanE
That's the reason she liked those things in the past, but "achieving her goals" is redundant; she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons? Hedonism isn't bad; orgasmium is bad because it reduces the complexity of fun to maximising a single number. I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came. I'd settle for a one-room apartment with food and internet before that, which as a NEET I can tell you is a long way down from Reedspacer's Lower Bound.