Hamming question: if your life were a movie and you were watching it on screen, what would you be yelling at the main character? (Examples: "Don't go in the woods alone!" "Hurry up and see the quest guy!" "Just drop the sunk costs and do X!") (Optional - answer in public or private.)
I'm looking for an anecdote about sunk costs. Two executives were discussing some bad business situation, and one of them asked: "Look, suppose the board were to fire us and bring new execs in. What would those guys do?" "Get us out of the X business." "Then what's to stop us from leaving the room, coming back in, and doing exactly that?"
...but all my google-fu can't turn up the original source. Does it sound familiar to anyone here?
This is Andy Grove and Gordon Moore at Intel, deciding in 1985 to exit the memory-chip business; Grove recounts it in Only the Paranoid Survive. Grove says he and Moore were in his cubicle, "sitting around ... looking out the window, very sad." Then Grove asked Moore a question.
"What would happen if somebody took us over, got rid of us — what would the new guy do?" he said.
"Get out of the memory business," Moore answered.
Grove agreed. And he suggested that they be the ones to get Intel out of the memory business.
It seems to me that there's no difference in kind between moral intuitions and religious beliefs, except that the former are more deeply held. (I guess that makes me a kind of error theorist.)
If that's true, that means FAI designers shouldn't work on approaches like "extrapolation" that can convert a religious person to an atheist, because the same procedure might convert you into a moral nihilist. The task of FAI designers is more subtle: devise an algorithm that, when applied to religious belief, would encode it "faithfully" as a util...
Low-fat diet could kill you, major study shows (Lancet Canadian study of 135,000 adults)
"Those with low intake of saturated fat raised chances of early death by 13 per cent compared to those eating plenty. And consuming high levels of all fats cut mortality by up to 23 per cent."
“Higher intake of fats, including saturated fats, are associated with lower risk of mortality.”
“Our data suggests that low fat diets put populations at increas...
Integrals sum over infinitely small values. Is it possible to multiply infinitely small factors instead? For example, the integral of some random dx is a constant, since infinitely many infinitely small values can sum up to any constant. But can you do something along the lines of taking an infinitely large root of a constant, getting an infinitesimal differential that way? Multiplying those differentials would yield some constant again.
My off-the-cuff impression is that this probably won't lead to genuinely new math. In the most basic case, all it does is move t...
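For what it's worth, this construction exists under the name "product integral" (it goes back to Volterra). A minimal sketch of the standard definition for positive f, which makes the point above explicit: taking logarithms turns the infinite product into an ordinary integral, so everything reduces to exp of a familiar object:

$$\prod_a^b f(x)^{dx} \;:=\; \lim_{\Delta x \to 0} \prod_i f(x_i)^{\Delta x} \;=\; \lim_{\Delta x \to 0} \exp\!\Big(\sum_i \ln f(x_i)\, \Delta x\Big) \;=\; \exp\!\Big(\int_a^b \ln f(x)\, dx\Big)$$

In particular, for a constant $f(x) = c$ on $[a, b]$, a partition into $n$ pieces gives factors $c^{\Delta x}$ (the "infinitely large roots" of the question), and their product is $c^{n \Delta x} = c^{\,b-a}$: an ordinary constant, as hoped. So the object is well-defined, but it is an ordinary integral in disguise.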
The Accidental Elitist (on academic jargon)
https://thebaffler.com/latest/accidental-elitism-alvarez
"There's a huge difference between jargon as a necessarily difficult tool required for the academic work of tackling difficult concepts, and jargon as something used by tools simply to prove they're academics."
"...confirm your choice to be a so-called academic, to assume it not only as a profession, but an identity, and to wear on yourself the trappings that come with that identity without stopping to wonder how necessary they really are and whether they are actually killing your ability to be and do something better."
I'm trying to find Alicorn's post, or anywhere else, where it is mentioned that she "hacked herself bisexual."
Sean Carroll writes in The Big Picture, p. 380:
The small differences in a person’s brain state that correlate with different bodily actions typically have negligible correlations with the past state of the universe, but they can be correlated with substantially different future evolutions. That's why our best human-sized conception of the world treats the past and future so differently. We remember the past, and our choices affect the future.
I'm especially interested in the first sentence. It sounds highly plausible (if by "past state" we ...
A short story, titled "The end of meaning"
It's propaganda for my work on improving autonomy. I'm not sure it's actually useful in that regard, but it was fun to write, and other people here might get a kick out of it.
Tamara blinked her eyes open. The fact that she could blink, had eyes, and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment in a single stroke. It was the start of a new age for mankind, one ruled not by cruel nature but by a benevolent AI.
Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, watch supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce anyone's suffering or increase their happiness, because by definition the AI was already maximising those with its superintelligence and human-aligned utility maximisation. She would have to look inside herself for which actions to take.
She had long been a believer in self-perfection and self-improvement. There were many different ways she might self-improve: should she improve her piano playing, become an astronomy expert, or plumb the depths of her own brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't decide between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.
blip
Tamara struggled awake. That was some nightmare she'd had about the singularity. Luckily it hadn't occurred yet; she could still fix things and make the most meaningful contribution in the human race's history by stopping death, suffering and pain.
As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be solving the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in the situation where she could be the most agenty and useful: just before the singularity. There would have to be enough pain and suffering in the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.
She should probably continue to try to save humanity, because of indexical uncertainty.
Although, if she had this thought, her life would be plagued by doubts about whether it was meaningful or not; so she was probably not in a simulation, as her utility was not being maximised. Probably...
Another thought gripped her: what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.
blip
A nightmare within a nightmare; that was the first time that had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem long ago, or else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem a longer shot than a singleton AI, since it requires people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based on their goals, and help other people; they would still be agents of their own destiny.
Serves her right for making self-improvement a foremost terminal value even when she knows it's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "