please go read the most basic counterarguments to this class of objections to anti-aging at https://agingbiotech.info/objections/
In my experience as a subject of hypnosis, I always have a background thought that I could choose not to do/feel the thing, and that I am choosing to do/feel it as I'm told. I distinctly remember feeling that background thought there, before choosing to do, or letting myself feel, the thing I'm told. It is still surprising how much, and how many, of the things that are usually subconscious can be controlled through it, though.
If your theory leads you to an obviously stupid conclusion, you need a better theory.
Total utilitarianism is boringly wrong for this reason, yes.
What you need is non-stupid utilitarianism.
First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without any utility; and if I do both at the same time, neither one compensates for the other. They are not the same dimension with the sign reversed. This is before going into the details where, for example, preference utilitarianism is a model where
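As a rough sketch of what "not a scalar" could look like (my own illustration, with arbitrary made-up numbers, not anything from the original comment): model each experience as a pair of non-negative components that add separately, instead of one signed number that cancels out.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    """Toy model: utility and disutility as separate non-negative axes,
    not one signed scalar."""
    utility: float      # how good the experience is, >= 0
    disutility: float   # how bad the experience is, >= 0

    def __add__(self, other: "Experience") -> "Experience":
        # Combining experiences adds each axis separately;
        # the bad part is NOT cancelled by the good part.
        return Experience(self.utility + other.utility,
                          self.disutility + other.disutility)

hug_plushie = Experience(utility=3.0, disutility=0.0)   # arbitrary numbers
kick_bedpost = Experience(utility=0.0, disutility=5.0)  # arbitrary numbers

both = hug_plushie + kick_bedpost
print(both)  # Experience(utility=3.0, disutility=5.0) -- not a single "-2.0"
```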
didn't we use to call those "exokernels" before?
I'm curious who the half is and why. Is it that they are half a rationalist? Half (the time?) in Berkeley? (If it is not half the time then where is the other half?)
Also: the N should be equal to the cardinality of the entire set of rationalists you have interacted with, not just of those who are going insane; so, if you have been interacting with seven and a half rationalists in total, how many of those are diving into the woo? Or, if you have been interacting with dozens of rationalists, how many times more than 7.5 were they?
There was a web thing with a Big Red Button, running in Seattle, Oxford (and I think Boston also).
Each group had a cake and if they got nuked, they wouldn't get to eat the cake.
When the Seattle counter had been saying the game was over for 1 second, someone there pushed the button for the lulz; but the Oxford counter was not at zero yet, so Oxford got nuked, and they then decided to burn the cake instead of just not eating it.
With common priors.
This is what does all the work there! If the disagreers have non-equal priors on one of the points, then of course they'll have different posteriors.
Of course applying Bayes' Theorem with the same inputs is going to give the same outputs, that's not even a theorem, that's an equals sign.
If the disagreers find a different set of parameters to be relevant, and/or the parameters they both find relevant do not have the same values, the outputs will differ, and they will continue to disagree.
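To make the "that's an equals sign" point concrete, here is a minimal numeric sketch (my own example numbers, not from the thread): two reasoners who share the same prior and the same likelihoods get the exact same posterior, while a different prior immediately yields a different posterior from the same evidence, so the disagreement survives the update.

```python
def posterior(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Same prior and same likelihoods: identical posteriors, by the equals sign.
print(posterior(0.5, 0.8, 0.2))  # 0.8
print(posterior(0.5, 0.8, 0.2))  # 0.8

# Different prior, same evidence: a different posterior, so the
# disagreement persists after both parties update correctly.
print(posterior(0.1, 0.8, 0.2))  # ~0.31
```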
"the problem with lesswrong : it's not literally twitter"
Thank you so much for writing this! I remember reading a tumblr post that explained the main point a while back and could never find it again (because tumblr is an unsearchable memory hole), and kept needing to link it to people who got stuck on taking Eliezer's joking one-liner seriously.
I'd bet at odds of at least 1:20 that lung scarring and brain damage are permanent.