please go read the most basic counterarguments to this class of objections to anti-aging at https://agingbiotech.info/objections/
In my experience as a subject of hypnosis, I always have a background thought that I could choose to not do/feel the thing, that I choose to do/feel as I'm told. I distinctly remember feeling the background thought there, before choosing to do, or letting myself feel, the thing I'm told. It is still surprising how much, and how many things that are usually subconscious, can be controlled through it, though.
On Wednesdays at the Princeton Graduate College, various people would come in to give talks. The speakers were often interesting, and in the discussions after the talks we used to have a lot of fun. For instance, one guy in our school was very strongly anti-Catholic, so he passed out questions in advance for people to ask a religious speaker, and we gave the speaker a hard time.
Another time somebody gave a talk about poetry. He talked about the structure of the poem and the emotions that come with it; he divided everything up into certain kinds of...
If your theory leads you to an obviously stupid conclusion, you need a better theory.
Total utilitarianism is boringly wrong for this reason, yes.
What you need is non-stupid utilitarianism.
First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither of those ends up compensating for the other. They are not the same dimension with the sign reversed. Thi
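To make the two-axes point concrete, here is a minimal sketch (my own illustration, with a made-up `Hedons` type, not anything from the original comment): utility and disutility kept as separate components that accumulate independently instead of cancelling into one net number.

```python
from dataclasses import dataclass

@dataclass
class Hedons:
    """Utility and disutility tracked as separate axes, not one signed scalar."""
    utility: float     # how good the good parts are
    disutility: float  # how bad the bad parts are (a positive magnitude)

    def combine(self, other: "Hedons") -> "Hedons":
        # Doing both things at once: the components accumulate separately;
        # they do not cancel out into a single net number.
        return Hedons(self.utility + other.utility,
                      self.disutility + other.disutility)

hug_plushie = Hedons(utility=1.0, disutility=0.0)
kick_bedpost = Hedons(utility=0.0, disutility=1.0)

print(hug_plushie.combine(kick_bedpost))
# Hedons(utility=1.0, disutility=1.0) -- not a net 0.0
```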
...didn't we use to call those "exokernels" before?
I'm curious who the half is and why. Is it that they are half a rationalist? Half (the time?) in Berkeley? (If it is not half the time then where is the other half?)
Also. The N should be equal to the cardinality of the entire set of rationalists you interacted with, not just of those who are going insane; so, if you have been interacting with seven and a half rationalists in total, how many of those are diving into the woo? Or, if you have been interacting with dozens of rationalists, how many times more than 7.5 is that?
There was a web thing with a Big Red Button, running in Seattle, Oxford (and I think Boston also).
Each group had a cake and if they got nuked, they wouldn't get to eat the cake.
At the moment the Seattle counter said the game had been over for 1 second, someone there punched the button for the lulz; but the Oxford counter was not at zero yet, so they got nuked, and then they decided to burn the cake instead of just not eating it.
I hope we all learned a valuable lesson here today.
With common priors.
This is what does all the work there! If the disagreers have unequal priors on one of the points, then of course they'll have different posteriors.
Of course ap
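To spell out why the common-priors clause carries the weight, a minimal Bayes'-rule sketch with made-up numbers (my illustration, not part of the original exchange): both agents apply the same likelihoods to the same observation, but their different priors leave them with different posteriors.

```python
# Two agents, same evidence model, different priors -> different posteriors.
def posterior(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis H after one observation."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Illustrative evidence model shared by both agents:
# P(observation | H) = 0.8, P(observation | not H) = 0.3
for name, prior in [("Agent A", 0.5), ("Agent B", 0.1)]:
    print(name, round(posterior(prior, 0.8, 0.3), 3))
# Agent A 0.727
# Agent B 0.229
```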
..."the problem with lesswrong : it's not literally twitter"
Thank you so much for writing this! I remember reading a tumblr post that explained the main point a while back and could never find it again (because tumblr is an unsearchable memory hole), and kept needing to link it to people who got stuck on taking Eliezer's joking one-liner seriously.
It may be that the person keeps expounding their reasons for wanting you to do the thing because it feels aversive to them to stop infodumping, and/or because they expect you to respond with your reasons for doing the thing, so that they know whether your doing the thing is an instance of 2 or of 3.
The AIs still have to make atoms move for anything Actually Bad to happen.
No. Correct models are good. Or rather, more correct models, applied properly, are better than less correct models, or models applied wrongly.
All those examples, however, are bad:
Calories in / Calories out is a bad model because different sources of calories are metabolized differently and have different effects on the organism. It is bad because it is incomplete and used improperly for things that it is bad at. It stays true that to get output from a mechanism, you do have to input some fuel into it; CICO is good enough to calculate, for example, how m
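For concreteness, a rough back-of-the-envelope sketch of the sort of question CICO is good enough for, assuming the common (and itself only approximate) figure of about 7700 kcal per kilogram of body fat; the numbers are illustrative, not from the comment.

```python
# A rough back-of-the-envelope sketch, assuming ~7700 kcal per kg of body fat.
KCAL_PER_KG_FAT = 7700  # a widely used approximation, not a precise constant

def weeks_to_lose(kg_to_lose: float, daily_deficit_kcal: float) -> float:
    """Estimate how many weeks a steady daily calorie deficit takes to burn off kg_to_lose."""
    return kg_to_lose * KCAL_PER_KG_FAT / (daily_deficit_kcal * 7)

print(round(weeks_to_lose(5, 500), 1))  # ~11.0 weeks for 5 kg at a 500 kcal/day deficit
```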
Is this enlightenment anything like what is described in https://aellagirl.com/2017/07/07/the-abyss-of-want/ ?
Also possibly related: http://nonsymbolic.org/wp-content/uploads/2014/02/PNSE-Article.pdf (can you point out where on that map you think you found yourself?)
I'm thinking of something like a section on the main lesserwrong.com page showing the latest edits to the wiki, so that users of the site could see them and choose to go look at whether what changed in the article is worth awarding points for.
I think the lesswrong wiki was supposed to be that repository of the interesting/important things that were posted to the community blog.
It could be a good idea to make a wiki in lw2.0 and award site karma to people contributing to it.
welp, 2/4 current residents and the next one planned to come there are trans women, so um, what gender ratio issue again?
Why yes, there should be such a list; I don't know of any existing one.
Well, so far it's ... a group house, with long late-night conversations; we also run self-experiments (currently measuring the results of a low-carb diet) and organize the local monthly rationalist meetup.
We are developing social tech solutions such as a database of competing access needs and a formal system for dealing with house logistics.
I am confused about what sort of blog post you are requesting people write. I assume you don't mean that people should list off a variety of interesting facts about the Bay Area, e.g. "the public transit system, while deeply inadequate, is one of the best in the country," "UCSF is one of the top hospitals in the United States for labor and delivery," "everything in San Francisco smells like urine," "adult night at the Exploratorium is awesome," "there are multiple socialist pizza places in Berkeley which sc...
This means you are trying to Procrustes the human squishiness into legibility, with consistent values. You should, instead, be trying to make pragmatic AIs that would frame the world for the humans, in the ways that the humans would approve*, taking into account their objectively stupid incoherence. Because that would be Friendly and parsed as such by the humans.
*=this doesn't mean that such human preferences as those that violate meta-universalizability from behind the veil of ignorance should not be factored out of the calculation of what is ethicall...
Note, last year's survey was also run by /u/ingres
I think the questions of the next survey should be a superset of those on the last survey. Maybe not strictly, but tracking year-on-year changes is too interesting to give up by removing questions, unless it's really unquestionably obvious that they're superfluous.
New users need 2 points to vote.
I'd bet at odds of at least 1:20 that lung scarring and brain damage are permanent.