Note that the original question wasn't "Is it right for a pure altruist to have children?", it was "Would a pure altruist have children?". And the answer to that question most definitely depends on the beliefs of the altruist being modeled. It's also a more useful question, because it leads us to explore which beliefs matter and how they affect the decision (the alternative being that we all start arguing about our personal beliefs on all the relevant topics).
This sounds like a sufficiently obvious failure mode that I'd be extremely surprised to learn that modern index funds operate this way, unless there's some worse downside that they would encounter if their stock allocation procedure was changed to not have that discontinuity.
I think the important insight you may be missing is that the AI, if intelligent enough to recursively self-improve, can predict what the modifications it makes will do (and if it can't, then it doesn't make that modification, because creating an unpredictable child AI would be a bad move according to almost any utility function, even that of a paperclipper). And it evaluates the suitability of these modifications using its utility function. So assuming the seed AI is built with a sufficiently solid understanding of self-modification and what its own code is...
Could someone explain the reasoning behind answer A being the correct choice in Question 4? My analysis was to assume that, since 30 migraines a year is still pretty terrible (for the same reason that the difference in utility between 0 and 1 migraines per year is larger than the difference between 10 and 11), I should treat the question as asking "Which option offers more migraines avoided per unit money?"
Option A: $350 / 70 migraines avoided = $5 per migraine avoided
Option B: $100 / 50 migraines avoided = $2 per migraine avoided
And when I di...
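For concreteness, here's that arithmetic as a quick Python sketch; the marginal B-to-A comparison at the end is my own framing, not something the question asks for:

```python
# Cost-effectiveness check for the two options as stated above.
options = {"A": {"cost": 350, "avoided": 70},
           "B": {"cost": 100, "avoided": 50}}

for name, o in options.items():
    print(f"Option {name}: ${o['cost'] / o['avoided']:.2f} per migraine avoided")

# Marginal comparison (my own framing): what does upgrading from B to A buy?
extra_cost = options["A"]["cost"] - options["B"]["cost"]
extra_avoided = options["A"]["avoided"] - options["B"]["avoided"]
print(f"B -> A: ${extra_cost / extra_avoided:.2f} per additional migraine avoided")
# Option A: $5.00, Option B: $2.00, B -> A: $12.50
```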
My understanding is that it was once meant to be almost tvtropes-like, with a sort of back-and-forth linking between pages about concepts on the wiki and posts which refer to those concepts on the main site (in the same way that tvtropes gains a lot of its addictiveness from the back-and-forth between pages for tropes and pages for shows/books/etc).
I think we're in agreement then, although I've managed to confuse myself by trying to actually do the Shannon entropy math.
In the event we don't care about birth orders we have two relevant hypotheses which need to be distinguished between (boy-girl at 66% and boy-boy at 33%), so the message length would only need to be 0.9 bits if I'm applying the math correctly for the entropy of a discrete random variable. So in one somewhat odd sense Sarah would actually know more about the gender than George does.
Which, given that the original post said
...S
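For anyone who wants to check the 0.9-bit figure, here's the standard discrete-entropy formula applied to those two hypotheses (a quick Python sketch):

```python
from math import log2

# Entropy of a discrete distribution: H = -sum(p * log2(p))
def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# Boy-girl at 2/3, boy-boy at 1/3, ignoring birth order (as above)
print(f"{entropy([2/3, 1/3]):.3f} bits")  # ~0.918
```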
The standard formulation of the problem is such that you are the one making the bizarre contortions of conditional probabilities by asking a question. In the standard setup the person you meet has no children with him; he tells you only that he has two children, and you ask him a question rather than having information volunteered. When you ask "Is at least one a boy?", you set up the situation such that the conditional probabilities of the various responses are very different.
In this new experimental setup (which is in fact a different problem from eithe...
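Here's a quick Monte Carlo sketch of the two setups, under the assumption that the child he's out walking with is chosen uniformly at random; the asking-a-question setup gives ~1/3 for two boys, while the observation setup gives ~1/2:

```python
import random

random.seed(0)
N = 100_000

yes, two_boys_given_yes = 0, 0        # Setup 1: he answers "yes" to "at least one boy?"
seen_boy, two_boys_given_seen = 0, 0  # Setup 2: a randomly chosen child turns out to be a boy

for _ in range(N):
    kids = [random.choice("BG") for _ in range(2)]
    if "B" in kids:
        yes += 1
        two_boys_given_yes += kids.count("B") == 2
    walking = random.choice(kids)  # assumes the walking child is picked at random
    if walking == "B":
        seen_boy += 1
        two_boys_given_seen += kids.count("B") == 2

print("P(two boys | 'at least one boy'):", two_boys_given_yes / yes)              # ~1/3
print("P(two boys | random child is a boy):", two_boys_given_seen / seen_boy)     # ~1/2
```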
I agree that George definitely does know more information overall, since he can concentrate his probability mass more sharply over the 4 hypotheses being considered, but I'm fairly certain you're wrong when you say that Sarah's distribution is 0.33-0.33-0-0.33. I worked out the math (which I hope I did right or I'll be quite embarrassed), and I get 0.25-0.25-0-0.5.
I think your analysis in terms of required message lengths is arguably wrong, because the purpose of the question is to establish the genders of the children and not the order in which they were b...
I'll just note in passing that this puzzle is discussed in this post, so you may find it or the associated comments helpful.
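For concreteness, here's the update I did as an explicit Bayes computation, again assuming the walking child is chosen uniformly at random:

```python
# Exact Bayesian update over the four ordered hypotheses,
# given the observation "the child out walking is a boy".
hypotheses = ["BG", "GB", "GG", "BB"]
prior = {h: 1/4 for h in hypotheses}
# Likelihood of seeing a boy if the walking child is picked at random:
likelihood = {h: h.count("B") / 2 for h in hypotheses}

unnorm = {h: prior[h] * likelihood[h] for h in hypotheses}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)  # {'BG': 0.25, 'GB': 0.25, 'GG': 0.0, 'BB': 0.5}
```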
I think the specific issue is that in the first case, you're assuming that each of the three possible orderings yields the same chance of your observation (that the child out walking with him is a boy). If you assume that his choice of which child to go walking with is random, then the fact that you see a boy makes the (girl, boy) possibilities each less likely, so together they are exactly as likely as the (boy, boy) one.
Let's define (imaginin...
The Shangri-La diet has been mentioned a few times around here, and each time I came across it I went "Hmm, that's cool, I'll have to do it some time". Last week I realized that this was in large part due to the fact that all discussions of it say something along the lines of "Sugar water is listed as one of the options, but you should really do one of the less pleasant alternatives". And this was sufficient to make me file it away as something I should do "some time".
I'm not in any population which is especially more strongly...
Maybe "value loading" is a term most people here can be expected to know, but I feel like this post would really be improved by ~1 paragraph of introduction explaining what's being accomplished and what the motivation is.
As it is, even the text parts make me feel like I'm trying to decipher an extremely information-dense equation.
Actually, I don't think oxygen tanks are that expensive relative to the potential gain. Assuming that the first result I found for a refillable oxygen tank system is a reasonable price, and conservatively assuming that it completely breaks down after 5 years, that's only $550 a year, which puts it within the range of "probably worthwhile for any office worker in the US" (assuming an average salary of $43k) if it confers a performance benefit greater than around 1.3% on average.
These tanks supposedly hold 90% pure oxygen, and are designed to be us...
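The break-even arithmetic, with my assumptions ($550/year amortized cost, $43k average salary) made explicit:

```python
# Back-of-the-envelope break-even point for the oxygen-tank idea above.
annual_cost = 550    # assumed: system price amortized over 5 years
salary = 43_000      # assumed: average US office salary
breakeven_gain = annual_cost / salary
print(f"Break-even productivity gain: {breakeven_gain:.1%}")  # ~1.3%
```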
The self-modification isn't in itself the issue, though, is it? It seems to me that just about any sort of agent would be willing to self-modify into a utility monster if it expected that strategy to be more likely to achieve its goals, and the pleasure/pain distinction simply adds a constant (negative) offset to all utilities (which is meaningless, since utility functions are generally assumed to be invariant under positive affine transformations).
I don't even think it's a subset of utility monster; it's just a straight-up "agent deciding to become a utility monster because that furthers its goals".
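To make the affine-invariance point concrete, here's a toy sketch (the lotteries and numbers are made up) showing that an expected-utility maximizer picks the same option before and after a positive affine transformation:

```python
# Two made-up lotteries: (probability, outcome utility) pairs.
lotteries = {
    "safe":   [(1.0, 5.0)],
    "gamble": [(0.5, 0.0), (0.5, 12.0)],
}

def best(transform):
    # Pick the lottery with the highest expected transformed utility.
    eu = {name: sum(p * transform(u) for p, u in outcomes)
          for name, outcomes in lotteries.items()}
    return max(eu, key=eu.get)

print(best(lambda u: u))            # gamble (EU 6 vs 5)
print(best(lambda u: 3 * u - 100))  # gamble again: a*U + b with a > 0 preserves the choice
```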
removed unnecessarily harsh comment, the basic message of which was supposed to be "the evidence you presented is imaginary", but which ended up being too much of a personal attack
When in doubt, try a cheap experiment.
Make a list of various forms of recreation, then do one of them for some amount of time whenever you feel the need to take a break. Afterwards, note how well-rested you feel and how long you performed the activity. It shouldn't take many repetitions before you start to notice trends you can make use of.
Although to be honest, the only conclusive thing I've learned from trying that is that there's a large gap between "feeling rested" and "feeling ready to get back to work on something productive".
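The bookkeeping for this can be as simple as something like the following (the activities and ratings are made up):

```python
# Log (activity, minutes, restedness 1-10) after each break,
# then look at the per-activity averages for trends.
from collections import defaultdict

log = [
    ("walk", 15, 7), ("reddit", 30, 3), ("nap", 20, 8),
    ("walk", 20, 8), ("reddit", 25, 4),
]

totals = defaultdict(lambda: [0, 0])  # activity -> [sum of ratings, count]
for activity, minutes, rested in log:
    totals[activity][0] += rested
    totals[activity][1] += 1

for activity, (s, n) in totals.items():
    print(f"{activity}: average restedness {s / n:.1f} over {n} breaks")
```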
I realized upon further consideration that I don't actually have any evidence regarding keyboards and RSI, so here are the most relevant results of my brief research:
I'm going to disagree with the weakness of your recommendation. I may be falling prey to the typical mind fallacy here, but I feel that everyone who types for a significant fraction of their day (programming, writing, etc.) should at least strongly consider getting a mechanical keyboard. In addition to feeling nicer to type on, there's some weak evidence that buckling-spring keyboards can lower your risk of various hand injuries down the line, and even a slightly lessened risk of RSI is probably worth the extra $60 or so that a mechanical keyboard costs, before even accounting for the greater durability.
I'm not particularly attached to that metric; it was mostly just an example of "here's a probably-cheap hack which could help remedy the problem". On the other hand, I'm not convinced that one post means that an "Automatically promote after a score of 10" policy wouldn't improve the overall state of affairs, even if that particular post is a net negative.
I feel like the mechanism probably goes something like:
An appropriate umeshism might be "If you've never gotten a post moved to Discussion, you're being too much of a perfectionist."
The problem, of course, is that there are very few things we can do to...
This is an extremely clear explanation of something I hadn't even realized I didn't understand. Thank you for writing it.