Well, this is a stupid questions thread after all, so I might as well ask one that seems really stupid.
How can a person who promotes rationality have excess weight? This has been bugging me for a while. Isn't it kind of the first thing you would want to apply your rationality to? If you have things to do that get you more utility, you can always pay a diet specialist and just stick to the diet, since it seems to me that additional years of life would bring you more utility than anything else you could spend that money on.
How can a person who promotes rationality have excess weight?
Easily :-)
This has been discussed a few times. EY has two answers, one a bit less reasonable and one a bit more. The less reasonable answer is that he's a unique snowflake and diet+exercise does not work for him. The more reasonable answer is that the process of losing weight downgrades his mental capabilities and he prefers a high level of mental functioning to losing weight.
From my (subjective, outside) point of view, the real reason is that he is unwilling to pay the various costs of losing weight. That, by the way, is not necessarily a rationality failure since rationality does not specify your value system and it's your values which determine whether a trade-off is worthwhile or not.
I have an intuition that I have dissolved the Sleeping Beauty paradox as semantic confusion about the word "probability". I am aware that my reasoning is unlikely to be accepted by the community, but I am unsure what is wrong with it. I am posting this to the "stupid questions" thread to see if it helps me gain any insight either on Sleeping Beauty or on the thought process that led to me feeling like I've dissolved the question.
When the word "probability" is used to describe the beliefs of an agent, we are really talking abou...
I don't think this is a stupid question, but everyone else seems to—that is, the immediate reaction to it is usually "there's obviously no difference." I've struggled with this question a lot, and the commonly accepted answer just doesn't sit well with me.
If different races have different skin, muscle/bone structure, genetics, and maybe other things, shouldn't it follow that different races could have different brains, too?
I know this is taboo, and feel the following sort of disclaimer is obligatory: I'm not racist, nor do I think any difference ...
So there exists a Pure Caucasian, a Pure Mongoloid, and a Pure Negroid out there? Can you identify them? Can you name a rational basis for those morphological qualities by which you know them? Is it a coincidence that the qualities you have chosen coincide perfectly with those that were largely developed by bias-motivated individuals living in Europe, Australia, and North America over the past few centuries? Why not back hair, toe length, presence of the palmaris longus muscle, renal vein anatomy, or the position of the sciatic nerve relative to the piriformis muscle? Am...
What exactly is status? What is this "thing I feel" when, all else being equal, I have the sense that someone has more or less status than me? Is it just some sort of neurotransmitter cocktail, or what?
Sorry, not a native English speaker.
Is there a non-obvious reason why I can't create a non-profit entity whose sole purpose is to receive donations for selected effective charities that operate overseas and distribute that money to them, thereby enhancing the usefulness of local donations by allowing them to be tax-deductible?
(Specifically Australia, generally otherwise)
Is there a biological basis for the idea that utilitarianism and the preservation of our species should motivate our actions? Or is it a purely selfish consideration: I feel good when others in my social environment feel good (and is it therefore even dependent on consensus)?
Why do people believe that AI is dangerous? What direct evidence is there that this is likely to be the case?
What kind of information does it give you when you observe that a bunch of filthy rich people were convinced by EY's arguments, but MIRI is still badly in need of more funding?
Should one value the potential happiness of theoretical future simulated beings more than a certain decline in happiness for currently existing meat beings which will result as soon as the theoretical become real? Should one allow for absurdly large populations if the result is absurd morality?
The promise of countless simulated beings of equal moral value to meat beings, and who can be more efficiently cared for than meat, seems to make the needs and wants of simulated beings de facto overrule the needs and wants of meat beings (as well as some absurdly l...
Listen to what the Khmer Rouge said
That's ... rather broad. Can you point to some specific thing indicating that the Khmer Rouge did what they did for reasons that resemble the ones you described?
the way the Chinese become market dominant in every south-east Asian country that acquires a Chinese minority
Thank you for alerting me to an interesting phenomenon of which I was not previously aware. On the face of it there are other explanations besides racial superiority; for instance, different social traditions can make one group succeed "against...
Evidence that that was why they did what they did?
[EDITED to add: Also: if this is meant to be an example of an atrocity arising from a "false belief about equality": evidence that in fact the Chinese were better off than the Khmer on account of racial inequality?]
I am confused by the distinction between solving a problem and checking the solutions for it if I 'just estimate' the solution. For example, if I am shown a picture of various scattered geometric figures, whose areas differ slightly but sometimes obviously, and asked how many kinds (classes?) of surface area there are, I will squint and guess. If I am shown many such pictures, perhaps the accuracy of my guesses will improve. But what is it I am actually doing?
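To make that squint-and-guess a bit more concrete, here is a minimal toy sketch (my own illustration, with an arbitrary gap threshold, not a claim about how perception actually works): treat the areas as points on a line and start a new "class" wherever there is an unusually large gap between sorted values.

```python
# Toy model: count "classes" of areas by sorting them and splitting wherever a
# gap is much larger than the typical gap. The gap_factor threshold is arbitrary.
def count_area_classes(areas, gap_factor=3.0):
    xs = sorted(areas)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    if not gaps:
        return 1 if xs else 0
    typical_gap = sorted(gaps)[len(gaps) // 2]  # a median-ish gap
    classes = 1
    for g in gaps:
        if g > gap_factor * max(typical_gap, 1e-9):
            classes += 1
    return classes

# Figures with areas near 10, near 25, and near 60 -> 3 classes
print(count_area_classes([9.8, 10.1, 10.3, 24.9, 25.2, 59.7, 60.4]))  # prints 3
```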
People in finance tend to believe (reasonably, I think) that the stock market trends upward. I believe they mean it trends upward even after you account for the value of the risk you take on by buying stock in a company (i.e. being in the stock market is not just selling insurance). So how does this mesh with the general belief that the market is at least pretty efficient? Why are we systematically underestimating future returns of companies?
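A minimal sketch of how those two beliefs can coexist, using made-up numbers: a risk-averse agent with log utility values a risky payoff below its expected value, so an efficient market can price an asset to yield more than the risk-free rate without anyone underestimating future cash flows; the excess return is payment for bearing risk.

```python
import math

# Made-up numbers for illustration only.
wealth = 100.0      # starting wealth
risk_free = 0.02    # 2% risk-free return (assumed)

# Risky asset: 50/50 chance the invested wealth grows 30% or shrinks 20%.
payoffs = [wealth * 1.30, wealth * 0.80]
probs = [0.5, 0.5]

expected_payoff = sum(p * w for p, w in zip(probs, payoffs))   # 105.0
expected_log = sum(p * math.log(w) for p, w in zip(probs, payoffs))
certainty_equivalent = math.exp(expected_log)                   # ~101.98

print(f"expected return:             {expected_payoff / wealth - 1:.2%}")       # 5.00%
print(f"certainty-equivalent return: {certainty_equivalent / wealth - 1:.2%}")  # ~1.98%
print(f"risk-free return:            {risk_free:.2%}")                          # 2.00%

# The 5% expected return is only worth ~2% to this log-utility agent, so the
# asset can be priced to "trend upward" relative to the risk-free rate even if
# everyone's estimates of future cash flows are unbiased.
```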
Would an AI that simulates a physical human brain be less prone to FOOM than a human-level AI that doesn't bother simulating neurons?
It sounds like it might be harder for such an AI to foom, since it would have to understand the physical brain well enough before it could improve on its simulated version. If such an AI exists at all, that knowledge would probably be available somewhere, so it could still happen if you simulated someone smart enough to learn it (or simulated one of the people who helped build it). The AI should at least be boxable if it does...
Would brain emulation work as a potential shortcut to the singularity? Upload a mind, speed up its subjective time, let it work on the problem? What could EY do with a thousand years to work on FAI? Could he come back in a few days of our time with the answer?
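As a rough back-of-the-envelope check on the numbers (taking "a few days" to mean 3 days, which is my own assumption):

```python
# How fast would an emulation have to run to fit 1000 subjective years into a
# few days of wall-clock time? (3 days is an assumed figure for illustration.)
subjective_years = 1000
wall_clock_days = 3

speedup = subjective_years * 365.25 / wall_clock_days
print(f"required speedup: ~{speedup:,.0f}x real time")  # prints ~121,750x
```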
Does an AI have to have a utility function? Can we just make it good at giving answers, instead of asking it to act on them?
Going over the Yudkowsky/Hanson AI-Foom debate, it seems like the basic issue is how much of a difference an insight or two can make in an AI.
An AI with chimp-level intelligent ...
The most recent post in December's Stupid Questions article is from the 11th.
I suppose as the article's been pushed further down the list of new articles, it's had less exposure, so here's another one for the rest of December.
Plus I have a few questions, so I'll get it kicked off.
It was said in the last one, and it's good advice, I think:
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.