I presume the purpose of a utopia is to optimize the feelings of everyone in it. This could be done in two main ways: (1) directly, by acting on the emotional/sensory processing areas of the brain, or (2) indirectly, by altering the environment. This article seems to assume that (2) is the only way to create a utopia. I understand that some effort would need to be spent on optimizing the environment, in order for the practical aspects of (1) to be developed and continually improved, but this article focuses entirely on environmental optimization. I wonder if this is due to a (subconscious?) moralistic belief that pleasure should be "meaningful", and that pleasure which is not related to real-life outcomes is "bad". Or perhaps it is simply due to a lack of familiarity with the current and potential ways of directly acting on our brains (I doubt the theory itself would be unfamiliar, but a lack of personal experience could leave the concept abstract and less well understood).
If we don't get bored, we still don't like the idea of joy without variety. And joyful experiences only seem good if they are real and meaningful.
I don't understand what you mean about not liking the idea of joy without variety. Do you mean that people don't want to constantly feel joy, that they would rather feel a range of different emotions, including the unpleasant ones? This is not true for me personally.
Also, why do joyful experiences need to be real or meaningful? I think there is meaning in pleasure itself. Perhaps the most joyful experiences you have had were ones that were "real and meaningful", so you have come to see joy as inextricably connected to meaningfulness. Throughout evolution, this was probably true. But with modern technology, and the ability to biohack our neurochemistry, this association is no longer a given.
I agree with gfarb that the claimant does not necessarily believe in a belief; taking the parable literally, it is far more likely that the claimant suffers from a delusional disorder and genuinely believes that there is a dragon in his garage. As to why he anticipates that no one else will be able to detect the dragon, that is most likely explained by his past experiences of other people denying his claims and failing to find evidence for them.
Truth is important because it is instrumental to all areas of life. By increasing our overall epistemic rationality, we will understand the world better, and so be able to act (or withhold action) in ways that increase our quality of life. Without epistemic rationality, instrumental rationality may be incoherent and misdirected, seeking goals that are counterproductive to the agent's and/or common wellbeing. For example, a person might highly value outcome X, and practice instrumental rationality to achieve that outcome. However, if they had a better understanding of epistemic rationality, they might no longer value outcome X and instead more highly value different outcomes. Epistemic rationality allows us to "optimize" our values.
Optimizing our values and behaviour increases common wellbeing, so I think truth seeking and epistemic rationality are a moral imperative for everyone. I believe that the desire for increased wellbeing is actually the most important reason for truth seeking, and since it affects everyone, it is a moral/civil duty.
I think it's great that the apostrophes were left out. Apart from possessive apostrophes, which I think should be used, apostrophes are an extra effort (especially when texting) that adds no extra meaning or clarification.
Hi everyone, I have recently started reading this site again after a break of a couple of years.
What I like most about LessWrong (apart from the name), is that it contains so much good quality information about rationality, all in one place. And the posters maintain such a high standard of reasoning. One thing I would like to see more of on the site, though, is analyses of common human behaviour.
Also, it would be great to see some humour on here.
My model for this is that there are strong norms against optimization. Specifically we are supposed to be genuine, which is to say conduct ourselves in dating as we would normally conduct ourselves, such that the people we date get an accurate view of the "real" us.
From what I have seen of online dating profiles, this view is extremely rare amongst the general population, and rare even amongst members of the rationalist community. Anecdotally, people tend to be even more dishonest in their dating profiles than they are irl. Most people don't seem to understand the concept of representing themselves accurately, much less believe it is something they should aim for.
I think it is more likely that most poorly received dating profiles/dating behaviour is due to poor social awareness, as well as limits on how well certain perceived personal flaws can be concealed. E.g. an overweight person will try to dress in a way that makes them look thinner, and will use a photo from when they weighed less, but there is only so much their clothes can do to hide their weight, and their photo can't differ TOO much from reality because this will be discovered upon meeting irl. Also, differences in social attitudes and relationship goals can make for some unpleasant dating experiences.
In the Fun Theory Sequence, Eliezer writes about the real-life applications of this, e.g.
and
So I don't think the article was intended only as a guide to authors.