Is simplicity truth indicative?
This essay claims to refute a popularized understanding of Occam's Razor that I myself adhere to. It confuses me, since I hold this belief at such a deep level that it's difficult for me to examine. Does anyone see any problems in its argument, or does it seem compelling? I specifically suspect it may be summarizing the relevant machine learning research badly, though I'm not very familiar with the field. It may also be failing to give simplicity any credit as a general heuristic when simplicity succeeds in a specific field, though it's unclear whether such credit would be justified. Finally, my intuition is that situations in nature with a steady bias toward growing complexity are more common than the author claims, and that such tendencies persist for longer than he allows. For all of this, however, I have no clear evidence to back up the ideas in my head, just vague notions that are difficult to examine. I'd appreciate someone else's perspective on this, as mine seems to be distorted.
Essay: http://bruce.edmonds.name/sinti/
The Unique Games Conjecture and FAI: A Troubling Obstacle
I am not a computer scientist and do not know much about complexity theory. However, it's a field that interests me, so I occasionally browse some articles on the subject. I was brought to https://www.simonsfoundation.org/mathematics-and-physical-science/approximately-hard-the-unique-games-conjecture/ by a link on Scott Aaronson's blog, and read the article to reacquaint myself with the Unique Games Conjecture, which I had partially forgotten about. If you are not familiar with the UGC, that article will explain it to you better than I can.
One phrase in the article stuck out to me: "there is some number of colors k for which it is NP-hard (that is, effectively impossible) to distinguish between networks in which it is possible to satisfy at least 99% of the constraints and networks in which it is possible to satisfy at most 1% of the constraints". I think this sentence is concerning for those interested in the possibility of creating FAI.
It is impossible to perfectly satisfy human values, as matter and energy are limited, and so, therefore, are the capabilities of even an enormously powerful AI. Thus, in trying to maximize human happiness, we are dealing with a constraint-satisfaction problem essentially analogous to the UGC's coloring problem. Additionally, our values themselves are ill-formed. Human values are numerous, ambiguous, even contradictory. Given the complexities of human value systems, I think it's safe to say we're dealing with a particularly nasty variation of the problem, worse than what computer scientists studying it have dealt with.
Not all specific instances of complex optimization problems are subject to the UGC and thus NP-hard, of course. So this does not in itself mean that building an FAI is impossible. Also, even if maximizing human values is NP-hard (or maximizing the probability of maximizing human values, or maximizing the probability of maximizing the probability of maximizing human values), we can still assess a machine's code and actions heuristically. However, even the best heuristics are limited, as the UGC itself demonstrates. At bottom, all heuristics must rely on inflexible assumptions of some sort.
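To make the coloring analogy concrete, here is a minimal sketch of a "unique game" in code. In a unique game, each edge of a network carries a permutation of the k colors, and an edge's constraint is satisfied when the color of one endpoint maps to the color of the other under that permutation. The tiny instance and all names below are illustrative inventions, not taken from the linked article; brute force works here only because the instance is tiny, and the exponential blowup for larger networks is exactly why the hardness of approximation matters.

```python
from itertools import product

K = 3                     # number of colors
NODES = [0, 1, 2, 3]
# Edges as (u, v, pi), where pi is a permutation given as a tuple:
# the constraint is coloring[v] == pi[coloring[u]].
EDGES = [
    (0, 1, (1, 2, 0)),
    (1, 2, (2, 0, 1)),
    (2, 3, (0, 1, 2)),
    (3, 0, (1, 0, 2)),
]

def satisfied_fraction(coloring):
    """Fraction of edge constraints that a given coloring satisfies."""
    ok = sum(1 for u, v, pi in EDGES if coloring[v] == pi[coloring[u]])
    return ok / len(EDGES)

def best_fraction():
    """Exhaustively search all K^n colorings for the best one.

    This is exponential in the number of nodes; the UGC concerns
    whether efficient algorithms can even approximate this optimum.
    """
    return max(satisfied_fraction(c) for c in product(range(K), repeat=len(NODES)))

print(best_fraction())  # 1.0 -- this toy instance happens to be fully satisfiable
```

The UGC's striking claim is about instances like this at scale: for some k, no efficient algorithm can even tell nearly-fully-satisfiable networks apart from ones where almost no constraints can be satisfied at once.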
Is arguing worth it? If so, when and when not? Also, how do I become less arrogant?
I've had several political arguments about That Which Must Not Be Named in the past few days, with people holding a wide variety of... strong opinions. I'm rather doubtful I've changed anyone's mind about anything, but I've spent a lot of time trying to do so, and I seem to have offended one person I know rather severely. And even if I have managed to change someone's mind through argument, it feels as though someone will just have to argue with them again when the next controversy comes along.
It's very discouraging to feel this way. It is frustrating when making an argument is taken as grounds for personal attack, and it's annoying to feel like I'm being forced into something by the disapproval of others. I'm tempted to just retreat from democratic engagement entirely. But there are disadvantages to this: for example, it becomes easier to maintain irrational beliefs when you never talk to people who disagree with you.
I think a big part of the problem is that I have an irrational alief that makes me feel like my opinions are uniquely valuable and important to share with others. I do think I'm smarter, more moderate, and more creative than most. But the feeling's magnitude and influence over my behavior are far greater than what's justified by the facts.
How do I destroy this feeling? Indulging it satisfies some competitive urges of mine and boosts my self-esteem. But I think it's bad overall despite this, because it makes evaluating the social consequences of my choices more difficult. It's like a small addiction, and I have no idea how to get over it.
Does anyone else here have an opinion on any of this? Advice from your own lives, perhaps?
I Want To Believe: Rational Edition
Relevant: http://lesswrong.com/lw/k7h/a_dialogue_on_doublethink/
I would like this conversation to operate under the assumption that there are certain special times when it is both instrumentally and epistemically rational to convince oneself of a proposition whose truth is indeterminate. I ask this because questioning the assumption makes it more difficult to use doublethink for productive purposes. There are many other places on this website where the ethics or legitimacy of doublethink can be debated, and I am already aware of its dangers, so please don't mention such things here.
I am hoping for some advice. "Wanting to believe" can be both epistemically and instrumentally rational, as in the case of certain self-fulfilling prophecies. If believing that I am capable of winning a competition will cause me to win, believing that I am capable of winning is rational both in the instrumental sense that "rationality is winning" and in the epistemic sense that "rationality is truth".
I used to be quite good at convincing myself to adopt beliefs of this type when they were beneficial. It was essentially automatic: I knew that I had the ability, so applying it was as trivial as remembering its existence. Nowadays, however, I'm almost unable to do this at all, despite remembering how easy it used to be. It's causing me significant difficulties in my personal life.
How can I redevelop my skill at this technique? Practicing will surely help, and I'm practicing right now so therefore I'm improving already. I'll soon have the skill back stronger than ever, I'm quite confident. But are there any tricks or styles of thinking that can make it more controllable? Any mantras or essays that will help my thought to become more fluidly self-directed? Or should I be focused on manipulating my emotional state rather than on initiating a direct cognitive override?
I feel as though the difficulties I've been having become most pronounced when I'm thinking about self-fulfilling prophecies that do not have guarantees of certainty attached. The lower my estimated probability that the self-fulfilling prophecy will work for me, the less able I am to use the self-fulfilling prophecy as a tool, even if the estimated gains from the bet are large. How might I deal with this problem, specifically?
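The bet framing above can be made explicit with a toy expected-value comparison. The numbers and the function below are entirely hypothetical, made up for illustration: the point is just that adopting a belief which raises the probability of success can have positive expected value even when the raised probability is still well below certainty.

```python
def expected_gain(p_if_believed, p_baseline, payoff, cost_of_failure=0.0):
    """Difference in expected value between adopting the belief and not.

    p_if_believed:   estimated success probability if the prophecy is believed
    p_baseline:      estimated success probability otherwise
    payoff:          value of success; cost_of_failure: downside of failing
    (All inputs are subjective estimates, not measured quantities.)
    """
    ev_believe = p_if_believed * payoff - (1 - p_if_believed) * cost_of_failure
    ev_baseline = p_baseline * payoff - (1 - p_baseline) * cost_of_failure
    return ev_believe - ev_baseline

# Believing doubles a 25% chance to 50%; with a large payoff, the
# expected gain is substantial even though failure remains likely:
print(expected_gain(0.50, 0.25, payoff=100.0))  # 25.0
```

Of course, the practical difficulty described above is precisely that knowing the expected value is positive doesn't, by itself, make the belief adoptable; the calculation only clarifies when the attempt is worth making.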