Comment author: Qiaochu_Yuan 05 February 2013 05:31:52PM 1 point [-]

My thoughts and my behaviors. I suppose there is a third kind of emotion-hacking, namely hacking your emotional responses to external stimuli. But it's not as if I can respond to other people's thoughts, even in principle: all I have access to are sounds or images which purport to be correlated to those thoughts in some mysterious way.

Comment author: non-expert 05 February 2013 07:26:34PM *  0 points [-]

All emotions are responses to external stimuli, unless your emotions relate only to what is going on in your head, without reference to the outside (i.e. outside your body) world.

I agree you can't respond to others' thoughts, unless they express them such that they are "behaviors." Interestingly, the "problem" you have with the sounds or images (or words?) which purport to be correlated to others' thoughts is the same exact issue everyone is having with you (or me).

If we're confident in our own ability to express our thoughts (i.e. the correlation problem is not an issue for you), then how much can we dismiss others' expressions because of that very same issue?

Comment author: Qiaochu_Yuan 04 February 2013 04:58:15PM 2 points [-]

I suppose I should distinguish between two kinds of emotion-hacking: hacking your emotional responses to thoughts, and hacking your emotional responses to behaviors. The former is an epistemic technique and the latter is an instrumental technique. Both are quite useful.

Comment author: non-expert 05 February 2013 04:45:18PM 1 point [-]

Whose thoughts and whose behaviors? Not disagreeing, just asking.

Comment author: non-expert 04 February 2013 05:22:36PM *  -4 points [-]

EDIT: made small edits.

In my opinion, the question is brilliant and its importance is misunderstood, though EY somewhat dances around it.

Whether or not the tree makes a noise is irrelevant once no one can hear it, and thus whether or not the tree is heard is a pre-condition to knowledge that it has fallen/made noise. The point then is that (i) the lack of truth of a statement and (ii) the truth of a statement that cannot be understood are effectively the same thing.

In other words, what is pointless is trying to pin down truths that cannot be conclusively proven within the bounds of human comprehension (e.g., is there free will, what is the meaning of life), because practically speaking you're in the same place you would be if there were no answer -- just arguing among those who choose to consider the question in the first place.

Comment author: non-expert 04 February 2013 05:54:42PM *  1 point [-]

The best evidence that confirmation bias is real and ever-present is a website of similarly thinking people that values comments based on those very users' reactions. Perhaps unsurprisingly, those who conform to the conventional thought are rewarded with points. So I guess while the point system doesn't actually work as a substantive matter, at least we are afforded a constant reminder that confirmation bias is a problem even among those who purport to take it into account.

Of course, my poking fun will only work so long as I don't get so many negative points that I can no longer question the conventional thought (gasp!). What is my limit? I'll make sure to conform just enough to stay on here. :) The worst part is I'm not even trying to troll; I'm trying to listen and question at the same time, which is how I thought I was supposed to learn!

Comment author: TheOtherDave 04 February 2013 05:35:42PM 0 points [-]

Whether or not the tree makes a noise is irrelevant once no one can hear it, and thus whether or not the tree is heard is a pre-condition to knowledge that it has fallen.

This simply isn't true. There are lots of ways I can know a tree has fallen, even if nobody has heard the tree fall.

Comment author: non-expert 04 February 2013 05:45:28PM *  0 points [-]

What you're saying is obviously true, but it goes beyond the information available. The question, limited to the facts given, is representative of a larger point, which is the one I'm trying to explain as a general observation; it is not limited to whether that particular tree in fact fell and made a noise.

By the way, I never thanked you for our previous back-and-forth -- it was actually quite helpful, and your last comment in our discussion has kept me thinking for a couple of weeks now. Perhaps in a couple more I will respond!

In response to Against Modal Logics
Comment author: non-expert 04 February 2013 05:36:20PM -1 points [-]

What is the basis for the position that knowledge of the world must come from analytical/probabilistic models? I'm not questioning the "correctness" of your view, only wondering about your basis for it. It seems awfully convenient that a type of model that yields conclusions is in fact the correct one -- put another way, why is the availability of a clear methodology that gives you answers indicative of its universal applicability in attaining knowledge?

Traditional philosophy, as you correctly point out, has failed to bridge its theory to practice -- but perhaps that is the flaw of the users and not the theory. Rationalists generally believe the use of probabilities is sound methodology, and that the problems regarding decision-making are a flaw of the practitioners. Though I appreciate you likely disagree, perhaps we have the same problem with philosophy. Though there are no clear answers, the models of thought it provides could effectively apply in practical situations; it's just that no philosopher has been able to get there.

Comment author: Qiaochu_Yuan 03 February 2013 09:33:31PM *  4 points [-]

Agreed. The idea that I should be paying attention to and then hacking my emotions is not something I learned from the Sequences but from the CFAR workshop. In general, though, the Sequences are more concerned with epistemic than instrumental rationality, and emotion-hacking is mostly an instrumental technique (although it is also epistemically valuable to notice and then stop your brain from flinching away from certain thoughts).

Comment author: non-expert 04 February 2013 04:52:27PM 0 points [-]

Emotion-hacking seems far more important in epistemic rationality, as your understanding of the world is the setting in which you use instrumental rationality, and your "lens" (which presumably encompasses your emotions) is the key hurdle (assuming you are otherwise rational) preventing you from achieving the objectivity necessary to form true beliefs about the world.

Comment author: TheOtherDave 14 January 2013 02:10:00PM 1 point [-]

You are presupposing the world has certainty, and only are concerned with our ability to derive that certainty (or answers).

Yes. The vial is either poisoned or it isn't, and my task is to decide whether to drink it or not. Do you deny that?

In that model, looking for the "best system" to find answers makes sense.

Yes, I agree. Indeed, looking for systems to find answers that are better than the one I'm using makes sense, even if they aren't best, even if I can't ever know whether they are best or not.

I am proposing that there are issues for which answers do not necessarily exist,

Sure. But "which vial is poisoned?" isn't one of them. More generally, there are millions of issues we face in our lives for which answers exist, and productive techniques for approaching those questions are worth exploring and adopting.

Immediately before the vial is chosen, the only relevance of the Truth (referring to actual truth) is the extent to which the people and I believe something consistent.

This is where we disagree.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.

And if you have a more survival-conducive way of approaching the vials than I and the other 999 people in the room, we do better to listen to you than to each other, even though your opinion is inconsistent with ours.

thus I am arguing that the only relevance of Truth is the extent to which humans agree with it.

Again, this is where we disagree. The relevance of "Truth" (as you're referring to it... I would say "reality") is also the extent to which some ways of approaching the world (for example, sniffing the two vials, or weighing them, or a thousand other tests) reliably have better results than just measuring the extent to which other humans agree with an assertion.

In your example, immediately after the vial is taken -- we find out we're right or wrong -- and our subjective truths may change.

Sure, that's true.

But it's far more useful to better entangle our decisions (our "subjective truths," as you put it) with reality ("Truth") before we make those decisions.

Comment author: non-expert 14 January 2013 06:42:50PM 0 points [-]

With respect to your example, I can only play with those facts that you have given me. In your example, I assumed that knowledge of which vial has poison could not be known, and the best information we had was our collective beliefs (which are based on certain factors you listed). I agree with the task at hand as you put it, but the devil is of course in the details.

Which vial contains poison is a fact about the world, and there are a million other contingent facts about the world that go one way or another depending on it. Maybe the air around the vial smells a little different. Maybe it's a different temperature. Maybe the poisoned vial weighs more, or less. All of those contingent facts mean that there are different ways I can approach the vials, and if I approach the vials one way I am more likely to live than if I approach the vials a different way.

But as noted above, if we cannot derive the truth, it is just as good as not existing. If the "vial picker" knows the truth beforehand, or is able to derive it, so be it, but immediately before he picks the vial, the Truth, as the vial picker knows it, is of limited value -- he is unsure and everyone around him thinks he's an idiot. After the fact, everyone's opinion will change accordingly with the results. By creating your own example, you're presupposing (i) that an answer exists to your question AND (ii) that we can derive it -- we don't have that luxury in real life, and even if we have the knowledge that an "answer" exists, we don't know whether the vial picker can accurately pick the appropriate vial based on the information available.

The idea of subjective truth (or subjective reality) doesn't rely solely on the claim that reality doesn't exist; most generally it is based on the idea that there may be cases where a human cannot derive what is real even where there is some answer. If we cannot derive that reality, the existence of that reality must also be questioned. We of course don't have to worry about these subtleties if the examples we use assume an answer to the issue exists.

The upshot is that rationality, in my mind, is helpful only to the extent that (i) an answer exists and (ii) it can be derived. If the answers to (i) and (ii) are yes, rationality sounds great. If the answer to (i) is no, or the answer to (i) is yes but (ii) is no, rationality (or any other system) has no purpose other than to give us a false belief that we're going about things in the best way. In such a world, there will be great uncertainty as to the appropriate human course of action.

This is why I'm asking why you are confident the answer to (i) is yes for all issues. You're describing a world that provides a level of certainty such that the rationality model works in all cases -- I'm asking how you know that amount of certainty exists in the world; its convenience is precisely what makes its universal application suspect. As noted in my answer to MugaSofer, perhaps your position is based on assumption/faith without substantiation, which I'm comfortable with as a plausible answer, but I'm not sure that is the basis you are using for the conclusion. (For the record, my personal belief is that any sort of theory or basis for going about our lives requires some type of faith/assumptions, because we cannot have 100% certainty.)

Comment author: MugaSofer 14 January 2013 09:38:56AM *  1 point [-]

My position is that moral truths are not relative, exactly, but agents can of course have different goals. We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective".

Of course, an AI with limited reasoning capacity might judge wrongly, but then humans do likewise - see e.g. Nazis.

EDIT: Regarding EY writings on the subject, he wrote a whole Metaethics Sequence, much of which is leading up to or directly discussing this exact topic. Unfortunately, I'm having trouble with the filters on this library computer, but it should be listed on the sequences page (link at top right) or in a search for "metaethics sequence".

Comment author: non-expert 14 January 2013 06:04:03PM 0 points [-]

We can know what is Right, as long as we define it as "right according to human morals." Those are an objective (if hard to observe) part of reality. If we built an AI that tries to figure those out, then we get an ethical AI - so I would have a hard time calling them "subjective"

I don't dispute the possibility that your conclusion may be correct; I'm wondering about the basis on which you believe your position to be correct. Put another way, why are moral truths NOT relative? How do you know this? Thinking something can be done is fine (AI, etc.), but without substantiation it introduces a level of faith to the conversation -- I'm comfortable with that as the reason, but wondering if you are, or if you have a different basis for the position.

From my view, moral truths may NOT be relative, but I have no basis on which to know that, so I've chosen to operate as if they are relative because (i) if moral truths exist but I don't know what they are, I'm in the same position as if they didn't exist/were relative, and (ii) moral truths may not exist. This doesn't mean you don't use morality in your life; it's just that you need to believe, without substantiation, that the morals you subscribe to conform to universal morals, if they exist.

OK, I'll try to search for those EY writings. Thanks.

Comment author: MugaSofer 10 January 2013 10:37:35AM *  -2 points [-]

Wait, does this "truth is relative" stuff only apply to moral questions? Because if it does then, while I personally disagree with you, there's a sizable minority here who won't.

Comment author: non-expert 14 January 2013 08:01:43AM 0 points [-]

What do you disagree with? That "truth is relative" applies only to moral questions? Or that it applies to more than moral questions?

If instead your position is that moral truths are NOT relative, what is the basis for that position? No need to dive deep if you know of something I can read... even EY :)
