Comment author: [deleted] 22 October 2011 10:50:51PM *  1 point [-]

The diamond case: Even if I did want a diamond, I simulate that I would feel nervous, alarmed even, if I indicated that I wanted it to bring me one box and I was brought a different box instead.

My brief recapitulation of Yudkowsky’s diamond example (which you can read in full in his CEV document) probably misled you a little bit. I expect that you would find Yudkowsky’s more thorough exposition of “extrapolating volition” somewhat more persuasive. He also warns about the obvious moral hazard involved in mere humans claiming to have extrapolated someone else’s volition out to significant distances – it would be quite proper for you to be alarmed about that!

If one creates something that would act according to what one would want if one /were/ more intelligent or more moral or more altruistic, then either A) that would only be desirable if one currently were such a person rather than the current self, or B) it would be a good upgraded-replacement-self to let loose on the universe while one ceases to exist, without seeking to have one's own will be done (other than on that matter of self-replacement).

Taken to the extreme this belief would imply that every time you gain some knowledge, improve your logical abilities or are exposed to new memes, you are changed into a different person. I’m sure you don’t believe that – this is where the concept of “distance” comes into play: extrapolating to short distance (as in the diamond example) allows you to feel that the extrapolated version of yourself is still you, but medium or long distance extrapolation might cause you to see the extrapolated self as alien.

It seems to me that whether a given extrapolation of you is still “you” is just a matter of definition. As such it is orthogonal to the question of the choice of CEV as an AI Friendliness proposal. If we accept that an FAI must take as input multiple human value sets in order for it to be safe – I think that Yudkowsky is very persuasive on this point in the sequences – then there has to be a way of getting useful output from those value sets. Since our existing value computations are inconsistent in themselves, let alone with each other, the AI has to perform some kind of transformation to cohere a useful signal from this input – this screens off any question of whether we’d be happy to run with our existing values (although I’d certainly choose the extrapolated volition in any case). “Knowing more”, “thinking faster”, “growing up closer together” and so on seem like the optimal transformations for it to perform. Short-distance extrapolations are unlikely to get the job done, therefore medium- or long-distance extrapolations are simply necessary, whatever your opinion on the selfhood question.

Eliezer says: “If our extrapolated volitions say we don't want our extrapolated volitions manifested, the system replaces itself with something else we want, or vanishes in a puff of smoke.” A possible cause of such an output might be the selfhood concern that you have raised.

Comment author: Multipartite 23 October 2011 07:25:22PM 0 points [-]

Diamond: Ahh. I note that, looking at the equivalent diamond section, 'advise Fred to ask for box B instead' (hopefully including an explanation of one's knowledge of the desired diamond's presence) is a notably helpful potential action, compared to the other listed options, which are variably undesirable.


Varying priorities: That I change over time is an accepted aspect of existence. There is uncertainty, granted; on the one hand I don't want to make decisions that a later self would be unable to reverse and might disapprove of, but on the other hand I am willing to sacrifice the happiness of a hypothetical future self for the happiness of my current self (and different hypothetical future selves)... hm, I should read more before I write more, as otherwise redundancy is likely. (Given that my priorities could shift in various ways, one might argue that I would prefer something to act on what I currently definitely want, rather than on what I might or might not want in the future (yet definitely do not want (/want not to be done) /now/). An issue of possible oppression of the existing for the sake of the non-existent... hm.)

To check, does 'in order for it to be safe' refer to 'safe from the perspectives of multiple humans', compared to 'safe from the perspective of the value-set source/s'? If so, possibly tautologous. If not, then I likely should investigate the point in question shortly.

Another example that comes to mind regarding a conflict of priorities: 'If your brain were this much more advanced, you would find this particular type of art the most sublime thing you'd ever witnessed, and would want to fill your hard drive with its genre. I have thus done so, even though to you, who own the hard drive and can't appreciate it, it consists of uninteresting squiggles, and it has overwritten all the books and video files that you were lovingly storing.'


Digression: If such an entity acts according to a smarter-me's will, does the smarter-me then necessarily 'exist', theoretically, as simulated/interpreted by the entity? Put another way, for a chatterbot to accurately create the exact interactions/responses that a sapient entity would, is it theoretically necessary for a sapient entity to effectively exist, simulated by the non-sapient entity, or could such an entity mimic a sapient entity without sapience entering into the matter? (Would then a mimicked-sapient entity exist in a meaningful sense, but only if there were sapient entities hearing its words and benefiting from its willed actions, compared to if there were only multiple mimicked entities talking to each other? Hrm.) | If a smarter-me were necessarily simulated in a certain sense in order to carry out its will, I might be willing to accede to it in the same spirit as to extremely-intelligent aliens/robots wanting to wipe out humanity for their own reasons, but I would be unwilling to accept things which are against my interests being carried out for the interests of an entity which does not in fact in any sense exist.


Manifestation: It occurs to me that a sandbox version could be interesting to observe, one's non-extrapolated volition wanting our extrapolated volitions to be modelled in simulated world-section level 2, and as a result of such a contradiction instead the extrapolated volitions of those in level 2 /not/ being modelled in level 3, yet still being modelled in level 2... again, though, while such a tool might be extremely useful for second-guessing one's decisions and discussing with one very, very good reasons to rethink them (and thus in fact oneself changing hopefully-beneficially as a person (?) where applicable), something which directly defies one's will(/one's curiosity) lacks appeal as a goal (/stepping stone) to work towards.

Comment author: Multipartite 22 October 2011 12:17:21PM *  4 points [-]

I unfortunately lack time at the moment; rather than write a badly-thought-out response to the complete structure of reasoning considered, I will for the moment write fully-thought-out thoughts on minor parts thereof that my (?) mind/curiosity has seized on.


'As for “taking over the world by proxy”, again SUAM applies.': this sentence stands out, but glancing upwards and downwards does not immediately reveal what SUAM refers to. Ctrl+F and looking at all appearances of the term SUAM on the page does not reveal what SUAM refers to. The first page of Google results for 'SUAM' does not reveal what SUAM refers to.

Hopefully SUAM is a reference to an S* U* A* M* acronym used elsewhere in the article or in a different well-known article, but a suggestion may be helpful: if the former, writing out the full phrase followed by '(SUAM)' at first use would be convenient in terms of phrase->acronym; if the latter, a reference to the source, or else the expanded form of the acronym, would be convenient.


The diamond case: Even if I did want a diamond, I simulate that I would feel nervous, alarmed even, if I indicated that I wanted it to bring me one box and I was brought a different box instead. I'm reminded--though this is not directly relevant--of Google searches, where I on occasion look up a rare word I'm unfamiliar with, and instead am given a page of results for a different (more common) word, with a question at the top asking me if I instead want to search for the word I searched for.

For Google, I would be much less frustrated if it always gave me the results I asked for, and maybe asked if I wanted to search for something else. (That way, when I do misspell something, I'm rightfully annoyed at myself and rightfully pleased with the search engine's consistent behaviour.) For the diamond case, I would be happy if it for instance noticed that I wanted the diamond and alerted me to its actual location, giving me a chance to change my official decision.

Otherwise, I would be quite worried about it making other such decisions without my official consent, such as "Hmm, you say you want to learn about these interesting branches of physics, but I can tell that you say that because you anticipate doing so will make you happy, so I'll ignore your request and pump your brain full of drugs instead forever.". Even if in most cases the outcome is acceptable, for something to second-guess your desires at all means there's always the possibility of irrevocably going against your will.

People may worry that a life of getting whatever one wants(/asks for) may not be ideal, but I'm reminded of the immortality/bat argument in that a person who gets whatever that person wants would probably not want to give that up for the sake of the benefits that would arguably come with not having those advantages.

In a more general sense, given that I already possess priorities and want them to be fulfilled (and know how I want to fulfill them), I would appreciate an entity helping me to do so, but would not want an entity to fulfill priorities that I don't hold or try to fulfill them in ways which conflict with my chosen methods of fulfilling them. If one creates something that would act according to what one would want if one /were/ more intelligent or more moral or more altruistic, then either A) that would only be desirable if one currently were such a person rather than the current self, or B) it would be a good upgraded-replacement-self to let loose on the universe while one ceases to exist, without seeking to have one's own will be done (other than on that matter of self-replacement).

Comment author: Multipartite 22 October 2011 08:25:15PM 0 points [-]

Reading other comments, I note my thoughts on the undesirability of extrapolation have largely been addressed elsewhere already.


Current thoughts on giving higher preference to a subset:

Though one would be happy with a world reworked to fit one's personal system of values, others likely would not be. Though selected others would be happy with a world reworked to fit their agreed system of values, others likely would not be. Moreover, given change over time, even if such agreement held to a certain degree at one point in time, changes based on it might later turn out to be regrettable.

Given that one's own position (and those of any other subset) are liable to be riddled with flaws, multiplying may dictate that some alternative to the current situation in the world be provided, but it does not necessarily dictate that one must impose one subset's values on the rest of the world to the opposition of that rest of the world.

Imposition of peace on those filled with hatred who deeply desire war results in a worsening of those individuals' situation. Imposition of war on those filled with love who strongly desire peace results in a worsening of those individuals' situation. Taking it as given that each subset's ideal outcome differs significantly from that of every other subset in the world, any overall change according to the will of one subset seems liable to yield more opposition and resentment than it does approval and gratitude.

Notably, when trying to think up a movement worth supporting, such an approach seems frightening and unstable: people with differing opinions climbing over each other to be the ones who determine the shape of the future for the rest.

What, then, is an acceptable approach on which the wills of all these people coincide, given that each is opposed to other groups' wills being imposed on the unwilling?

Perhaps: to not remake the world in your own image, nor even in the image of people you choose as fit to remake the world in their own image, nor even in the image of people whom someone you know nothing about chose as fit to remake the world in their own image.

Perhaps a goal worth cooperating towards, joining everyone's forces together, is that of an alternative, or perhaps many, which people can choose to join and which will be imposed on all who are willing, and only on those who are willing.

For those who dislike the system others choose, let them stay as they are. For those who like such systems more than their current situation, let them leave and be happier.

Leave the technophiles to their technophilia, the... actually I can't select other groups, because who would join and who would stay depends on what gets made. Perhaps it might end up with different social groups existing under the separate jurisdictions of different systems, while all those who preferred their current state to any systems as yet created remained on Earth.

A non-interference arrangement with free-to-enter alternatives for all who prefer them to the default situation: while maybe not anyone's ideal, hopefully something that all can agree is better, and something that is in fact worse for no one.

(Well, maybe to those people who have reasons for not wanting chunks of the population to leave in search of a better life..? Hmm.)

Comment author: Stuart_Armstrong 22 October 2011 12:35:00PM 5 points [-]

SUAM = shut up and multiply

Comment author: Multipartite 22 October 2011 07:56:13PM 1 point [-]

Ahh. Thank you! I was then very likely at fault on that point, being familiar with the phrase yet not recognising the acronym.

Comment author: [deleted] 11 October 2011 07:55:50PM *  2 points [-]

If you think that torture is worse than dust specks, at what step do you not go along with the reasoning?

When I first read Eliezer's post on this subject, I was confused by this transitivity argument. It seems reasonable. But even at that point, I questioned the idea that if all of the steps as you outline them seem individually reasonable, but torture instead of dust specks seems unreasonable, it is "obvious" that I should privilege the former output of my value computation over the latter.

My position now is that in fact, thinking carefully about the steps of gradually increasing pain, there will be at least one that I object to (but it's easy to miss because the step isn't actually written down). There is a degree of pain that I experience that is tolerable. Ouch! That's painful. There is an infinitesimally greater degree of pain (although the precise point at which this occurs, in terms of physical causes, depends on my mood or brain state at that particular time) that is just too much. Curses to this pain! I cannot bear this pain!

This seems like a reasonable candidate for the step at which I stop you and say no, actually I would prefer any number of people to experience the former pain, rather than one having to bear the latter - that difference just barely exceeded my basic tolerance for pain. Of course we are talking about the same subjective level of pain in different people - not necessarily caused by the same severity of physical incident.

This doesn't seem ideal. However, it is more compatible with my value computation than the idea of torturing someone for the sake of 3^^^3 people with dust specks in their eyes.

In response to comment by [deleted] on [SEQ RERUN] Torture vs. Dust Specks
Comment author: Multipartite 15 October 2011 10:35:29PM 1 point [-]

I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or an unpleasant-but-survivable illness and (phobic) vaccination.

'prefer any number of people to experience the former pain, rather than one having to bear the latter': applying this across time as well as across numbers, one can reach the state of comparing {one person suffering brief unbearable pain} to {a world of pain, every person constantly existing just at the threshold at which it's possible to not go insane}. Somewhat selfishly casting oneself in the position of potential sufferer and chooser, should one look on such a world of pain and pronounce it to be acceptable as long as one does not have to undergo a moment of unbearable pain? Is the suffering one would undergo truly weightier than the suffering the civilisation would labour under?

The above question is arguably unfair, both in that I've extended across time without checking acceptability, and also in that I've put the chooser in the position of a sacrificer. For the second part, hopefully it can be resolved by stipulating that the chooser does not value another's suffering notably above or below the importance of the chooser's own. (Then again, maybe not.)

As for time, can an infinite number of different people each suffering a certain thing for one second be judged at least as bad as a single person suffering the same thing for five seconds? If so, then one can hopefully extend suffering across time as well as across numbers, and thus validly reach the 'world of pain versus moment of anguish' situation.

(In regard to privileging, note that dealing with large numbers is known to cause failure of degree appreciation due to the brain's limitations, whereas induction tends to be reliable.)

Comment author: [deleted] 11 October 2011 10:44:32AM 3 points [-]

Presumably, there's going to be some variation with how the people are feeling. Given 3^^^3 people, this will mean that I can pretty much find someone under any given amount of pleasure/pain.

...

It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.

If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don't believe there's any reason to expect there to be a distribution of pain in the way that you describe - or in any case it seems like Yudkowsky's point should generalise, and I'm not sure that it does.

If all 3^^^3 + 1 people are at pain level 0, and I then have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them at pain level 0 and bringing one of them up to pain level 1,000,000,000,000 - I would choose the former.

I may have increased the number of pain units in existence, but my value computation doesn't work by adding up "pain units". I'm almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven't reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold and so I do care about him.

I don't know where the threshold(s) are, but I'm sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else's pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that "pain units" are additive.
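
A minimal sketch of the two accounting rules being contrasted; the population size, the pain numbers, the caring threshold and the function names are all made-up stand-ins, chosen only for illustration:

    # Purely illustrative: the threshold, the pain levels, and the population
    # size stand in for "my threshold for caring", "pain level 1",
    # "pain level 1,000,000,000,000" and "3^^^3 people".

    def additive_badness(pains):
        # Treats "pain units" as additive: total badness is just the sum.
        return sum(pains)

    def threshold_badness(pains, threshold=100):
        # Only pain above some (arbitrary) caring threshold counts at all.
        return sum(p for p in pains if p > threshold)

    many_mild = [1] * 10**6        # a large crowd at pain level 1
    one_tortured = [10**12]        # one person at pain level 1,000,000,000,000

    print(additive_badness(many_mild), additive_badness(one_tortured))
    # 1000000 1000000000000 -> under additive accounting, a big enough crowd
    # of mild pains eventually outweighs the single tortured person.

    print(threshold_badness(many_mild), threshold_badness(one_tortured))
    # 0 1000000000000 -> under a caring threshold, no number of sub-threshold
    # pains ever adds up to one above-threshold pain, so the units are not additive.

The second rule is, of course, only a caricature of whatever my brain is actually doing.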

In response to comment by [deleted] on [SEQ RERUN] Torture vs. Dust Specks
Comment author: Multipartite 11 October 2011 07:30:10PM 0 points [-]

Is the distribution necessary (other than as a thought experiment)?

Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number and the 3s would increase with the 0s decreasing-- then for what hypothetical distribution would it be even worse and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than to someone in a normal life, or vice versa, or is it just as bad no matter what the distribution, in which case the distribution is unimportant?

...then again, if one weighs matters solely on the magnitude of individual change, then that greater difference can appear and disappear like a mirage when one shifts back and forth between considering those involved collectively or reductionistically... hrm. | Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).

...the aim of the even distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent and so for consistency one must continue to view it as worse...

(Again, this time shifting it to a 0-1-2 case, why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)
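
A tiny sketch of that accounting, where the value of N and the decision to measure badness purely by the size of each individual's change are assumptions made only to keep the comparison concrete:

    # Illustrative only: N is arbitrary, and "badness" here is deliberately
    # defined as nothing but the magnitude of each individual's change.
    N = 1000

    A = [(0, 1)] * N   # N people moved from pain level 0 to 1
    B = [(1, 2)] * N   # N people moved from pain level 1 to 2
    C = [(2, 3)] * N   # N people moved from pain level 2 to 3

    def total_change(transitions):
        return sum(after - before for before, after in transitions)

    for label, option in [("4A", 4 * A), ("4B", 4 * B), ("4C", 4 * C), ("A+B+C", A + B + C)]:
        print(label, total_change(option))
    # 4A 4000 / 4B 4000 / 4C 4000 / A+B+C 3000
    # Measured this way, A+B+C is the smallest change of the four; accepting
    # each of 4A, 4B and 4C while rejecting A+B+C only makes sense if badness
    # depends on the level someone ends up at, not just on the size of the change.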


The arbitrary-points concept, rather than a smooth gradient, is also a reasonable point to consider. For a smooth gradient, the more pain another person is going through, the more objectionable it is. For an arbitrary threshold, one could find someone suffering greatly not to be an objectionable thing, yet find someone else suffering by a negligible amount more to be a significantly objectionable thing. Officially adopting such a cut-off point for sympathy--particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning--would seem to be incompatible with true benevolence and desire for others' well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.

(In other words, if we know that we are liable not to scale our contribution according to the scale of (the results of) what we're contributing towards, we should aim to take that into account and deliberately, manually, impose the scaling that otherwise would have been left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when considering decisions that could control such suffering.)

(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one's true system of values. If deliberately rewiring one's brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one's previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one's default behaviour.)

Comment author: Multipartite 08 October 2011 04:07:16PM 5 points [-]

('Should it fit in a pocket or backpack?': Robot chassis, please. 'Who is the user?': Hopefully the consciousness itself. O.O)

Comment author: Multipartite 07 October 2011 09:22:41AM *  1 point [-]
  • In general, make decisions according to the furtherance of your current set of priorities.
  • Personally, though I enjoy certain persistent-world games for their content and lasting internal advantages, the impression I've gotten from reading others' accounts of World of Warcraft compared to other games is that it takes up a disproportionate amount of time/effort/money compared to other sources of pleasure.

For that game, the sunk-costs fallacy and the training-to-do-random-things-infinitely phenomenon may help in speculating about why so many sink and continue to sink time into it. I've noticed that people who bite the bullet and quit speak not as though they were dependent and longing to relapse into remembered joy, but rather as though horrified in retrospect at how they let themselves get used to essentially playing to work, that is doing something which in theory they enjoyed yet which in practice was itself a source of considerable stress/boredom/frustration. (Again, I have had no direct experience with the game.)

For cocaine, straightforwardly there's the expectation that it would do bad things to your receptors (as well as your nose lining...) such that you would gain dependency and require it for normality, as with caffeine and nicotine and alcohol. Your priorities would be forcibly changed to a state incompatible with your current priorities, thus it is worth avoiding. If there were a form or similar substance which in fact had no long-term neurological effects, that is, one which actually gave you the high without causing any dependency (is that even theoretically possible, though, considering how the brain works? Well, if dropped to the level of most things, then instead, say...), it might be worth trying in the same way that music is helpful for cheering oneself up (if it cost less as well?), or perhaps sweet foods would be a better example there.

The standard answer for sex is that it's already part of your system of priorities, and so there's little helping it. Practically speaking, it would probably be far easier if one could just turn off one's interest in that regard and focus one's energy elsewhere--particularly, in terms of the various psychological/physiological health benefits, if one already cannot experience it yet is near-futilely driven to seek it. Again though, there one more wants to turn off 'the impulse to have sex' rather than sex itself, since there are advantages if you want to have sex and do compared to if you want to and can't, and also advantages if you don't want to and don't compared to if you want to and can't.

Hm... returning to the original question wording, if one treats World of Warcraft as a potentially-addictive use of time that may truly or otherwise effectively rewire one's system of priorities to the point of interference with one's current priorities, then it is likely reasonable to avoid it for that reason. It's again important to note which priorities are true priorities (such as improvement of the world?) that one wishes to whole-heartedly support, and which are priorities that, when one stops to think about them, one doesn't have a particular reason to value (such as the sex drive issue, which actually doesn't have much going for it compared to other ways of pursuing pleasure).

(Species-wide reproductive advantages are acknowledged.)

Comment author: lukeprog 02 October 2011 02:37:06AM 11 points [-]

No, because "Alice" was not operating by Crocker's Rules.

Comment author: Multipartite 04 October 2011 10:05:20PM *  1 point [-]

Crocker's Rules: A significantly interesting formalisation that I had not come across before! <happiness> Thank you!


On the one hand, even if someone doesn't accept responsibility for the operation of their own mind it seems that they nevertheless retain responsibility for the operation of their own mind. On the other hand, from a results-based (utilitarian?) perspective I see the problems that can result from treating an irresponsible entity as though they were responsible.

Unless one judges there to be a significant probability that one would shortly be stabbed, have one's reputation tarnished, or otherwise suffer an unacceptable consequence, there seem to be significant ethical arguments against {acting to preserve a softening barrier/buffer between a fragile ego and the rest of the world} and for {providing information either for possible refutation or for helpful greater total understanding}.
|
Then again, this is the same line of thought which used to get me mired in long religion-related debates which I eventually noticed were having no effect, so--especially given the option of decreasing possible reprisals' probabilities to nearly zero--treating others softly as lessers to be manipulated and worked around instead of interacting with on an equal basis has numerous merits.
|
--Though that then triggers a mental reminder that there's a sequence (?) somewhere with something to say about {not becoming arrogant and pitying others} that may have a way to {likewise treat people as irresponsible and manipulate them accordingly, but without looking down on them} if I reread it. <goes to look>

Comment author: Multipartite 28 September 2011 10:56:19PM 3 points [-]

Note that these people believing this thing to be true does not in fact make it any likelier to be false. We judge it to be less {more likely to be true} than we would for a generic positing by a generic person, down to the point of no suspicion one way or the other, but this positing is not in fact reversed into a positive impression that something is false.

If one takes two otherwise-identical worlds (unlikely, I grant), one in which a large body of people X posit Y for (patently?) fallacious reasons and one in which that large body of people posit the completely-unrelated Z instead, then it seems that a rational (?) individual in both worlds should have roughly the same impression on whether Y is true or false, rather than in the one world believing Y to be very likely to be false.

One may not give the stupid significant credence when they claim it to be day, but one still doesn't believe it any more likely to be night (than one did before).

((As likely noted elsewhere, the bias-acknowledgement situation results in humans being variably treated as more stupid and less stupid depending on local topic of conversation, due to blind spot specificity.))
