Comment author: kateblu 02 January 2012 04:34:12PM 1 point [-]

Confidence is a state of mind. It is critical from the standpoint of motivation: without confidence we would be paralyzed into inaction, unable to turn decisions into consequences. We would be constantly "scoping the game plan" and never playing. However, confidence should not play a major role in making decisions. Cold rationality is key in two aspects of the decision process: (a) how important is the decision? and (b) if the decision is important, what is the "outside view" (per Kahneman)? The first question, IMO, should be handled with an algorithm analogous to triage: ascertain enough basic facts to determine what decision actually needs to be made and how long it can be deferred. In other words, part of the algorithm might be answering the question, 'What happens if we do nothing?' If the decision appears essentially trivial (e.g., should I buy a new chair, and if so, the red one or the blue?), you don't need to get to (b). If the decision is important, you need cold rationality.

If I am dealing with a situation where the decision has already been made, I may be able to use learned skills and experience to determine how to act. The key question, from the standpoint of confidence, is then whether the situation falls within the scope of my expertise, where I can be confident that my trained 'gut reaction' will be an appropriate response. If it is outside my area of expertise, I have no reason to be confident, although I may act as if I am confident when success depends upon others trusting my abilities.

Regardless of how I may need to appear to others, I would never try to kid myself about my abilities. What may be missing from the question above (should I believe hard that I can accomplish X regardless of the likelihood of success?) are the foundational questions: Do I really have to try to accomplish X? Is there a reasonable alternative method that is more likely to succeed? Is there a reasonable alternative outcome Y that will give me the benefits I need from X with a greater chance of success? If the answers are Yes, No, No, then you have to believe in order to win, so throw the "Hail Mary" with total confidence.

Comment author: Stuart_Armstrong 25 December 2011 08:20:31AM *  17 points [-]

Indeed :-) but just like modern nature lovers will tell you all about it on their cell phones, there are some artifices he just won't count as artificial...

Plus, the Machines just dropped a love interest straight on him...

Comment author: kateblu 25 December 2011 04:18:17PM 1 point [-]

True, but you must remember that it is HER adventure. She is the one who hit the "pause" button. Did he have the ability to say "No"? Was there a "pause" button that he could have hit before she did?

Comment author: DanielLC 24 December 2011 09:09:15PM 7 points [-]

It's not quite clear to me whether Ishtar exists in manipulated multiverses or as an avatar with her brain in a vat, or ?

I get the impression that it's the real world. There was a guy obsessed with everything being natural, which would be impossible in a simulation. I suppose he could have meant it being identical to nature.

Comment author: kateblu 24 December 2011 10:01:40PM 3 points [-]

I think he wanted to create his eden without the assistance of machines. Since he has apparently been at it for centuries, though, he himself can't be totally natural.

Comment author: kateblu 24 December 2011 12:49:55PM 7 points [-]

Magnificent. I gather one has an eternity to figure out his or her version of utopia and that physical death is an option. It's not quite clear to me whether Ishtar exists in manipulated multiverses or as an avatar with her brain in a vat, or ?

Comment author: Maelin 23 December 2011 12:27:02AM *  1 point [-]

I don't think the intended meaning of the title The Fallacy of Grey is "grey is a fallacy". I think it's a much nicer-sounding name than "the fallacy of concluding that, because things are shades of grey instead of black and white, they are all equivalent".

Comment author: kateblu 23 December 2011 01:59:15PM 0 points [-]

Along the lines of the 'ambiguity of gray'? Or that something classified as gray can be said to be inherently undefined? To think about anything, it seems that we have to categorize it in some way. The category we choose, unless it is a category containing only that one item, will be a model also used to describe things or concepts that differ in significant ways from the 'it' we are trying to think about. The fallacy of black and white might then be described as confusing the category with the item itself. The fallacy of gray would be a failure to recognize that gray is a non-category, used for 'its' we have not yet been able to usefully categorize as properly belonging with other 'its' on one side of the spectrum or the other.

Comment author: kateblu 20 December 2011 02:15:23AM 0 points [-]

Thinking about the title of the post: why is gray a fallacy?

Comment author: wedrifid 17 December 2011 03:06:51PM *  0 points [-]

Putting it another way: bias cannot be eliminated since it provides the mental structure used by the brain to organize data.

I assume you mean all bias cannot be eliminated? Obviously we can eliminate most of it. We just need to keep inductive bias and our predilection for satisfying our own preferences.

Comment author: kateblu 17 December 2011 03:49:36PM 0 points [-]

Yes, I mean all bias. My working definition of bias is the set of assumptions we more or less rely on for most of our daily activity. For most of what I do, I don't have the time, or it isn't worth the energy, to scrutinize all the underlying assumptions that shape my reactions. But I can develop methodologies to identify when I need to be more critical of my assumptions and think before I act. I can also, I hope, learn to be a better analytical thinker and problem solver.

Comment author: kateblu 17 December 2011 02:37:56PM 0 points [-]

Putting it another way: bias cannot be eliminated, since it provides the mental structure the brain uses to organize data. Bias can be described as the operating system built by the brain as it functions. From what I have read, certain responses are hardwired, so to speak, into our brains by selective adaptation. Each of us has to have a point of view: a place where the individual receives initial, limited sets of data, and a system to turn that data into thoughts, measurements, reactions, or opinions. As we learn to recognize our biases and how they can lead us to serious errors in our interpretation of data, we can hope to make better decisions. I think most people registered on this website would agree that the goal of better decisions is both worthy and possible.

Looking at this statement from a different point of view, all measurements seem to lie on a continuum that regresses to some theoretical limit, depending upon how finely grained your measuring rod is. My understanding of modern realism is that the absolute or limit – be it infinity, the concept of a point particle, or perfectly black – does exist in some independent real world. Does the lead statement refer to our perceptions of black and white, or does it deny the possibility of perfectly black or white in an independent real world? On another level, does the possibility of perfectly black or white in an independent real world even matter? Most people agree that at some point on the spectrum, gray can usefully be called black. Shouldn't the focus of our moral judgment be aimed at the shifting line dividing more black from more white?

Comment author: kateblu 11 December 2011 02:26:37PM 2 points [-]

I started reading "Existential Risk Prevention" and ended up in an article by Bostrom titled "Existential Risk". I will read both. One of the existential risks classified as a "Bang" is 4.3, "We're living in a simulation and it gets shut down":

"A case can be made that the hypothesis that we are living in a computer simulation should be given a significant probability [27]. The basic idea behind this so-called “Simulation argument” is that vast amounts of computing power may become available in the future (see e.g. [28,29]), and that it could be used, among other things, to run large numbers of fine-grained simulations of past human civilizations. Under some not-too-implausible assumptions, the result can be that almost all minds like ours are simulated minds, and that we should therefore assign a significant probability to being such computer-emulated minds rather than the (subjectively indistinguishable) minds of originally evolved creatures. And if we are, we suffer the risk that the simulation may be shut down at any time. A decision to terminate our simulation may be prompted by our actions or by exogenous factors."

A brief comment on this statement, since it appears to be a real and present danger in the minds of many people: that God will destroy the earth is an existential threat perceived by adherents of a number of religions. The risk is managed by engaging in specified rituals or actions, which lets the perceived likelihood of the threat coming to fruition be manipulated by those holding or seeking power. Related concepts are that God will not destroy or permit the destruction of ALL humans, or that God will not permit the destruction of God's creation. The first cedes enormous power to those who are perceived as holding the keys to salvation; the second is a rationalization for doing nothing.

Back to reading. Thank you for sharing this link.

Comment author: kateblu 09 December 2011 03:42:02AM 15 points [-]

"If a theory has a lot of parameters, you adjust their values to fit a lot of data, and your theory is not really predicting those things, just accommodating them. Scientists use words like “curve fitting” and “fudge factors” to describe that sort of activity. On the other hand, if a theory has just a few parameters but applies to a lot of data, it has real power. You can use a small subset of the measurements to fix the parameters; then all other measurements are uniquely predicted. " Frank Wilczek
