Comment author: Vaniver 22 November 2012 03:42:40PM *  3 points [-]

The problem is how classical logical statements work. The statement "If A then B" more properly translates as "~(A and ~B)".

Thus, we get valid logical statements that look bizarre to humans: "If Paris is the capital of France, then Rome is the capital of Italy" seems untrue in a causal sense (if we changed the capital of France, we would not change the capital of Italy, and vice versa) but it is true in a logical sense: A is true and B is true, so (A and ~B) is (true and false), which is false, and ~false is true.

That example just seems silly, but the reverse case is disastrous. Notice that, because of the "and," if A is false then it doesn't matter what B is: (false and X) is false, and ~false is true. If I choose the premise "Marseilles is the capital of France," then any B works. "If Marseilles is the capital of France, then I will receive infinite utility" is a true statement under classical logic, but it is clearly not a causal relationship: changing the capital will not grant me infinite utility, and as soon as the capital changes, the logical truth of the sentence will change.

If you have a reasoner that makes decisions, it needs to use causal logic, not classical logic, or it will get tripped up by the word "implication."
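
The truth-functional reading above can be checked mechanically. A minimal sketch in Python (the function name is mine, not from the comment):

```python
# Material implication "If A then B" read as ~(A and ~B),
# equivalently (not A) or B.
def implies(a: bool, b: bool) -> bool:
    return not (a and not b)

# Paris/Rome case: true antecedent, true consequent -- true.
assert implies(True, True)
# Marseilles case (vacuous truth): a false antecedent makes the
# implication true no matter what B is.
assert implies(False, True) and implies(False, False)
# The only way the implication comes out false:
assert not implies(True, False)
```

The last two assertions are exactly the "disastrous" property: with a false premise, any consequent at all yields a logically true sentence.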

Comment author: aaronde 24 November 2012 03:26:12AM *  1 point [-]

I get that. What I'm really wondering is how this extends to probabilistic reasoning. I can think of an obvious analog. If the algorithm assigns zero probability that it will choose $5, then when it explores the counterfactual hypothesis "I choose $5", it gets nonsense when it tries to condition on the hypothesis. That is, for all U,

  • P(utility=U | action=$5) = P(utility=U and action=$5) / P(action=$5) = 0/0

is undefined. But is there an analog for this problem under uncertainty, or was my sketch correct about how that would work out?
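
To make the 0/0 concrete, here is a minimal sketch (the function name is illustrative, not from the comment) of why conditioning on a probability-zero event has to fail rather than return a number:

```python
# P(U | A) = P(U and A) / P(A) is undefined when P(A) = 0 -- the
# agent's predicament when it assigns zero probability to "I choose $5".
def conditional(p_joint: float, p_event: float) -> float:
    if p_event == 0.0:
        raise ValueError("conditioning on a probability-zero event is undefined")
    return p_joint / p_event

# Ordinary case: P(U and A) = 0.3, P(A) = 0.5 gives P(U | A) = 0.6.
# Zero case: P(utility=U and action=$5) and P(action=$5) are both 0,
# so the quotient is 0/0 and no answer exists:
try:
    conditional(0.0, 0.0)
except ValueError:
    pass  # undefined, as the comment says
```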

Comment author: [deleted] 21 November 2012 12:05:54PM *  4 points [-]

but it makes me uncomfortable when LWers say things like:

Edited! If that's poor phrasing, I want to fix it. My intended goal was "I need to reference the topic of this article in some manner, so that people will know why to read it," and from your post it seems that wasn't getting across.

However, that is not the first critique I have gotten about phrasing, and in retrospect, I am concerned that I am more of a rationality pretender than an actual rationalist. I mean, I approve of rationality, and I try to follow the math (and can't when it starts getting hard, frequently because it would take too long, and I am usually following Less Wrong intermittently while focusing on other things as well), but I have received multiple complaints that I feel I can fairly sum up as: "You're the rationalist equivalent of an annoying cheerleader yelling 'Go Team, Smash the Other Team.' That's not what rationality is about. Please stop."

I think it is safe to say that I really do have that as a problem (multiple different sources seem to indicate it to me.) And I would prefer to fix it, but I'm not sure how to fix it. If you or anyone else have thoughts on how to change, I am open to suggestion.

In response to comment by [deleted] on Open Thread, November 16–30, 2012
Comment author: aaronde 21 November 2012 07:17:12PM *  5 points [-]

That's exactly the impression that I got. That it was awkward phrasing, because you just didn't know how to phrase it - but that it wasn't a coincidence that you defaulted to that particular awkward phrasing. It seems that, on some level, you were surprised to see people outside lesswrong discussing "lesswrong ideas." Even though, intellectually, you know that most of the good ideas on lesswrong didn't originate here. Don't be too hard on yourself. I probably have the opposite problem, where, as a meta-contrarian, I can't do anything but criticize lesswrong.

If you want to avoid sounding like a cheerleader, I think the best rule of thumb is to just not name-drop. It's great if you get a lot of ideas from Eliezer and lesswrong, but then communicate those ideas in a way that makes it difficult to trace them back to lesswrong. This should come naturally, because you shouldn't believe everything you hear on lesswrong anyway. Confirm what you hear with an independent source, and then you can refer to that source instead of lesswrong, just like you would with information you learned on wikipedia.

Comment author: Anatoly_Vorobey 19 November 2012 08:52:47PM *  3 points [-]

I believe that (mathematical) proofs aren't easily reducible to axiomatic proofs, and that proofs have been, and still are, profoundly social in nature, although I don't know whether that will continue indefinitely. I probably won't find the time to write the large post on this topic that I've been thinking of, so I want to quote here one observation that's been on my mind recently, in case someone finds it useful.

Eliezer quotes (in the post which I, in accordance with the above, see as wrong-headed in some respects) one definition of proof that he disagrees with: "A proof is a social construct – it is what we need it to be in order to be convinced something is true. If you write something down and you want it to count as a proof, the only real issue is whether you’re completely convincing."

I think this is too vague to work as a definition of proof, but it does capture the social/communication aspect that I consider important. A thinks X is right and P is a proof of X. It isn't enough that P convinces A that X is right; when A communicates P to B, it should convince B too.

A few days ago, I was rereading Vladimir Uspensky's essay on the philosophy of mathematics that I read many years ago and forgot. In the section about proofs, Uspensky offers this informal definition:

A proof is a convincing argument that convinces us to such a degree that we can with its help convince others.

That is, it isn't enough that B is convinced by P that X is right; P should be such that B should be able to spread the gospel onwards. A proof doesn't just bridge a void between two minds; it's capable of leaping on and on. It's a virus with conviction as its payload: it instills conviction in its host and can spread itself (rather than merely conviction) to others.

I've been musing since then about this addition to what I'd thought of as an informal social definition of a proof. I've been going back and forth about how necessary and profound it is, but I think I'm converging on forth.

Comment author: aaronde 21 November 2012 12:05:39AM 0 points [-]

I don't understand how Uspensky's definition is different from Eliezer's. Is there some minimum number of people a proof has to convince? Does it have to convince everyone? If I'm the only person in the world, is writing a proof impossible, or trivial? It seems that both definitions are saying that a proof will be considered valid by those people who find it absolutely convincing. And those people who do not find it absolutely convincing will not consider it valid. More importantly, it seems that this is all those two definitions are saying, which is why neither of them is very helpful if we want something more concrete than the colloquial sense of proof.

Comment author: [deleted] 20 November 2012 07:45:14PM *  2 points [-]

I found an article explaining Motivated Reasoning in The Atlantic and it seemed like a good fit for the Open Thread.

http://www.theatlantic.com/politics/archive/2012/11/how-partisans-fool-themselves-into-believing-their-own-spin/265336/

That being said, one of the core links inside the article (the one linking to the paper it draws some of its conclusions from) was broken. I've pasted the correct link below if you want to read the paper as well.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2071478

In response to comment by [deleted] on Open Thread, November 16–30, 2012
Comment author: aaronde 20 November 2012 11:24:40PM *  8 points [-]

I liked the fact that the author didn't use cognitive bias as an excuse to give up on talking about politics altogether (which seems to be LWian consensus), but instead made demonstrable claims about politics.

EDIT: in response to the previous version of Michaelos' post, I said:

It makes me uncomfortable when LWers say things like:

"Politics is the Mindkiller" appears to be acknowledged as early as the second sentence.

It smacks of, "Oh, look at the unenlightened people finally catching on." Lesswrong didn't invent cognitive science, and "politics is the mindkiller" is just our term for a well-established result of cognitive science. The article is about motivated reasoning, and the author isn't "acknowledging" it, but explaining it.

Comment author: aaronde 20 November 2012 09:00:22PM 0 points [-]

Can anyone point me toward work that's been done on the five-and-ten problem? Or does someone want to discuss it here? Specifically, I don't understand why it is a problem for probabilistic algorithms. I would reason:

There is a high probability that I prefer $10 to $5. Therefore I will decide to choose $5, with low probability.

And there's nowhere to go from there. If I try to use the fact that I chose $5 to prove that $5 was the better choice all along (because I'm rational), I get something like:

The probability that I prefer $5 to $10 is low. But I have very high confidence in my rationality, meaning that I assign high probability, a priori, to any choice I make being the choice I prefer. Therefore, given that I choose $5, the probability that I prefer $5 is high. So $5 doesn't seem like a bad choice, since I'll probably end up with what I prefer.

But things still turn out right, because:

However, the probability that I prefer $10, given that I choose $10, is even higher, because the probability that I prefer $10 was high to begin with. Therefore, $10 is a better choice than $5, because the probability that (I prefer $10 to $5 given that I choose $10) is higher than the probability that (I prefer $5 to $10 given that I choose $5).

So unless I'm missing something, the five-and-ten problem is just a problem of overconfidence.
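
This line of reasoning can be sketched numerically. The priors and the 0.95 "trust in my own rationality" figure below are made-up illustrations, not anything from the thread:

```python
# Made-up numbers: the agent starts out 90% sure it prefers $10, and
# trusts its own rationality at 0.95, i.e. P(choose X | prefer X) = 0.95.
P_CHOOSE_GIVEN_PREFER = 0.95

def posterior_prefer(prior_prefer: float) -> float:
    """P(prefer X | choose X) by Bayes' rule, for an option whose prior is prior_prefer."""
    p_choose = (P_CHOOSE_GIVEN_PREFER * prior_prefer
                + (1 - P_CHOOSE_GIVEN_PREFER) * (1 - prior_prefer))
    return P_CHOOSE_GIVEN_PREFER * prior_prefer / p_choose

p5 = posterior_prefer(0.1)   # P(prefer $5 | choose $5), roughly 0.68
p10 = posterior_prefer(0.9)  # P(prefer $10 | choose $10), roughly 0.99
assert p10 > p5              # so $10 remains the better choice, as argued above
```

Conditioning on "I choose $5" does raise the probability that $5 was preferred, but never above the corresponding figure for $10, so the comparison still comes out right.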

Comment author: MileyCyrus 14 November 2012 04:26:09PM 10 points [-]

If I 'work' for forty hours next week on reading LW

Reading Less Wrong is consumptive, not productive. You need to have something to show for your work, e.g. a novel draft, a fitter body, a cleaner house.

If you want to accomplish anything in a post-forager society, you're going to need to learn how to plan, and how to follow through with those plans. How are you going to get anything done if you don't have the discipline to put in the hours?

And yes, self-discipline in one area is linked to self-discipline in another. You have a "tank," so to speak, of self-control that gets depleted when you are doing something difficult and renewed when you are resting or at leisure. Using your self-control in one area depletes the amount left for another. If you have a small tank (low self-discipline), then you run out of fuel faster (you quit working sooner). In the long run, though, you can increase the size of your tank by doing difficult tasks, such as working for a specified number of hours each week.

Comment author: aaronde 14 November 2012 11:53:15PM *  3 points [-]

Reading Less Wrong is consumptive, not productive. You need to have something to show for your work, e.g. a novel draft, a fitter body, a cleaner house.

Isn't easy/hard the more useful distinction than consumptive/productive? After all, reading the news is productive in the sense of having something to show for it, because you will seem more informed in conversation. And working out can be a form of consumption, if you buy a gym membership.

Personally, I've always loved working out. So I don't have much to gain by trying to motivate myself to work out even more, because I'm obviously already very fit. And "forcing" myself to work out isn't going to test my self-discipline either. If I'm going to put in 40 hours of scheduled "work" next week, then at least some of it should be spent on things I find hard, and therefore don't do often enough.

Similarly, if reading geeky blog articles is what you do for fun, CAE_Jones (which seems probable since you're here), it's unlikely that reading even more geeky blog articles will improve your life. That said, you might want to start off scheduling things you would expect yourself to do anyway, for the same reason that you might want to start off scheduling less than 40 hours a week and slowly work your way up. Just to ease into it.

Comment author: [deleted] 11 November 2012 03:52:14PM *  1 point [-]

"Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either."

I agree. But in the cases of grey beings emerging from UFOs, we can at least conclude that grey beings can occupy UFOs, if we trust primary evidence. This would be a massive discovery in itself, so why don't we hear about it? We don't have to conclude they come from outer space – who knows, maybe they live underground. Let's not speculate on that, as we have plenty of interesting observations to delve into already – little gray men emerging from airborne thingies is HUGE in itself.

"So what is it that you think you know about these "Aliens"?"

It's not that I know anything about aliens. It's that more earthly explanations are completely implausible in many cases.

"That said, I don't think you can rule out weather and human craft."

In which cases? Just all cases, a priori? Or did you go through all previous sightings and come to that conclusion in every single case? Maybe others did the study for you, so you could provide a reference?

In response to comment by [deleted] on Struck with a belief in Alien presence
Comment author: aaronde 11 November 2012 08:26:03PM 4 points [-]

little gray men emerging from airborne thingies is HUGE in itself.

Um, no. A short guy in a grey suit stepping off a helicopter is a little grey man emerging from an airborne thingy.

Or did you go through all previous sightings and come to that conclusion in every single case?

No. I don't see the point in digging through all the reports, when the reports I have heard about have been so underwhelming. I was skipping around, watching bits and pieces of the video you linked, until Manfred pointed this out:

The geiger counter reading is reported as "10 times background," which sounds impressive if you've never held a geiger counter, but really just means a nearby rock had some potassium in it, or a dozen other possibilities.

So they basically lied. I actually haven't ever held a geiger counter, so I had no way of knowing this. If asked to explain it, I would have had to admit that something weird was going on that I couldn't explain. Except there's a perfectly mundane explanation, and the only reason I was confused is because I was misled about the significance of the reading in the first place. After that I didn't see the value in watching the rest of the documentary.

So I have a better idea. You tell me what you think is the single most convincing incident, and I will tell you,

  • How convincing I find the report on its own, and
  • How convincing it would be, assuming that there were thousands of similar, equally reliable reports.

Comment author: [deleted] 11 November 2012 09:15:20AM *  1 point [-]

"I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them"

Citation?

There's plenty of evidence for non-man-made, non-hoaxed, non-astronomical, non-weather-related unidentified flying objects, according to studies made by the US and French militaries:

http://en.wikipedia.org/wiki/Project_Blue_Book#Project_Blue_Book_Special_Report_No._14

most important highlights:

http://lesswrong.com/lw/ffd/struck_with_a_belief_in_alien_presence/7t4i

The black swan example was just a general pondering.

"I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them. I think that any intelligent space-faring life would be busy building spheres around stars (if not outright disassembling the stars) as quickly as they spread out into the cosmos. So we'd notice them by the wake of solar systems going dark. At the very least, there's no reason to think that they would hide from us, which is what these scenarios tend to require"

This is very speculative to me. I don't think we can use it as evidence for or against.

In response to comment by [deleted] on Struck with a belief in Alien presence
Comment author: aaronde 11 November 2012 03:38:23PM 2 points [-]

Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either. Some UFOs may still be unexplained, but all that means is that we don't know enough about them to say what they are.

That said, I don't think you can rule out weather and human craft. Others have already explained why I find the "primary" evidence unconvincing.

This is very speculative to me. I don't think we can use it as evidence for or against.

Let me put it this way. My guess of what an interstellar civilization would look like makes predictions about what it would be like to encounter that civilization. Those predictions are not satisfied. This is strong evidence that no extra-terrestrial civilization (as I understand the term) has made it anywhere near us.

One of the reasons you were downvoted is that you asked us to evaluate evidence for "Aliens". But that is impossible until you explain what you mean by "Aliens". Obviously, there is something about these UFO sightings that makes you think they are more likely to be caused by aliens than by weather. Which implies that you think you know something about aliens that makes them a better explanation.

So what is it that you think you know about these "Aliens"?

Comment author: [deleted] 11 November 2012 04:52:10AM -1 points [-]

If we're going to talk about how and why we should formulate priors, rather than what Bayes' rule says, this is what we're interested in.

But that's not what I'm talking about. I was specifically responding to your claim that:

"prior probability", by definition, means that we throw out all previous evidence.

So far as I can tell, that's not part of the accepted definition. For example, Jaynes' work on prior probabilities explicitly invokes prior information:

in two problems where we have the same prior information, we should assign the same prior probabilities.

I don't mean to come off as a dick for nit-picking about definitions. But rigorous mathematical definitions are really important, especially if you are claiming to argue something is true by definition - and you were.

In response to comment by [deleted] on Struck with a belief in Alien presence
Comment author: aaronde 11 November 2012 05:34:07AM 3 points [-]

Yes, I was wrong. I was explaining why I got so focused on the blank-slate version of the prior.

Comment author: [deleted] 11 November 2012 02:17:58AM *  2 points [-]

Wait, what? Bayesians never assign 0 probability to anything, because it means the probability will always remain 0 regardless of future updates.

Yes. The name for this is Cromwell's rule.
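
Cromwell's rule falls straight out of Bayes' theorem: a prior of exactly 0 (or 1) is unmovable. A minimal sketch with illustrative numbers:

```python
# One Bayesian update: P(H|E) from the prior P(H) and the likelihoods
# P(E|H) and P(E|~H). Function name and numbers are illustrative.
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    num = p_e_given_h * prior
    denom = num + p_e_given_not_h * (1 - prior)
    return num / denom

# Even overwhelming evidence (a 999:1 likelihood ratio) cannot rescue
# a prior of exactly zero:
assert bayes_update(0.0, 0.999, 0.001) == 0.0
# whereas any nonzero prior, however small, gets moved a long way:
assert bayes_update(0.01, 0.999, 0.001) > 0.9
```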

And "prior probability", by definition, means that we throw out all previous evidence.

Not quite. The prior probability is the probability of the hypothesis given the background information, independent of the evidence we are updating on. That background includes previous evidence. We usually write the "prior probability" as P(H), but it would be more explicit to write it as P(H|B), where "H" is the hypothesis and "B" is the background information.

For example, let's say I am asking you to update your belief that Julius Caesar existed given a recently discovered, apparently first-hand account of Caesar's crossing the Rubicon. Your prior probability should NOT exclude all previous evidence on whether Caesar actually existed - e.g. official Roman documents and coins with his face. Ideally, your prior probability should be your posterior probability from your most recent update.
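
The "your prior should be your last posterior" point can be sketched as sequential updating. The likelihoods below are invented for illustration:

```python
# Sequential Bayesian updating: the posterior after each piece of
# evidence becomes the prior for the next update.
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

p = 0.5                  # a blank-slate starting point for "Caesar existed"
p = update(p, 0.9, 0.1)  # after official Roman documents
p = update(p, 0.9, 0.1)  # after coins with his face
p = update(p, 0.9, 0.1)  # after the newly discovered first-hand account
assert p > 0.99          # each posterior served as the next update's prior
```

Updating on the new account with a prior that already reflects the documents and coins is exactly what the comment recommends; resetting p to 0.5 before the last step would throw that background evidence away.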

In response to comment by [deleted] on Struck with a belief in Alien presence
Comment author: aaronde 11 November 2012 03:45:12AM 0 points [-]

Right. What I want to do is calculate the probability that a random conscious entity would find itself living in a world where someone satisfying the definition of Julius Caesar had existed. And then calculate the conditional probability given the evidence, which is everything I've ever observed about the world including the newly discovered account.

Obviously that's not what you do in real life, but the point remains that everything after the original prior (based on Kolmogorov complexity or something) is just conditioning. If we're going to talk about how and why we should formulate priors, rather than what Bayes' rule says, this is what we're interested in.
