All of Martin-2's Comments + Replies

Good question. I didn't have an answer right away. I think it's useful because it gives structure to the act of updating beliefs. When I encounter evidence for some H I immediately know to estimate P(E|H) and P(E|~H) and I know that this ratio alone determines the direction and degree of the update. Even if the numbers are vague and ad hoc this structure precludes a lot of clever arguing I could be doing, leads to productive lines of inquiry, and is immensely helpful for modeling my disagreement with others. Before reading LW I could have told you, if aske... (read more)

What were some specific ideas you had for "solving debates"? I was hoping Arbital would take the debate around a given topic and organize it into a tree. You start with an assertion that branches into supporting and opposing arguments, then those branch into rebuttals, then those branch into counter-rebuttals, etc.

7Alexei
That's one approach we ruled out pretty much from the start, because that kind of structure is hard to read and laborious to create and maintain. However, that mechanic at the blog level makes sense, and that's basically how debates work right now in the wild. Our main approach was creating "claims": blog posts would reuse claims, along with the discussion around each claim. I'd say that part was actually moderately successful. One idea we played around with but didn't get to implement was allowing comments to easily leverage double-crux structure.

One of my favorite lessons from Bayesianism is that the task of calculating the probability of an event can be broken down into simpler calculations, so that even if you have no basis for assigning a number to P(H) you might still have success estimating the likelihood ratio.
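A minimal sketch of that decomposition (the numbers and the helper name are mine, purely illustrative): even without committing to P(H), the likelihood ratio alone tells you the strength and direction of the evidence.

```python
import math

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """How many times more likely the evidence E is if H holds than if it doesn't."""
    return p_e_given_h / p_e_given_not_h

# Hypothetical: E is 3x likelier under H than under ~H.
lr = likelihood_ratio(0.6, 0.2)

# Without a prior we can still report the strength of the evidence,
# e.g. in decibels (Jaynes-style): 10 * log10(LR).
print(round(10 * math.log10(lr), 2))  # 4.77 dB in favor of H
```

Whatever prior you eventually settle on, multiplying its odds by this ratio gives the posterior odds.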

1nyralech
How is that information by itself useful?

In the spirit of OP, since there's no guaranteed way to overcome this form of social anxiety and the afflictee will need to try many things to see what works for them, listening to a good evpsych story is as good a thing to try as any.

This post is not evidence for that lesson. When OP's puzzle is stated as intended it indeed has a wonderful and strange answer. The meta-puzzle: "Are these two puzzles essentially the same?" referring to the puzzle as intended and as presented also has a wonderful and strange answer; in fact, John Baez and maybe all of his commenters have been getting it wrong for several years. Our intuition is imperfect, and whether the puzzles you come across tend to use this fact or just trick you with sneaky framing depends on where you get your puzzles.

I'm not sure how much to trust these meta-meta analyses. If only someone would aggregate them and test their accuracy against a control.

I can't do anything on purpose.

  • Professor Utonium, realizing he has a problem

Also, since cars are now quite integrated with computers this person might have lots of fun stealing them. And if ze watches Breaking Bad there's a whole lot of inspiration there for intellectuals looking to turn to a life of blue-collar crime.

Maybe I should be steel-manning Locaha's argument but my point is I don't think the limits of this sort of self-mod are well understood, so it's premature to declare which mods are or aren't "real world".

I'm a musician if that's any hint.

Done. I hate to get karma without posting something insightful, so here's a song about how we didn't land on the moon.

0[anonymous]
That's the worst music video I've ever seen/listened to.
4redlizard
Taking the survey IS posting something insightful.
3gjm
Just to check whether I've understood: Do you in fact consider that song insightful? If so, what insight do you think it embodies? (I'm trying to figure out whether you, or they, are being ironic, or whether you are seriously endorsing as insightful a song that seriously complains that the Apollo moon landings were fake. My prior for the latter is rather low, but evidence for the former just doesn't seem to be there.)

One of the penalties for participating in politics is that your superiors end up being governed by their inferiors.

I believe that is the point of the exercise.

Further reading suggests Gould is not representative of scientists. My confidence has gone back down.

"unconscious or dimly perceived finagling is probably endemic in science, since scientists are human beings rooted in cultural contexts, not automatons directed toward external truth"

Somehow this post has actually increased my confidence in Gould's claim here.


Maybe, since arguments have component parts that can be individually right or wrong; or maybe not, since chains of reasoning rely on every single link; or maybe, since my argument improves (along with my beliefs) as I toss out and replace the old one.

Come to think of it, if "trees grow roots most strongly when wind blows through them" because the trees with weak roots can't survive in those conditions then this would make a very bad metaphor for people.

No, it's probably accurate as stated. I don't know about trees as such, but if you try to start vegetable seedlings indoors and then transfer them outside, they'll often die in the first major wind; the solution is to get the air around them moving while they're still indoors (as with a fan), which causes them to devote resources to growing stronger root systems and stems.

If this quote were about people improving through adversity I wouldn't have posted it (I also read that article). But I think it's true for arguments. The last sentence does a better job of fitting the character than illuminating the point so I could have left it out.

0Document
Do arguments themselves "improve", rather than simply being right or wrong?

Elayne blinked in shock. “You would have actually done it? Just… left us alone? To fight?”

"Some argued for it," Haman said.

“I myself took that position,” the woman said. “I made the argument, though I did not truly believe it was right.”

“What?” Loial asked [...] “But why did you-“

“An argument must have opposition if it is to prove itself, my son,” she said. “One who argues truly learns the depth of his commitment through adversity. Did you not learn that trees grow roots most strongly when wind blows through them?”

Covril, The Wheel of Time

7Document
Is that true (for trees or people)? Edit: For one example, this person currently linked in the sidebar isn't sure.

It is not July. It is August.

[This comment is no longer endorsed by its author]

Saw this under "latest rationality quotes" and was like "man, I'm really missing the context as to how this is a rationality quote."

5Vaniver
Fixed! The perils of copy/paste.

Keep in mind this is a hypothetical character behaving in an unrealistic and contrived manner. If she doesn't heed social norms or effective communication strategies then there's nothing we can infer from those considerations.

it seems to me that almost every "The AI is an unfriendly failure" story begins with "The Humans are wasting too many resources, which I can more efficiently use for something else."

Really? I think the one I see most is "I am supposed to make humans happy, but they fight with each other and make themselves unhappy, so I must kill/enslave all of them". At least in Hollywood. You may be looking in more interesting places.

As for your AI: does it have an obvious incentive to help people below the median energy level?

1[anonymous]
To me, that seems like a very similar story; it's just that they're wasting their energy on fighting/unhappiness. I just thought I'd attempt to make an AI that thinks "Humans wasting energy? Under some caveats, I approve!"

I made a quick sample population to run some numbers about incentives (8 people, using 100, 50, 25, 13, 6, 3, 2, 1 energy, assuming only one unit of time). The AI got around 5.8 utility from taking 50 energy from the top person, giving 10 energy each to the bottom 4, and assuming the remaining 10 energy either went unused or was used as a transaction cost. However, the AI also got about 0.58 more utility from killing any of the four bottom people (even assuming their energy vanished).

Of note, roughly doubling the size of everyone's energy pie does yield a greater amount of utility than either of those two things (roughly 10.2), except that they aren't exclusive: you can double the pie and also redistribute the pie (and also kill people who would eat the pie in such a way as to drag down the median).

Here's an even more bizarre note: when I quadrupled the population (giving the same distribution of energy, so 100x4, 50x4, 25x4, 13x4, 6x4, 3x4, 2x4, 1x4), the algorithm gained plenty of additional utility. However, the amount of utility the algorithm gained by murdering the bottom person skyrocketed (to around 13.1), because while the murder would still move the median from 9.5 to 13, the square root of that median was now multiplied by a much larger population.

So, if for some reason the energy gap between the person right below the median and the person right above the median is large, the AI has a significant incentive to murder one person. In fact, the way I set it up, the AI even has incentive to murder the bottom 9 people to get the median up to 25... but not very much, and each person it murders before the median shifts is a substantia
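The comment doesn't state the utility function explicitly, but its figures are reproduced by utility = population size × sqrt(median energy). A sketch under that assumption (the function and variable names are mine):

```python
import math
import statistics

def utility(energies):
    """Assumed utility: population size times sqrt of the median energy."""
    return len(energies) * math.sqrt(statistics.median(energies))

base = [100, 50, 25, 13, 6, 3, 2, 1]

# Redistribution: take 50 from the top person, give 10 each to the
# bottom 4, with 10 lost as transaction cost.
redistributed = [50, 50, 25, 13, 16, 13, 12, 11]
print(round(utility(redistributed) - utility(base), 1))  # 5.8

# Killing the bottom person (their energy simply vanishes).
print(round(utility(base[:-1]) - utility(base), 2))  # 0.58

# Quadrupled population: murdering one bottom person now gains far more,
# because sqrt(13) - sqrt(9.5) is multiplied by a much larger headcount.
quad = base * 4
print(round(utility(sorted(quad, reverse=True)[:-1]) - utility(quad), 1))  # 13.1
```

All three gains match the 5.8, 0.58, and 13.1 quoted above, which supports the population-times-sqrt-of-median reading.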

Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.

*dun-dun-dun-dun

Steelman that shoddy argument

Mend its faults so they can't be seen

Help that bastard make more sense

A reformulation to see what they mean

1skeptical_lurker
Alestorm are a very rationalist band. I particularly like the lyrics: "You put your faith in Odin and Thor, we put ours in cannons and whores!" It's about how a religious society can never achieve what technology can.
6RomeoStevens
To whomever downvoted parent: Please don't downvote methods for providing epistemic rationality techniques with better mental handles so they actually get used. Different tricks are useful for different people.

Steven Landsburg at TBQ has posted a seemingly elementary probability puzzle that has us all scratching our heads! I'll be ignominiously giving Eliezer's explanation of Bayes' Theorem another read, and in the meantime I invite all you Bayes-warriors to come and leave your comments.

I took "Harry's parents come to Hogwarts" as a completely insane move

I did too at first, but when Harry reads the follow-up letter from his father we see that it turned out for the best.

2NancyLebovitz
And there's a clue that I should remember it when I twitch slightly. When Harry was worried about whether he'd wrecked his relationship with his parents, I wondered whether that actually made sense.

I like the premise. Last month's Douglas Hofstadter quote comes to mind. Some problems:

At some point, a young person asks you how some simple loops of electrical signals can engender music and conversations... you insist that your science is about to crack that problem at any moment.

Why would I insist this? I don't even know how the electrical signals (the what?!) change the volume. I just know how to make the wires change the volume, and I know how to make them change the music too.

You would conclude that somehow the right configuration of wires eng

... (read more)

2) Ask myself what I would differentially expect to observe if ghosts existed or didn't, and look for those things

The tricky part about this is establishing how much weird stuff you'd expect to see in the absence of ghosts. There will always be unexplained phenomena, but how many is too many?

0TheOtherDave
Establishing that would be helpful, but is not necessary to get started. Either there's more weird stuff in this house than outside of it, or there isn't. If there is, that should increase my confidence that there's something weird-stuff-related in this house. If there isn't, that should decrease my confidence. If I'm confident that ghosts are weird-stuff-related, the second case should decrease my confidence that there are ghosts in this house, and the first case should increase it.

"...need to make billions of sequential self-modifications when humans don't need to" to do what? Exist, maximize utility, complete an assignment, fulfill a desire...? Some of those might be better termed as "wants" than "needs" but that info is just as important in predicting behavior.

based upon the expectation set upon the observance of subsequent facts, at some later date, ~A could also end up being evidence for B

Here's a contradiction with A and ~A both being evidence for the same thing. You could tell your spouse "Go up and check if little Timmy went to bed". Before ze comes back you already have an estimate of how likely Timmy is to go to bed on time (your prior belief). But then your spouse, who was too tired to climb the stairs, comes back and tells you "Little Timmy may or may not have gone to bed". Now, i... (read more)
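In odds form the point is simple: a report that is equally likely whether or not Timmy went to bed has likelihood ratio 1 and moves the estimate nowhere. A small sketch (the prior is a made-up illustrative number):

```python
def update(prior, p_e_given_a, p_e_given_not_a):
    """Posterior P(A|E) via the odds form of Bayes' theorem."""
    odds = (prior / (1 - prior)) * (p_e_given_a / p_e_given_not_a)
    return odds / (1 + odds)

# Hypothetical prior that Timmy went to bed on time:
prior = 0.8

# "Little Timmy may or may not have gone to bed" is equally likely
# whether or not he did, so the likelihood ratio is 1 and nothing changes:
print(round(update(prior, 1.0, 1.0), 6))  # 0.8
```

Any report where the two conditional probabilities differ would move the estimate one way or the other; only a perfectly uninformative one leaves it fixed.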

1Tsuki
Yes. I get that. We cannot use A and ~A to update our estimates in the same way at the same time. That's not the same as saying that it is impossible for A and ~A to be evidence of the same thing. One could work on Tuesday, and the other could work on Friday, depending on the situation. That was my only point: we can't generalize over a timeline but need to operate at specific points on that timeline.

That goes back to the justification for interning Japanese citizens. If we say ~A just can't ever be evidence of B because at some previous time A was evidence for B, then we are making a mistake. At some later date, ~A could end up being better evidence, depending on the situation.

My point was that a better counterargument to the governor's justification is to point out that the prospect of naturalized citizens turning against their home country in favor of their country of ancestry has a very low prior, because the Japanese (and other groups that polyglot nations have gone to war with) have not usually behaved that way in the past. I could be wrong, but it doesn't have anything to do with updating estimates with a variable and its negation to reach the same probability at the same time. I pretty much agree with what you said, just not the implication that it conflicts in some way with what I said.

Or, if one of the kids is Eliezer Yudkowsky, you can write Maxwell's equations and say "simple", then write a program simulating Thor and say "not simple".

Finally, Lucas implicitly assumes that if the mind is a formal systems, then our “seeing” a statement to be true involves the statement being proved in that formal system.

To me this seems like the crux of the issue (in fact, I perceive it to be the crux of the issue, so QED). Of course there are LW posts like Your Intuitions are not Magic, but surely a computer could output something like "arithmetic is probably consistent for the following reasons..." instead of a formal proof attempt if asked the right question.

0Decius
My mind is not a consistent formal system; I believe everything that I can prove to be the case.

do the rest of you actually find the choice of 1A clearly intuitive?

I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic.

1[anonymous]
Me too! We're just two greedy people!:)

that's why grocery stores design their floor layouts so that you can't help but notice the delicious rows of candy bars while you're trapped in the checkout line. no escape!

In theory your escape would be a competing supermarket that hides their candy bars to attract your business.

The number of upvotes indicates popularity, not quality. I just upvoted Doug's comment but that doesn't mean I think it's 8 times better than josh's comment.

[This comment is no longer endorsed by its author]

Although it's late, I'd like to say that XiXiDu's approach deserves more credit and I think it would have helped me back when I didn't understand this problem. Eliezer's Bayes' Theorem post cites the percentage of doctors who get the breast cancer problem right when it's presented in different but mathematically equivalent forms. The doctors (and I) had an easier time when the problem was presented with quantities (100 out of 10,000 women) than with explicit probabilities (1% of women).

Likewise, thinking about a large number of trials can make the notion o... (read more)
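For reference, a sketch of that computation with the standard numbers from Eliezer's essay (1% prevalence, 80% sensitivity, 9.6% false-positive rate), done both ways:

```python
# Natural frequencies: out of 10,000 women, 100 have breast cancer.
with_cancer = 100
without_cancer = 9_900
true_positives = 0.80 * with_cancer       # 80 women
false_positives = 0.096 * without_cancer  # 950.4 women

print(round(true_positives / (true_positives + false_positives), 3))  # 0.078

# Same answer with explicit probabilities via Bayes' theorem:
p = (0.01 * 0.80) / (0.01 * 0.80 + 0.99 * 0.096)
print(round(p, 3))  # 0.078
```

Both routes give about 7.8%, but the frequency version makes it visible why the answer is so low: the false positives from the huge cancer-free group swamp the true positives.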

Suppose I believe strongly that violent crime rates are soaring in my country (Canada), largely because I hear people talking about "crime being on the rise" all the time, and because I hear about murders on the news. I did not reason myself into this position, in other words.

It looks to me like you arrived at this position via weighing the available evidence. In other words, you reasoned yourself into it. Upon second reading I see you don't have a base rate for the amount of violent crime on the news in peaceful countries, and you derived a h... (read more)

Eliezer (who appears to only have a single name, like Prince or Jesus)

Mr. Jesus H. Christ is a bad example. Also there's this.

I presume Rokia was able to buy a hybrid and some prime real estate after all this.