All of Bongo's Comments + Replies

Bongo70

Harry didn't hear Hermione's testimony. Therefore, he can go back in time and change it to anything that would produce the audience reaction he saw, without causing paradox.

6glumph
But since the audience's (extended) reaction includes voting to send Hermione to Azkaban, how will changing her testimony help?
Bongo40

I almost downvoted this because when I clicked on it from my RSS reader, it appeared to have been posted on main LW instead of discussion (known bug). This might be the reason for a lot of mysterious downvoting, actually.

Bongo20

(Bug report: I was sent to this post via this link, and I see MAIN bolded above the title instead of DISCUSSION. The URL is misleading too; shouldn't URLs of discussion posts contain "/r/discussion/" instead of "/lw"?)

(EDIT: Grognor just told me that "every discussion post has a main-style URL that bolds MAIN")

Bongo50

fraction of revenue that ultimately goes to paying staff wages

About a third in 2009, the last year for which we have handy data.

1timtyler
Practically all of it goes to them or their "associates" - by my reckoning. In 2009 some was burned on travel expenses and accommodation, some was invested - and some was stolen. Who was actually helped? Countless billions in the distant future - supposedly.
Bongo10

Snape says this in both MoR and the original book:

"I can teach you how to bottle fame, brew glory, even stopper death"

Isn't this silly? Of course you can stopper death, because duh, poisons exist.

It might be just a slip-up in the original book, but I'm hoping it will somehow make sense in MoR. My first thought was that maybe a magical death potion couldn't be stopped using magical healing, unlike non-magical poisons.

I asked this on IRC and got some interesting ideas. feep thought it might mean that you can make a Potion of Dementor, which w... (read more)

1staticIP
What I got from that was Snape claiming to be able to temporarily store or stop death. To extend someone's life. Not a great interpretation in hindsight, but I was ~ten the last time I heard that line so I'll forgive myself.
5WrongBot
I would assume that Snape was referring to the Draught of Living Death, which creates a temporary condition indistinguishable from death.
8Slackson
I thought it was made pretty clear that Dementors come from the ritual that Quirrell says summons Death, and that the True Patronus charm is the lost spell that banishes it. The other options seem more plausible, but really, I'd place my bet on some kind of poison that causes total and immediate brain death like Avada Kedavra does. I am not aware of any normal poisons that do that.
Bongo50

This usually stops me from using myself as examples, sometimes with the result that the post stays unwritten or unpublished.

You could just tell the story with "me" replaced by "my friend" or "someone I know" or "Bob". I'd hate to miss a W_D post because of a trivial thing like this.

Bongo40

I ... was shocked at how downright anti-informative the field is

Explain?

shocked at how incredibly useless statistics is

Explain?

The opposite happened with the parapsychology literature

Elaborate?

Bongo10

algorithmic probability ... does not say that naturalistic mechanistic universes are a priori more probable!

Explain?

Bongo30

confirmation bias ... doesn't actually exist.

Explain?

0Will_Newsome
http://library.mpib-berlin.mpg.de/ft/gg/gg_how_1991.pdf is exemplary of the stuff I'm thinking of. Note that that paper has about 560 citations. If you want to learn more then dig into the literature. I really like Gigerenzer's papers as they're well-cited and well-reasoned, and he's a statistician. He even has a few papers about how to improve rationality, e.g. http://library.mpib-berlin.mpg.de/ft/gg/GG_How_1995.pdf has over 1,000 citations.
Bongo40

I wonder how this comment got 7 upvotes in 9 minutes.

EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.

0TheOtherDave
Though it's made more impressive when you realize that the comment you respond to, and its grandparent, are the user's only two comments, and they average 30 karma each. That's a beautiful piece of market timing!
LWMormon270

LW has a bunch of bored Bayesians on Mondays. Same thing happened to your score, mate.

Bongo50

(An increasing probability distribution over the natural numbers is impossible. The sequence (P(1), P(2),...) would have to 1) be increasing 2) contain a nonzero element 3) sum to 1, which is impossible.)
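A quick sketch of why the three conditions conflict: if the sequence is increasing and some term satisfies P(k) = ε > 0, then P(n) ≥ ε for every n ≥ k, so the tail alone already diverges:

```latex
\sum_{n=1}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} P(n) \;\ge\; \sum_{n=k}^{\infty} \varepsilon \;=\; \infty \;\ne\; 1.
```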

Bongo40

There's a related problem; Humans have a tendency to once they have terms for something take for granted that something that looks at a glance to make rough syntactic sense that it actually has semantics behind it.

This sentence is so convoluted that at first I thought it was some kind of meta joke.

0JoshuaZ
Well, the extra "that" before "that it actually" really doesn't help matters. I've tried to make it slightly better but it still seems to be a bit convoluted.
Bongo30

Also, I'd say both of those pictures seem to have the effect of inducing far mode.

Bongo60

Given any problem, one should look at it, and pick the course that maximising one's expectation. ... what if my utility is non-linear

You're confusing expected outcome and expected utility. Nobody thinks you should maximize the utility of the expected outcome; rather you should maximize the expected utility of the outcome.

Lets now take another example: I am on Deal or No Deal, and there are three boxes left: $100000, $25000 and $.01. The banker has just given me a deal of $20000 (no doubt to much audience booing). Should I take that? Expected gains ma

... (read more)
-1[anonymous]
Are you sure no-one advocates it? Because I've observed people doing it more than once.
Bongo40

Is there a video of the full lecture?

0PhilGoetz
Email Reto Schneider and ask.
Bongo60

it seems that an isomorphic argument 'proves' that computer programs will crash - since "almost any" computer program crashes.

More obviously, an isomorphic argument 'proves' that books will be gibberish - since "almost any" string of characters is gibberish. An additional argument is required: that non-gibberish books are very difficult to write, and that naively attempting to write one will almost certainly fail on the first try. The analogous argument exists for AGI, of course, but is not given there.

0timtyler
Right - so we have already had 50+ years of trying and failing. A theoretical argument that we won't succeed the first time does not tell us very much that we didn't already know. What is more interesting is the track record of engineers of not screwing up or killing people the first time. We have records about engineers killing people for cars, trains, ships, aeroplanes and rockets. We have failure records from bridges, tunnels and skyscrapers. Engineers do kill people - but often it is deliberately - e.g. nuclear bombs - or with society's approval - e.g. car accidents. There are some accidents which are not obviously attributable to calculated risks - e.g. the Titanic, or the Tacoma Narrows bridge - but they typically represent a small fraction of the overall risks involved.
Bongo30

It was probably that, but note that that page is not concerned with minimizing killing, but minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)

Bongo10

By doing this you condition them to accept the radical form of dominance where they have the authority to tell you what you are morally entitled to believe.

*where you have the authority to tell them (?)

3MichaelVassar
Yep. Sorry.
Bongo30

My impression is that the level went up and then down:

  • OB-era comment threads were bad.
  • During the first year of LW the posts were good.
  • Nowadays the posts are bad again.
Bongo140

LW Minecraft server anyone?

0Michelle_Z
I've just gotten into that one! Though I mainly just goof off on creative mode.
0James_Miller
Yes!
Bongo40

If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.

Seems obviously worth the bragging rights.

* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
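A minimal sketch of one such scheme in Python, a plain hash-based commitment. The field layout, separator, and function names here are my own invention for illustration; any preimage-resistant hash would do:

```python
import hashlib
import os

def commit(post_id: str, time: str, karma: int) -> tuple[str, str]:
    """Commit to a prediction without revealing it.

    Returns (commitment, nonce). Post the commitment publicly now;
    keep the prediction and nonce private until reveal time.
    """
    nonce = os.urandom(16).hex()  # random salt prevents brute-forcing the prediction
    message = f"{post_id}|{time}|{karma}|{nonce}"
    return hashlib.sha256(message.encode()).hexdigest(), nonce

def verify(post_id: str, time: str, karma: int, nonce: str, commitment: str) -> bool:
    """Check that a revealed prediction matches the earlier commitment."""
    message = f"{post_id}|{time}|{karma}|{nonce}"
    return hashlib.sha256(message.encode()).hexdigest() == commitment

# Example: publish c at posting time; reveal the prediction and n later,
# and anyone can rerun verify() to confirm the prediction predated the karma.
c, n = commit("lw/abc", "2011-06-01T12:00Z", 40)
assert verify("lw/abc", "2011-06-01T12:00Z", 40, n, c)
```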

Bongo30

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

This seems obviously false.

Bongo320

Thus, when aiming to maximize expected positive impact, it is not advisable to make giving decisions based fully on explicit formulas.

I love that you don't seem to argue against maximizing EV, but rather to argue that a certain method, EEV, is a bad way to maximize EV. If this had been stated at the beginning of the article, I would have been a lot less skeptical initially.

Bongo110

So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.

http://lesswrong.com/lw/v2/prices_or_bindings/

(Also, please try to avoid sentences like "if you care about X more than innocent lives" — that comes across to me as sarcastic moral condemnation and probably tends to emotionally trigger people.)

0[anonymous]
http://lesswrong.com/lw/v2/prices_or_bindings/ (Also, your comment reads to me — deliberately or not — as sarcastic moral opprobrium directed at Vladimir's position. Please try to avoid that.)

It's not just about what status you have, but what you actually are. You can view it as analogous to the Newcomb problem, where the predictor/Omega is able to model you accurately enough to predict if you're going to take one or two boxes, and there's no way to fool him into believing you'll take one and then take both. Similarly, your behavior in one situation makes it possible to predict your behavior in other situations, at least with high statistical accuracy, and humans actually have some Omega-like abilities in this regard. If you kill the fat man, t... (read more)

Bongo30

I don't think it's that bad. Anything at an inferential distance sounds ridiculous if you just matter-of-factly assert it, but that just means that if you want to tell someone about something at an inferential distance, you shouldn't just matter-of-factly assert it. The framing probably matters at least as much as the content.

Bongo30

science is wrong

No. Something like "Bayesian reasoning is better than science" would work.

Every fraction of a second you split into thousands of copies of yourself.

Not "thousands". "Astronomically many" would work.

Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that's smarter than any human

That's the accelerating change, not the intelligence explosion school of singularity. Only the latter is popular around here.

Also, we sometimes prefer torture to dust-spe

... (read more)
6Zed
In http://lesswrong.com/lw/qa/the_dilemma_science_or_bayes/ Yudkowsky argues that you have to choose and that sometimes science is just plainly wrong. I find his arguments persuasive. Geez, I said "fraction of a second" for a reason. Of course. And the accelerating change school of singularity provides the deadline. Friendly AI has to be solved BEFORE computers become so fast moderately intelligent people can brute force an AI.
Bongo00

A little UI idea to avoid number clutter: represent the controversy score by having the green oval be darker (or lighter) green the more controversial the post is.

Bongo80

Extremely counterfactual mugging is the simplest such variation IMO. Though it has the same structure as Parfit's Hitchhiker, it's better because issues of trust and keeping promises don't come into it. Here it is:

Omega will either award you $1000 or ask you to pay him $100. He will award you $1000 if he predicts you would pay him if he asked. He will ask you to pay him $100 if he predicts you wouldn't pay him if he asked.

Omega asks you to pay him $100. Do you pay?
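A sketch of the ex-ante payoffs, assuming Omega's prediction is perfect (the catch being that by the time you're actually asked, paying looks like a pure $100 loss, causally speaking):

```latex
\begin{array}{l|l}
\text{disposition} & \text{payoff} \\
\hline
\text{would pay if asked}    & +\$1000 \ \text{(awarded, never asked)} \\
\text{would refuse if asked} & \$0 \ \text{(asked, refuses)}
\end{array}
```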

Bongo30

You mean this?:

1.) 26986000 people die, with certainty.

2.) 0.0001% chance that nobody dies; 99.9999% chance that 27000000 people die.

And of course the answer is obvious. Given a population of 40 billion, you'd have to be a monster to not pick 2. :)
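Spelling out the arithmetic behind the sarcasm: choice 2 actually has the higher expected death toll,

```latex
0.999999 \times 27{,}000{,}000 = 26{,}999{,}973 > 26{,}986{,}000,
```

an extra 13,973 expected deaths compared to choice 1.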

Bongo50

The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.

The expected payoff calculations say that. Expected utility calculations say nothing, since you haven't specified a utility function. Nor can you say that choice 2 must be better just because U($14k) < U($17k) for any reasonable utility function: the utility of the expected payoff is not equal to the expected utility.

EDIT: pretty much every occurrence of "expected utility" in this post should be replaced with "expected payoff".
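A toy illustration of the distinction, with made-up numbers and an arbitrary concave utility function (nothing here comes from the post itself):

```python
import math

# Toy lottery with made-up numbers: 30% chance of $50,000, 70% chance of $0.
outcomes = [50_000.0, 0.0]
probs = [0.3, 0.7]

def utility(dollars: float) -> float:
    """An arbitrary concave (risk-averse) utility function."""
    return math.log1p(dollars)

# Expected payoff: E[X]
expected_payoff = sum(p * x for p, x in zip(probs, outcomes))

# Utility of the expected payoff: U(E[X])
utility_of_expected = utility(expected_payoff)

# Expected utility: E[U(X)] -- what an expected utility maximizer compares
expected_utility = sum(p * utility(x) for p, x in zip(probs, outcomes))

print(f"E[X]    = ${expected_payoff:,.0f}")    # $15,000
print(f"U(E[X]) = {utility_of_expected:.2f}")  # ~9.62
print(f"E[U(X)] = {expected_utility:.2f}")     # ~3.25: not the same thing
```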

0Peter Wildeford
You're right, but I was looking at the question in terms of the (bad) assumption of linear utility for money.
Bongo90

Reminder: the Allais Paradox is not that people prefer 1A>1B, it's that people prefer 1A>1B and 2B>2A. If you prefer 1A>1B and 2A>2B, it could be because of having non-linear utility for money, which is perfectly reasonable and non-paradoxical. Neither does "Shut up and multiply" have anything to do with linear utility functions for money.
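For reference, with the standard Allais gambles (1A: $1M for certain; 1B: 10% $5M, 89% $1M, 1% nothing; 2A: 11% $1M, else nothing; 2B: 10% $5M, else nothing), it's the conjunction that is inconsistent with maximizing any expected utility u:

```latex
\begin{aligned}
1A \succ 1B &\iff 0.11\,u(\$1\mathrm{M}) > 0.10\,u(\$5\mathrm{M}) + 0.01\,u(\$0),\\
2B \succ 2A &\iff 0.10\,u(\$5\mathrm{M}) + 0.01\,u(\$0) > 0.11\,u(\$1\mathrm{M}),
\end{aligned}
```

and these two inequalities directly contradict each other. Preferring 1A and 2A (or 1B and 2B) violates nothing.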

1Peter Wildeford
You're right and I think I touched on that a bit -- people seem to see a larger difference between 100% and 99% than between 67% and 66%. Maybe I didn't touch on that enough, though.
Bongo30

Added some exclamation marks to bring out the sarcasm.

Bongo30

If you already know your decision the value of the research is nil.

No, because then if someone challenges your decision you can give them citations! And then you can carry out the decision without the risk of looking weird!

0wedrifid
A worthy endeavour!
-1MixedNuts
Citing evidence that didn't influence you before you wrote your bottom line is lying.
Bongo00

Leading people to lesswrong on average makes them scoff then add things to their stereotype cache.

This is probably because of the site design and not necessary.

2wedrifid
That no doubt makes a difference but my appeal was to universal human behavior. Exposure to new, unusual behaviours from a foreign tribe will most often invoke a rejection and tweaking of social/political positions rather than an object level epistemic update. Because that's what humans care about. (This doesn't preclude directing interested parties to lesswrong or other sources of object level information. We must just allow that there will be an extremely low rate of updating.)
Bongo00

Downvoted for bad grammar but:

Podcasts only go so far. I recommend downloading lectures etc. from youtube and converting to mp3. The best downloader-converter I've found for Windows is this, and for Linux, this (read the comments for how to get it to work). I assume you know how to find stuff on youtube so I'll skip the recommendations, but I've probably listened to thousands of hours of stuff from there and haven't run out yet.

Bongo20

I also (1 2) downvoted only after reading.

Bongo320

I disagree. I'm entertained.

Bongo30

![](image url here)
2CronoDAS
Thanks. Let's see if I have that right... Yep, it works!
Bongo70

I believe Vladimir_Nesov was talking about the obscure language in your comments.

Bongo100

I don't know how much sense the real-world tropes of skeptical atheists and fervently faithful theists make in a world where you can literally bargain with God to get your dead friend back from Heaven. In the D&Dis world, it really is atheism that requires faith!

2Scott Alexander
In the campaign, the atheists are trying to fight/destroy the gods, who they believe are false gods distracting from the worship of the true gods Truth and Wisdom. I didn't want to make that too obvious in the book because it might limit the usefulness of the classes in other settings.
Bongo60

This read vaguely like it could possibly be interpreted in a non-crazy way if you really tried... until the stuff about jesus.

I mean, whereas the rest of the religious terminology could plausibly be metaphorical or technical, it looks as if you're actually non-metaphorically saying that jesus died so we could have a positive singularity.

Please tell me that's not really what you're saying. I would hate to see you go crazy for real. You're one of my favorite posters even if I almost always downvote your posts.

-3Will_Newsome
Nah, that's what I was actually saying. Half-jokingly and half-trollingly, but that was indeed what I was saying. And in case it wasn't totally freakin' obvious, I'm trying to steel man Christianity, not describe my own beliefs. I'm, like, crazy in correct ways, not stupid arbitrary ways. Ahem. Heaven wasn't that technical a concept really, just "the output of the acausal economy"---though see my point about "acausal economy" perhaps being a misleading name, it's just an easy way to describe the result of a lot of multiverse-wide acausal "trade". "Apocalypse" is more technical insofar as we can define the idea of a hard takeoff technological singularity, which I'm pretty sure can be done even if we normally stick to qualitative descriptions. (Though qualitative descriptions can be technical of course.) "God" is in some ways more technical but also hard to characterize without risking looking stupid. There's a whole branch of theology called negative theology that only describes God in terms of what He is not. Sounds like a much safer bet to me, but I'm not much of a theologian myself. Thanks. :) Downvotes don't faze me (though I try to heed them most of the time), but falling on deaf ears kills my motivation. At the very least I hope my comments are a little interesting.
Bongo60

Looks awesome. Some errata:

  • bottom of page 7 says Cartesian doubt is 3 speed and 1 rationality, while the list on page 13 says it's 3 speed and 0 rationality.
  • second paragraph on page 7 says "cast two squares and then cast the spell".
  • page 59 lists LHP things for RHP, where it says "giving you"
  • page 89 says "PROBABILITY THEORY: THE LANGUAGE OF SCIENCE" whereas it's actually the logic of science.
2Scott Alexander
Good catches. Thanks!
Bongo20

This wasn't about people but about generic game-theoretic agents (and, all else equal, generic game-theoretic agents prefer to exist, because then there will be someone in the world with their utility function exerting an influence on the world so as to make it rate higher in their utility function than it otherwise would).

0Nisan
Ah, good point.
Bongo80

You made this thread at least partly to flaunt your status as someone who can get away with making a thread all about themselves (on the main LW no less).

5Wei Dai
(What did you mean by "main LW"? Do you mean as opposed to discussion? It looks like the post is in discussion to me...) I was going to point out that I already mentioned the status motivation in the parenthetical remark at the end of my post, but then I realized that you're talking about a different, additional status motivation. I tend to think of myself as someone who doesn't like to flaunt, or at least has their flaunting instincts well suppressed out of desire to not be seen as flaunting by others. But now I wonder... perhaps almost everyone thinks that about themselves, and I'm actually worse than average?
Bongo100

Consider the action of making a goal. I go to all my friends and say "Today I shall begin learning Swahili." This is easy to do. There is no chance of me intending to do so and failing; my speech is output by the same processes as my intentions, so I can "trust" it. But this is not just an output of my mental processes, but an input. One of the processes potentially reinforcing my behavior of learning Swahili is "If I don't do this, I'll look stupid in front of my friends."

I know it's only an example but it needs to be poin... (read more)

I bet it depends on the condition. I'd anticipate that something very vague like "I will become a writer" would do worse when told to your friends; something very specific like "I'm going to be writing this evening" would do better, especially if the alternative is going out for drinks this evening with your friends and having them ask "Why aren't you writing?"

3BrianG
I was thinking about this myself, and was going to bring it up. I stopped telling people I was going to become a writer for that reason. But I think there's a difference between telling people you are going to do a thing at a vaguely determined time in the future, and telling someone you are going to commit to a task that same day.