I agree with tut that increasing speed might help. Sometimes if I listen at default speed, I find my attention drifting off mid-sentence just because it's going so slowly. (Conversely, at higher speed, when my attention does drift off briefly, I sometimes miss a full sentence or two and have to rewind slightly.)
If that doesn't work, I don't really have many other ideas. Maybe you could try other repetitive mechanical actions to see if they coexist well with audiobooks. For example, maybe cooking, drawing, or exercising might work (if you do any of those). In general, I find it easy to not miss anything in an audiobook so long as I'm simultaneously doing something that does not also involve words.
[I made a request for job finding suggestions. I didn't really want to leave details lying around indefinitely, to be honest, so, after a week, I edited it to this.]
Incidentally, if your discount rate is really this high (you mention 22% annual at one point), you should be borrowing as much as you can from banks (including potentially running up credit cards if you have to - many of those seem to be 20% annual) and just using your income to pay down your debt.
I'd say just use your cost of borrowing (probably 7% or so?) for the purposes of discounting your salary and things, and then decide whether you should borrow to donate or not based on whether that rate is less than the expected rate of return for charities. (This is assuming that you can get access to adequate funds at this rate - I'm not entirely sure, but it seems plausible.)
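To make the discounting point concrete, here's a minimal sketch (the 7% and 22% are just the rates mentioned above; the payment amount and the five-year horizon are made up for illustration):

```python
# Compare the present value of one future payment under two discount rates.

def present_value(amount, annual_rate, years):
    """Discount a single payment received `years` from now back to today."""
    return amount / (1 + annual_rate) ** years

future_payment = 10_000  # hypothetical salary payment received in 5 years

pv_at_borrowing_rate = present_value(future_payment, 0.07, 5)
pv_at_personal_rate = present_value(future_payment, 0.22, 5)

print(f"PV at 7% borrowing rate: {pv_at_borrowing_rate:,.2f}")  # ~7,129.86
print(f"PV at 22% personal rate: {pv_at_personal_rate:,.2f}")   # ~3,700.00
```

The gap between those two numbers is the sense in which borrowing at 7% is cheap for someone who genuinely discounts the future at 22%.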
I am really disappointed in you, gwern. Why would you use an English auction when you can use an incentive-compatible one (a second price auction, for example)? You're making it needlessly harder for bidders to come up with valuations!
(But I guess maybe if you're just trying to drive up the price, this may be a good choice. Sneaky.)
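(For anyone unfamiliar with why the second-price auction is incentive-compatible, here's a minimal sketch. The values and bids are made up; the point is just that, holding the other bids fixed, bidding your true value never does worse than any deviation.)

```python
# Second-price (Vickrey) auction: the winner pays the highest competing bid.

def payoff(my_bid, my_value, other_bids):
    """Return my net payoff: value minus price if I win, zero if I lose."""
    highest_other = max(other_bids)
    if my_bid > highest_other:
        return my_value - highest_other
    return 0.0

my_value = 100.0
other_bids = [60.0, 85.0, 120.0]  # hypothetical competing bids

for bid in [50.0, 85.0, my_value, 150.0]:
    print(f"bid {bid:6.1f} -> payoff {payoff(bid, my_value, other_bids):6.1f}")
# With these numbers the truthful bid loses and pays nothing, while overbidding
# wins at a loss - shading your bid in either direction never helps, which is
# why bidders can just report their valuations honestly.
```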
Having read about auctions before, I am well aware of the winner's curse and expect coordination on bidding for this unique item to be hard.
Bwa ha ha! Behold - the economics of the damned.
Hm, that's true, I have heard that. Although in that particular case, it's actually unknown whether the shape is constructible or not, and I was trying to prove (in)constructibility rather than construct.
This is more like a conservative investment in various things by the managing funds for 200 years, followed by a reckless investment in the cities of Philadelphia and Boston at the end of the 200 years. It probably didn't do much more for people 200 years from the time than it did for people in the interim.
Also, the most recent comment by cournot is interesting on the topic:
...You may also be using the wrong deflators. If you use standard CPI or other price indices, it does seem to be a lot of money. But if you think about it in terms of relative we...
Interestingly, that trick does get the ass to walk to at least one bale in finite time, but it's still possible to get it to do silly things, like walk right up to one bale of hay, then ignore it and eat the other.
Okay, sure, but that seems like the problem is "solved" (i.e. the donkey ends up eating hay instead of starving).
Does that really work for all (continuous? differentiable?) functions? For example, if his preference for the bigger/closer one is linear with size/closeness, but his preference for the left one increases quadratically with time, I'm not sure there's a stable solution where he doesn't move. I feel like if there's a strong time factor, either a) the ass will start walking right away and get to the size-preferred hay, or b) he'll start walking once enough time has passed and get to the time-preferred hay. I could write down an equation for precision if I figure out what it's supposed to be in terms of, exactly...
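For what it's worth, here is one possible formalization - purely my own guess at what the equation might look like, with made-up coefficients, not anything taken from the original problem:

```python
# Guess at a formalization: the size-preferred bale has a constant utility edge,
# while the time-preferred bale's utility grows quadratically with waiting time.

import math

size_advantage = 4.0      # hypothetical constant edge for the bigger/closer bale
time_coefficient = 0.25   # hypothetical growth rate of the left bale's appeal

# u_right(t) = size_advantage
# u_left(t)  = time_coefficient * t**2
# They cross at t* = sqrt(size_advantage / time_coefficient), so "stand still
# forever" isn't stable: the ass either leaves right away for the size-preferred
# bale, or waits past t* and leaves for the time-preferred one.
t_star = math.sqrt(size_advantage / time_coefficient)
print(f"utilities cross at t = {t_star}")  # 4.0 with these made-up numbers
```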
I'm not sure what an investment in a particular far-future time would look like. Money does not, in fact, breed and multiply when left in a vault for long enough. It increases by being invested in things that give payoffs or otherwise rise in value. Even if you have a giant stockpile of cash and put it in a bank savings account, the bank will then take it and lend it out to people who will make use of it for whatever projects they're up to. If you do that, all you're doing is letting the bank (and the borrowers) choose the uses of your money for the first ...
Alternatively, if it's done by someone whom you already know decently well, and who you know isn't really a crazy obsessive pedant, it can instead signal a liking of international or British English over American.
That sounds like good policy, although there may be significant variation in what sounds awful to different people (specifically, "whom" is generally more popular outside the US). "Who" is probably the safer choice when in doubt, admittedly.
Nope, in fact that one should also be "Whom are you calling a cult leader?" "Who" is the subject form, i.e. it's supposed to be used when the "who" person is the one doing the action. In this case, though, the subject is "you", who is doing the action ("calling" someone something), and the object is the someone being called something ("whom").
Okay, thanks for the explanation. It does seem that you're right*, and I especially like the needle example.
*Well, assuming you're allowed to move the hay around to keep the donkey confused (to prevent algorithms where he tilts more and more left or whatever from working). Not sure that was part of the original problem, but it's a good steelman.
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.
Is using "whom" uncool or something? Maybe I'm just elitist (in a bad way) for liking it.
Thanks! (And I actually read the other new comments on the post before responding this time.) I still have two objections.
The first one (which is probably just a failure of my imagination and is in some way incorrect) is that I still don't see how some simple algorithms would fail. For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random. For simplicity, let's say it takes the first letter o...
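(A minimal sketch of that decision rule, in case it helps - the tie-breaking detail is cut off above, so a plain random choice stands in for whatever it was going to be:)

```python
# Stare for a fixed time, then head for whichever bale looks larger;
# break exact ties at random.

import random

def choose_bale(estimated_sizes, stare_seconds=15):
    """estimated_sizes maps bale name -> size estimate formed while staring.
    stare_seconds isn't used further here; it just stands for the fixed
    deliberation time in the story."""
    largest = max(estimated_sizes.values())
    candidates = [name for name, size in estimated_sizes.items() if size == largest]
    return random.choice(candidates)  # exact tie -> pick one at random

print(choose_bale({"left": 10.0, "right": 10.0}))  # prints "left" or "right"
```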
Sorry, I'm not sure I understand what you mean. What particle should we move to change the fact that the ass will eventually get hungry and choose to walk forward towards one of the piles at semi-random? It seems to me like you can move a particle to guarantee some arbitrarily small change, but you can't necessarily move one to guarantee the change you want (unless the particle in question happens to be in the brain of the ass).
don't get fixed in proving the constructibility of enormously large polygons
Is this common? 'Cause um, at one point I did try to prove (or disprove) the constructibility of a hendecagon (11 sides) with neusis, but I didn't realise this was a popular pursuit. This isn't really related to the post, but I was very surprised constructibility got a mention.
(I ran into equations lacking an easy solution - they were sufficiently long/hard that Maple refused to chug through them - and decided it wasn't worth the effort to keep trying.)
The problem with the Problem is that it simultaneously assumes a high cost of thinking (gradual starvation) and an agent that completely ignores the cost of thinking. An agent who does not ignore this cost would solve the Problem as Vaniver says.
That's fair. I guess adopting exponential discounting is also good enough to rule out Christianity. It doesn't settle the case of trying to live infinitely long, though - there it would depend on how much believing in Christianity would hinder you in achieving that. (Same for other religions that don't promise sufficiently amazing bliss.)
Sure, but it doesn't matter how much probability mass atheism gets, because the religions are the only ones offering infinities*, and we're probably interested in best expected payoff, not highest probability. If religions have 1/10^50 residual probability mass and atheism has all the rest, you'd still probably have to choose one of them if at least one is offering immense payoffs and you haven't solved Pascal's Mugging.
*I guess one could argue that a Solomonoff prior assigns a zero probability to truly infinite things, but I'm not sure that's an argument I'd want to rely on (also I know Buddhism offers some merely vast numbers, although I'm not sure they're vast enough, and some other religions do too, I'd imagine).
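To put the probability-mass-versus-payoff point in numbers (all of which are made up for illustration):

```python
# Expected payoffs when one hypothesis has tiny probability but an immense payoff.

p_religion = 1e-50        # residual probability mass left for the religion
payoff_if_true = 1e100    # "immense" (but still finite) promised payoff
p_atheism = 1 - p_religion
mundane_payoff = 1e6      # stand-in for the value of an ordinary good life

ev_wager = p_religion * payoff_if_true   # 1e50
ev_ignore = p_atheism * mundane_payoff   # ~1e6

print(ev_wager > ev_ignore)  # True: driving the probability down doesn't help
```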
No, it's not (at least if we take the generous view and consider the Wager as an argument for belief in some type of deity, rather than the Christian one for which it was intended), because after considering all the hypotheses, you will still have to choose one (or more, I guess) of them, and it almost certainly won't be atheism. I also feel like you completely missed the point of my previous comment, but I'm not sure why, and am consequently at a loss as to how to clarify.
I suppose I should have said "reasonably inhabited land".
I don't think it's a good idea to discuss this, not only because it may give people ideas, but also because there is only one possible side to the argument that can really be mentioned.
I am not. The problem with Pascal's Wager is sort of that it fails to consider other hypotheses, but not in the conventional sense that most arguments use. Invoking an atheist god, as is often done, really does not counterbalance the Christian god, because the existence of Christianity gives a few bits of evidence in favour of it being true, just like being mugged gives a few bits of evidence in favour of the mugger telling the truth. So, using conventional gods and heavens and hells like that won't balance to them cancelling out, and you will end up ha...
I think it depends on the reading. If you read it in a sort of snooty dismissive voice, yes, certainly. But if you read it in a genuinely perplexed kind of voice, it mostly sounds confused.
By the way, 3^^3 = 3^(3^3) = 3^27 is "only" 7,625,597,484,987, which is less than a quadrillion. If you want a really big number, you should add a third arrow (or use a higher number than three).
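If you want to sanity-check that arithmetic, a small up-arrow function does it (just don't call it with a third arrow and base three - that result would be a power tower of 7,625,597,484,987 threes):

```python
# Knuth's up-arrow notation for small inputs: n=1 is ordinary exponentiation,
# and each extra arrow iterates the previous operation.

def up_arrow(a, n, b):
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 3^27 = 7625597484987
```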
I feel like this post is dated by the fact that it came before Pascal's Mugging discussions to the point of being fairly wrong. The problem with Pascal's Wager actually is that the payoffs are really high, so they overwhelm an unbounded utility function (and they don't precisely cancel out, since we do have a little evidence). On the other hand, I suppose the core point that you shouldn't dismiss things out of hand if they have a low (but not tiny) probability and a large payoff is sound.
I'm really sceptical that this is as big a factor as some of the others, but I can see how it might be a significant factor. I've also lived in cold places most of my life, so I'm not in a very good position to judge. I feel like the biggest factor will ultimately turn out to be "that's how history played out", though. Looking back, it's not clear that the hypothetical dominance of the North was really noticeable until maybe the 17th century (I'm not entirely confident on this, so correct me if I'm wrong), so I'd be more inclined to attribute it ...
What if the house merely floated the thing over there with reaction (pushing back on the floors/walls), and its floor rotted slightly (accumulating entropy, losing chemical energy) in proportion to the necessary force? In that case, he's only discovered ghostly energy transfer at small distances, which may be completely impractical (only one or two Nobels).
This appears to be all that exists for 3 (page 2): http://jech.bmj.com/content/suppl/2003/09/23/57.9.DC1/Abstracts.pdf
It was so small that after finding it I kept looking for a good 15 minutes, but I'm pretty sure the abstract is all there is and the full article was never published (the first author doesn't list it on his personal page, and all the references seem to be to the abstract).
This idea reminds me of some things in mechanism design, where it turns out you can actually prove a lot more for weak dominance than for strong dominance, even though one might naïvely suppose that the two should be equivalent when dealing with a continuum of possible actions/responses. (I'm moderately out of my depth with this post, but the comparison might be interesting for some.)
I think there are non-anthropic problems with even rational!humans communicating evidence.
One is that it's difficult to communicate that you're not lying, and it is also difficult to communicate that you're competent at assessing evidence. A rational agent may have priors saying that OrphanWilde is an average LW member, including the associated wide distribution in propensity to lie and competence at judging evidence. On the other hand, rational!OrphanWilde would (hopefully) have a high confidence assessment of himself (herself?) along both dimensions. How...
I meant "someone close to him" in a relationship, not a spatial, sense (so, "other family member or friend he knows about"). Which I guess is still kind of just a different connotation, but I think one worth noticing separately from the "crazy lurker who's been around for a while" hypothesis.
Either that, or OrphanWilde, his sister, or someone else close to him really enjoys messing with everyone and making it seem that the house is haunted.
Shooting is apparently scheduled to start in April, so you probably don't have long to wait.
Technically, LW isn't about x-risk. It's about "refining the art of human rationality", as you can see up there in the header.
I am also not sure that a blogspot blog that gets 0-6 comments per post is really worth calling "a community" or taking particular notice of. The other ones you mention seem to more closely resemble communities, but have even less to do with x-risk.
Apparently an early script summary leaked. Spoilers:
Nppbeqvat gb gur fhzznel, n tebhc bs nagv-grpuabybtl greebevfgf nffnffvangr Jvyy, Riryla hcybnqf uvf oenva vagb n cebgbglcr fhcrepbzchgre. Nygubhtu fur ng svefg svaqf gur rkcrevzrag frrzf gb unir tbar jebat, orsber gbb ybat Riryla svaqf Jvyy erfcbaqvat va pbzchgre sbez.
Fur tbrf ba gb pbaarpg Jvyy gb gur Vagrearg fb ur pna uryc znxr shegure fpvragvsvp oernxguebhtuf. Jvyy nfxf Riryla gb pbaarpg n zvpebcubar naq n pnzren hc gb gur pbzchgre fb ur pna frr naq fcrnx gb ure nf jryy.
Jvyy perngrf n onpxhc bs uvzfr...
You put your MP3 player on random. You have a playlist of 20 songs. What are the odds that the next song played is the same song which was just played?
I think the option is more typically called "shuffle", which actually accurately represents what it does.
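A minimal sketch of the difference, if it's unclear (the playlist size of 20 comes from the question above; the trial count is arbitrary):

```python
# "Random" (independent picks) repeats the previous song about 1 time in 20;
# a true shuffle (random permutation) can't repeat a song within one pass.

import random

playlist = list(range(20))
trials = 100_000

repeats = sum(random.choice(playlist) == random.choice(playlist) for _ in range(trials))
print(repeats / trials)  # close to 1/20 = 0.05 for independent random picks

order = random.sample(playlist, k=len(playlist))  # one shuffled pass
print(any(a == b for a, b in zip(order, order[1:])))  # False: no back-to-back repeats
```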
I care about possible people. My child, if I ever have one, is one of them, and it seems monstrous not to care about one's children.
I think you may have found one of the quickest ways to tempt me into downvoting a post without reading further (it wasn't quite successful - I read all the way through before downvoting). Poor reasoning and stereotypical appeal to emotion are probably not the ideal opener.
Beyond that, you never made clear what the purpose of the following arguments is and gave them really confusing titles.
I don't think your second point really is one, seeing as a CEO can not be installed without being affiliated with the power holders.
Why not? Some CEOs (especially for smaller companies, I think) are found via specialised recruiting companies, which I'd say is pretty unaffiliated. And in any case, it's not clear to me how you think the affiliation would be increasing pay. Do you imagine potential CEO candidates hold an auction in which they offer kickbacks to major shareholders/powerholders from their pay or something? Because I haven't heard of that eve...
I suspect there's too much of a difference in how much LW members know about basketball to get particularly wide participation. For example, I had to look up "March Madness" to figure out what this is about.
Also, there's a significant chance that either people would just copy the odds from Pinnacle, or maybe even arbitrage against it (valuing karma or whatever at 1-2 cents). Or, well, I'd certainly be tempted to =]
I'm pretty sure that low salaries are a dysfunction of democracies rather than high salaries being a dysfunction of companies. In particular, it's not the case with every company that a couple of people hold enormous shares. And aside from that, even when there is clear evidence that "the majority" gets directly involved in CEO compensation, it doesn't seem that the salaries go down all that much.
Or looking at it differently, if the high salaries were the consequence of an undue concentration of power, we would expect that when one CEO leaves, an...
I'm also curious, and would like to add a poll: [pollid:420]
Regarding the note, in statistics you could call that a population parameter. While the parameters typically used are things like "mean" or "standard deviation", the definition is broad enough that "the centre of mass of a collection of atoms" plausibly fits the category.
I was supposed to check on this a long time ago, but forgot/went inactive on LW. The post actually ended up at -26, so seemingly slightly lower than it was, which is evidence against your regression-to-0 theory.