I use "who" for the subject form or when "whom" sounds awful.
That sounds like good policy, although there may be significant variation in what sounds awful to different people (specifically, "whom" is generally more popular outside the US). "Who" is probably the safer choice when in doubt, admittedly.
I'm pretty sure it should be "who", since the title is an inversion of "Who are you calling a cult leader?".
Nope, in fact that one should also be "Whom are you calling a cult leader?" "Who" is the subject form, i.e. it's used when the "who" person is the one doing the action. Here, though, the subject is "you", who is doing the action (calling someone something), and the object is the someone being called something ("whom").
For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random.
Your problem is that you're using an algorithm that can only be approximated on an analog computer. You can't do flow control like that. If you want it to do A when the input is 0 and B when the input is 1, you can make it compute A+(B-A)x where x is the input, but you can't just make it do A under one condition and B under another. If continuity is your only problem, you can make it compute A+(B-A)f(x), where f(x)=0 for 0<=x<=0.49 and f(x)=1 for 0.51<=x<=1, but f(x) still has to pass through 1/2 at some point with 0.49<x<0.51.
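A minimal sketch of that interpolation idea (my own illustration, not from the original comment): on a continuous machine, "branching" has to be expressed as A+(B-A)f(x), and the intermediate value theorem forces f through every value between 0 and 1 somewhere in the transition band.

```python
def smoothstep(x, lo=0.49, hi=0.51):
    """Continuous ramp: 0 at or below lo, 1 at or above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def analog_choice(a, b, x):
    """Continuous stand-in for 'do A if x is low, B if x is high': A + (B - A) * f(x)."""
    return a + (b - a) * smoothstep(x)

print(analog_choice(0.0, 10.0, 0.2))  # firmly "A": 0.0
print(analog_choice(0.0, 10.0, 0.9))  # firmly "B": 10.0
print(analog_choice(0.0, 10.0, 0.5))  # stuck halfway between A and B (about 5)
```

Outside the narrow band the behavior is exactly A or exactly B, but the band itself can't be eliminated, only narrowed.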
If you tried to run your algorithm, then after 15 seconds there'd have to be some certainty level at which the Ass ends up doing some combination of going left and choosing at random, which keeps it in the same spot if "random" comes out right. If "random" instead comes out left, then it stops when it's halfway between that and right.
The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot).
I'm not really sure where that idea came from. Quantum physics is continuous. In fact, derivatives are vital to it, and you need continuity to have them. The position of an object is spread out over a waveform instead of being at a specific spot like a billiard ball, but the waveform is a continuous function of position. The waveform has a center of mass that can be specified as precisely as you want. Also, the Planck length seems kind of arbitrary. It means something if you have an object with a mass of one Planck mass (about the mass of a small flea), but a smaller object would have a more spread-out waveform, and a larger object would have a tighter one.
get it to make true random choices.
That would make it so you can't purposely fool the Ass, but it won't keep that from happening by accident. For example, if you try to balance a needle on its tip outside when there's a little wind, you're (probably) not going to manage it by making it stand perfectly straight. It's going to have to tilt a little so it leans into each gust of wind. But there's still some way to get it to balance indefinitely.
Okay, thanks for the explanation. It does seem that you're right*, and I especially like the needle example.
*Well, assuming you're allowed to move the hay around to keep the donkey confused (to prevent algorithms where he tilts more and more left or whatever from working). Not sure that was part of the original problem, but it's a good steelman.
In discussions about AI risks, the possibility of a dangerous arms race between the US and China sometimes comes up. It seems like this kind of arms race could happen with other dangerous technologies like nanotech and biotech. Pushing for more democratic governments in states like Russia and China might also decrease the chances of nuclear war, etc.
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
How could we push for regime change? Since the cost of living in China is lower than the US, funding dissidents who are already working towards democracy seems like a solid option. Cyberattacks seem like another... how hard would it be to neuter the Great Firewall of China?
This article from the Christian Science Monitor suggests that if the Chinese government decided to stop helping North Korea, that might cause the country to "implode", which feels like a good thing from an x-risk reduction standpoint.
I think the civil war that would result combined with extreme proximity between Chinese and US troops (the latter supporting South Korea and trying to contain nuclear weapons) is probably an abysmal thing from an x-risk reduction standpoint.
Is using "whom" uncool or something? Maybe I'm just elitist (in a bad way) for liking it.
If you slowly move the particles one at a time from one bale to the other, you know that once you've moved the entire bale the Ass will change its decision. At some point before that it won't be sure.
There might not actually be a choice where the Ass stands there until it starves. It might walk forward, or split in half down the middle and have half of it take one bale of hay and half take the other, or any number of other things. It's really more that there's a point where the Ass will eventually take a third option, even if you make sure all third options are worse than the first two.
Thanks (and I actually read the other new comments on the post before responding this time!). I still have two objections.
The first one (which is probably just a failure of my imagination and is in some way incorrect) is that I still don't see how some simple algorithms would fail. For example, the ass stares at the bales for 15 seconds, then it moves towards whichever one it estimates is larger (ignoring variance in estimates). If it turns out that they are exactly equal, it instead picks one at random. For simplicity, let's say it takes the first letter of the word under consideration (h), plugs the corresponding number (8) as a seed into a pseudorandom integer generator, and then picks option 1 if the result is even, option 2 if it's odd. It does seem like this might induce a discontinuity in decisions, but I don't see where it would fail (so I'd like someone to tell me =)).
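For concreteness, here's a sketch of the tiebreak algorithm just described (my own illustration; the estimate inputs and the seed word are stand-ins for whatever the ass actually perceives):

```python
import random

def choose_bale(estimate_left, estimate_right, word="hay"):
    """Walk toward the larger estimate; on an exact tie, break it with a seeded PRNG."""
    if estimate_left > estimate_right:
        return "left"
    if estimate_right > estimate_left:
        return "right"
    # Exact tie: seed a pseudorandom generator with the alphabet position of the
    # word's first letter ('h' -> 8 in the comment's example) and use parity.
    rng = random.Random(ord(word[0]) - ord("a") + 1)
    return "left" if rng.randrange(2**31) % 2 == 0 else "right"
```

The output jumps discontinuously as one estimate passes the other, which is exactly the kind of step a perfectly continuous physical system can only approximate, not implement.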
The second objection is that our world is, in fact, not continuous (with the Planck length and whatnot). My very mediocre grasp of QM suggests to me that if you try to use continuity to break the ass's algorithm (and it's a sufficiently good algorithm), you'll just find the point where its decisions are dominated by quantum uncertainty and get it to make true random choices. Or something along those lines.
The Problem only assumes the universe is continuous. If you move a particle by a sufficiently small amount, you can guarantee an arbitrarily small change any finite distance in the future. Thanks to the butterfly effect, it has to be an absurdly tiny amount, but it's only necessary that it exists.
Also, it assumes that the Ass will eventually die, but that's really more for effect. The point is that it can't make the decision in bounded time.
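The butterfly-effect point above can be illustrated with a standard chaotic system (my own example, not from the original comment): in the logistic map with r = 4, a perturbation of one part in a trillion grows to a macroscopic difference within a few dozen steps, which is why the required nudge has to be so absurdly tiny.

```python
def logistic_orbit(x0, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) for the given number of steps."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = logistic_orbit(0.3, 50)
b = logistic_orbit(0.3 + 1e-12, 50)
print(abs(a - b))  # the 1e-12 perturbation has grown to an order-one difference
```

Nearby trajectories separate roughly exponentially, so guaranteeing a *small* change far in the future means starting from an exponentially smaller one.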
Sorry, I'm not sure I understand what you mean. What particle should we move to change the fact that the ass will eventually get hungry and choose to walk forward towards one of the piles at semi-random? It seems to me like you can move a particle to guarantee some arbitrarily small change, but you can't necessarily move one to guarantee the change you want (unless the particle in question happens to be in the brain of the ass).
don't get fixated on proving the constructibility of enormously large polygons
Is this common? 'Cause um, at one point I did try to prove (or disprove) the constructibility of a hendecagon (11 sides) with neusis, but I didn't realise this was a popular pursuit. This isn't really related to the post, but I was very surprised constructibility got a mention.
(I ran into equations lacking an easy solution - they were sufficiently long/hard that Maple refused to chug through them - and decided it wasn't worth the effort to keep trying.)
The problem with the Problem is that it simultaneously assumes a high cost of thinking (gradual starvation) and an agent that completely ignores the cost of thinking. An agent who does not ignore this cost would solve the Problem as Vaniver says.
Using "whom" (even correctly, but especially incorrectly) can signal an excessive concern with pedantry.
Alternatively, if it's done by someone whom you already know decently well, and who you know isn't really a crazy obsessive pedant, it can instead signal a preference for international or British English over American.