If you make the run lengths increase exponentially instead of linearly then you get O(k) unconditionally.
I know. I was improving the constant factor.
No. But you can't do better than O(k) anyways, and you can do O(k) deterministically.
Randomising the length of the first step will improve the constant factor by about 2. Similar analysis to the non-adversarial case, and with the same ETA I just added to my earlier comment.
Yeah, I guess LW rationality should be filed under "intellectual fads" rather than "cults".
What are the dynamics that produce a fad rather than growth into the mainstream? It might be worth CFAR thinking about that.
EA doesn't do that kind of thing.
Ipse dixit and motivated reasoning.
Why not just be absolutely anonymous?
> Why not just be absolutely anonymous?
Accountability matters.
It's not news to anyone that it's pretty easy to screw up consequentialists. The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."
> The lesson I take from this is this: "maximize to solve a particular problem, rather than as a lifestyle choice."
Is that a solution to a particular problem, or a lifestyle choice?
Imagine an undirected graph where each node has a left and right neighbor (so it's an infinitely long chain). You are on a node in this graph, and somewhere to the left or right of you is a hotel (50/50 chance to be in either direction). You don't know how far -- k steps for an arbitrarily large k that an adversary picks after learning how you will look for a hotel.
The solution that takes 1 step left, 2 steps right, 3 steps left, etc. will find the hotel in O(k^2) steps. Is it possible to do better?
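A minimal simulation of that linear strategy (the function name is mine, not from the thread) makes the quadratic cost concrete; in the worst case, a hotel at +k opposite the first leg is reached only on the leg of length 2k, for a total of 1 + 2 + … + 2k = k(2k+1) steps:

```python
def linear_search_steps(k, direction):
    """Total steps walked by the 1-left, 2-right, 3-left, ... strategy
    until it first stands on the hotel, which is direction * k nodes
    from the start (direction is +1 for right, -1 for left)."""
    pos, steps = 0, 0
    target = direction * k
    leg, heading = 1, -1  # first leg: one step to the left
    while True:
        for _ in range(leg):
            pos += heading
            steps += 1
            if pos == target:
                return steps
        leg += 1
        heading = -heading

# Worst case (hotel on the side opposite the first leg): k(2k+1) steps.
for k in (10, 100, 1000):
    print(k, linear_search_steps(k, +1))
```

For k = 10, 100, 1000 this prints 210, 20100, 2001000 — growing as roughly 2k².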
Since the problem posed is scale-free, the solution should be too, and if there is one it must succeed in O(k) steps. Increase the step size geometrically instead of linearly, picking an arbitrary distance for the first leg, and the worst case is O(k); a ratio of 2 gives the best worst-case cost, approaching 9k. The adversary chooses k to be much larger than the first leg and just past one of your turning points.
In the non-adversarial case, if log(k) is uniformly distributed between the two turning points in the right direction that enclose k, the optimum ratio is still somewhere close to 2 and the constant is around 4 or 5 (I didn't do an exact calculation).
ETA: That worst-case ratio of 9k is not right, given that definition of the adversary's choices. If the adversary is trying to maximise the ratio of distance travelled to k, they can get an unbounded ratio by placing k very close to the starting point and in the opposite direction to the first leg. If we idealise the search path to consist of infinitely many steps in increasing geometric progression, or assume that the adversary is constrained to choose a k at least one quarter of the first step, then the value of 9k holds.
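A quick sketch of that idealised search, taking the common model where each excursion returns to the start before doubling (the function name is mine), confirms both claims: the ratio of distance walked to k approaches 9 for an adversarial k just past a turning point, and is unbounded for a tiny k opposite the first leg:

```python
def cow_path_distance(k, direction, first_leg=1.0, ratio=2.0):
    """Distance walked by a geometric back-and-forth search: go first_leg
    one way (left first), return to the start, go first_leg * ratio the
    other way, return, and so on, until the hotel at direction * k
    (direction is +1 or -1) is reached."""
    walked = 0.0
    leg, heading = first_leg, -1
    while True:
        if heading == direction and leg >= k:
            return walked + k      # hotel lies on this excursion
        walked += 2 * leg          # out and back
        leg *= ratio
        heading = -heading

# Adversarial k just past a turning point: the ratio approaches 9.
k = 2**20 + 1
print(cow_path_distance(k, -1) / k)        # just under 9

# Tiny k opposite the first leg: the ratio is unbounded (~201 here).
print(cow_path_distance(0.01, +1) / 0.01)
```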
Anyone have a better procedure for fixing this than the following?
- Notice the feeling.
- Treat it as a signal that your S1 wants you to search for cheaper ways to figure out which option is right than continuing to drive. Search for cheaper ways and execute them. Make it a thorough search and show your S1 the thoroughness of your search. Acknowledge the awfulness of "drive back and forth in an expensive search pattern" and only choose that as a last resort.
- If you don't immediately become much more certain of which way the hotel is in, and the "go 30mph" feeling does not go away, treat it as a signal that your S1 thinks the thought process by which you chose (under evidence-starvation) is wrong, which does not necessarily mean that the conclusion is wrong.
- List the ways your S1 thinks you're biased which are screwing up your evidence-starved reasoning.
- Perform sanity-inducing rituals to counter those biases. (Think about your actual goal of getting to the hotel as soon as possible, forgive yourself for maybe driving past it, imagine all 4 outcomes (60mph forward, 60mph backward) x (get to hotel on next try after this, don't get to hotel on next try after this) and how you would feel about them)
- If the feeling is still there, this procedure has failed.
> Anyone have a better procedure for fixing this than the following?
When the implications of the situation are clearly perceived, the right action is effortless.
The more effectively something does its job, the less superficially useful it appears to be.
If I have effective locks on the doors and windows of my house, as a result of which no-one breaks in, it will seem as if the locks are unnecessary. If I keep my car well maintained, so that it never breaks down, it will seem as if all that expense on maintenance was unnecessary. You don't see the casual thief who tried the door and went away, or the timing chain that never snapped and wrecked the engine. When there is no crime, it seems that the police are unnecessary; when no-one tries to invade, that the army is unnecessary. In places with clueless management, the more effectively the computer support staff do their job, the less reason management will see to employ them.
Founding the NHS, bringing in clear air and water acts, regulating minimum standards for child workers (and then all workers), extending the franchise. All these were done in defiance of precedent and with strong accusations of destroying prosperity.
The creation of the NHS is a good example. Nothing like it had been done before, and most of the predictions (both positive and negative) made at the time were very wrong (for instance, it was predicted that it would reduce medical costs overall!). This strongly implies that nobody really had any idea what was going to happen. And yet it basically worked out; in fact, most healthcare systems in developed countries (apart from the USA) seem to average out to the same broad level of performance and cost, even though they vary considerably in design. This is evidence that our current systems push innovations, both revolutionary and incremental, in the vague direction of decent performance.
On the other hand, many technological innovations completely destroy Chesterton fences existing in society. The whole idea of centralising and sharing knowledge across all different types of communities is something a lot of fences stood in the way of; yet it seems to have kinda worked.
But a proper argument would require many more examples, and a careful definition of what a Chesterton Fence is.
> But a proper argument would require many more examples, and a careful definition of what a Chesterton Fence is.
Indeed. Your examples seem to be simply changes. Not every change is a fence, and for that matter, not every taking down of a fence is done because no-one thought for five minutes about why it was there. All of those examples were intensively discussed at the time. Those opposed spoke at length about why it was there and why it should stay there, and those for spoke at length about why it should be taken down. In particular, extending the franchise, in the UK, was a process whose major part extended across nearly a century, step by step from the 1832 Reform Act to women getting equal voting rights in 1928.
Oil prices have recently fallen to near record lows. What are the risks and benefits?
Risks:
Benefits:
Debatable:
Any important dynamics I'm missing?
Saudi Arabia flooded the market in order to reduce the price, in order to combat the benefit to Iran of the raising of sanctions.