No, but several historical cultures and a few current ones legitimize the notion of blood money as restitution to a victim's kin.
No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.
All people are idiots at least some of the time. I don't accept the use of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of an occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.
"All negative effects of buying things from the banned store accrue to the individual who chose to purchase from the banned store"
Or, the individual who chooses to purchase from the banned store is able to compensate others for any negative effects.
Unfortunately we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring back a victim party from the dead.
The problem here is blindness to one's own biases, I think. After all, we're all stupid some of the time, and realising this is surely a core component of the Overcoming Bias project. Robin Hanson may not think he'd ever be stupid enough to walk into the Banned Shop, but we all tend to assume we're the rational one.
You also need to consider the real-world conditions of your policy. Yes, this might be a good idea in its Platonic ideal form, but in practice that doesn't tell us very much. As an argument against "regulation", I think, with 80% confidence, that it's worse than useless.
Why? In practice, you're not going to have "Banned Shops" with big signs on them. If enough people want to buy the banned products, and we know they do, because their manufacturers are profitable, then the rest of the retail trade will instantly start lobbying for the right to sell them, maybe on a Banned Shelf next to the eggs. That's an unrealistic example, but then it's an unrealistic proposal.
What's more likely is a case of Pareto inefficiency - if you relax, say, medicines control on the grounds that it's a step towards the ideal, the growth in ineffective, dangerous, or resistance-causing quackery is probably going to be a significant disbenefit.
I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.
With that in mind, I don't think we should deliberately plant more minefields than reality already contains. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.
Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out.
I think you're underestimating the average person.
Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).
There's a particular kind of groupthink peculiar to scholarly fields. In my review of "The Trouble with Physics", I pointed to two (other) specific examples of recent advances that were stymied for long periods of time by scholarly groupthink. There are many others.
But I think Eli has hit on another important mechanism. Few learners these days are expected to rediscover important concepts, so we get no training in this ability. I don't see how turning scientific knowledge into a body of secrets will address the problem, but it's a valuable insight. I'd offer solving puzzles and breaking codes as alternative training for finding the patterns that nature is hiding from us. More scientists should spend their time entering puzzle contests, hunting geocaches, and attacking cryptosystems.
And could someone provide an interpretation of the cast of characters here? I enjoyed the list that was presented for a previous article.
While I very much enjoy programming (look at my creations come to life!) and have been known to conduct experiments in video games to discover their rules, I am almost entirely uninterested in puzzles for their own sake.
I'm a programmer, though, not a scientist. But if puzzles largely divorced from any context in which solving them serves some goal were a large part of science curricula, I'd be concerned about possible side effects.
Not that I don't think there may be some merit to be mined here.
To get more meta, not only has Less Wrong not produced "results", but all the posts saying Less Wrong needs to produce more "results" (example: Instrumental Rationality Is A Chimera) haven't produced any results. Even though most people liked the idea in that recent PUA thread, I don't see any concrete moves in that direction either.
Most of these threads have been phrased along the lines of "Someone really ought to do something about this", and then everyone agrees that yeah, they should, and then nothing ever comes of it. That's a natural phenomenon in an anarchy where no one is the Official Doer of Difficult Things That Need To Be Done. Our community has one leader, Eliezer, and he has much better things to do with his time. Absent a formal organization, no one is going to be able to move a few hundred people to do things differently.
But small interventions can produce major changes in behavior (see the sentence beginning with "I was reminded of this recently..." here). For example, I think if there were socialskills.lesswrong.com and health.lesswrong.com subcommunities linked to the top of the page, they would auto-populate with a community and interesting posts. I would love to see a discussion forum on nootropics where people can post their experiences and questions in an organized and easy-to-find way, for example.

This idea has been brought up since forever and no one has ever done anything about it. The alternate idea, that we make a bulletin board on which these things can be done easily and naturally (AND WHICH CAN HANDLE OPEN THREADS IN A SANE WAY), has also been brought up since forever and no one has done anything about it (one person made a bulletin board back in the Overcoming Bias days, but no one used it. Go figure.)
So I propose the following:
Community norm against saying "It would be nice if someone in our community did X" if you have no particular plans to do X and no reason to think anyone else will.
Poll on whether people want a bulletin board or subreddits. This poll is below this comment.
If people want a bulletin board, and they promise to actually use it once it is made, and Eliezer and Tricycle don't want to make it themselves, and no one else more competent with computers will make it, I will make and host it (maybe. I'm not sure how much traffic it would get and I don't want to commit to something that would bankrupt me. But in principle, yes.)
I don't know how to program subreddits, but if that solution wins the poll, I will pay a small amount of money to someone who does know how, and other people probably will too (because we will do the fundraising in a rationalist way!), adding up to a medium amount of money.
Forgive me if I'm just being oblivious, but did anything end up happening on this?
I've been collecting exercises (slowly) for years. Would love to contribute to a shared collection of exercises, ideally on a site that allows them to be tagged, rated, searched, and have people comment with their experiences.
I am skeptical that people reading fun shiny LW posts and encountering an exercise would actually then go do that exercise. Doing exercises is work!
Where can I find rationality exercises?
As you will see by things like my Angelic Foundations essay, I do appreciate the virtues of working with machines.
However, at the moment, there are also advantages to a man-machine symbiosis - namely, robotics is still far behind the evolved molecular nanotechnology in animals in many respects, and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines in many areas, until machines reach the level of a typical human in most work-related physical and mental feats. Machine-only solutions will just lose. So: we will be working with organisations for a while yet - during a pretty important period in history.
I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to (more accurate models of human behavior would allow massive batch testing of different forms of organization under outside pressures, to discover possible failure modes and approaches for dealing with them), but that's a different conversation.
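To make the "batch testing" idea a bit more concrete, here is a purely illustrative toy sketch, not anything that exists: a tiny Monte Carlo simulation in which a yes/no decision is relayed through chains of imperfect humans of different depths, with "outside pressure" crudely modeled as a higher per-person error rate. Every structure, name, and number in it is invented.

```python
import random

# Toy, invented illustration: relay a yes/no decision through a chain of
# imperfect humans and count how often it arrives corrupted. Chain depth
# stands in for organizational structure; a higher per-person error rate
# stands in for "outside pressure". None of these numbers are real.

def relay(message: bool, chain_length: int, error_rate: float) -> bool:
    """Each person in the chain garbles the message with some probability."""
    for _ in range(chain_length):
        if random.random() < error_rate:
            message = not message
    return message

def failure_rate(chain_length: int, error_rate: float, trials: int = 10_000) -> float:
    """Fraction of trials in which the original message did not survive."""
    failures = sum(not relay(True, chain_length, error_rate) for _ in range(trials))
    return failures / trials

if __name__ == "__main__":
    for pressure, err in [("calm", 0.02), ("stressed", 0.10)]:
        for depth in (2, 5, 10):  # shallow vs. deep reporting chains
            print(f"{pressure:8s} depth={depth:2d} failure rate={failure_rate(depth, err):.3f}")
```

Even this toy version shows the kind of question such batch testing would ask: how quickly does reliability decay as the chain gets deeper and the pressure rises?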
Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.
Doesn't that rather depend on the values of those who programmed them?
This is a very important field of study with some relation, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, the complexity of building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult due to the "out of humans" constraint.
Organisations tend to construct machine intelligences which reflect their values. However, organisations don't have an "out of humans" constraint. They are typically a complex symbiosis of humans, culture, artefacts, plants, animals, fungi and bacteria.
Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to one another? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.
All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean they'll follow them at all. If you specify a great deal of process, they may not even disobey intentionally - they may just forget. With a computer, that kind of failure would only happen because of an error, and errors are part of a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.
So: on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or with the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without having an algorithm specified in great detail. There are similarities between the two, but there are also great practical differences.
"Homeopathic brake pads" are a reductio-ad-absurdum of the actual proposal, though — which has to do with products that are not certified, tested, or guaranteed in the manner that you're used to.
There are lots of levels of (un)reliability between Homeopathic (works 0% of the time) and NHTSA-Certified (works 99.99% of the time). For instance, there might be Cheap-Ass Brake Pads, which work 99.95% of the time at 10% of the cost of NHTSA-Certified; or Kitchen Sponge Brake Pads, which work 90% of the time at 0.05% of the cost.
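To make the trade-off concrete, here is a minimal back-of-the-envelope sketch. Only the reliability percentages come from the paragraph above; the certified-pad price, the homeopathic price fraction, and the cost assigned to a braking failure are numbers I've made up for illustration.

```python
# Back-of-the-envelope sketch of the trade-off above. Only the reliability
# figures come from the comment; the certified-pad price, the homeopathic
# price fraction, and the assumed cost of one braking failure are invented.

CRASH_COST = 500_000        # assumed cost of one braking failure, in dollars
CERTIFIED_PRICE = 200       # assumed price of NHTSA-Certified pads, in dollars

pads = {
    # name: (probability the pads work, price as a fraction of certified price)
    "NHTSA-Certified": (0.9999, 1.00),
    "Cheap-Ass":       (0.9995, 0.10),
    "Kitchen Sponge":  (0.90,   0.0005),
    "Homeopathic":     (0.00,   0.01),   # price fraction is a pure guess
}

for name, (works, price_fraction) in pads.items():
    expected_cost = price_fraction * CERTIFIED_PRICE + (1 - works) * CRASH_COST
    print(f"{name:16s} expected cost ≈ ${expected_cost:,.2f}")
```

Under those made-up numbers, the sticker savings of the less reliable pads are swamped by the expected cost of failures, which is roughly the intuition behind drawing a legal cut-off somewhere along that spectrum.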
We do not have the option of requiring everyone to only do things that impose no danger to others. So if someone chooses to use a product that is incrementally more dangerous to others — whether because this lets them save money by buying Cheap-Ass Brake Pads; or because it's just more exciting to drive a Hummer than a Dodge minivan — how do we respond?
Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban.
And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? I don't mean check a box or sign a paper, because that's like clicking "I Agree" on a EULA or a security warning, and we've all seen how well that's worked out for casual users in the computer realm, even though we constantly bombard them with messages not to do exactly the things that get them in trouble.
Is it Paternalist arrogance when the system administrator makes it impossible to download and open .exe attachments in Microsoft Outlook? Clearly, there are cases where system administrators are paternalist and arrogant; on the other hand, there are a great many cases where users trash their machines. The system administrator has a much better knowledge about safely operating the computer; the user knows more about what work they need to get done. These things are issues of balance, but I'm not ready to throw out top-down bans on dangerous-to-self products.