All of cypher197's Comments + Replies

how do we respond?

Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban.

And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? ... (read more)

No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.

All people are idiots at least some of the time. I don't accept the usage of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.

7fubarobfusco
"Homeopathic brake pads" are a reductio-ad-absurdum of the actual proposal, though — which has to do with products that are not certified, tested, or guaranteed in the manner that you're used to. There are lots of levels of (un)reliability between Homeopathic (works 0% of the time) and NHTSA-Certified (works 99.99% of the time). For instance, there might be Cheap-Ass Brake Pads, which work 99.95% of the time at 10% of the cost of NHTSA-Certified; or Kitchen Sponge Brake Pads, which work 90% of the time at 0.05% of the cost. We do not have the option of requiring everyone to only do things that impose no danger to others. So if someone chooses to use a product that is incrementally more dangerous to others — whether because this lets them save money by buying Cheap-Ass Brake Pads; or because it's just more exciting to drive a Hummer than a Dodge minivan — how do we respond?

Unfortunately, we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring a victim party back from the dead.

0fubarobfusco
No, but several historical cultures and a few current ones legitimize the notion of blood money as restitution to a victim's kin.

I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.

With that in mind, I don't think we should be planting more minefields than this reality currently has, on purpose. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.

2Nornagest
Certain types of content labeling might work a lot like Hanson's Banned Shop, minus the trivial inconvenience of going to a different shop: the more obvious and dire the label, the closer the approximation. Cigarettes are probably the most advanced example I can think of. Now, cigarettes have also been extensively regulated in other ways, so we can't infer too much from this, but I think we can tentatively describe the results as mixed: it's widely understood that cigarettes stand a good chance of killing you, and smoking rates have indeed gone down since labeling laws went into effect, but smoking is still common. Whether or not we count this as a win probably depends on whether, and how much, we believe smokers' reasons for smoking, or dismiss them as the dribble of a hijacked habit-formation system.

Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).

While I very much enjoy programming (look at my creations come to life!) and have been known to conduct experiments in video games to discover their rules, I am almost entirely uninterested in puzzles for their own sake.

I'm a programmer, though, not a scientist. Still, if puzzles largely divorced from any context in which solving them accomplishes some goal made up a large part of science curricula, I'd be concerned about possible side effects.

That's not to say there isn't some merit to be mined here.

Forgive me if I'm just being oblivious, but did anything end up happening on this?

1BaconServ
Seems not. Three years is plenty of time.

I messaged Eliezer several times about this and he never got back to me. I talked to Tricycle; they said they were working on something, and what ended up happening was the split between Discussion and Main. This was not quite what I wanted, but given my inability to successfully contact Eliezer at the time, I gave up.

1Nisan
The Center for Applied Rationality is currently collecting and developing rationality exercises and training people with them. They have not published a list of their exercises, but you can find a game they made here.

I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to: more accurate models of human behavior would allow massive batch testing of different forms of organization under outside pressures, to discover possible failure modes and approaches for dealing with them. But that's a different conversation.
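For what it's worth, here is a minimal sketch of what that batch testing might look like, assuming a toy model in which organization members occasionally garble the messages they relay; the member count, error rates, and failure condition are all invented for illustration.

```python
import random

def run_org(n_members: int, error_rate: float, n_messages: int) -> bool:
    """Return True if the toy organization survives the run.

    Each message is relayed through three randomly chosen members;
    the run fails if all three garble the same message.
    """
    for _ in range(n_messages):
        relay = random.sample(range(n_members), 3)
        if all(random.random() < error_rate for _ in relay):
            return False
    return True

def batch_test(error_rate: float, trials: int = 1000) -> float:
    """Estimate survival rate over many simulated runs."""
    survived = sum(run_org(10, error_rate, 100) for _ in range(trials))
    return survived / trials

for rate in (0.01, 0.05, 0.20):
    print(f"member error rate {rate:.2f}: survival rate {batch_test(rate):.3f}")
```

A real version would need a far richer model of human behavior, which is exactly the AI problem mentioned above.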

Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to themselves? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.


All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in ... (read more)

-1timtyler
As you will see from things like my Angelic Foundations essay, I do appreciate the virtues of working with machines. However, at the moment there are also advantages to a man-machine symbiosis: robotics still lags far behind the evolved molecular nanotechnology of animals in many respects, and computers still lag far behind brains in many critical areas. A man-machine symbiosis will thus beat machines in many areas, until machines reach the level of a typical human in most work-related physical and mental feats. Machine-only solutions will just lose. So: we will be working with organisations for a while yet - during a pretty important period in history.

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

This is a very important field of study with some relation, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult, precisely because of the "out of humans" constraint.

-3timtyler
Doesn't that rather depend on the values of those who programmed them? Organisations tend to construct machine intelligences which reflect their values. However, organisations don't have an "out of humans" constraint. They are typically a complex symbiosis of humans, culture, artefacts, plants, animals, fungi and bacteria.

I read that as meaning something along the lines of, "if Nature is truly so wonderful, why did dogs leave it (to become domesticated)?"

Your stretching pulls the word over so large an area as to render it almost meaningless. I feel as though the stretching exists to further some other goal.

The last time I heard art defined, it was as "something which has additional layers of meaning beyond the plain interpretation", or something like that. I'm not sure even that's accurate.

However, if you're going to insist on calling a spec ops team in action "art", then the word is stretched so far that designing a diesel locomotive would qualify too, as would any number of other purely practical exercise... (read more)

If you're a Transhumanist, you should give Ghost in the Shell: Stand Alone Complex a try. It's excellent postcyberpunk in general.

This is basically the primary issue. It is possible for a hostile or simply incompetent drug company to spam people's information sources with false or misleading information, drowning out the truth. The vast majority of humans in our society aren't experts in drugs, and becoming an expert in drugs is very expensive, so they rely on others to evaluate drugs for them. Public bureaucrats, at least, have a strong counter-incentive against letting nasty drugs out into the wild.

Furthermore, it can take some time to realize a drug isn't working, and the p... (read more)

One thing I desperately want to devise is some method, at least partial, of incentivizing bureaucrats (public or private) to act in the most useful manner. This is, by its very nature, a difficult challenge with lots of thorny sub-problems. However, I think it's something LWers have been thinking about, even if not always explicitly.

What if you bid $1, explain the risk that a bidding war ends in a probable outcome of zero or net-negative dollars, and then offer to split your winnings with whoever else doesn't bid?
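To spell out the arithmetic, here is a minimal sketch under assumptions the thread doesn't pin down: a prize goes to the highest bidder, who pays their bid, and some number of players could enter. The prize value and player count are invented for illustration.

```python
PRIZE = 100.0  # assumed prize value (USD)
N = 5          # assumed number of potential bidders
BID = 1.0      # the lone $1 bid proposed above

# If everyone else accepts the offer and abstains, the $1 bidder
# wins PRIZE - BID and splits it evenly among all N players.
share = (PRIZE - BID) / N
print(f"Each player's payoff under the split: ${share:.2f}")

# In a bidding war, bids get driven up toward the prize's value
# (or past it), so the winner's net payoff approaches zero or worse.
war_bid = PRIZE
print(f"Winner's payoff after a full bidding war: ${PRIZE - war_bid:.2f}")
```

Whether the abstainers can trust the winner to actually pay out is its own precommitment problem, of course.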

What occurred to me when I read it is "Why is this guy allowed to propose a motion whose effect changes based on how many people voted for, or against, it?" While it's likely the company's bylaws don't specifically prohibit it, I'm not sure what a lawyer would make of it, and even if it worked, I don't think this sort of meta-motion would remain viable for long. I suspect the other members of the board would either sign a contract with each other (gaining their own certainty of precommitment) or refuse to acknowledge it on the grounds that it isn't serious.

1Artikan
What do you mean by "serious"?