No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.
All people are idiots at least some of the time. I don't accept the use of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion USD with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of an occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.
Unfortunately we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring back a victim party from the dead.
I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.
With that in mind, I don't think we should be planting more minefields than this reality currently has, on purpose. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.
Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).
While I very much enjoy programming (look at my creations come to life!) and have been known to conduct experiments in video games to discover their rules, I am almost entirely uninterested in puzzles for their own sake.
I'm a programmer, though, not a scientist. But if puzzles that were largely free of context (ones where solving them serves no further goal) made up a large part of science curricula, I'd be concerned about possible side effects.
Not that I don't think there may be some merit to be mined here.
Forgive me if I'm just being oblivious, but did anything end up happening on this?
I messaged Eliezer several times about this and he never got back to me. I talked to Tricycle, they said they were working on something, and what ended up happening was the split between Discussion and Main. This was not quite what I wanted, but given my inability to successfully contact Eliezer at the time I gave up.
Where can I find rationality exercises?
I just think it's a related but different field. Actually, solving these problems is something I want to apply some AI to: more accurate models of human behavior would allow massive batch testing of different forms of organization under outside pressures, discovering possible failure modes and approaches to deal with them. But that's a different conversation.
Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to each other? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.
All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in ...
Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave much differently. They don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.
This is a very important field of study with some relation, and one I would very much like to pursue. OTOH, it's not that much like building an AI out of computers. Really, the complexity of building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult due to the "out of humans" constraint.
I read that as meaning something along the lines of, "if Nature is truly so wonderful, why did dogs leave it (to become domesticated)?"
Your stretching pulls the word over so large an area as to render it almost meaningless. I feel as though it exists to further some other goal.
The last time I heard art defined, it was as "something which has additional layers of meaning beyond the plain interpretation", or something like that. I'm not sure even that's accurate.
However, if you're going to insist on calling a spec ops team in action "art", then that level of stretching is such that so could designing a diesel locomotive, or any number of other purely practical exercise...
If you're a Transhumanist, you should give Ghost in the Shell: Stand Alone Complex a try. It's excellent Postcyberpunk in general.
This is basically the primary issue. It is possible for a hostile or simply incompetent drug company to spam people's information sources with false or misleading information, drowning out the truth. The vast majority of humans in our society aren't experts in drugs, and becoming an expert in drugs is very expensive, so they rely on others to evaluate drugs for them. The public bureaucrats at least have a strong counter-incentive to letting nasty drugs out into the wild.
Furthermore, it can take some time to realize a drug isn't working, and the p...
One thing I desperately want to devise is some method, at least partial, of incentivizing bureaucrats (public or private) to act in the most useful manner. This is, by its very nature, a difficult challenge with lots of thorny sub-problems. However, I think it's something LWers have been thinking about, even if not always explicitly.
What if you bid $1, explain the risk of a bidding war resulting in a probable outcome of zero or net negative dollars, then offer to split your winnings with whoever else doesn't bid?
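The escalation risk described here resembles Shubik's dollar auction. As a minimal sketch of why a lone low bid can win while a bidding war leaves everyone net negative, here is a hypothetical payoff calculation; the $20 prize and the all-pay rules are assumptions for illustration, not details from the original scenario:

```python
# Hypothetical illustration of the bidding-war risk described above.
# Assumed rules (not from the original comment): a prize worth $20 goes
# to the highest bidder, and every bidder pays their own bid (an
# "all-pay" escalation game similar to Shubik's dollar auction).

PRIZE = 20.0

def net_payout(my_bid, rival_bids):
    """Net dollars for one bidder, given everyone's bids, under all-pay rules."""
    if my_bid > max(rival_bids, default=0):
        return PRIZE - my_bid  # win the prize, but still pay the bid
    return -my_bid             # lose the bid outright

# A lone $1 bid keeps nearly the whole prize:
print(net_payout(1, []))     # 19.0
# Once a war escalates past the prize value, even the "winner" loses:
print(net_payout(25, [24]))  # -5.0
# ...and the loser does worse still:
print(net_payout(24, [25]))  # -24
```

Under these assumed rules, every dollar of escalation past the prize value is pure loss, which is the case for persuading everyone else not to bid at all.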
What occurred to me when I read it is "Why is this guy allowed to propose a motion which changes its actions based on how many people voted in favor of, or against, it?" While it's likely the company's bylaws don't specifically prohibit it, I'm not sure what a lawyer would make of it, and even if it worked, I don't think these sorts of meta-motions would remain viable for long. I suspect the other members of the board would either sign a contract with each other (gaining their own certainty of precommitment) or refuse to acknowledge it on the grounds that it isn't serious.
Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some items are taxed and have warnings on them instead of an outright ban.
And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Let's suppose we do open this banned goods shop. Are we going to make each and every customer fill out an essay question detailing exactly how they understand these items to be dangerous? ...