how do we respond?

Well, as a society, at some point we set a cut-off and make a law about it. Thus some items are banned while others are not, and some are taxed or carry warnings instead of an outright ban.

And it's not just low intelligence that's a risk. People can be influenced by advertising, social pressure, information saturation, et cetera. Suppose we do open this banned-goods shop. Are we going to make each and every customer answer an essay question detailing exactly how they understand these items to be dangerous? I don't mean check a box or sign a paper, because that's like clicking "I Agree" on a EULA or a security warning, and we've all seen how well that has worked out for casual users in the computer realm, even though we constantly bombard them with messages not to do exactly the things that get them in trouble.

Is it paternalist arrogance when the system administrator makes it impossible to download and open .exe attachments in Microsoft Outlook? Clearly, there are cases where system administrators are paternalist and arrogant; on the other hand, there are a great many cases where users trash their machines. The system administrator knows much more about safely operating the computer; the user knows more about what work they need to get done. These things are issues of balance, but I'm not ready to throw out top-down bans on dangerous-to-self products.
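For concreteness, here's a minimal sketch of the kind of two-tier policy a system administrator might enforce. The names and extension lists are hypothetical, chosen to mirror the ban-versus-warn distinction above; this isn't any real mail filter's API.

```python
# Hypothetical two-tier attachment policy: some file types are banned
# outright, others are allowed but flagged with a warning.
BLOCKED_EXTENSIONS = {".exe", ".scr", ".bat"}   # outright ban
WARNED_EXTENSIONS = {".zip", ".docm"}           # allowed, with a warning

def classify_attachment(filename: str) -> str:
    """Return 'block', 'warn', or 'allow' for a given attachment name."""
    name = filename.lower()
    if any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS):
        return "block"
    if any(name.endswith(ext) for ext in WARNED_EXTENSIONS):
        return "warn"
    return "allow"

assert classify_attachment("invoice.exe") == "block"
assert classify_attachment("report.zip") == "warn"
assert classify_attachment("notes.txt") == "allow"
```

The point of the two tiers is the same balance argument: a hard ban where the downside is catastrophic, a warning where the user's judgment is worth preserving.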

No amount of money can raise the dead. It's still more efficient to prevent people from dying in the first place.

All people are idiots at least some of the time. I don't accept the use of Homeopathic Brake Pads as a legitimate decision, even if the person using them has $1 billion with which to compensate the innocent pedestrians killed by a speeding car. I'll accept the risk of the occasional accident, but my life is worth more to me than the satisfaction some "alternative vehicle control systems" nut gets from doing something stupid.

Unfortunately, we have not yet discovered a remedy by which court systems can sacrifice the life of a guilty party to bring the victim back from the dead.

I, for one, imagine that I could easily walk into the Banned Shop, given the right circumstances. All it takes is one slip-up - fatigue, drunkenness, or woozy medication would be sufficient - to lead to permanent death.

With that in mind, I don't think we should be deliberately planting more minefields than reality already has. I like the idea of making things idiot-proof, not because I think idiots are the best thing ever, but because we're all idiots at least some of the time.

Yeah, I thought the post was largely well-reasoned, but that that statement was reckless (largely because it seems ungrounded and plays to a positive self-image for this group).

While I very much enjoy programming (look at my creations come to life!) and have been known to conduct experiments in video games to discover their rules, I am almost entirely uninterested in puzzles for their own sake.

I'm a programmer, though, not a scientist. Still, if science curricula relied heavily on puzzles stripped of any context in which solving them accomplishes some goal, I'd be concerned about possible side effects.

Not that there isn't some merit to be mined here.

Forgive me if I'm just being oblivious, but did anything end up happening on this?

I just think it's a related but different field. Actually, these are problems I want to apply some AI to: with a more accurate mapping of human behavior, you could run massive batch tests of different forms of organization under outside pressures, discover possible failure modes, and find approaches to deal with them. But that's a different conversation.
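A toy sketch of what I mean by batch testing, with everything load-bearing left hypothetical: agents collapse to a single reliability number, an organization to a hierarchy depth, and "outside pressure" to a per-step failure probability. That's nowhere near an accurate mapping of human behavior, but it shows the shape of the experiment.

```python
import random

def run_org(depth: int, reliability: float, pressure: float, steps: int = 100) -> int:
    """Count how many of `steps` tasks survive a chain of `depth` agents.

    Each agent in the chain may drop the task; pressure makes this more likely.
    All parameters here are hypothetical stand-ins for a real behavioral model.
    """
    completed = 0
    for _ in range(steps):
        ok = True
        for _ in range(depth):
            if random.random() > reliability * (1.0 - pressure):
                ok = False
                break
        completed += ok
    return completed

# Batch-test several organizational "forms" (here: just hierarchy depths)
# under varying pressure, looking for where each one starts to fail.
for depth in (2, 5, 10):
    for pressure in (0.0, 0.2, 0.4):
        done = run_org(depth, reliability=0.95, pressure=pressure)
        print(f"depth={depth} pressure={pressure}: {done}/100 tasks completed")
```

Even this crude version surfaces the obvious failure mode - deep hierarchies collapse fast under pressure - which is the sort of result a richer behavioral model would let you trust.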

Perhaps. But humans will lie, embezzle, and rationalize regardless of who programmed them. Besides, would the internals of a computer lie to themselves? Does RAM lie to a processor? And yet humans (being the subcomponents of an organization) routinely lie to each other. No system of rules I can devise will guarantee that doesn't happen without some very serious side effects.

All of which are subject to the humans' interpretation and use. You can set up an organizational culture, but that won't stop the humans from mucking it up, as they routinely do in organizations across the globe. You can write process documents, but that doesn't mean they'll follow them at all. If you specify a great deal of process, the failure may not even be intentional - they may simply forget. With a computer, that would be caused by an error, but it's a controllable process. With a human? People can't just decide to remember arbitrary amounts of arbitrary information for arbitrary lengths of time and pull it off reliably.

So, on the one hand, I have a system being built where the underlying hardware is reliable and under my control, and generally does not create errors or disobey. On the other hand, I have a network of unreliable and forgetful intelligences that may be highly irrational and may even be working at cross purposes with each other or with the organization itself. One requires extremely strict instructions; the other is capable of interpretation and judgment from context without having an algorithm specified in great detail. There are similarities between the two, but there are also great practical differences.
