Comment author: IlyaShpitser 28 December 2012 11:22:53AM 6 points

Here is my claim (contrary to Vassar). If you are worried about an unfriendly "foomy" optimizing process, then a natural way to approach that problem is to solve an easier related problem: make an existing unfriendly but "unfoomy" optimizing process friendly. There are lots of such processes of various levels of capability and unfriendliness: North Korea, Microsoft, the United Nations, a non-profit org., etc.

I claim this problem is easier because:

(a) we have a lot more time (no danger of "foom"),

(b) we can use empirical methods to ground our theories (the processes already exist), and

(c) these processes are super-humanly intelligent but not so intelligent that their goals/methods are impossible to understand.

The claim is that if we can't make existing processes with all these simplifying features friendly, we have no hope to make a "foomy" AI friendly.

Comment author: cypher197 29 December 2012 04:13:54AM 0 points

Those processes are built out of humans, with all the problems that implies. All the transmissions between the humans are lossy. Computers behave very differently: they don't lie to you, embezzle company funds, or rationalize their poor behavior or ignorance.

This is a very important and related field of study, and one I would very much like to pursue. OTOH, it's not much like building an AI out of computers. If anything, building a self-sustaining, efficient, smart, friendly organization out of humans is quite possibly more difficult, precisely because of the "out of humans" constraint.

Comment author: [deleted] 01 December 2012 05:29:20PM 16 points

Nobody likes to face the more painful question, What Made the Dogs Want to Leave?

--TheThomason

In response to comment by [deleted] on Rationality Quotes December 2012
Comment author: cypher197 06 December 2012 10:13:46PM 2 points

I read that as meaning something along the lines of, "if Nature is truly so wonderful, why did dogs leave it (to become domesticated)?"

Comment author: lessdazed 29 October 2011 04:27:50AM 1 point

"effectively conveying thought or feeling,"

I think this is present in military planning, and inferable from outcomes.

"art is typically the creation of something new, rather than the destruction of something existing. One could argue that they are creating new corpses"

That's not at all how it seems to me. There is a good deal of inferential distance here.

Supreme excellence consists in breaking the enemy's resistance without fighting.

In the practical art of war, the best thing of all is to take the enemy's country whole and intact; to shatter and destroy it is not so good.

There is no instance of a nation benefiting from prolonged warfare.

--Sun Tzu, translated

The art lies in reducing the number of corpses, etc.

"we are talking about clustering and relative degrees of similarity"

Excellent, yes! I agree that in English "art", unmodified, does not refer to war and should not be used to refer to war or a broader category of art of which one thinks war is a member. However, this is significantly due to historical use, rather than being the simplest stroke circumscribing a concept in concept-space. Excluding the art of war from "art" is somewhat like considering dolphins "fish".

I acknowledge there is some tugging involved, but what hasn't been shown to my satisfaction is that less tugging is involved for modern art, or other things generally considered art.

Comment author: cypher197 28 September 2012 02:38:54AM 0 points

Your stretching pulls the word over so large an area as to render it almost meaningless. I feel as though it exists to further some other goal.

The last time I heard art defined, it was as "something which has additional layers of meaning beyond the plain interpretation", or something like that. I'm not sure even that's accurate.

However, if you're going to insist on calling a spec ops team in action "art", then that level of stretching means that designing a diesel locomotive, or any number of other purely practical exercises not performed for their aesthetic value, could be called "art" too. A "found object", a Jackson Pollock painting, or what-have-you is created primarily for aesthetic value and/or communication of additional layers of meaning.

Comment author: Prismattic 22 October 2011 12:01:12AM 7 points

The only anime I've really enjoyed is Fullmetal Alchemist. I suspect there are, in fact, plenty of people on LW with no interest in anime -- that just passes unnoticed because they simply remain silent when the subject comes up.

Comment author: cypher197 27 September 2012 05:43:21PM 2 points

If you're a Transhumanist, you should give Ghost in the Shell: Standalone Complex a try. It's excellent Postcyberpunk in general.

Comment author: Dmytry 18 March 2012 12:06:02PM 4 points

The fundamental problem with both the FDA and such a non-regulatory body is that the drug industry has the money to fake the signals. For the system to work at all, valid argumentation must be substantially more effective at convincing the public than invalid argumentation.

(I do not think btw that people must be protected from themselves.)

Comment author: cypher197 29 July 2012 06:11:25PM 3 points

This is basically the primary issue. It is possible for a hostile or simply incompetent drug company to flood people's information sources with false or misleading information, drowning out the truth. The vast majority of humans in our society aren't experts in drugs, and becoming one is very expensive, so they rely on others to evaluate drugs for them. Public bureaucrats, at least, have a strong incentive against letting nasty drugs out into the wild.

Furthermore, it can take some time to realize a drug isn't working, and the placebo effect will be in full force, making that even harder. By the time you realize you were sold snake oil, you may already be dead. "Reputation" may not be of use here: fake drugs are much cheaper to develop than real ones, so the cost of throwing an old trademark or company shell under the bus every few years is minimal, especially compared to what it costs individuals to discover the deception.

Consider also the man-hours that must be spent hunting for information and evaluating safety, not just of the drugs themselves but also of the reputations of the private verification firms, by every individual who needs drugs. The FDA is cheaper.

Edit: I should say that "in my estimation, the FDA is cheaper." It's only back-of-the-napkin math.

I generally take the position that we should protect people from themselves to the degree that it is reasonably practical to do so. We have all failed due to ignorance, irrationality, or inattention at some point. Of course, when someone tries to break open your high-voltage power line to steal the copper inside, well...

Comment author: cypher197 29 July 2012 05:51:43PM 3 points

One thing I desperately want to devise is some method, at least partial, of incentivizing bureaucrats (public or private) to act in the most useful manner. This is, by its very nature, a difficult challenge with lots of thorny sub-problems. However, I think it's something LWers have been thinking about, even if not always explicitly.

Comment author: evand 28 July 2012 06:06:33PM 1 point

And then the other bidder bids $3, and promises to give you $1 if he wins the auction. It seems you still haven't avoided the problem of privileging one player's choices.

Comment author: cypher197 29 July 2012 05:30:31PM 0 points

What if you bid $1, explain the risk that a bidding war will probably end in zero or net-negative dollars for everyone, and then offer to split your winnings with whoever else doesn't bid?
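The arithmetic behind that offer can be sketched with hypothetical numbers (the prize value, bidder count, and the price a bidding war escalates to are all assumptions for illustration; the thread doesn't specify them):

```python
# Hypothetical auction: a prize worth $100, three bidders, winner pays their bid.
PRIZE = 100

# Outcome 1: a bidding war escalates until the winning bid approaches the
# prize's value, leaving the winner with almost nothing.
war_winning_bid = 99
war_winner_payoff = PRIZE - war_winning_bid  # $1

# Outcome 2: one player bids $1 unopposed and splits the winnings
# three ways with the players who agreed not to bid.
coop_bid = 1
coop_pot = PRIZE - coop_bid   # $99 to divide
coop_share = coop_pot / 3     # $33 each

print(war_winner_payoff)  # 1
print(coop_share)         # 33.0
```

Under these assumed numbers, everyone does better under the split than under the war, which is the leverage the proposal relies on.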

Comment author: Yvain 24 July 2012 07:13:33PM 8 points

This seems much like the Prisoners' Dilemma. Yes, you can avoid it easily if you can talk beforehand and trust everyone to go through with their precommitments. If you can't talk or you don't trust what they say, then it's much harder to avoid. After all, if the first two directors cooperated with the plan by voting no, then the second two directors would have a very high incentive to defect and vote yes.

In practice people are usually able to solve these for much the same reasons they can usually solve prisoners' dilemmas - things like altruism and reputational penalties.
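The incentive structure described above can be illustrated with a generic Prisoner's-Dilemma-style payoff table (the numbers are hypothetical, chosen only so that defecting dominates while mutual cooperation beats mutual defection):

```python
# Hypothetical payoffs as (my payoff, their payoff).
# C = cooperate (vote no, as agreed); D = defect (vote yes).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Whatever the other player does, defecting pays me more...
for theirs in ("C", "D"):
    assert payoffs[("D", theirs)][0] > payoffs[("C", theirs)][0]

# ...yet mutual cooperation still beats mutual defection.
assert payoffs[("C", "C")][0] > payoffs[("D", "D")][0]
print("defection dominates, but (C, C) beats (D, D)")
```

This is why, absent trust or enforceable precommitment, the directors who move last have every incentive to break the agreement.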

Comment author: cypher197 29 July 2012 05:22:09PM 5 points

What occurred to me when I read it is: why is this guy allowed to propose a motion that changes its effect based on how many people voted for or against it? While the company's bylaws likely don't specifically prohibit it, I'm not sure what a lawyer would make of it, and even if it worked, I don't think this sort of meta-motion would remain viable for long. I suspect the other members of the board would either sign a contract with each other (gaining their own certainty of precommitment) or refuse to acknowledge it on the grounds that it isn't serious.
