Comments

shrink · 12y · 20

If you want to maximize your win, it is a relevant answer.

For the risk estimate per se, I think one needs not so much methods as a better understanding of the topic, which is attained by studying the field of artificial intelligence - in a non-cherry-picked manner - and takes a long time. If you want an easier estimate right now, you could try to estimate just how privileged the hypothesis that there is such a risk is. (There is no method that would let you calculate the gravitational wave from the spin-down and collision of orbiting black holes without spending a lot of time studying GR, applied mathematics, and computer science. Why do you think there's a method you could use to tackle an even harder problem from first principles?)
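
(To make the "privileged hypothesis" point concrete, here is a minimal sketch of my own - the hypothesis counts are purely illustrative, not anything from the comment. If a detailed doom scenario is just one of many mutually exclusive, equally unsupported alternatives, singling it out does not entitle it to more than its share of the prior mass.)

```python
# Toy sketch (illustrative numbers, not from the comment): a hypothesis
# picked out of a large space of mutually exclusive, equally unsupported
# alternatives only gets its share of the prior probability mass.

def unprivileged_prior(n_alternatives: int) -> float:
    """Prior for one hypothesis among n mutually exclusive,
    equally unsupported alternatives."""
    return 1.0 / n_alternatives

for n in (10, 1_000, 1_000_000):
    print(f"one of {n:>9} scenarios -> prior {unprivileged_prior(n):.7f}")
```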

Better yet, stop thinking of it as a risk (we have introduced, for instrumental reasons, a burden of proof on those who say there is no risk when it comes to new drugs etc., and we did so solely because the introduction of random chemicals into a well-evolved system is much more often harmful than beneficial; in general there is no reason to put the burden of proof on those who say there is no wolf, especially not when the people screaming 'wolf' get candy for doing so), and think of it as a prediction of what happens in 100 years. Clearly, you would not listen to philosophers who use ideals for predictions.

shrink · 12y · -10

Rationality and intelligence are not precisely the same thing. You can pick, e.g., those anti-vaccination campaigners who have measured IQs above 120, put them in a room, and call that a very intelligent community that can discuss a variety of topics besides vaccines. Then you will get some less insane people who are interested in the safety of vaccines coming in and getting terribly misinformed, which just is not a good thing. You can do that with almost any belief, especially using the internet, which lets you draw the cases from a pool of a billion or so people.

shrink · 12y · 00

It was definitely important to make animals come, or to make it rain, tens of thousands of years ago. I'm getting the feeling that, as I tell you that your rain-making method doesn't work, you aren't going to give up trying unless I provide you with an airplane, a supply of silver iodide, flight training, a runway, fuel, and so on (and even then the method will only be applicable on some days, while praying for rain is applicable any time).

As for the best guess: if you suddenly need a best guess on a topic because someone told you of something and you couldn't really see a major flaw in vague reasoning - the sort that can arrive at anything via a minor flaw at every step - that's a backdoor other agents will exploit to take your money (those agents will likely also opt to modify their own beliefs somewhat, because, hell, it feels a lot better to be saving mankind than to be scamming people). What is actually important to you is your utility, and the best reasoning here is strategic: do not leave backdoors open.

shrink · 12y · 00

I think you have a somewhat simplistic idea of justice... there is "voluntary manslaughter", there is "gross negligence", and so on. I think SIAI falls under the latter category.

How are they worse than any scientist fighting for a grant based on shaky evidence?

Quantitatively, and by a huge amount. edit: Also, the beliefs that they claim to hold, when held honestly, result in a massive loss of resources - moving to a cheaper country to save money, etc. I dread to imagine what would happen to me if I honestly were this mistaken about AI. Erroneous beliefs damage you.

The lying consists of having two sets of incompatible beliefs that are picked between based on convenience.

edit: To clarify, justice is not about the beliefs held by the person. It is more about the process that the person uses to arrive at the actions (see the whole "reasonable person" stuff). If A wants to kill B, and A edits A's beliefs to be "B is going to kill me", and then acts in self-defense and kills B, then, if the justice system had a log of A's processing, A would go down for premeditated murder - even though at the time of the killing A is honestly acting in self-defense. (Furthermore, barring some gross neurophysiological anomaly, it is a fact of reality that justice can only act based on the inputs and outputs of agents.)

shrink · 12y · -10

You are declaring everything gray here so that, verbally, everything comes out equal.

There are people with no knowledge of physics and no inventions to their name whose first "invention" is a perpetual motion device. You really don't see anything dishonest about holding an unfounded belief that you're this smart? You really see nothing dishonest about accepting money under this premise without doing due diligence, such as trying your hand at something testable, even if you think you're this smart?

There are scientists who are trying very hard to follow processes that are not prone to error, people trying to come up with ways to test their beliefs. Do you really see them all as equal in their level of dishonesty?

There are people who are honestly trying to make a perpetual motion device, who sink their own money into it and never produce anything they can show to investors, because they are honest and don't use hidden wires etc. (The equivalent would have Eliezer moving to a country with a very low cost of living, canceling his cryonics subscription, and so on, to maximize the money available for doing the very important work in question.)

You can talk all day in qualitative terms about how it is the same, state an unimportant difference as the only one, and assert that you "don't see the moral difference", but this "counter-argument" you're making is entirely generic and equally applicable to any form of immoral or criminal conduct. A court wouldn't be the least bit impressed.

Also, I don't go philosophical. I don't care what's going on inside the head unless I'm interested in neurology. I know that the conduct is dishonest, and that the beliefs under which an honest agent would engage in such conduct lack foundation; there isn't some honest error here that resulted in a belief that leads an honest agent to adopt such conduct. Convincing liars don't seem to work by thinking "how could I lie?"; they just adopt the convenient falsehood as a high-priority axiom for talk and a low-priority axiom for walk, so as to resolve contradictions in the most useful way, and that makes it very, very murky what they actually believe.

You can say that it is honest to act on a belief, but that's an old idea; nowadays things are more sophisticated, and it is a get-out-of-jail-free card for almost all liars, who first make up a very convenient, self-serving false belief - with not a trace of honesty in the belief-making process - and then act on it.

shrink · 12y · 20

That's how religions were created, you know - people could not actually answer why lightning thunders, why the sun moves through the sky, etc. So they looked way "beyond" non-faulty reasoning in search of answers now (being impatient), and got answers that were much, much worse than no answers at all. I feel LW is doing precisely the same thing with AIs. Ultimately, when you can't compute the right answer in the given time, you will either have no answer or compute a wrong one.

On the orthogonality thesis: it is the case that you can't answer this question given limited knowledge and time (you've got to know the AI's architecture first), and any reasonable reasoning tells you this, while LW's pseudo-rationality keeps giving you wrong answers (answers no less wrong than anyone else's, including the Mormon church and any other weird religious group). I'm not quite sure what you guys are doing wrong; maybe the focus on biases, and the conflation of biases with stupidity, led to the fallacy that a lack of (known) biases will lead to non-stupidity, i.e. smartness - that if only you aren't biased, you'll have a good answer. It doesn't work like that. It leads to another kind of wrongness.

shrink · 12y · 10

Did they make a living out of those beliefs?

See, what we have here is a belief cluster that makes the belief-generator feel very good (saving the world, the other smart people being less smart, etc.) and pays his bills. That is awfully convenient for a reasoning error. I'm not saying it is entirely impossible to have a serendipitously useful reasoning error, but it doesn't seem likely.

edit: Note, I'm not speaking about some inconsequential honesty in idle thought, or anything likewise philosophical. I'm speaking of not exploiting others for money. There's nothing circular about the notion that an honest person would not talk a friend into paying him upfront to fix the car when he has no discernible objective reason whatsoever to think he could fix it, while a dishonest person would talk the friend into paying. Now, if we were speaking of a very secretive person who doesn't like to talk about himself, there would be some probability of a big list of impressive accomplishments we haven't heard of...

shrink · 12y · 00

Would you take criticism if it is not "positive" and doesn't give you an alternative method to use for talking about the same topic? Faulty reasoning has an unlimited domain of application - you can "reason" about the purpose of the universe, the number of angels that fit on the tip of a pin, what superintelligences would do, etc. In those areas, non-faulty reasoning cannot compete in terms of providing a sort of pleasure from reasoning, or in terms of interesting-sounding "results" that can be obtained with little effort and knowledge.

You can reason about what a particular cognitive architecture can do on a given task in N operations; you can reason about what the best computational process can do in N operations. But that will involve actually using mathematics, and the results will not be useful for unintelligent debates in the way your original statement is useful (I imagine you could use it as a soundbite to reply to someone who believes in absolute morality; I really don't see how it could have any predictive power whatsoever about superintelligence, though).
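
(As an aside, the kind of "actual mathematics" gestured at here can be made concrete with a standard counting argument - a minimal sketch of my own, not something from the thread. Any process that makes N binary comparisons can distinguish at most 2^N outcomes, which bounds what even the best comparison-based process can do in N operations.)

```python
import math

def min_comparisons_to_sort(n: int) -> int:
    """Information-theoretic lower bound: N comparisons distinguish at
    most 2**N orderings, and n items have n! possible orderings, so any
    comparison-based sort needs at least ceil(log2(n!)) comparisons."""
    return math.ceil(math.log2(math.factorial(n)))

def max_items_sortable(ops: int) -> int:
    """Largest n such that even the best comparison-based process could
    sort n items within `ops` comparisons."""
    n = 1
    while min_comparisons_to_sort(n + 1) <= ops:
        n += 1
    return n

print(min_comparisons_to_sort(100))  # 525: no comparison sort beats this
print(max_items_sortable(1000))      # about 167 items within 1000 comparisons
```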
