Comment author: wedrifid 14 April 2012 08:10:01AM 3 points [-]

For every one of those people you can have one, or ten, or a hundred, or a thousand, that dismissed your cause. Don't go down this road for confirmation, that's how self reinforcing cults are made.

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

Comment author: Dmytry 14 April 2012 08:26:45AM *  -3 points [-]

I didn't go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn't 'cultish', it's just practical.

I think you grossly overestimate how much of an emotional agenda disagreement with counterfactual people can produce.

edit: botched the link.

Comment author: Dmytry 14 April 2012 08:13:59AM *  0 points [-]

Something I forgot to mention, which strikes a particularly wrong chord: the assignment of zero moral value to the AI's experiences. Future humans, who may share very few moral values with me, are given nonzero moral weight. AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can't trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I am reflective in a very different way? (Someone has suggested this as a possibility.)

Comment author: John_Maxwell_IV 14 April 2012 07:48:09AM 0 points [-]

It's not irrational, it's just weak evidence.

I'm not sure exactly what you're asking with the second paragraph. In any case, I don't think the Singularity Institute is dogmatically in favor of friendliness; they've collaborated with Nick Bostrom on thinking about Oracle AI.

Comment author: Dmytry 14 April 2012 07:54:21AM *  1 point [-]

It's not irrational, it's just weak evidence.

Why is it necessarily weak? I have found it very instrumentally useful to discount the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights, but there is much less pollution from privileged hypotheses built on wrong priors. I am a computationally bounded agent; I can't process everything.

Comment author: CarlShulman 14 April 2012 01:51:51AM *  5 points [-]

This exact attitude is rare. Much more common is the "let the AIs do their own thing, even if it eats humanity for breakfast, rather than shackling them to human-derived values" attitude, at least among AI folk (David Dalrymple, in the recent comment thread here, is one of many examples).

Also, he isn't saying humanity will be overlooked, but that it will be cheaply taken care of: as specimens or zoo/nature-reserve animals, possibly as ransom, or as a way to get in good with more powerful protectors of humanity (aliens or simulators). Or that AIs that don't care about us will be successfully constrained.

Comment author: Dmytry 14 April 2012 07:32:25AM *  0 points [-]

This is another example of a method of thinking I dislike: thinking by very loaded analogies, with an implicit framing in terms of a zero-sum problem. We are stuck on a mud ball with severe resource competition, so we are strongly biased to see everything as a zero- or negative-sum game by default. One could easily imagine a scenario where we expand more slowly than the AI, so that our demands always stay below its charity, which is set at a constant percentage. Someone else winning doesn't imply you are losing.

Comment author: Wei_Dai 14 April 2012 12:13:37AM 16 points [-]

For example, if I point out that the AI has good reasons not to kill us all, since it cannot determine whether it is in the top-level world, a simulation, or an engineering test sim, it is immediately conjectured that we will still 'lose' something because it will take up some resources in space.

To back up Carl's claim, see Outline of possible Singularity scenarios (that are not completely disastrous) and further links in the comments there. You know, I keep hoping that you'd update your evaluation of this community and especially your estimate of how much we've already thought about these things, but maybe it's time for me to update...

Comment author: Dmytry 14 April 2012 07:14:14AM *  -4 points [-]

It's just that I don't believe you folks really are this greedy for the sake of mankind, or assume such linear utility functions. If we could simply provide food, shelter, and reasonable protection of human beings from other human beings, for everyone, a decade earlier, that in my book outweighs all the difference between immense riches and even more immense riches some time later. (Edit: if that difference ever materializes; it may be that at any moment in time we are still ahead.)

On top of that, if you fear WBEs self-improving: don't we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? You have some perfect oracle in your model of the AI, and it concludes that this is OK; but I do not have a model of a perfect oracle in an AI, and it is abundantly clear that an AI of any power can't predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there is the immense amount of computational resources the FAI would consume doing this). Once again, the typically selective avenue of thought: you don't apply each argument to both FAI and AI to make a valid comparison. I do know that you have already thought a lot about this issue (but I don't think you thought straight; this is not formal mathematics, where inferences do not diverge from sense with the number of steps taken, but fuzzy verbal reasoning, where they unavoidably do). You jump right onto the interpretation of what I think that is most favourable to you.

Comment author: CarlShulman 13 April 2012 11:37:47PM *  19 points [-]

Yeah, no one is being hired to code AGI at SIAI right now. Software developers are for the "Center for Modern Rationality"/LessWrong side, as I understand it, e.g. creating little programs to illustrate Bayes' rule and the like.

Eliezer wants an FAI team to undertake many years of theoretical CS and AI research before trying to code an AGI, and that research group has not even been assembled and is not currently in operation. Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Comment author: Dmytry 14 April 2012 07:05:40AM *  3 points [-]

Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer's individual biases.

Not if there is self-selection for coincidence of their biases with Eliezer's. Even worse if the reasoning you outlined is employed to lower risk estimates.

Comment author: John_Maxwell_IV 14 April 2012 05:02:13AM *  2 points [-]

The 'random mind design space' is probably the worst offender.

My understanding was that this wasn't an attempt to rigorously formulate the idea of a randomly chosen mind, just to suggest the possibility of a huge number of possible reasoning architectures that don't share human goals.

There isn't a solid consequentialist reason to think that FAI effort decreases chance of doomsday as opposed to absence of FAI effort. It may increase the chances as easily as decrease.

This is one of those points you really should have left out... If you have something to say on this topic, say it; we all want to hear it (or at least I do). Of course it's not obvious that FAI effort will certainly be helpful, but empirically, people trying to do things seems to make it more likely that they get done.

It appears to me that the will to form the most accurate beliefs about the real world, and to implement solutions in the real world, is orthogonal to problem-solving ability itself.

Have you heard of g, the general intelligence factor?

Intelligent people tend to be impractical because of bugs in human brains that we shouldn't expect to appear in other reasoning architectures.

Of course general intelligence is a complicated, multifaceted thing, but that doesn't mean it can't be used to improve itself. Humans are terrible at improving ourselves because we don't have access to our own source code. What if that changed?

The foom scenario is especially odd in light of the above. Why would an optimizing compiler that can optimize its own ability to optimize suddenly develop a will? It could foom all right, but it wouldn't reach out and start modifying itself from outside; and if it did, it would wirehead rather than add more hardware, and it would be incredibly difficult to prevent it from doing so.

You seem awfully confident. If you have a rigorous argument, could you share it?

You might wish to read someone who disagrees with you:

http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf

(And why I am posting this: looking at the donations received by SIAI, and having seen talk of hiring software developers, I got Pascal-wagered into explaining it.)

Thanks a lot; one of the big problems with fringe beliefs is that folks rarely take the time to counterargue since there isn't much in it for them.

Even better is if such criticism can take the form of actually forming a consistent contrasting point of view supported by rigorous arguments, without making drama out of the issue, but I'll take what I can get.

If you care about the future of mankind, and you believe in AI risks, and a software developer becomes less worried about AI risk after an encounter with you, then clearly you are doing something wrong.

That seems wrong to me. When I hear that some group espouses some belief, I give them a certain amount of credit by default. If I hear their arguments and find them less persuasive than I expected, my confidence in their position goes down.

I have certainly seen some of what I would consider privileging of the hypothesis being done by AGI safety advocates. However, groupthink is not all-or-nothing; better to extract the best of the beliefs of others than to throw them out wholesale.

Some of your arguments are very weak in this post (e.g. the lone genius point basically amounts to ad hominem) and you seem to refer frequently to points that you've made in the past without linking to them. Do you think you could assemble a directory of links to your thoughts on this topic ordered from most important/persuasive to least?

Comment author: Dmytry 14 April 2012 06:53:43AM *  2 points [-]

e.g. the lone genius point basically amounts to ad hominem

But why is it irrational, exactly?

but empirically, people trying to do things seems to make it more likely that they get done.

As long as they don't lean on this heuristic too hard when choosing which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is the higher-risk case with a chance at a slightly better payoff. Are you sure the latter is what has to be chosen?

Comment author: fubarobfusco 14 April 2012 03:33:41AM 1 point [-]

The unknown-origin beliefs that I have been exposed to previously trace back to bad ideas, and as I update, those unknown-origin ideas get lower weight.

Could you expand on this?

Comment author: Dmytry 14 April 2012 06:49:14AM *  0 points [-]

The hyper-foom is the worst. The cherry-picked filtering of what to advertise is also pretty bad.

Comment author: CarlShulman 13 April 2012 11:32:34PM 15 points [-]

It's clear that Eliezer has been the driving force behind SIAI existing as an organization. He founded it, his writings have been its most visible and influential face, he wants to organize an FAI team of which he would be a member, and so forth. The Singularity Summit is basically independent of him, and the Visiting Fellows and rationality camp events that have occurred so far proceeded mostly without him, but "driving force" remains very fair.

However, a number of SIAI folk like Michael Vassar and myself were independently interested in AI risk (and benefit) before coming in contact with Eliezer or his work, and would have likely continued to pursue other paths to affect this area sans EY. I think that this is important for the underlying question about independence of beliefs, and the two were bundled together.

Also, Nick Bostrom's work has been influential for a number of people, especially his papers on the ethics of astronomical waste and superintelligence.

Comment author: Dmytry 14 April 2012 06:47:23AM *  -6 points [-]

EY founded it. Everyone else self-selected into joining (as you yourself explained), and they represent extreme outliers as far as I can tell.

Comment author: wedrifid 14 April 2012 04:55:21AM *  7 points [-]

Now, do not think of it in terms of fixing the good idea's argument, please. Treat it as evidence that the idea is, actually, bad, and process it so as to form a better idea, which may or may not coincide with the original. You can't currently know whether your idea is in fact good or not; rather than fixing it, you should form a new idea. To do anything else is not rationality but rationalization: it is to become even more wrong by privileging the hypothesis even further, and to make an even worse impression on the engineers you are trying to convince.

You seem to be grossly overvaluing the weight we should place on your personal testimony as a "Software Developer". It is most certainly not 'irrational' to decline to abandon an idea simply because you say so, very frequently and very assertively. About half the people here are software developers, and many more are mathematicians as well. I've also seen the intellectual work some of them output, which is what you declared we should evaluate people on, and it is orders of magnitude more impressive than what we have seen from you.

It does not require gross failures of rationality to decline to update drastically and abandon ideas based on one anecdote from a software developer with little knowledge of this field issuing ultimatums. This is, indeed, "evidence that the idea is, actually, bad", but it is overwhelmingly weak evidence, and it would be a mistake to treat it as more.

Comment author: Dmytry 14 April 2012 06:39:54AM *  0 points [-]

For every one of those people you can have one, or ten, or a hundred, or a thousand, that dismissed your cause. Don't go down this road for confirmation, that's how self reinforcing cults are made.
