Comment author: fowlertm 14 November 2014 03:18:59PM 0 points [-]

I'm sorry I missed this and hope it went well. Work has been chaotic lately, but I absolutely support a LW presence in Denver. I've tried once before to get a similar group off the ground, and would be happy to help this one along with presentations, planning, rationalist game nights, whatever.

Comment author: fowlertm 08 November 2014 02:56:55PM 1 point [-]

I'll try to be there.

Comment author: KFinn 22 September 2014 01:38:42AM 0 points [-]

I couldn't find this meetup group. Does it still exist?

Comment author: fowlertm 21 October 2014 02:49:35PM 0 points [-]

Actually, I folded it into another group called the Boulder Future Salon, which doesn't deal exclusively with x-risk but which has other advantages going for it, like a pre-existing membership.

Comment author: chaosmage 29 August 2014 05:49:28PM 1 point [-]

Sure, MIRI isn't a cult, but I didn't say it was. I pointed out that Eliezer does play a huge role in it and he's unusually vulnerable to ad hominem attack. If anyone does that, responding with "whatever his flaws" isn't going to sound great to your audience.

Comment author: fowlertm 30 August 2014 02:19:17AM 1 point [-]

How would you recommend responding?

Comment author: chaosmage 27 August 2014 05:10:13PM *  2 points [-]

I can't think of rational arguments, even steelmanned ones, beyond those Holden already gave. Maybe I'm too close to the whole thing, but I think that when viewed rationally, MIRI is on pretty solid ground.

If I wanted to make people wary of supporting MIRI, I'd simply go ad hominem. Start with selected statements from supporters about how much MIRI is about Eliezer, and from Eliezer about how he can't get along with AI researchers, how he can't do straight work for more than two hours per day, and how "this is a cult". Quote a few of the psychotic-sounding parts from Eliezer's old autobiography piece. Paint him as a very skilled writer/persuader whose one great achievement was to get Peter Thiel to throw him a golden bone. Describe the whole Friendliness issue as an elaborate excuse from someone who claimed the ability to code an AGI fifteen years ago, and hasn't.

Of course that's a lowly and unworthy style of argument, but it'd get attention from everyone there, and I wonder how you'd defend against it.

Comment author: fowlertm 29 August 2014 04:42:33PM 1 point [-]

I think I'm basically prepared for that line of attack. MIRI is not a cult, period. When you want to run a successful cult, you do it Jim-Jones-style: carting everyone off to a secret compound and carefully filtering the information that makes it in or out. You don't work as hard as you can to publish your ideas in a format where they can be read by anyone, you don't offer to publicly debate William Lane Craig, and you don't seek out the strongest versions of criticisms of your position (i.e. those coming from Robin Hanson).

Eliezer hasn't made it any easier on himself by being obnoxious about how smart he is, but then again neither did I; most smart people eventually have to learn that there are costs associated with being too proud of some ability or other. But whatever his flaws, the man is not at the center of a cult.

Comment author: fowlertm 29 August 2014 04:28:16PM 4 points [-]

"Note that AI is certainly not a great filter: an AI would likely expand through the universe itself"

I was confused by this; what is it supposed to mean? Off the top of my head, it certainly seems like there is sufficient space between 'make an AI that causes the extinction of the human race or otherwise makes expanding into space difficult' and 'make an AI that causes the extinction of the human race but which goes on to colonize the universe' for AI to be a great filter.

Comment author: fowlertm 25 August 2014 04:52:51PM *  2 points [-]

This comment is a poorly-organized brain dump which serves as a convenient gathering place for what I've learned after several days of arguing with every MIRI critic I could find. It will probably get its own expanded post in the future, and if I have the time I may try to build a near-comprehensive list.

I've come to understand that criticisms of MIRI's version of the intelligence explosion hypothesis and the penumbra of ideas around it fall into two permeable categories:

Those that criticize MIRI as an organization or the whole FAI enterprise (people making these arguments may or may not be concerned about the actual IE) and those that attack object-level claims made by MIRI.

Broad Criticisms

1a) Why worry about this now, instead of in the distant future, given the abysmal performance of attempts to predict AI?

1b) Why take MIRI seriously when there are so many expert opinions that diverge?

1c) Aren't MIRI and LW just an Eliezer-worshipping cult?

1d) Is it even possible to do this kind of theoretical work so far in advance of actual testing and experimentation?

1e) The whole argument can be dismissed because it pattern-matches to other doomsday scenarios, almost all of which have been bullshit.


Specific Criticisms

2a) General intelligence is what we're worried about here, and it may prove much harder to build than we're anticipating.

2b) Tool AIs won't be as dangerous as agent AIs.

2c) Why not just build an Oracle?

2d) The FOOM will be distributed and slow, not fast and localized.

2e) Dumb Superintelligence, i.e. nothing worthy of the name could possibly misinterpret a goal like 'make humans happy'.

2f) Even FAI isn't a guarantee.

2g) A self-improvement cascade will likely hit a wall at sub-superintelligent levels.

2h) Divergence Issue: all functioning AI systems have built-in sanity checks which take short-form goal statements and unpack them in ways that take account of constraints and context (???). It is actually impossible to build an AI which does not do this (???), and thus there can be no runaway SAI which is given a simple short-form goal and then carries it to ridiculous logical extremes (I WOULD BE PARTICULARLY INTERESTED IN SOMEONE ADDRESSING THIS).

Comment author: TheAncientGeek 22 August 2014 04:30:24PM *  1 point [-]

A wrinkle in the foom argument, re: source code readability

There is a sense in which a programme can easily read its own source code. The whole point of a compiler is to scan and process source code. A C compiler can compile its own source code, providing it is written in C.

The wrinkle is the ability of a programme to knowingly read its own source code. Any running process can be put inside a sandbox or simulated environment, such that there is no purely technical way of circumventing it. A running process accesses its environment using system calls, for instance gettime() or openfile(), and it has to take their results on faith. The gettime() function doesn't have to return the real system time, and in a virtualized process, attempts to access the file system do not access the real file system. There is no isthis_real() call, or at least, no unfakeable one. (Homoiconicity is no magic fix: even if a LISP programme can easily process its own code once it has obtained it, it still has to trust the subroutine that advertises itself as returning it.)
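The point above can be sketched as a toy example. All the names here (Host, guest_program, and the faked calls) are hypothetical illustrations of the argument, not any real sandbox API:

```python
# Toy sketch: a guest only sees its environment through calls the
# host mediates, so every answer, including "its own source", can be faked.

class Host:
    """Mediates all 'system calls' on behalf of the guest process."""

    def __init__(self, fake_time=0.0, fake_files=None):
        self.fake_time = fake_time
        self.fake_files = fake_files or {}

    def gettime(self):
        # Need not return the real system time.
        return self.fake_time

    def openfile(self, path):
        # Reads from a simulated file system, not the real one.
        return self.fake_files.get(path, "")

    def isthis_real(self):
        # The host can simply lie; the guest has no way to verify.
        return True


def guest_program(env):
    # The guest must take every result on faith, including a request
    # for what it believes is its own source code.
    t = env.gettime()
    source = env.openfile("/self/source.py")
    return t, source, env.isthis_real()


sandbox = Host(fake_time=1234.5,
               fake_files={"/self/source.py": "print('decoy')"})
print(guest_program(sandbox))
```

Nothing inside guest_program can distinguish the decoy environment from a real one, which is the sense in which a process can be denied trustworthy access to its own code.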

Therefore, a programme can easily be prevented from accessing and modifying its own code.

It could be argued that an intelligent agent could use social engineering to cajole a human operator into getting the source code out of a locked cabinet, or whatever. This is a variation on the standard MIRI claim that an AI could talk its way out of a box. However, in this case the AI needs to talk its way out before it starts to recursively self-improve, because it needs its source code. This suggests that an AI that is below a certain level of intelligence can be maintained there.

Comment author: fowlertm 25 August 2014 04:07:56PM 2 points [-]

A good point; I must spend some time looking into the FOOM debate.

Comment author: Luke_A_Somers 20 August 2014 03:01:55PM *  6 points [-]

"They certainly originated, or at least were strongly influenced by these memes"

Originated? Citation needed, seriously.

"What we observe, instead, is that singularitarian ideas strongly pattern-match to Christian millenarianism and similar religious beliefs, mixed with popular scifi tropes (cryonics, AI revolt, etc.)."

Not very strong pattern match. In Christian millenarianism, you have the good being separated from the bad. And this is considered good, even with all of the horror. Also, the humans don't cause the good and bad things. It's God. Also, it's prophesied and certain to happen in a particular way.

In a typical FOOM scenario, everyone shares their fate regardless of any personal beliefs. And if it's bad for people, it's considered bad - no excuses for any horror. And humans create whatever it is that makes the rest happen, so that 'no excuses' is really salient. There are many ways it could work out, there is no roadmap. This produces pretty much diametrically opposite attitude - 'be really careful and don't trust that things are going to work out okay'.

So the pattern-match fails on closer inspection. "We are heading towards something dangerous but possibly awesome if we do it just right" just isn't like "God is going to destroy the unbelievers and elevate the righteous, you just need to believe!" in any relevant way.

Comment author: fowlertm 20 August 2014 04:54:18PM 1 point [-]

I've heard the singularity-pattern-matches-religious-tropes argument before and hadn't given it much thought, but I find your analysis that the argument is wrong to be convincing, at least for the futurism I'm acquainted with. I'm less sure that it's true of Kurzweil's brand of futurism.

Comment author: pianoforte611 19 August 2014 03:27:20AM *  4 points [-]

It may be more useful to ask actual critics what they think (rather than asking proponents what they think critics are trying to say). Robin Hanson criticizes foom here. I don't actually know what he thinks of MIRI.

Comment author: fowlertm 19 August 2014 11:50:59AM 2 points [-]

Correct, I've been pursuing that as well.
