Comment author: Mark_Friedenbach 05 July 2015 05:08:26PM 2 points

I saw this news and came back just to say congrats Kaj! I'm looking forward to reading about your thesis work.

Comment author: dxu 28 May 2015 03:16:40PM * 0 points

That doesn't address the question of why magic exists (not to mention it falls afoul of Occam's Razor). You seem to be answering a completely different question.

Comment author: Mark_Friedenbach 28 May 2015 03:42:17PM 1 point

The question in the story and in this thread was "why purposeful complexity?" not "why magic?"

Comment author: dxu 28 May 2015 01:27:22AM * 0 points

His argument here really is exactly the same as an intelligent designer: "magic is too complicated and arbitrary to be the result of some physical process."

He actually does kind of address that, by pointing out that there are only two known processes that produce purposeful complexity:

There were only two known causes of purposeful complexity. Natural selection, which produced things like butterflies. And intelligent engineering, which produced things like cars.

Magic didn't seem like something that had self-replicated into existence. Spells were purposefully complicated, but not, like a butterfly, complicated for the purpose of making copies of themselves. Spells were complicated for the purpose of serving their user, like a car.

Some intelligent engineer, then, had created the Source of Magic, and told it to pay attention to a particular DNA marker.

So, yeah, I strongly disagree that the two arguments are "exactly the same". That's the sort of thing you say for emphasis rather than because it's true.

Comment author: Mark_Friedenbach 28 May 2015 05:29:32AM * 0 points

I stand by my claim that they are the same.

A proponent of intelligent design says "I have exhausted every possible hypothesis; there must be a god creator behind it all," when in fact there was at least one perfectly plausible hypothesis (natural selection) which he failed to consider thoroughly.

Harry says essentially "I have exhausted every possible hypothesis--natural selection and intelligent design--and there must be an Atlantean engineer behind it all," when in fact there were other perfectly plausible hypotheses, such as the explanation I gave: the coordinated belief of a quorum of wizardkind.

Comment author: halcyon 27 May 2015 12:50:43PM 0 points

The problem with your point regarding chimpanzees is that it holds only if the chimpanzee is unable to construct a provably friendly human. That is true for chimpanzees because they are unable to construct humans at all, friendly or unfriendly, but I don't think it has been established that present-day humans are unable to construct a provably friendly superintelligence.

Comment author: Mark_Friedenbach 27 May 2015 03:24:50PM 1 point

That's wholly irrelevant. The important question is this: which can be constructed faster, a provably-safe-by-design friendly AGI, or a fail-safe tool AI that is not proven friendly? Lives hang in the balance: roughly 100,000 a day.

(There's an aside about whether an all-powerful "friendly" AI outcome is even desirable--I don't think it is. But that's a separate issue.)

Comment author: John_Maxwell_IV 27 May 2015 06:55:27AM * 2 points

Within the physics community (I am a trained physicist), Einstein's story is retold more often as a cautionary tale than a model to emulate.

...huh? Correct me if I'm wrong here, but Einstein was a great physicist who made lots of great discoveries, right?

The right cautionary tale would cite physicists who attempted to follow the same strategy Einstein did, and show that it mostly worked only for Einstein. But if Einstein was indeed a great physicist, then at worst his strategy is one that doesn't usually produce results but sometimes produces spectacular ones... which doesn't seem like a terrible strategy.

I have a very strong (empirical!) heuristic that the first thing people should do if they're trying to be good at something is copy winners. Yes, there are issues like regression to the mean, but it provides a good alternative perspective to thinking things through from first principles (which seems to be my default cognitive strategy).

Comment author: Mark_Friedenbach 27 May 2015 03:18:40PM 1 point

The thing is, Einstein was popular, but his batting average was lower than his peers'. In terms of advancing the state of the art, the 20th century is full of theoretical physicists with better track records than Einstein, most of whom did not spend the majority of their careers chasing rabbits down holes. They may not be common household names, but honestly Einstein's fame might have more to do with the hair than the physics.

Comment author: Kaj_Sotala 24 May 2015 11:18:20PM * 2 points

I agree with Jacob that it is and should be concerning

That depends on whether you believe that machine intelligence researchers are the people who are currently the most likely to produce valuable progress on the relevant research questions.

One can reasonably disagree on MIRI's current choices about their research program, but I certainly don't think that their choices are concerning in the sense of suggesting irrationality on their part. (Rather the choices only suggest differing empirical beliefs which are arguable, but still well within the range of non-insane beliefs.)

Comment author: Mark_Friedenbach 26 May 2015 06:23:12PM 3 points

On the contrary, my core thesis is that AI risk advocates are being irrational. It's implied in the title of the post ;)

Specifically, I think they are arriving at their beliefs via philosophical arguments about the nature of intelligence which are severely lacking in empirical data, and then further shooting themselves in the foot by rationalizing reasons not to pursue empirical tests. Adopting a belief without evidence, and then refusing to test that belief empirically--I'm willing to call a spade a spade: that is most certainly irrational.

Comment author: RobbBB 25 May 2015 03:05:56PM * 5 points

Thought experiments aren't a replacement for real empiricism. They're a prerequisite for real empiricism.

"Intuitive mojo" is just calling a methodology you don't understand a mean name. However Einstein repeatedly hit success in his lifetime, presupposing that it is an ineffable mystery or a grand coincidence won't tell us much.

Why not derive probability theory in terms of confirmation?

I already understand probability theory and why it's important. I don't understand what you mean by "confirmation," how your earlier statement can be made sense of in quantitative terms, or why this notion should be treated as important here. So I'm asking you to explain the less clear term in terms of the clearer one.

Comment author: Mark_Friedenbach 26 May 2015 06:06:29PM * 2 points

Einstein repeatedly hit success in his lifetime

Actually, he did not. He got lucky early in his career and pretty much coasted on that into irrelevance. His intuition allowed him to solve problems related to relativity, the photoelectric effect, Brownian motion, and a few other significant topics within the span of a decade, early in his career. And then he went off the deep end, following his intuition down a number of dead-end rabbit holes for the rest of his life. He died in Princeton in 1955, having made no further significant contributions to physics after his 1916 invention of general relativity. Within the physics community (I am a trained physicist), Einstein's story is retold more often as a cautionary tale than a model to emulate.

Comment author: ESRogs 26 May 2015 02:04:46PM 0 points

Would you be surprised if they funded MIRI?

Comment author: Mark_Friedenbach 26 May 2015 05:39:15PM * 2 points

Depends on what you mean. Kaj Sotala, a research associate at MIRI, has a proposal he submitted to FLI that I think really deserves to be funded (it's the context for the modeling concept formation posts he has done recently). I think it has a good chance, and I would be very disappointed if it weren't funded. I'm not sure whether you would count that as MIRI getting funded, since the organization itself is, I think, not technically on the proposal.

If you mean MIRI getting funded to do the sorts of things MIRI has been prominently pushing for in its workshops lately -- basic mathematical research on incomputable models of intelligence, Löbian obstacles, decision theory, etc. -- then I would be very, very disappointed, and if it were a significant chunk of the FLI budget I would have to retract my endorsement. I would be very surprised, but I don't consider it an impossibility. I actually think it quite possible that MIRI could get selected for a research grant on something related to, but slightly different from, what they would have been working on anyway (I have no idea whether they have submitted any proposals). I do think it would be unlikely and surprising if FLI funds were simply directed into the MIRI technical research agenda.

To clarify, my understanding is that FLI was founded largely in response to Nick Bostrom's Superintelligence drumming up concern over AI risk, so there is a shared philosophical underpinning between MIRI and FLI--based on the same arguments I object to in the OP! But if Musk et al. believed the MIRI technical agenda was the correct approach to the issue, and that MIRI itself was capable of handling the research, then they would simply have given their funds to MIRI rather than creating their own organization. There is a real difference between how these two organizations are approaching the issue, and I expect to see that reflected in the grant selections.

FLI is doing the right thing but for the wrong reasons. What you do matters more than why you did it, so as long as FLI continues to do the right thing, they'll have my endorsement.

Comment author: ESRogs 25 May 2015 02:36:51AM 0 points

Future of Life Institute is the only existential-risk AI organization which is actually doing meaningful evidence-based research into artificial intelligence.

Has FLI done any research to date? My impression is that they're just disbursing Musk's grant (which for the record I think is fantastic).

It sounds like they're trying to disburse the grant broadly and to encourage a variety of different types of research including looking into more short term AI safety problems. This approach seems like it has the potential to engage more of the existing computer science and AI community, and to connect concerns about AI risk to current practice.

Is that what you like about it?

Comment author: Mark_Friedenbach 25 May 2015 04:10:26AM 1 point

Yes. I'm inferring a bit about what they are willing to fund due to the request for proposals that they have put out, and statements that have been made by Musk and others. Hopefully there won't be any surprises when the selected grants are announced.

Comment author: CellBioGuy 24 May 2015 07:39:28PM * 2 points

100% of my charitable donations are going to SENS. Why they do not get more play in the effective altruism community is beyond me.

Probably because they're unlikely to lead to anything special over and above general biology research.

Comment author: Mark_Friedenbach 24 May 2015 10:44:57PM 0 points

That's very cynical. What makes you say that?
