Comment author: gjm 20 June 2016 02:38:26PM -1 points [-]

Ah, interesting. Do you know how that timeline interacts with the growth of the rationalist community in Berkeley?

Comment author: AlexMennen 20 June 2016 09:56:39PM 2 points [-]

If I remember correctly, the Berkeley rationalist community was largely seeded by members of the Silicon Valley rationalist community moving to Berkeley, which began shortly before MIRI moved, but mostly happened when and after MIRI moved.

Comment author: gjm 19 June 2016 12:47:37AM -1 points [-]

I have no special insider knowledge. My impression, which I will gladly have corrected by those who know more, is that

  • MIRI (formerly SIAI) was founded in Berkeley because that's where Eliezer was.
  • Most of the rationalists in Berkeley are not MIRI employees.
  • Most of the rationalists in Berkeley did not move to Berkeley because of MIRI or Eliezer or other rationalists.

But, again, this is all vague impression, guesswork, and assumption rather than actual knowledge. So let's assume for a moment that I'm entirely wrong and the Berkeley rationalist community is a consequence of MIRI. MIRI was founded about 16 years ago, and I think it's only in the last few years that the Berkeley rationalist community has been a big thing. That would suggest that the "build a rationalist community by starting an institution there" strategy takes 10 years or so to work.

If so, then good places to consider might be places that already have kinda-MIRI-like institutions. Perhaps Oxford (home of the Future of Humanity Institute, and also of Giving What We Can if you're the EA sort of rationalist) and to a lesser extent Cambridge (home of the Centre for the Study of Existential Risk). I think the FHI and the CSER are the nearest non-MIRI things to MIRI.

Comment author: AlexMennen 20 June 2016 12:25:32AM 5 points [-]

Nope. SIAI was founded in Georgia, because that's where Eliezer was, then moved to Silicon Valley soon afterwards, and moved again to Berkeley just a few years ago (around the time it changed its name to MIRI iirc).

Comment author: turchin 13 May 2016 10:55:23PM 0 points [-]

By the way, if OpenAI had been suggested before Musk, it would likely have been regarded as just such a shaky idea.

Comment author: AlexMennen 14 May 2016 12:13:57AM 1 point [-]

Many people do regard OpenAI as a shaky idea.

Comment author: turchin 13 May 2016 11:11:55AM 1 point [-]

I think that too much investment could result in more noise in the field. First of all, it would produce a large number of published materials, which could exceed the capacity of other researchers to read them; as a result, the really interesting works would go unread. It would also attract more people to the field than there are genuinely clever and dedicated people to fill it. If we have 100 trained AI safety researchers, which is an overestimate, and we hire 1000 people, then the real researchers will be diluted. In some fields, like nanotech, overinvestment has even resulted in the expulsion of the original researchers, because they prevented the less educated ones from spending money as they wanted. But the most dangerous thing is the creation of many incompatible theories of friendliness, and even AIs based on them, which would result in AI wars and extinction.

Comment author: AlexMennen 14 May 2016 12:09:13AM 1 point [-]

But the most dangerous thing is the creation of many incompatible theories of friendliness, and even AIs based on them, which would result in AI wars and extinction.

I strongly disagree.

First, there are multiple reasons that the creation of many distinct theories of friendliness would not be dangerous:

  • The first AI to reach superintelligence should be able to establish a monopoly on power, and then we wouldn't have to worry about the others.
  • Even if that didn't happen, a reasonable decision theory should be able to cooperate with other agents that have different reasonable decision theories when it is in both of their interests to do so.
  • Even if we end up with multiple friendly AIs that are not great at cooperation, cooperating with agents that have similar goals (as is implied by all of them being friendly) is a particularly easy problem.
  • Even if we end up with a "friendly AI" that is incapable of establishing a monopoly on power and that will cause a great deal of destruction when a similarly capable but differently designed agent comes into existence, even one with broadly similar goals (I would not call this a successful friendly AI), convincing people not to create such AIs does not get much easier if the people planning to create them have not been thinking about how to make them friendly. So preventing people from developing different theories of friendliness still doesn't help.

But beyond all that, I would also say that not creating many incompatible theories of friendliness is dangerous. If there is only one theory that anyone is working on, it will likely be misguided, and by the time anyone notices, enough time may have been wasted that friendliness will have lost too much ground in the race against general AI.

Comment author: AlexMennen 03 May 2016 06:56:21AM 1 point [-]

Source?

Comment author: AlexMennen 25 April 2016 10:11:31PM 1 point [-]

Trying to post here, since I don't see how to post to https://agentfoundations.org/.

People who haven't been given full membership in the forum can post links to things they have written elsewhere, but cannot make posts on the forum itself.

Is this kind of reasoning covered by already known desiderata for logical uncertainty?

It sounds similar to the Gaifman condition. Say you have a Pi_1 sentence, meaning a sentence of the form "for all x: phi(x)", where phi(x) is computable. If you've checked all values of x up to some large number, and phi(x) has always been true so far, you might think that this means that phi(x) is probably true for all the other values of x too. The Gaifman condition says that the probability that you assign to "for all x: phi(x)" should go to 1 as the range of values of x you've checked goes to infinity.
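
In symbols, the condition as described above could be written like this (my LaTeX paraphrase of this comment's statement, not the exact formulation from the literature):

    \lim_{n \to \infty} P\big( \forall x\, \phi(x) \;\big|\; \phi(0) \wedge \phi(1) \wedge \dots \wedge \phi(n) \big) = 1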

But it turns out that any computable way of handling logical uncertainty that satisfies the Gaifman condition must also give probabilities that go to 0 for some other true sentences (https://intelligence.org/files/Pi1Pi2Problem.pdf). This may sound alarming, but I don't think it is too surprising; after all, the theory of the natural numbers is not computable, so any logical uncertainty engine will be unable to rule out incorrect theories even in the limit.
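
For concreteness, here is a minimal Python sketch of the instance-checking setup (phi here is a stand-in predicate I picked for illustration; the interesting cases are of course sentences whose truth is unknown):

    # Checking finitely many instances of a Pi_1 sentence "for all x: phi(x)".
    # phi is a computable predicate; this particular one happens to be true
    # for every x, but no finite amount of checking can prove that.
    def phi(x):
        # x*x + x is always even (x and x+1 have opposite parity),
        # so x*x + x + 41 is always odd.
        return (x * x + x + 41) % 2 == 1

    def all_instances_hold_up_to(n):
        # The only evidence a computable reasoner can gather directly.
        return all(phi(x) for x in range(n))

    # The Gaifman condition asks that the probability assigned to
    # "for all x: phi(x)" tend to 1 as n grows with no counterexample found.
    for n in (10, 1000, 100000):
        print(n, all_instances_hold_up_to(n))

The impossibility result above says that no computable reasoner can grant this kind of inductive confidence to all true Pi_1 sentences without assigning vanishing probability to some other true sentences.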

Comment author: Lumifer 11 April 2016 05:16:34PM 2 points [-]

That's a funny comment. It does exactly the same thing twice: "Please tell us where we didn't do too well," oh, and "you are COMPLETELY WRONG because we did everything very well."

Comment author: AlexMennen 11 April 2016 10:39:29PM 0 points [-]

In context, it makes a lot of sense for him to do that. He's working for Signal now, so he's presumably interested in how to improve the program, and he was a participant at the same time as Fluttershy, so he formed his own impression of the program as a participant.

Comment author: SquirrelInHell 09 April 2016 08:50:52AM *  3 points [-]

Of course you are right, but it would just be a linear transformation of the whole diagram, so it doesn't change anything in the result. I've built the diagram starting from a square, so I can't change this easily... just imagine the whole thing scaling on the X axis, OK?

Edit: since two people asked for this, I remade the diagram, and now you can put in any values of P(E|H) and P(E|~H).

Comment author: AlexMennen 09 April 2016 05:34:07PM *  0 points [-]

When I drag the dot for P(E|~H), it only changes P(E|~H), but when I drag the dot for P(E|H), it still keeps P(E|H)+P(E|~H) conserved, which is a little weird. I think it would be better if changing either of them did not affect the other.

Comment author: AlexMennen 17 February 2016 04:20:29AM 1 point [-]

Humans are already biased towards thinking that various positive characteristics are correlated with each other. Keeping track of an explicit "goodness" variable would make that even worse. So while I don't see anything wrong with comparing specific characteristics between people or groups of people, I endorse the norm that it is not acceptable to make statements of the form "Person A is better than Person B" or "Group A is better than Group B". "Quality of character" is nowhere near specific enough.

Comment author: AlexMennen 13 January 2016 08:37:34AM 2 points [-]

I suspect that constraining a superintelligence from creating subagents will be much harder than designing AI control methods that leave no incentive to subvert them through creation of subagents.
