Comment author: Julia_Galef 12 December 2014 11:07:50PM *  17 points [-]

Perhaps this is silly of me, but the single word in the article that made me indignantly exclaim "What!?" was when he called CFAR "overhygienic."

I mean... you can call us nerdy, weird in some ways, obsessed with productivity, with some justification! But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

[Edit: The author has clarified above that "overhygienic" was meant to refer to epistemic hygiene, not literal hygiene.]

Comment author: devi 12 December 2014 11:29:47PM 1 point [-]

But how can you take issue with our insistence that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

This is not something that would have crossed my mind if I were organizing such a retreat. Making sure that people who handled food washed their hands with soap, yes, but not hand sanitizer. Perhaps this is a cultural difference between (parts of) the US and Europe.

In response to comment by devi on MIRI Research Guide
Comment author: So8res 08 November 2014 02:45:22AM 6 points [-]

The recommended order for the papers seems really useful.

Thanks! :-D Let me know if you want any tips/advice if and when you start on another read-through.

The old course list mentioned many more courses ... Is this change mainly due to the different aims of the guides, or does it reflect an opinion in MIRI that those areas are not more likely to be useful than what a potential researcher would have studied otherwise?

Mostly different aims of the guides. I think Louie's criterion was "subjects that seem useful or somewhat relevant to FAI research," and was developed before MIRI pivoted towards examining the technical questions.

My criterion is "prerequisites that are directly necessary to learning and understanding our active technical research," which is a narrower target.

(esp. there is no AI book mentioned)

This is representative of the difference: it's quite nice to know what modern AI can do, but that doesn't have much relevance to the current open technical FAI problems, which are geared more towards putting foundations under fields where it seems possible to get heuristics that are "good enough to run but not good enough to be safe." Knowing how MDPs work is useful, but it isn't really necessary for understanding our active research.
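(For readers who haven't met MDPs: a minimal value-iteration sketch on a made-up two-state Markov decision process. The states, actions, transition probabilities, and rewards below are invented for illustration; nothing here comes from MIRI's materials.)

```python
# transitions[s][a] = list of (probability, next_state, reward)
transitions = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(eps=1e-9):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            # Best expected discounted return over available actions.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V
```

Here state 1's "stay" action pays 2 forever, so its value converges to 2 / (1 - 0.9) = 20.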

I also notice that within the subfields of Logic, Model Theory seems to be replaced by Type Theory.

Not really. That particular model theory textbook was rather brutal; you only need the first two or so chapters to understand our Tiling Agents research, and it's much easier to pick up that material from an "intro to logic" textbook. The "model theory" section is still quite important, though!

if you're interested in Type Theory in the foundational sense the Homotopy Type Theory book is probably more exciting

It may be more exciting, but the HoTT book has a bad habit of sending people down the homotopy rabbit hole. People with CS backgrounds will probably find it easier to pick up other type theories. (In fact, Church's "simple type theory" paper may be enough instead of an entire textbook... maybe I'll update the suggestions.)

But yeah, HoTT certainly is pretty exciting these days, and the HoTT book is a fine substitute for the one in the guide :-)

In response to comment by So8res on MIRI Research Guide
Comment author: devi 09 November 2014 09:36:31PM 4 points [-]

It may be more exciting, but the HoTT book has a bad habit of sending people down the homotopy rabbit hole. People with CS backgrounds will probably find it easier to pick up other type theories. (In fact, Church's "simple type theory" paper may be enough instead of an entire textbook... maybe I'll update the suggestions.)

Yeah, it could quite easily sidetrack people. But simple type theory simply wouldn't do for foundations, since you can't do much mathematics without quantifiers (or, in the case of type theory, dependent types). Further, IMHO, the univalence axiom is the biggest selling point of type theory as a foundation. Perhaps a reading guide to the relevant bits of the HoTT book would be useful for people?
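(To make the quantifier point concrete, here's a small illustration in Lean, my own example rather than anything from the thread: in dependent type theory, the quantifiers are themselves dependent types, which simple type theory has no way to express.)

```lean
-- ∀ is a dependent function (Π) type and ∃ is a dependent pair (Σ)
-- type, so a quantified statement is just a type whose terms are its
-- proofs. Simple type theory cannot let a type depend on a value
-- like the `n` below.
def doubleHasHalf : ∀ n : Nat, ∃ m : Nat, 2 * m = 2 * n :=
  fun n => ⟨n, rfl⟩  -- witness m := n; the equation holds by reflexivity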

In response to MIRI Research Guide
Comment author: devi 08 November 2014 01:15:10AM 7 points [-]

The recommended order for the papers seems really useful. I was a bit lost about where to start last time I tried reading a chunk of MIRI's research.

The old course list mentioned many more courses, in particular ones more towards Computer Science rather than Mathematics (esp. there is no AI book mentioned). Is this change mainly due to the different aims of the guides, or does it reflect an opinion in MIRI that those areas are not more likely to be useful than what a potential researcher would have studied otherwise?

I also notice that within the subfields of Logic, Model Theory seems to be replaced by Type Theory. Is this reprioritization due to changed beliefs about which is useful for FAI, or due to differences in mathematical taste between you and Louie?

Also, if you're interested in Type Theory in the foundational sense the Homotopy Type Theory book is probably more exciting since that project explicitly has this ambition.

Comment author: KatjaGrace 16 September 2014 01:19:55AM 2 points [-]

Common sense and natural language understanding are suspected to be 'AI complete'. (p14) (Recall that 'AI complete' means 'basically equivalent to solving the whole problem of making a human-level AI')

Do you think they are? Why?

Comment author: devi 16 September 2014 03:23:38AM 5 points [-]

I think AI-completeness is quite a seductive notion. Borrowing the concept of reduction from complexity/computability theory makes it sound technical, but unlike in those fields, I haven't seen anyone actually describe, e.g., how to use an AI with perfect language understanding to produce one that proves theorems or philosophizes.

Spontaneously, it feels like everyone here should in principle be able to sketch the outlines of such a program (at least starting from a base AI with perfect language comprehension), probably by some version of teaching the AI in natural language the way we teach a child. I suspect that the details of some of these reductions might still be useful, especially the parts that don't quite seem to work. For while I don't think we'll see perfect machine translation before AGI, I'm much less convinced that there is a reduction from AGI to a perfect translation AI.

This illustrates what I suspect is an interesting difference between two problem classes that we might both want to call AI-complete: the problems human programmers will likely not be able to solve before we create superintelligence, and the problems whose solutions we could (somewhat) easily repurpose to solve the general problem of human-level AI. The classes look the same in that we shouldn't expect to see problems from either of them solved without an imminent singularity, but they differ in that problems in the latter class could serve as motivating examples and test cases for AI work aimed at producing superintelligence.

I guess the core of what I'm trying to say is that arguments about AI-completeness have so far sounded like: "This problem is very, very hard; we don't really know how to solve it. AI in general is also very, very hard, and we don't know how to solve it. So they must be the same." Heuristically there's nothing wrong with this, except that we should keep in mind we could be quite mistaken about what is actually hard. I'm just missing the part that goes: "This is very, very hard. But if we could do it, this other thing would be really easy."

Comment author: ciphergoth 15 September 2014 05:35:25PM 4 points [-]

I want a deep understanding of elliptic curve cryptography. This led me to study algebraic geometry, which led me to study category theory. I think I'm ready to go back to algebraic geometry now.
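(As a toy companion to this study plan: the elliptic-curve group law over a small prime field, which is where the algebraic geometry starts paying off for cryptography. The curve y² = x³ + 2x + 2 over F₁₇ and the point (5, 1) are textbook illustration parameters of my choosing, not anything from this thread.)

```python
p = 17
a, b = 2, 2  # curve: y^2 = x^3 + 2x + 2 over F_17

def inv(x):
    return pow(x, p - 2, p)  # modular inverse via Fermat's little theorem

def add(P, Q):
    """Add two curve points; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * inv(2 * y1) % p  # tangent slope (doubling)
    else:
        s = (y2 - y1) * inv(x2 - x1) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

G = (5, 1)  # on the curve: 5^3 + 2*5 + 2 = 137 ≡ 1 = 1^2 (mod 17)
```

Repeatedly adding G walks through the whole group here, which is the structure that the discrete-log hardness assumption in ECC lives on.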

Comment author: devi 16 September 2014 12:14:28AM 2 points [-]

This sounds like an interesting project. I've studied quite a bit of category theory myself, though mostly from the "oh, pretty!" point of view, and dipped my toes into algebraic geometry because it sounded cool. I think reading algebraic geometry with my sights set on cryptography would be more rewarding than the general swimming around in its sea that I've done before. So if you want a reading buddy, do tell. A fair warning, though: I'm quite time-limited these coming months, so I won't be able to keep a particularly rapid pace.
