
klkblake comments on AI-related honours projects? - Less Wrong Discussion

3 Post author: klkblake 20 September 2013 03:50AM




Comment author: klkblake 25 September 2013 01:00:25PM 1 point

Sorry for the late reply; my mid-semester break just started, which of course meant I came down with a cold :). I've (re-)read the recent papers, and was rather surprised at how much of the maths I was able to understand. I'm feeling less confident about my mathematical ability after reading them, but that is probably a result of spending a few hours on papers I don't fully understand rather than an accurate assessment of my ability. Concept learning seems to be a good backup option, especially since it sounds like something my supervisor would love (except for the part where it's a form of supervised learning, but that's unlikely to be a problem).

I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?

Do you know which of the open problems MIRI is likely to attack first? I'd like to avoid duplication of effort, though I know with the unpredictability of mathematical insight that's not always feasible.

UPDATE: I just had a meeting with my supervisor, and he was pretty happy with all of the options I presented, so that won't be a problem. An idea I had this morning, which I'm pretty excited about, is potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure.
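For reference, the reflection schema from the probabilistic reflection paper (as I understand it; this restatement is mine, not from the thread) says there is a coherent probability assignment P over sentences such that, for every sentence \(\varphi\) and all rationals \(a < b\):

```latex
a < \mathbb{P}(\varphi) < b
\;\Longrightarrow\;
\mathbb{P}\bigl(\,a < \mathbb{P}(\ulcorner \varphi \urcorner) < b\,\bigr) = 1
```

The use of strict inequalities over an open interval is what lets the construction dodge the Liar-style diagonalization; the question here is whether the same trick survives the halting problem's self-reference.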

Comment author: [deleted] 30 September 2013 06:48:59PM 0 points

I vaguely remember EY mentioning something about there needing to be research into better operating systems and/or better programming languages (in terms of reliability/security/correctness), but this may have been a while ago. I have quite a bit of interest in this area, and some experience as well. Is this something that you think would be valuable (and if so, how valuable compared to work on the main open problems)?

Three areas I would look into are distributed capability-based security systems (example: Amoeba), formally verified kernels (example: seL4), and formal verification of user programs (example: Singularity OS). Programming language research isn't really needed: Haskell is the language I would choose, for its practical and theoretical advantages, but there are other options too. Where the work is needed is in integration: making the GHC compiler output proofs (Haskell is well suited to this, but to my knowledge there is no complete framework for doing so), making the operating system / distributed-environment runtime verify them, and, most importantly of all, choosing which invariants to enforce.
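As a small illustration of why Haskell is well suited to this kind of work, here is a minimal sketch (my own example, not anything from an existing framework) of an invariant enforced statically by the type system: a list type indexed by emptiness, so that taking the head of an empty list is rejected at compile time rather than checked at runtime.

```haskell
{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Track emptiness at the type level.
data Emptiness = Empty | NonEmpty

-- A list whose type records whether it is empty.
data List (e :: Emptiness) a where
  Nil  :: List 'Empty a
  Cons :: a -> List e a -> List 'NonEmpty a

-- Total function: the type guarantees a non-empty argument,
-- so no runtime check (and no possible crash) is needed.
safeHead :: List 'NonEmpty a -> a
safeHead (Cons x _) = x

main :: IO ()
main = print (safeHead (Cons (1 :: Int) Nil))  -- prints 1
```

Calling `safeHead Nil` is a type error, not a runtime exception; choosing which invariants to push into types like this is exactly the "choosing what invariants to enforce" problem.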

Comment author: lukeprog 29 September 2013 05:34:22PM 0 points

better operating systems and/or better programming languages

I doubt this is worth pushing on now. If it's useful, it'll be useful when we're closer to doing engineering rather than philosophy and math.

Do you know which of the open problems MIRI is likely to attack first?

In the immediate future we'll keep tackling problems addressed in our past workshops. Other than that, I'm not sure which problems we'll tackle next. We'll have to wait and see what comes of Eliezer's other "open problems" write-ups, and which ideas workshop participants bring to the November and December workshops.

potentially applying the method from the probabilistic reflection paper to the Halting Problem, since it seems to share the same self-referential structure

The participants of our April workshop checked this, and after some time decided they could probably break Tarski, Gödel, and Löb with probabilistic reflection, but not the Halting Problem, despite the similarities in structure. You could ask (e.g.) Qiaochu if you want to know more.