
Comment author: lukeprog 22 March 2015 10:05:42PM *  22 points [-]

Working on MIRI's current technical agenda mostly requires a background in computer science with an unusually strong focus on logic: see details here. That said, the scope of MIRI's research program should be expanding over time. E.g. see Patrick's recent proposal to model goal stability challenges in a machine learning system, which would require more typical AI knowledge than has usually been the case for MIRI's work so far.

MIRI's research isn't really what a mathematician would typically think of as "math research" — it's more like theory-heavy computer science research with an unusually significant math/logic component, as is the case with a few other areas of computer science research, e.g. program analysis.

Also see the "Our recommended path for becoming a MIRI research fellow" section on our research fellow job posting.

Comment author: richard_reitz 19 March 2015 10:31:18PM *  2 points [-]

"Baby Rudin" refers to "Principles of Mathematical Analysis", not "Real and Complex Analysis" (as was currently listed up top.) (Source)

Comment author: lukeprog 19 March 2015 11:08:36PM 1 point [-]

Fixed, thanks!

Comment author: lukeprog 18 March 2015 11:37:39PM 8 points [-]

I tried this earlier, with Great Explanations.

Comment author: quinox 15 March 2015 10:52:47AM 8 points [-]

I can't mail that address, I get a failure message from Google:

We're writing to let you know that the group you tried to contact (errata) may not exist, or you may not have permission to post messages to the group.

I'll post my feedback here:

Hello,

I got the book "Rationality: From AI to Zombies" via intelligence.org/e-junkie for my Kindle (5th gen, not the paperwhite/touch/fire). So far I've read a dozen pages, but since it will take me a while to get to the end of the book I'll give some feedback right away:

  • The book looks great! Some other ebooks I have don't use page-breaks at the end of a chapter, don't have a Table of Contents, have inconsistent font types/sizes, etc. The PDF version is very pretty as well.

  • The filename "Rationality.mobi" (AI to Zombies) differs from "rationality.mobi" (HPMOR) only in capitalization, so the two files collide.

  • A bunch of inter-book links such as "The Twelve Virtues of Rationality"/"Predictably Wrong"/"Fake Beliefs"/"Noticing Confusion" (all from Biases: An introduction) don't work: On my Kindle I have the option to "Follow link", but when I choose it the page refreshes and I'm still at the same spot.

    Inspecting the .mobi source with the Calibre e-book reader, I see:

    <a href="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX">Noticing Confusion</a>

    The links from the TOCs and some other chapters do work properly.

  • Due to the lack of quotation marks and italics, I didn't realize the part "As a consequence, it might be necessary" was a quote (Biases: An introduction). The extra margins on the left and right do indicate something special, but after experience with so many bad ebooks my brain assumed it was just a broken indentation level.

  • The difference between a link going to the web and one going to a location within the book isn't obvious: one is only a slightly darker grey than the other. In Calibre the links are a nice green/blue, but my Kindle doesn't have colours.

Cheers,

Comment author: lukeprog 15 March 2015 06:27:14PM 5 points [-]

I can't mail that address, I get a failure message from Google

Oops. Should be fixed now.

Comment author: Nanashi 13 March 2015 06:09:41PM 1 point [-]

Re: 0%, that's fair. Originally I included 0% because certain questions are unanswerable (due to being blank, contextless, or whatnot), but even then there's still a non-zero possibility of guessing the right answer out of a near-infinite number of choices.

Re: Calibration across multiple sessions. Good idea. I'll start with a locally stored solution because that would be easiest, and then eventually do an account-based thing.

Re: Blank questions. Yeah, I should probably include some kind of check to see if the question is blank and skip it if so.
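
Something like this minimal sketch is what I have in mind for both fixes, assuming a browser-based app (all names here are illustrative, not the app's actual code):

```typescript
// Hypothetical sketch: skip blank trivia questions, and persist
// calibration results locally via the browser's localStorage.

interface Question {
  category: string;
  text: string;
  answer: string;
}

interface CalibrationRecord {
  credence: number; // e.g. 0.6 for "60%"
  correct: boolean;
}

function isBlank(q: Question): boolean {
  return q.text.trim().length === 0;
}

// Pull questions from the source until we get a non-blank one.
function nextUsableQuestion(source: () => Question): Question {
  let q = source();
  while (isBlank(q)) {
    q = source();
  }
  return q;
}

const STORAGE_KEY = "calibrationHistory";

// Append one result to the locally stored history.
function saveRecord(record: CalibrationRecord): void {
  const history: CalibrationRecord[] =
    JSON.parse(localStorage.getItem(STORAGE_KEY) ?? "[]");
  history.push(record);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(history));
}
```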

Comment author: lukeprog 13 March 2015 09:57:50PM 2 points [-]

Thanks! BTW, I'd prefer to have 1% and 0.1% and 99% and 99.9% as options, rather than skipping over the 1% and 99% options as you have it now.
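
Concretely, the option list I'm imagining would look something like this (just a sketch; the intermediate steps are a guess on my part):

```typescript
// Hypothetical credence options: bounded away from 0% and 100%,
// with extra resolution at the extremes (0.1% and 99.9%).
const credenceOptions: number[] = [
  0.1, 1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99, 99.9,
];
```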

Comment author: owencb 13 March 2015 07:56:56PM 2 points [-]

I think it's misleading to just drop in the statement that 0 and 1 are not probabilities.

There is a reasonable and arguably better definition of probability which excludes them, but it's not the standard one, and it also has costs -- for example, probabilities are a useful tool in building models, and it is sometimes useful to assign probabilities of 0 and 1 within those models.
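
(For concreteness: the usual motivation for excluding them is the log-odds representation, under which 0 and 1 correspond to infinities -- a standard identity, not anything specific to the post:)

```latex
\mathrm{logodds}(p) = \log\frac{p}{1-p}, \qquad
\lim_{p \to 0^+} \mathrm{logodds}(p) = -\infty, \qquad
\lim_{p \to 1^-} \mathrm{logodds}(p) = +\infty
```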

(aside: it works as a kind of 'clickbait' in the original article title, and Eliezer doesn't actually make such a controversial statement in the post, so I'm not complaining about that)

Comment author: lukeprog 13 March 2015 09:55:55PM 1 point [-]

Fair enough. I've edited my original comment.

(For posterity: the text for my original comment's first hyperlink originally read "0 and 1 are not probabilities".)

Comment author: alexvermeer 13 March 2015 04:40:42PM 5 points [-]

Approximately 600,000 words!

Comment author: lukeprog 13 March 2015 09:48:28PM 9 points [-]

Which is roughly the length of War and Peace or Atlas Shrugged.

Comment author: RowanE 13 March 2015 05:37:15PM 11 points [-]

I think the problem here is that with many trivia questions you either know the answer or you don't. The dominant factor in my results so far is that either I have no answer in mind, assign 0 probability to being right, and am correctly calibrated there, or I do have an answer, and all of my answers at other levels of certainty have turned out right so far, so my calibration curve looks almost rectangular.

I might just be getting accurate information that I'm drastically underconfident, but I think this might be one of the worse types of questions to calibrate on. Even if the problem is just that I'm drastically underconfident on trivia questions and shouldn't be assigning less than 50% probability to any answer I actually have in mind, that seems sufficiently unrepresentative of most areas where you need calibration, and of how most people perform on other calibration tests, that this is a pretty bad measure of calibration.

Perhaps it would be better as a multiple choice test, so one can have possible answers raised to attention that may or may not be right, and assign probabilities to those?
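
A sketch of what I mean, assuming credences over the offered options plus a catch-all must sum to 1 (all names are illustrative):

```typescript
// Hypothetical multiple-choice calibration item: the user distributes
// credence across the offered answers plus a catch-all "none of these".
interface MultipleChoiceItem {
  question: string;
  options: string[];   // candidate answers raised to attention
  credences: number[]; // one credence per option, catch-all last
}

// Valid iff there is one credence per option plus the catch-all,
// all credences are non-negative, and they sum to 1.
function isValidAssignment(item: MultipleChoiceItem): boolean {
  if (item.credences.length !== item.options.length + 1) return false;
  const total = item.credences.reduce((sum, p) => sum + p, 0);
  return Math.abs(total - 1) < 1e-9 && item.credences.every((p) => p >= 0);
}
```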

Comment author: lukeprog 13 March 2015 06:03:23PM 6 points [-]

0% probability is my most common answer as well, but I'm using it less often than I was choosing 50% on the CFAR calibration app (which forces a binary answer choice rather than an open-ended answer choice). The CFAR app has lots of questions like "Which of these two teams won the Super Bowl in 1978?" where I just have no idea. The trivia database Nanashi is using has, for me, a greater proportion of questions on which my credence is something more interesting than an ignorance prior.

Comment author: RowanE 13 March 2015 05:44:46PM -1 points [-]

It's possible to be, to some extent, certain that you haven't thought of a correct answer (if not certain you don't know the answer), because you don't have any answer in mind and yet are not considering the answer "this is a trick question" or "there is no correct answer". Is this something that should be represented, making "0%" correct to include, or am I confused?

I got one blank question, which I think was an error with loading since the answer came up the same as the previous question, and the one after it took a couple seconds to appear on-screen.

Comment author: lukeprog 13 March 2015 05:59:17PM *  0 points [-]

I'd prefer not to allow 0 and 1 as available credences. But if 0 remained as an option I would just interpret it as "very close to 0" and then keep using the app, though if a future version of the app showed me my Bayes score then the difference between what the app allows me to choose (0%) and what I'm interpreting 0 to mean ("very close to 0") could matter.
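
To make that concrete: under the logarithmic scoring rule (one common choice of Bayes score; just a sketch, not necessarily what the app would use), a credence of exactly 0 on an answer that turns out true scores negative infinity, while "very close to 0" scores a large but finite penalty:

```typescript
// Logarithmic score for a binary claim: log(p) if true, log(1 - p) if false.
function logScore(credence: number, outcome: boolean): number {
  return Math.log(outcome ? credence : 1 - credence);
}

console.log(logScore(0.001, true)); // ≈ -6.91: harsh but finite
console.log(logScore(0, true));     // -Infinity: unrecoverable under this rule
```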

Comment author: lukeprog 13 March 2015 05:39:19PM *  8 points [-]

Awesome!

I've been dying for something like this after I zoomed through all the questions in the CFAR calibration app.

Notes so far:
* The highest-available confidence is 99%, so the lowest-available confidence should be 1% rather than 0%. Or even better, you could add 99.9% and 0.1% as additional options.
* So far I've come across one question that was blank. It just said Category: jewelry and then had no other text. Somehow the answer was Ernest Hemingway.
* Would be great to be able to sign up for an account so I could track my calibration across multiple sessions.
