
Comment author: shminux 31 March 2014 06:04:15PM 6 points [-]

I do not understand the point of the essay http://yudkowsky.net/rational/the-simple-truth/ . The preface says that it "is meant to restore a naive view of truth", but all I see is strawmanning everything Eliezer dislikes. What is that "naive view of truth"?

Comment author: Larks 31 March 2014 11:15:00PM 1 point [-]

The naive version (with added parentheses):

  • ('Snow is white' is true) if and only if (snow is white)
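That biconditional is an instance of Tarski's T-schema. For reference, a minimal LaTeX rendering of the general schema (standard notation, not from the comment above; the corner quotes name the sentence):

    % Tarski's T-schema: for each sentence \varphi of the object language,
    % the claim that \varphi is true holds exactly when \varphi itself holds.
    \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi

(Roughly: calling a sentence true commits you to nothing beyond the sentence itself.)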
Comment author: Nemarus 28 March 2014 01:57:44PM 0 points [-]

I'll be there again, hopefully, unless last-minute homework is assigned.

Comment author: Larks 29 March 2014 03:09:44PM 0 points [-]

Awesome!

Comment author: Larks 25 March 2014 12:33:52AM 0 points [-]

Me + 1, 60%

Comment author: Bayeslisk 24 March 2014 04:05:58AM 0 points [-]

I am unfortunately engaged all that day and thus will be unlikely to be able to show up.

Comment author: Larks 25 March 2014 12:33:15AM 0 points [-]

That is quite alright.

In response to Optimal Exercise
Comment author: Larks 24 March 2014 02:05:28AM 0 points [-]

People often accuse LW of being good at talking about rationality but not very good at acting. So thank you for writing this post; it provided me with the impetus to buy a pull-up bar.

Meetup : Princeton NJ Meetup

1 Larks 23 March 2014 12:22AM

Discussion article for the meetup : Princeton NJ Meetup

WHEN: 29 March 2014 01:00:00PM (-0400)

WHERE: Small World Coffee, 14 Witherspoon St. Princeton, NJ 08540

Come one, come all! Yes, that's right, the third Princeton LW meetup is here, a mere month or two after it was promised. That's four years less procrastination than I did on cryonics!

Here's the plan:

1) Chatting and socialising
2) Discussion of Reason as Memetic Immune Disorder. You don't have to have read the article to attend the meetup, though!
3) Some sort of fun rationality game! We did paranoid debating last time, but I'm open to other games this time.

As ever, everyone is welcome.


Comment author: lukeprog 18 March 2014 04:07:35PM 28 points [-]

Also, I might as well share the approximate text of my short talk from that evening:

Hi everyone,

As most of you know, my name is Luke Muehlhauser, I’m the Executive Director at MIRI, and our mission is to ensure that the creation of smarter-than-human intelligence has a positive impact.

I’m going to talk for about 5 minutes on what we’re doing at MIRI these days, and at the end I’m going to make an announcement that I’m very excited about, and then we can all return to our pizza and beer and conversation.

I’m also going to refer to my notes regularly because I’m terrible at memorizing things.

So here’s what we’re doing at MIRI these days. The first thing is that we’re writing up descriptions of open problems in Friendly AI theory, so that more mathematicians and computer scientists and formal philosophers can be thinking about these issues and coming up with potential solutions and so on.

As a first step, we’re publishing these descriptions to LessWrong.com, and the posts have a nice mix of dense technical prose and equations but also large colorful cartoon drawings of laptops dropping anvils on their heads, which I think is the mark of a sober research article if there ever was one. That work is being led by Eliezer Yudkowsky and Robby Bensinger, both of whom are here tonight.

We’re also planning more research workshops like we did last year, except this year we’ll experiment with several different formats so we can get a better sense of what works and what doesn’t. For example the experiment for our May workshop is that it’s veterans-only — everyone attending it has been to at least one workshop before, so there won’t be as much need to bring people up to speed before diving into the cutting edge of the research.

Later this year we’ll be helping to promote Nick Bostrom’s book on machine superintelligence for Oxford University Press, which when it’s released this summer will be by far the most comprehensive and well-organized analysis of what the problem is and what we can do about it. I was hoping he could improve the book by adding cartoons of laptops dropping anvils on their heads, but unfortunately Oxford University Press might have a problem with that.

One thing I’ve been doing lately is immersing myself in the world of what I call “AI safety engineering.” These are the people who write the AI software that drives trains and flies planes, and who prove that the trains and planes won’t crash into each other if certain conditions are met and so on. I’m basically just trying to figure out what they do and don’t know already, and I’m trying to find the people in the field who are most interested in thinking about long-term AI safety issues, so they can potentially contribute their skill and expertise to longer-term issues like Friendly AI.

So far, my experience is that AI safety engineers have much better intuitions about AI safety than normal AI folk tend to have. Like, I haven’t yet encountered anybody in this field who thinks we’ll get desirable behavior from fully autonomous systems by default. They all understand that it’s extremely difficult to translate intuitively desirable behavior into mathematically precise design requirements. They understand that when high safety standards are required, you’ve got to build the system from the ground up for safety rather than slapping on a safety module near the end. So I’ve been mildly encouraged by these conversations even though almost none of them are thinking about the longer-term issues — at least not yet.

And lastly, I’d like to announce that we’ve now hired two workshop participants from 2013 as full-time Friendly AI researchers at MIRI: Benja Fallenstein and Nate Soares. Neither of them is here today (they’re in the UK and Seattle respectively), but they’ll be joining us shortly and I’m very excited. Some of you who have been following MIRI for a long time can mark this down on your FAI development timeline: March 2014, MIRI starts building its Friendly AI team.

Okay, that’s it! Thanks everyone for coming. Enjoy the pizza and beer.

Comment author: Larks 19 March 2014 11:18:32PM *  18 points [-]

This deserves a top level post, at least in discussion. I assume MIRI just can't afford to hire anyone to make LW posts. As such, I've just made a $2,000 donation, earmarked for just that purpose.*

*Not actually earmarked for silly things.

Comment author: Jiro 07 March 2014 03:34:13PM *  6 points [-]

Well, my first thought was Bertrand Russell being barred from teaching at City College of New York, which was around 1940, although that was mostly because of his beliefs about sex (which are still directly related to his disbelief in religion). Religion classes in public schools were legal until 1948, and compulsory school prayer was legal until 1963. "In God We Trust" was declared the national motto of the US in 1956.

Comment author: Larks 16 March 2014 02:48:32PM -2 points [-]

Bertrand Russell wasn't a materialist; he believed in Universals. I think you are confusing "materialist" with "people I agree with".

Comment author: Larks 23 February 2014 04:02:22PM 1 point [-]

I currently use Mnemosyne 1.2.2, and have a deck of over 2,000 cards, including pictures, HTML, LaTeX, etc. Ideally I'd like to be able to review these on my Android phone, but whenever I've tried this, I've run into sufficient problems that I've given up.

  • I tried upgrading from Mnemosyne 1 to Mnemosyne 2, but a large fraction of the cards were damaged in the conversion (for example, I'd often made double-sided cards in Mnemosyne 1 and then deleted one side, but Mnemosyne 2 was not happy about this).
  • I've had trouble getting Mnemogogo to work, and it seems (but I'm not sure) that Mnemododo assumes you have Mnemosyne 2?
  • Should I move over to anki?

This could save me a huge amount of time, but I have developed a big ugh field around it, and would appreciate it if someone knowledgeable could give me an easy, taskified solution.
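On the Anki question: Anki can import plain tab-separated text, so one route is to dump the Mnemosyne cards into a TSV file first and import that. Below is a minimal, untested sketch of such a converter. It assumes the Mnemosyne 1.x XML export format, with each card as an <item> element containing <Q> and <A> children; verify that against an actual export before relying on it, and note that pictures and LaTeX cards will likely need extra handling.

    # Sketch: convert a Mnemosyne 1.x XML export into a tab-separated file
    # that Anki's import dialog can read. Assumes each card is an <item>
    # element with <Q> (question) and <A> (answer) children; tag names are
    # an assumption about the export format, not verified here.
    import csv
    import xml.etree.ElementTree as ET

    def mnemosyne_xml_to_tsv(xml_path, tsv_path):
        tree = ET.parse(xml_path)
        count = 0
        with open(tsv_path, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out, delimiter="\t")
            for item in tree.getroot().iter("item"):
                q = (item.findtext("Q") or "").strip()
                a = (item.findtext("A") or "").strip()
                if q or a:
                    # One card per line; flatten embedded newlines to HTML breaks.
                    writer.writerow([q.replace("\n", "<br>"),
                                     a.replace("\n", "<br>")])
                    count += 1
        return count

    if __name__ == "__main__":
        n = mnemosyne_xml_to_tsv("mnemosyne_export.xml", "anki_import.tsv")
        print("Wrote", n, "cards")

The file names here are placeholders. Going through plain text deliberately sacrifices scheduling history in exchange for a migration that can't silently corrupt card content the way the 1-to-2 upgrade did.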

In response to comment by Larks on White Lies
Comment author: SaidAchmiz 19 February 2014 03:03:33AM *  -1 points [-]

One difference there is that the charity case would be an instance of illegal fraud. I say this, not by way of arguing that anything illegal is thereby immoral, but only to point out that due to the existence of laws against such fraud, the contributors have a reasonable expectation that their money will go to the advertised cause. Because you, the hypothetical charity organizer, know this, secretly donating to a different cause constitutes wilful deception.

On the other hand, there's no law against taking your parents' money and spending it on anything you like. Your parents have no basis for a reasonable expectation that you won't do this — none, that is, except the natural degree of trust that accompanies (or should accompany) the parent-child relationship.

But if your parents take a stance that (they may reasonably expect) will undermine or destroy that trust in certain circumstances — circumstances that are not the child's fault — then the basis for a reasonable expectation of transparency is likewise undermined or destroyed.

In such a case, you, the parent, no longer have any reasonable expectation that your child will be honest with you. As such, when your child is in fact dishonest with you, there is nothing immoral about that.

In response to comment by SaidAchmiz on White Lies
Comment author: Larks 20 February 2014 02:06:57AM 1 point [-]

Parents who have never noticed any signs of homosexuality in their child, and who are aware of the base rates, would seem to have a reasonable expectation that the child is heterosexual.
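To make that concrete with purely hypothetical numbers (none of which are from the thread): take a base rate P(gay) = 0.05, and suppose parents would have noticed some sign with probability 0.5 for a gay child versus 0.05 for a straight one. Then, as a sketch:

    % Hypothetical numbers only: P(gay) = 0.05,
    % P(no signs | gay) = 0.5, P(no signs | straight) = 0.95.
    P(\text{gay} \mid \text{no signs})
      = \frac{0.05 \cdot 0.5}{0.05 \cdot 0.5 + 0.95 \cdot 0.95}
      \approx 0.027

So under those made-up numbers, observing no signs raises the parents' credence in heterosexuality from the 95% base rate to roughly 97%.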
