Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

AALWA: Ask any LessWronger anything

24 Post author: Will_Newsome 12 January 2014 02:18AM

If you want people to ask you stuff, reply to this post with a comment to that effect.

More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.

If you want to talk about this post you can reply to my comment below that says "Discussion of this post goes here.", or not.

Comments (597)

Comment author: James_Miller 12 January 2014 07:00:11AM 15 points [-]

Ask me anything. I'm the author of Singularity Rising.

Comment author: somervta 12 January 2014 07:56:19AM 8 points [-]

What, if anything, do you think a lesswrong regular who's read the sequences and all/most of MIRI's non-technical publications will get out of your book?

Comment author: James_Miller 12 January 2014 06:14:49PM 6 points [-]

Along with the views of EY (which such readers would already know) I present the singularity views of Robin Hanson and Ray Kurzweil, and discuss the intelligence-enhancing potential of brain training, smart drugs, and eugenics. My thesis is that there are so many possible paths to super-human intelligence, and such incredible military and economic benefits to developing it, that unless we destroy our high-tech civilization we will almost certainly develop it.

Comment author: Anatoly_Vorobey 12 January 2014 10:17:12AM 6 points [-]

How much time did it take you to write the singularity book? How much money has it brought you?

Same question about your microeconomics textbook. Also, what motivated you to write it given that there must be about 2^512 existing ones on the market?

Comment author: James_Miller 12 January 2014 06:25:27PM 10 points [-]

Hard to say about the time, because I worked on both books while also doing other projects. I suspect I could have done the Singularity book in about 1.5 years of full-time effort. I don't have a good estimate for the textbook. Alas, I have lost money on the Singularity book because the advance wasn't all that big and I had personal expenses such as hiring a research assistant and paying a publicist. The textbook had a decent advance, but I still probably earned roughly minimum wage for it. Surprisingly, I've done fairly well with my first book, Game Theory at Work, in part because of translation rights; with it I've probably earned several times the minimum wage. Of course, part of my job as a professor is to write, and I'm not counting the portion of my salary that covers this.

I wanted to write a free market microeconomics textbook, and there are very few of these. I was recruited to write the textbook by the people who published Game Theory at Work. Had the textbook done very well, I could have made a huge amount of money (roughly equal to my salary as a professor) indefinitely. Alas, this didn't happen but the odds of it happening were well under 50%. Since teaching microeconomics is a big part of my job as a college professor, there was a large overlap between writing the textbook and becoming a better teacher. My textbook publisher sent all of my chapters to other teachers of microeconomics to get their feedback, and so I basically got a vast amount of feedback from experts on how I teach microeconomics.

Comment author: AlexMennen 13 January 2014 02:10:56AM 3 points [-]

Why did you decide to run for Massachusetts State Senate in 2004? Did you ever think you had a chance of winning?

Comment author: James_Miller 13 January 2014 03:01:05AM 5 points [-]

No. I ran as a Republican in one of the most Democratic districts in Massachusetts, my opponent was the second most powerful person in the Massachusetts State Senate, and even Republicans in my district had a high opinion of him.

Comment author: AlexMennen 13 January 2014 03:18:45AM 3 points [-]

Why did you run?

Comment author: James_Miller 13 January 2014 03:22:20AM 11 points [-]

I wanted to get more involved in local Republican politics and no one was running in the district and it was suggested that I run. It turned out to be a good decision as I had a lot of fun debating my opponent and going to political events. Since winning wasn't an option, it was even mostly stress free.

Comment author: VAuroch 13 January 2014 08:08:24AM 2 points [-]

I have a political question/proposition I have been pondering, and you, an intelligent, semi-involved Massachusetts Republican, are precisely the kind of person who could answer it usefully. May I ask it of you in a private message?

Comment author: James_Miller 13 January 2014 03:52:54PM 3 points [-]

Yes

Comment author: Will_Newsome 12 January 2014 02:18:22AM 12 points [-]

Discussion of this post goes here.

Comment author: ephion 12 January 2014 03:54:39PM 11 points [-]

I think this is a really cool post idea. LW has a well-above-average user base, and sharing knowledge and ideas publicly can be a great boon to the community as a whole.

Comment author: David_Gerard 12 January 2014 11:21:32PM 1 point [-]

Yes, this is a really nice open thread that seems to be working well.

Comment author: Wei_Dai 15 March 2014 03:30:08AM *  11 points [-]

I've been getting an increasing number of interview requests from reporters and book writers (stemming from my connection with Bitcoin). In the interest of being lazy, instead of doing more private interviews I figure I'd create an entry here and let them ask questions publicly, so I can avoid having to answer redundant questions. I'm also open to answering any other questions of LW interest here.

In preparation for this AMA, I've updated my script for retrieving and sorting all comments and posts of a given LW user, to also allow filtering by keyword or regex. So you can go to http://www.ibiblio.org/weidai/lesswrong_user.php, enter my username "Wei_Dai", then (when the page finishes loading) enter "bitcoin" in the "filter by" box to see all of my comments/posts that mention Bitcoin.
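The keyword/regex filtering step the script performs can be sketched in a few lines of Python. (The comment data, field names, and function name here are hypothetical stand-ins for whatever the PHP script actually retrieves; this only illustrates the filtering idea.)

```python
import re

def filter_comments(comments, pattern):
    """Return the comments whose body matches a case-insensitive regex."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [c for c in comments if rx.search(c["body"])]

# Hypothetical sample data standing in for a user's scraped comments.
comments = [
    {"date": "2014-03-15", "body": "Thoughts on Bitcoin and b-money."},
    {"date": "2011-06-02", "body": "A post about decision theory."},
]
matches = filter_comments(comments, r"bitcoin")
print([c["date"] for c in matches])  # ['2014-03-15']
```

Because the filter is a regex rather than a plain substring, a query like `bitcoin|b-money` would match either term in one pass.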

Comment author: Wei_Dai 16 March 2014 06:14:00AM 4 points [-]

I received a PM from someone at a Portuguese newspaper who I think meant to post it publicly, so I'll respond publicly here.

You have contacted Satoshi Nakamoto. Does it seem to you to be one person or a group of developers?

I think Satoshi is probably one person.

Does bitcoin seem cyberpunk project to you? In that case, can one expect they ever disclose identity?

Not sure what the first part of the question means. I don't expect Satoshi to voluntarily reveal his identity in the near future, but maybe he will do so eventually?

In that case, the libertarian motivation wouldn't be a risk to anyone who invest in the community? Like one this gets all formal and legal, it blow?

Don't understand this one either.

Is it important to know right now its origins? The author of the blog LikeInAMirror, who says the most probable name is Nick Szabo, argues there is a concern about risk: if Szabo/a cypherpunk is the source, there is no risk, but it may be a bubble - a pump-and-dump scheme to enrich its original miners - or a project from the federal government to track underground transactions. What is your view on this?

I'm pretty sure it's not a pump-and-dump scheme, or a government project.

Do you also think Szabo is the most probable name?

No I don't think it's Szabo or anyone else whose name is known to me. I explained why I don't think it's Szabo to a reporter from London's Sunday Times who wrote about it in the March 2 issue. I'll try to find and quote the relevant section.

How long ago did you start working on your ideas of cryptocurrency? Have you used other pseudonyms online? Are you Szabo?

I worked on it from roughly 1995 to 1998. I've used pseudonyms only on rare (probably less than 10) occasions. I'm not Szabo but coincidentally we attended the same university and had the same major and graduated within a couple years of each other. Theoretically we could have seen each other on campus but I don't think we ever spoke in real life.

In your opinion, why has Bitcoin succeeded?

To be honest I didn't initially expect Bitcoin to make as much impact as it has, and I'm still at a bit of a loss to explain why it has succeeded to the extent that it has. In my experience lots of promising ideas especially in the field of cryptography never get anywhere in practice. But anyway, it's probably a combination of many things. Satoshi's knowledge and skill. His choice of an essentially fixed monetary base which ensures early adopters large windfalls if Bitcoin were to become popular, and which appeals to people who distrust flexible government monetary policies. Timing of the introduction to coincide with the economic crisis. Earlier discussions of related ideas which allowed his ideas to be more readily accepted. The availability of hardware and software infrastructure for him to build upon. Probably other factors that I'm neglecting.

(Actually I'd be interested to know if anyone else has written a better explanation of Bitcoin's success. Can anyone reading this comment point me to such an explanation?)

Finally, what do you see as the future? Wall Street has announced it will start accepting applications for bitcoin and other digital-currency exchanges. How do you see this milestone?

Don't have much to say on these. Others have probably thought much more about these questions over the past months and years and are more qualified than I am to answer.

Comment author: gwern 16 March 2014 06:08:19PM *  3 points [-]

No I don't think it's Szabo or anyone else whose name is known to me. I explained why I don't think it's Szabo to a reporter from London's Sunday Times who wrote about it in the March 2 issue. I'll try to find and quote the relevant section.

I had the article jailbroken recently, and the relevant parts (I hope I got them right; my version has scrambled-up text) are:

Nonetheless, the original bitcoin white paper is written in an academic style, with an index of sources at the end. I go to Wei Dai, an original cypherpunk, the proposer of a late-1990s e-currency called b-money and an early correspondent of Satoshi. When, in the first of several late-night chats, I ask him how many people would have the necessary competencies to create something like bitcoin, he tells me:

"Coming up with bitcoin required someone who, a) thought about money on a deep level, and b) learnt the tools of cryptography, c) had the idea that something like Bitcoin is possible, d) was motivated enough to develop the idea into something practical, e) was technically skilled enough to make it secure, f) had enough social skills to build and grow a community around it. The number of people who even had a), b) and c) was really small -- ie, just Nick Szabo and me -- so I'd say not many people could have done all these things."

A sudden frisson. Szabo, an American computer scientist who has also served as a law professor at George Washington University, developed a system for "bit gold" between 1998 and 2005, which has been seen as a precursor to Bitcoin. Is he saying that Szabo is Satoshi? "No, I'm pretty sure it's not him." You, then? "No. When I said just Nick and me, I meant before Satoshi." So where could this person have come from? "Well, when I came up with b-money I was still in college, or just recently graduated, and Nick was at a similar age when he came up with bit gold, so I think Satoshi could be someone like that." "Someone young, with the energy for that kind of commitment?" "Yeah, someone with energy and time, and who isn't obligated to publish papers under their real name."

...I go back to Szabo's pal, Wei Dai. "Wei," I say, "the other night you said you were sure Nick Szabo wasn't Satoshi. What made you sure?" "Two reasons," he replies. "One: in Satoshi's early emails to me he was apparently unaware of Nick Szabo's ideas and talks about how bitcoin 'expands on your ideas into a complete working system' and 'it achieves nearly all the goals you set out to solve in your b-money paper'. I can't see why, if Nick was Satoshi, he would say things like that to me in private. And, two: Nick isn't known for being a C++ programmer."

Perversely, a point in Szabo's favour. But Wei forwards me the relevant emails, and it's true: Satoshi had been ignorant of Szabo's bit-gold plan until Wei mentioned it. Furthermore, a trawl through Szabo's work finds him blogging and fielding questions about bit gold on his Unenumerated blog on December 27, 2008, while Satoshi was preparing bitcoin to meet the world a week later. Why? Because Szabo didn't know about bitcoin: almost no one outside the Cryptography Mailing List did, and I can find no evidence of him ever having been there. Indeed, by 2011, the bit-gold inventor is blogging in defence of bitcoin, pointing out several improvements on the system he devised.

I actually meant to email you about this earlier, but is there any chance you could post those emails (you've made them half-public as it is, and Dustin Trammell posted his a while back) or elaborate on Nick not knowing C++?

I've been trying to defend Szabo against the accusations of being Satoshi*, but to be honest, his general secrecy has made it very hard for me to rule him out or come up with a solid defense. If, however, he doesn't even know C or C++, then that massively damages the claims he's Satoshi. (Oh, one could work around it by saying he worked with someone else who did know C/C++, but that's pretty strained and not many people seriously think Satoshi was a group.)

* on Reddit, HN, and places like http://blog.sethroberts.net/2014/03/11/nick-szabo-is-satoshi-nakamoto-the-inventor-of-bitcoin/ or https://likeinamirror.wordpress.com/2013/12/01/satoshi-nakamoto-is-probably-nick-szabo/ (my response) / http://likeinamirror.wordpress.com/2014/03/11/occams-razor-who-is-most-likely-to-be-satoshi-nakamoto/

Comment author: Wei_Dai 17 March 2014 12:02:40AM *  2 points [-]

I actually meant to email you about this earlier, but is there any chance you could post those emails (you've made them half-public as it is, and Dustin Trammell posted his a while back)

Sure, I have no objection to making them public myself, and I don't see anything in them that Satoshi might want to keep private, so I'll forward them to you to post on your website. (I'm too lazy to convert the emails into HTML myself.)

elaborate on Nick not knowing C++?

Sorry, you misunderstood when I said "Nick isn't known for being a C++ programmer". I didn't mean that he doesn't know C++. Given that he was a computer science major, he almost certainly does know C++ or can easily learn it. What I meant is that he is not known to have programmed much in C or C++, or known to have done any kind of programming that might have kept one's programming skills sharp enough to have implemented Bitcoin (and to do it securely to boot). If he was Satoshi I would have expected to see some evidence of his past programming efforts.

But the more important reason for me thinking Nick isn't Satoshi is the parts of Satoshi's emails to me that are quoted in the Sunday Times. Nick considers his ideas to be at least an independent invention from b-money so why would Satoshi say "expands on your ideas into a complete working system" to me, and cite b-money but not Bit Gold in his paper, if Satoshi was Nick? An additional reason that I haven't mentioned previously is that Satoshi's writings just don't read like Nick's to me.

Comment author: gwern 01 April 2014 01:26:30AM 2 points [-]

so I'll forward them to you to post on your website.

Done: http://www.gwern.net/docs/2008-nakamoto

(Sorry for the delay; a black market was trying to blackmail me and I didn't want my writeup to go live, so I was delaying everything.)

Comment author: gwern 18 March 2014 01:15:38AM 2 points [-]

so I'll forward them to you to post on your website.

Thanks.

I didn't mean that he doesn't know C++. Given that he was a computer science major, he almost certainly does know C++ or can easily learn it. What I meant is that he is not known to have programmed much in C or C++, or known to have done any kind of programming that might have kept one's programming skills sharp enough to have implemented Bitcoin (and to do it securely to boot). If he was Satoshi I would have expected to see some evidence of his past programming efforts.

I see. Unfortunately, this damages my defense: I can no longer suggest that Szabo doesn't even know C/C++, but have to grant that he does. Your point about sharpness is well-taken, but the argument from silence here is very weak, since Szabo hasn't posted any code ever aside from a JavaScript library, so we have no idea whether he has been keeping up with his C or not.

why would Satoshi say "expands on your ideas into a complete working system" to me, and cite b-money but not Bit Gold in his paper, if Satoshi was Nick?

Good question. I wonder if anyone ever asked Satoshi about what he thought of Bit Gold?

An additional reason that I haven't mentioned previously is that Satoshi's writings just don't read like Nick's to me.

I've seen people say the opposite! This is why I put little stock in people claiming Satoshi and $FAVORITE_CANDIDATE sound alike (especially given they're probably in the throes of confirmation bias and would read in the similarity if at all possible). Hopefully someone competent at stylometrics will at some point do an analysis.
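Pending a competent analysis, the crudest version of such a surface-feature comparison can be sketched in a few lines. (The marker list and text samples below are invented; this is a toy illustration of the kind of signal under discussion, not real stylometrics, which would rely on function-word frequencies and proper statistical models.)

```python
import re

# Invented marker list; a serious analysis would use hundreds of features.
BRITISH_MARKERS = {"colour", "favour", "analyse", "organise", "bloody", "flat", "mobile"}

def marker_rate(text):
    """Fraction of tokens that appear in the British-marker set."""
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for t in tokens if t in BRITISH_MARKERS)
    return hits / max(len(tokens), 1)

sample_a = "I lost my mobile in the flat, bloody annoying."
sample_b = "I lost my cell phone in the apartment, really annoying."
print(marker_rate(sample_a) > marker_rate(sample_b))  # True
```

As the thread itself illustrates, such markers are weak evidence: an American who reads British media can easily score "British" on a list like this.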

Comment author: frizzers 21 March 2014 09:12:23AM 1 point [-]

I've been working hard on this in my book. (Nearly there by the way). I posted this on Like In A Mirror but put it here as well in case it doesn't get approved.

Yes, the writing styles of Szabo and Satoshi are the same.

Apart from the British spelling.

And the different punctuation habits.

And the use of British expressions like mobile phone and flat and bloody.

And Szabo’s much longer sentences.

And the fact that Szabo doesn’t make the same spelling mistakes that Satoshi does.

Ooh and the fact that Szabo’s writing has a lot more humour to it than Satoshi’s.

Szabo is one of the few people that has the breadth, depth and specificity of knowledge to achieve what Satoshi has, agreed. He is the right age, has the right background and was in the right place at the right time. He ticks a lot of the right boxes.

But confirmation bias is a dangerous thing. It blinkers.

And you need to think about the dangers your posts are creating in the life of a reclusive academic.

Satoshi is first and foremost a coder, not a writer. Szabo is a writer first and coder second. To draw any serious conclusions you need to find some examples of Szabo’s c++ coding.

You also need to find some proof a Szabo’s hacking (or anti-hacking) experience. Satoshi has rather a lot of this.

And you need to consider the possibility that Satoshi learnt his English on both sides of the Atlantic. And that English was not his first language. I don’t think it was.

Comment author: gwern 21 March 2014 07:03:30PM 1 point [-]

Yes, the writing styles of Szabo and Satoshi are the same. Apart from the British spelling. And the different punctuation habits. And the use of British expressions like mobile phone and flat and bloody. And Szabo’s much longer sentences. And the fact that Szabo doesn’t make the same spelling mistakes that Satoshi does. Ooh and the fact that Szabo’s writing has a lot more humour to it than Satoshi’s.

Szabo has extensively studied British history for his legal and monetary theories (it's hard to miss this if you've read his essays), so I do not regard the Britishisms as a point against Szabo. It's perfectly easy to pick up Britishisms if you watch BBC programs or read The Economist or Financial Times (I do all three and as it happens, I use 'bloody' all the time in colloquial speech - a check of my IRC logs shows me using it 72 times, and at least once in my more formal writings on gwern.net, and 'mobile phone' pops up 3 or 4 times in my chat logs; yet I have spent perhaps 3 days in the UK in my life). And Satoshi is a very narrow, special-purpose pseudonymic identity which has one and only one purpose: to promote and work on Bitcoin - Bitcoin is not a very humorous subject, nor does it really lend itself to long essays (or long sentences). And I'm not sure how you could make any confident claims about spelling mistakes without having done any stylometrics, given that both Szabo and Satoshi write well and you would expect spelling mistakes to be rare by definition.

Comment author: frizzers 22 March 2014 07:46:07AM 1 point [-]

Points noted, all well made. Mine was a heated rebuttal to the Like In A Mirror post.

I could only find one spelling mistake in all Satoshi's work, and a few punctuation quibbles. It's a word that is commonly spelt wrong - but that Szabo spells right. I don't want to share it here because I'm keeping it for the book.

Comment author: Jayson_Virissimo 16 March 2014 06:01:35AM 4 points [-]

Since the birth and early growth of Bitcoin, how has your view on the prospects for crypto-anarchy changed (if at all)? Why?

Comment author: Wei_Dai 17 March 2014 12:19:31AM *  4 points [-]

Since the birth and early growth of Bitcoin, how has your view on the prospects for crypto-anarchy changed (if at all)? Why?

My views haven't changed very much, since the main surprise of Bitcoin to me is that people find such a system useful for reasons other than crypto-anarchy. Crypto-anarchy still depends on the economics of online security favoring the defense over the offense, but as I mentioned in Work on Security Instead of Friendliness? that still seems to be true only in limited domains and false overall.

Comment author: frizzers 15 March 2014 10:09:48AM 4 points [-]

Good morning Wei,

Thank you for doing this. It seems like an excellent solution.

My name's Dominic Frisby. I'm an author from the UK, currently working on a book on Bitcoin (http://unbound.co.uk/books/bitcoin).

Here are some questions I'd like to ask.

  1. What steps, if any, did you take to coding up your b-money idea? If none, or very few, why did you go no further with it?

  2. You had some early correspondence with Satoshi. What do you think his motivation behind Bitcoin was? Was it, simply, the challenge of making something work that nobody had made work before? Was it the potential riches? Was it altruistic or political, maybe - did he want to change the world?

  3. In what ways do you think Bitcoin might change the world?

  4. How much of a bubble do you think it is?

  5. I sometimes wonder if Bitcoin was invented not so much to become the global reserve digital cash currency, but to prove to others that the technology can work. It was more a gateway than a final destination – do you have a view here?

That's more than enough to be going on with.

With kind regards

Dominic

Comment author: Wei_Dai 15 March 2014 08:34:19PM *  3 points [-]

1 - I didn't take any steps to code up b-money. Part of it was because b-money wasn't a complete practical design yet, but I didn't continue to work on the design because I had actually grown somewhat disillusioned with cryptoanarchy by the time I finished writing up b-money, and I didn't foresee that a system like it, once implemented, could attract so much attention and use beyond a small group of hardcore cypherpunks.

2 - It's hard for me to tell, but I'd guess that it was probably a mixture of technical challenge and wanting to change the world.

3 and 4 - Don't have much to say on these. Others have probably thought much more about these questions over the past months and years and are more qualified than I am to answer.

5 - I haven't seen any indication of this. What makes you suspect it?

Comment author: Wei_Dai 18 March 2014 06:17:57PM *  3 points [-]

I received this question via email earlier. Might as well answer it here as well.

In bmoney you say the PoW must have no other value. Why is that? Why wouldn't it be a good idea if it were also somehow made valuable like if perhaps protein folding could be made to fit the other required criteria?

In b-money the money creation rate is not fixed, but instead there are mechanisms that give people incentives to create the right amount of money to ensure price stability or maximize economic growth. I specified the PoW to have no other value in order to not give people an extra incentive to create money (beyond what the mechanism provides). But with Bitcoin this doesn't apply since the money creation rate is fixed. I haven't thought about this much though, so I can't say that it won't cause some other problem with Bitcoin that I'm not seeing.

Comment author: Wei_Dai 18 March 2014 08:39:29PM 2 points [-]

I received another question from this same interlocutor:

Also, I understand you haven't read the original bitcoind code but do you have any guess for why the author chose to lift your SHA256 implementation from Crypto++ when the project already required openssl-0.9.8h? Is there anything odd about the OpenSSL implementation that wouldn't be immediately obvious to someone who isn't a crypto expert?

Hmm, I’m not sure. I thought it might have been the optimizations I put into my SHA256 implementation in March 2009 (due to discussions on the NIST mailing list for standardizing SHA-3, about how fast SHA-2 really is), which made it the fastest available at the time, but it looks like Bitcoin 0.1 was already released prior to that (in Jan 2009) and therefore had my old code. Maybe someone could test if the old code was still faster than OpenSSL?
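Anyone curious could run a comparable measurement today. A minimal throughput benchmark might look like the sketch below; note that Python's hashlib wraps OpenSSL, so this only illustrates the benchmarking method, not the Crypto++-vs-OpenSSL comparison itself, and the function name is ours.

```python
import hashlib
import time

def sha256_throughput(total_mib=16, chunk_size=1 << 20):
    """Hash total_mib MiB of zero bytes and return throughput in MiB/s."""
    chunk = b"\x00" * chunk_size
    start = time.perf_counter()
    h = hashlib.sha256()
    for _ in range(total_mib):
        h.update(chunk)
    h.digest()
    elapsed = time.perf_counter() - start
    return total_mib / elapsed

print(f"SHA-256 throughput: {sha256_throughput():.0f} MiB/s")
```

A fair comparison would link both libraries' implementations into the same harness and hash identical buffers, since Python call overhead is negligible only when the chunks are large.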

Comment author: frizzers 16 March 2014 11:00:02AM 3 points [-]

What do you make of the decision to use C++?

Do you have any opinions of the original coding beyond the 'inelegant but amazingly resilient' meme? Was there anything that stood out about it?

Comment author: Wei_Dai 17 March 2014 12:57:42AM 1 point [-]

What do you make of the decision to use C++?

It seems like a pretty standard choice for anyone wanting to build such a piece of software...

Do you have any opinions of the original coding beyond the 'inelegant but amazingly resilient' meme? Was there anything that stood out about it?

No I haven't read any of it.

Comment author: gsastry 16 March 2014 01:21:39AM 3 points [-]

  1. What do you think are the most interesting philosophical problems within our grasp to be solved?
  2. Do you think that solving normative ethics won't happen until an FAI? If so, why?
  3. You argued previously that metaphilosophy and singularity strategies are fields with low hanging fruit. Do you have any examples of progress in metaphilosophy?
  4. Do you have any role models?

Comment author: Wei_Dai 16 March 2014 05:04:06AM 6 points [-]

What do you think are the most interesting philosophical problems within our grasp to be solved?

I'm not sure there are any. A big part of it is that metaphilosophy is essentially a complete blank, so we have no way of saying what counts as a correct solution to a philosophical problem, and hence no way of achieving high confidence that any particular philosophical problem has been solved, except maybe simple (and hence not very interesting) problems where the solution is just intuitively obvious to everyone or nearly everyone. It's also been my experience that any time we seem to make real progress on some interesting philosophical problem, additional complications are revealed that we didn't foresee, which makes the problem seem even harder to solve than before the progress was made. I think we have to expect this trend to continue for a while yet.

If you instead ask what are some interesting philosophical problems that we can expect visible progress on in the near future, I'd cite decision theory and logical uncertainty, just based on how much new effort people are putting into them, and results from the recent past.

Do you think that solving normative ethics won't happen until an FAI? If so, why?

No I don't think that's necessarily true. It's possible that normative ethics, metaethics, and metaphilosophy are all solved before someone builds an FAI, especially if we can get significant intelligence enhancement to happen first. (Again, I think we need to solve metaethics and metaphilosophy first, otherwise how do we know that any proposed solution to normative ethics is actually correct?)

You argued previously that metaphilosophy and singularity strategies are fields with low hanging fruit. Do you have any examples of progress in metaphilosophy?

Unfortunately, not yet. BTW I'm not saying these are fields that definitely have low hanging fruit. I'm saying these are fields that could have low hanging fruit, based on how few people have worked in them.

Do you have any role models?

I do have some early role models. I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net. And then there was Hal Finney who probably came closest to an actual real-life version of Sandor at the Zoo, and Tim May who besides inspiring me with his vision of cryptoanarchy was also a role model for doing early retirement from the tech industry and working on his own interests/causes.

Comment author: gsastry 24 March 2014 06:53:51PM 2 points [-]

Thanks. I have some followup questions :)

  1. What projects are you currently working on?/What confusing questions are you attempting to answer?
  2. Do you think that most people should be very uncertain about their values, e.g. altruism?
  3. Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?
  4. Where do you hang out online these days? Anywhere other than LW?

Please correct me if I've misrepresented your views.

Comment author: Wei_Dai 24 March 2014 11:00:31PM 4 points [-]

What projects are you currently working on?/What confusing questions are you attempting to answer?

If you go through my posts on LW, you can read most of the questions that I've been thinking about in the last few years. I don't think any of the problems that I raised have been solved so I'm still attempting to answer them. To give a general idea, these include questions in philosophy of mind, philosophy of math, decision theory, normative ethics, meta-ethics, meta-philosophy. And to give a specific example I've just been thinking about again recently: What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?

As a side note, I don't know if it's good from a productivity perspective to jump around amongst so many different questions. It might be better to focus on just a few, with the others in the back of one's mind. But now that I have so many unanswered questions, all of which I'm very interested in, it's hard to stay on any one of them for very long. So reader beware. :)

Do you think that most people should be very uncertain about their values, e.g. altruism?

Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be. I make an exception to this for people who might be in a position to build an FAI, since if they're too confident about altruism then they're likely to be too confident about many other philosophical problems, but even then I don't stress it too much.

Do you think that your views about the path to FAI are contrarian (amongst people working on FAI/AGI, e.g. you believing most of the problems are philosophical in nature)? If so, why?

I guess there is a spectrum of concern over philosophical problems involved in building an FAI/AGI, and I'm on the far end of that spectrum. I think most people building AGI mainly want short-term benefits like profits or academic fame, and do not care as much about the far reaches of time and space, in which case they'd naturally focus more on the immediate engineering issues.

Among people working on FAI, I guess they either have not thought as much about philosophical problems as I have and therefore don't have a strong sense of how difficult those problems are, or are just overconfident about their solutions. For example when I started in 1997 to think about certain seemingly minor problems about how minds that can be copied should handle probabilities (within a seemingly well-founded Bayesian philosophy), I certainly didn't foresee how difficult those problems would turn out to be. This and other similar experiences made me update my estimates of how difficult solving philosophical problems is in general.

BTW I would not describe myself as "working on FAI" since that seems to imply that I endorse the building of an FAI. I like to use "working on philosophical problems possibly relevant to FAI".

Where do you hang out online these days? Anywhere other than LW?

Pretty much just here. I do read a bunch of other blogs, but tend not to comment much elsewhere since I like having an archive of my writings for future reference, and it's too much trouble to do that if I distribute them over many different places. If I change my main online hangout in the future, I'll note that on my home page.

Comment author: NancyLebovitz 11 September 2014 04:30:56PM 2 points [-]

What is pain exactly (e.g., in a mathematical or algorithmic sense) and why is it bad? For example can certain simple decision algorithms be said to have pain? Is pain intrinsically bad, or just because people prefer not to be in pain?

Pain isn't reliably bad, or at least some people (possibly a fairly large proportion) seek it out in some contexts. I'm including very spicy food, SMBD, deliberately reading things that make one sad and/or angry without it leading to any useful action, horror fiction, pushing one's limits for its own sake, and staying attached to losing sports teams.

I think this leads to the question of what people are trying to maximize.

Comment author: Eugine_Nier 25 March 2014 03:58:40AM 1 point [-]

Yes, but I tend not to advertise too much that people should be less certain about their altruism, since it's hard to see how that could be good for me regardless of what my values are or ought to be.

One issue is that an altruist has a harder time noticing if he's doing something wrong. An altruist with false beliefs is much more dangerous than an egoist with false beliefs.

Comment author: ESRogs 18 March 2014 07:39:31PM *  2 points [-]

I recall wanting to be a real-life version of the fictional "Sandor Arbitration Intelligence at the Zoo" (from Vernor Vinge's novel A Fire Upon the Deep) who in the story is known for consistently writing the clearest and most insightful posts on the Net.

FWIW, I have always been impressed by the consistent clarity and conciseness of your LW posts. Your ratio of insights imparted to words used is very high. So, congratulations! And as an LW reader, thanks for your contributions! :)

Comment author: Lumifer 16 March 2014 06:19:53PM 2 points [-]

and Tim May

What is he doing, by the way? Wikipedia says he's still alive but he looks to be either retired or in deep cover...

Comment author: frizzers 18 March 2014 02:31:26PM 2 points [-]

The correct pronunciation of your name.

Wei - is it pronounced as in 'way' or 'why'?

And Dai - as in 'dye' or 'day'?

Thank you.

Comment author: Wei_Dai 18 March 2014 06:21:31PM *  7 points [-]

It's Chinese Pinyin romanization, so pronounced "way dye".

ETA: Since Pinyin is a many to one mapping, and as a result most Chinese articles about Bitcoin put the wrong name down for me, I'll take this opportunity to mention that my name is written logographically as 戴维.

Comment author: 9kv 11 September 2014 03:00:09PM *  1 point [-]

I'm doing a thesis paper on Bitcoin and was wondering if you, being specifically cited by Satoshi Nakamoto as one of the main influences on Bitcoin in his whitepaper's references, could give me your take on how Bitcoin today compares with whatever project you imagined when you wrote "b-money". What is different? What is the same? What should change?

Comment author: Will_Newsome 13 January 2014 01:57:43AM *  11 points [-]

My primary interest is determining what the "best" thing to do is, especially via creating a self-improving institution (e.g., an AGI) that can do just that. My philosophical interests stem from that pragmatic desire. I think there are god-like things that interact with humans and I hope that's a good thing but I really don't know. I think LessWrong has been in Eternal September mode for awhile now so I mostly avoid it. Ask me anything, I might answer.

Comment author: Panic_Lobster 13 January 2014 08:04:13AM *  6 points [-]

Why do you believe that there are god-like beings that interact with humans? How confident are you that this is the case?

Comment author: Will_Newsome 14 January 2014 01:44:43AM *  6 points [-]

I believe so for reasons you wouldn't find compelling, because the gods apparently do not want there to be common knowledge of their existence, and thus do not interact with humans in a manner that provides communicable evidence. (Yes, this is exactly what a world without gods would look like to an impartial observer without firsthand incommunicable evidence. This is obviously important but it is also completely obvious so I wish people didn't harp on it so much.) People without firsthand experience live in a world that is ambiguous as to the existence or lack thereof of god-like beings, and any social evidence given to them will neither confirm nor deny their picture of the world, unless they're falling prey to confirmation bias, which of course they often do, especially theists and atheists. I think people without firsthand incommunicable evidence should be duly skeptical but should keep the existence of the supernatural (in the everyday sense of that word, not the metaphysical sense) as a live hypothesis. Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance. (I think LessWrong is especially prone to this kind of arrogance; see IlyaShpitser's comments on LessWrong's rah-rah-Bayes stance to see part of what I mean.)

As for me, and as to my personal decision policy, I am ninety-something percent confident. The scenarios where I'm wrong are mostly worlds where outright complex hallucination is a normal feature of human experience that humans are for some reason blind to. I'm not talking about normal human memory biases and biases of interpretation, I'm saying some huge fraction of humans would have to have a systemic disorder on the level of anosognosia. Given that I don't know how we should even act in such a world, I'm more inclined to go with the gods hypothesis, which, while baffling, at least has some semblance of graspability.

Comment author: Furcas 14 January 2014 10:38:16PM *  4 points [-]

Can you please describe one example of the firsthand evidence you're talking about?

Also, I honestly don't know what the everyday sense of supernatural is. I don't think most people who believe in "the supernatural" could give a clear definition of what they mean by the word. Can you give us yours?

Thanks.

Comment author: Will_Newsome 15 January 2014 02:07:58AM 2 points [-]

Can you please describe one example of the firsthand evidence you're talking about?

I realize it's annoying, but I don't think I should do that.

Can you give us yours?

I give a definition of "supernatural" here. Of course, it doesn't capture all of what people use the word to mean.

Comment author: Furcas 15 January 2014 02:22:12AM 3 points [-]

I realize it's annoying, but I don't think I should do that.

Why not?

Comment author: Apprentice 14 January 2014 02:24:32PM 4 points [-]

worlds where outright complex hallucination is a normal feature of human experience

What sort of hallucinations are we talking about? I sometimes have hallucinations (auditory and visual) with sleep paralysis attacks. One close friend has vivid hallucinatory experiences (sometimes involving the Hindu gods) even outside of bed. It is low status to talk about your hallucinations so I imagine lots of people might have hallucinations without me knowing about it.

I sometimes find it difficult to tell hallucinations from normal experiences, even though my reasoning faculty is intact during sleep paralysis and even though I know perfectly well that these things happen to me. Here are two stories to illustrate.

Recently, my son was ill and sleeping fitfully, frequently waking up me and my wife. After one restless episode late in the night he had finally fallen asleep, snuggling up to my wife. I was trying to fall asleep again, when I heard footsteps outside the room. "My daughter (4 years old) must have gotten out of bed", I thought, "she'll be coming over". But this didn't happen. The footsteps continued and there was a light out in the hall. "Odd, my daughter must have turned on the light for some reason." Then through the door came an infant, floating in the air. V orpnzr greevsvrq ohg sbhaq gung V jnf cnenylmrq naq pbhyq abg zbir be fcrnx. V gevrq gb gbhpu zl jvsr naq pel bhg naq svanyyl znantrq gb rzvg n fhoqhrq fuevrx. Gura gur rkcrevrapr raqrq naq V fnj gung gur yvtugf va gur unyy jrer abg ghearq ba naq urneq ab sbbgfgrcf. "Fghcvq fyrrc cnenylfvf", V gubhtug, naq ebyyrq bire ba zl fvqr.

Here's another somewhat older incident: I was lying in bed beside my wife when I heard movement in our daughter's room. I lay still wondering whether to go fetch her - but then it appeared as if the sounds were coming closer. This was surprising since at that time my daughter didn't have the habit of coming over on her own. But something was unmistakeably coming into the room and as it entered I saw that it was a large humanoid figure with my daughter's face. V erpbvyrq va ubeebe naq yrg bhg n fuevrx. Nf zl yrsg unaq frnepurq sbe zl jvsr V sbhaq gung fur jnfa'g npghnyyl ylvat orfvqr zr - fur jnf fgnaqvat va sebag bs zr ubyqvat bhe qnhtugre. Fur'q whfg tbggra bhg bs orq gb srgpu bhe qnhtugre jvgubhg zr abgvpvat.

The two episodes played out very similarly, but only one of them involved hallucinations.

I've sort of forgotten where I was going with this, but if Will would like to tell us a bit more about his experiences I would be interested.

Comment author: TheOtherDave 14 January 2014 06:37:39PM 3 points [-]

Assigning less than 5% probability to it is, in my view, a common but serious failure of social epistemic rationality, most likely caused by arrogance.

Where does the 5% threshold come from?

Comment author: Will_Newsome 15 January 2014 02:14:45AM 4 points [-]

Psychologically "5%" seems to correspond to the difference between a hypothesis you're willing to consider seriously, albeit briefly, versus a hypothesis that is perhaps worth keeping track of by name but not worth the effort required to seriously consider.

Comment author: TheOtherDave 15 January 2014 03:01:30AM 1 point [-]

(nods) Fair enough.

Do you have any thoughts about why, given that the gods apparently do not want their existence to be common knowledge, they allow selected individuals such as yourself to obtain compelling evidence of their presence?

Comment author: Will_Newsome 24 February 2014 10:29:16AM 2 points [-]

I don't have good thoughts about that. There may be something about sheep and goats, as a general rule but certainly not a universal law. It is possible that some are more cosmically interesting than others for some reason (perhaps a matter of their circumstances and not their character), but it seems unwise to ever think that about oneself; breaking the fourth wall is always a bold move, and the gods would seem to know their tropes. I wouldn't go that route too far without expectation of a Wrong Genre Savvy incident. Or, y'know, delusionally narcissistic schizophrenia. Ah, the power of the identity of indiscernibles. Anyhow, it is possible such evidence is not so rare, especially among sheep whose beliefs are easily explained away by other plausible causes.

Comment author: Leonhart 15 January 2014 04:02:12PM 2 points [-]

and thus do not interact with humans in a manner that provides communicable evidence

Could a future neuroscience in principle change this, or do you have a stronger notion of incommunicability?

Comment author: gjm 15 January 2014 02:39:29AM 2 points [-]

You are arguing, if I understand you aright, (1) that the gods don't want their existence to be widely known but (2) that encounters with the gods, dramatic enough to demand extraordinary explanations if they aren't real, are commonplace.

This seems like a curious combination of claims. Could you say a little about why you don't find their conjunction wildly implausible? (Or, if the real problem is that I've badly misunderstood you, correct my misunderstanding?)

Comment author: Eugine_Nier 14 January 2014 01:36:05AM *  3 points [-]

Where are you posting these days?

Comment author: Will_Newsome 14 January 2014 01:49:50AM 4 points [-]

I mostly don't, but when I do, Twitter. @willdoingthings mostly; it's an uninhibited drunken tweeting account. I also participate on IRC in private channels. But in general I've become a lot more secretive and jaded so I post a lot less.

Comment author: khafra 14 January 2014 01:37:02PM 1 point [-]

A while back, you mentioned that people regularly confuse universal priors with coding theory. But minimum message length is considered a restatement of occam's razor, just like solomonoff induction; and MML is pretty coding theory-ish. Which parts of coding theory are dangerous to confuse with the universal prior, and what's the danger?

Comment author: Will_Newsome 15 January 2014 02:25:19AM *  3 points [-]

The difference I was getting at is that when constructing a code you're taking experiences you've already had and then assigning them weight, whereas the universal prior, being a prior, assigns weight to strings without any reference to your experiences. So when people say "the universal prior says that Maxwell's equations are simple and Zeus is complex", what they actually mean is that in their experience mathematical descriptions of natural phenomena have proved more fruitful than descriptions that involve agents; the universal prior has nothing to do with this, and invoking it is dangerous as it encourages double-counting of evidence: "this explanation is more probable because it is simpler, and I know it's simpler because it's more probable", when in fact the relationship between simplicity and probability is tautologous, not mutually reinforcing.

This error really bothers me because, aside from its incorrectness, it uses technical mathematics in a superficial way, as a blunt weapon in a verbose argument that makes people unfamiliar with the math feel like they're failing to grasp something, when in fact there is nothing there that they need to understand.

(I've swept the problem of "which prefix do I use?" under the rug because there are no AIT tools to deal with that and so if you want to talk about the problem of prefixes, you should do so separately from invoking AIT for some everyday hermeneutic problem. Generally if you're invoking AIT for some object-level hermeneutic problem you're Doing It Wrong, as has been explained most clearly by cousin_it.)
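The tautology can be made concrete with a toy sketch (the hypotheses and codewords below are invented purely for illustration, not taken from the thread): once a prior is *defined* from a prefix-free code as 2 to the minus codeword length, ranking hypotheses by "simplicity" and ranking them by "probability" are the same ranking by construction, so neither can serve as independent evidence for the other.

```python
# Toy illustration (hypothetical code, invented hypotheses): under a
# prefix-free code, "simpler" and "more probable" are the same ordering
# by definition, so one cannot be evidence for the other.

# A hypothetical prefix-free code: hypothesis -> binary codeword.
# Note that choosing this code is where past experience sneaks in.
code = {
    "maxwell": "0",        # 1 bit
    "zeus": "10",          # 2 bits
    "simulation": "110",   # 3 bits
}

def prior(hypothesis):
    """Induced prior weight: 2^-(codeword length)."""
    return 2.0 ** -len(code[hypothesis])

by_simplicity = sorted(code, key=lambda h: len(code[h]))
by_probability = sorted(code, key=prior, reverse=True)

# The two rankings coincide tautologically: the probability was defined
# from the code length, so it carries no independent information.
assert by_simplicity == by_probability
```

Any argument of the form "simpler, therefore more probable" within such a scheme is just restating the definition of the prior, which is the double-counting the comment above warns about.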

Comment author: jsteinhardt 12 January 2014 06:59:18AM 11 points [-]

I'm a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.

Comment author: Mark_Friedenbach 12 January 2014 07:15:53AM 5 points [-]

What do you feel are the most pressing unsolved problems in AGI?

Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)?

How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?

Comment author: jsteinhardt 12 January 2014 10:09:27PM 2 points [-]

What do you feel are the most pressing unsolved problems in AGI?

In AGI? If you mean "what problems in AI do we need to solve before we can get to the human level", then I would say:

  • Ability to solve currently intractable statistical inference problems (probably not just by scaling up computational resources, since many of these problems have exponentially large search spaces).
  • Ways to cope with domain adaptation and model mis-specification.
  • Robust and modular statistical procedures that can be fruitfully fit together.
  • Large amounts of data, in formats helpful for learning (potentially including provisions for high-throughput interaction, perhaps with a virtual environment).

To some extent this reflects my own biases, and I don't mean to say "if we solve these problems then we'll basically have AI", but I do think it will either get us much closer or else expose new challenges that are not currently apparent.

Do you believe AGI can "FOOM" (you may have to qualify what you interpret FOOM as)?

I think it is possible that a human-level AI would very quickly acquire a lot of resources / power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even if it was no more intelligent than a human but had the ability to easily copy and transmit itself, that would already make it powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain).

In general I think this is one of many possible scenarios, e.g. it's also possible that sub-human AI would already have control of much of the world's resources and we would have built systems in place to deal with this fact. So I think it can be useful to imagine such a scenario but I wouldn't stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.

How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?

Not viable.

Comment author: Anatoly_Vorobey 12 January 2014 10:40:29AM 4 points [-]

Do you have a handle on the size of the field? E.g. how many people, counting from PhD students and upwards, are working on AGI in the entire world? More like 100 or more like 10,000 or what's your estimate?

Comment author: jsteinhardt 12 January 2014 10:17:51PM 3 points [-]

I don't personally work on AGI and I don't think the majority of "AGI progress" comes from people who label themselves as working on AGI. I think much of the progress comes from improved tools due to research and usage in machine learning and statistics. There are also of course people in these fields who are more concerned with pushing in the direction of human-level capabilities. And progress everywhere is so inter-woven that I don't even know if thinking in terms of "number of AI researchers" is the right framing. That said, I'll try to answer your question.

I'm worried that I may just be anchoring off of your two numbers, but I think 10^3 is a decent estimate. There are upwards of a thousand people at NIPS and ICML (two of the main machine learning conferences), only a fraction of those people are necessarily interested in the "human-level" AI vision, but also there are many people who are in the field who don't go to these conferences in any given year. Also many people in natural language processing and computer vision may be interested in these problems, and I recently found out that the program analysis community cares about at least some questions that 40 years ago would have been classified under AI. So the number is hard to estimate but 10^3 might be a rough order of magnitude. I expect to find more communities in the future that I either wasn't aware of or didn't think of as being AI-relevant, and who turn out to be working on problems that are important to me.

Comment author: Benito 12 January 2014 08:51:51AM 2 points [-]

How did you come up with the course content for SPARC?

Comment author: jsteinhardt 13 January 2014 04:49:18AM 2 points [-]

We brainstormed things that we know now that we wished we had known in high school. During the first year, we just made courses out of those (also borrowing from CFAR workshops) and rolled with that, because we didn't really know what we were doing and just wanted to get something off the ground.

Over time we've asked ourselves what the common thread is in our various courses, in an attempt to develop a more coherent curriculum. Three major themes are statistics, programming, and life skills. The thing these have in common is that they are some of the key skills that extremely sharp quantitative minds need to apply their skills to a qualitative world. Of course, it will always be the case that most of the value of SPARC comes from informal discussions rather than formal lectures, and I think one of the best things about SPARC is the amount of time that we don't spend teaching.

Comment author: CAE_Jones 12 January 2014 09:32:33AM 10 points [-]

I'm an unemployed legally blind mostly white American who may have at one point been good at math and programming, who is just smart enough to get loads of spam from MIT, but not smart enough to avoid putting my foot in my mouth about once a month on LessWrong. I've been talking about blindness-related issues a lot over the past year, mostly because I suddenly realized that they were relevant, but my aim is to solve these problems as quickly as possible so I can get back to getting better at things that actually matter. On the off chance that you have questions, feel free to AMA.

Comment author: Anatoly_Vorobey 12 January 2014 10:05:40AM 6 points [-]

How blind are you, in layman terms of what you can/can't see? What's your prognosis?

Comment author: CAE_Jones 12 January 2014 01:38:22PM 3 points [-]

I'm not-quite completely blind; what little vision I have tends to fluctuate between effectively nonexistent and good enough to notice vague details maybe once or twice a year. I could see better up until I was 14, but my vision was still too poor to get out of using braille and a cane (given thick glasses and enough time, I could possibly have read size 20 font; even with the much larger font used in movie subtitles, I had to pause the video and put my face against the screen to read them).

I don't know my official acuity/diagnoses (It's been a few years since I saw an eye doctor), but I appear to have started out with retinal detachment and scarring, and later developed uveitis. The latter seems to be the primary cause for the dramatic decline starting from age 14.

Comment author: Anatoly_Vorobey 12 January 2014 06:32:57PM 2 points [-]

Are these problems likely to be correctable/improvable with medicine, but you have no money/insurance to get medical help? Or are they of a kind that basically can't be helped, and that's why you haven't been to a doctor in years? Or is it something else?

Do you use a reader program to browse the web and this site? Do you touch-type or dictate your comments?

(I realize that my questions are callous; please feel free to ignore if they're too invasive)

Comment author: CAE_Jones 12 January 2014 08:58:21PM 4 points [-]

The retinal issues are unlikely to be fixable in the immediate future (though the latest developments on that front seem potentially promising). There may be a treatment for the more annoying issue, but I don't know if it's too late/what I should do to learn more, and so I'm waiting until life in general is more favorable to dig into it further. (Which I expect means I'll be putting it off until 2015, since I expect to be fairly occupied during most of 2014.)

For using the internet/computers in general, I use NonVisual Desktop Access (NVDA), a free screen reader which only recently attained comparable status to JAWS for Windows, which I'd been using prior to 2011. These work well with plain text, but have trouble with certain types of controls/labels and images and such (I had to Skype someone a screenshot to get past the CAPTCHA to register here; I was using a trial of a CAPTCHA-solving add-on at the time, but it was unable to locate the CAPTCHA on LessWrong). Since NVDA is open source, users frequently develop useful add-ons and plugins, such as a CPU usage monitor and the ability to summon a Google translation of copied text with a single keystroke. (It supposedly includes an optical character recognition feature, but I've never figured out how to use it.)

I touch-type. I'm not much of a fan of dictation, though I'm not sure why.

Comment author: jkaufman 13 January 2014 02:35:20AM 9 points [-]

I'm a programmer at Google in Boston doing earning to give, I blog about all sorts of things, and I play mandolin in a dance band. Ask me anything.

Comment author: jobe_smith 15 January 2014 07:09:22PM 3 points [-]
  1. What are you working on at google?

  2. How much do you earn?

  3. How much do you give, and to where?

Comment author: jkaufman 15 January 2014 08:31:06PM *  11 points [-]

What are you working on at google?

ngx_pagespeed and mod_pagespeed. They are open source modules for nginx and apache that rewrite web pages on the fly to make them load faster.

How much do you earn?

$195k/year, all things considered. (That's my total compensation over the last 19 months, annualized. Full details: http://www.jefftk.com/money)

How much do you give, and to where?

Last year Julia and I gave a total of $98,950 to GiveWell's top charities and the Centre for Effective Altruism. (Full details: http://www.jefftk.com/donations)

Comment author: AlexSchell 13 January 2014 05:05:00PM 2 points [-]

Did you ever get down to trying fumaric acid? How does it compare to citric and malic acids?

Comment author: jkaufman 13 January 2014 06:45:23PM 4 points [-]

I've added an update to that post: http://www.jefftk.com/p/citric-acid

I ended up ordering malic and fumaric acids as well. I like the malic acid a lot, but the fumaric acid is really hard to taste. Not being soluble in water it just sits on the tongue being slightly sour. I probably just haven't found the right use for it yet.

Comment author: Vaniver 15 January 2014 04:47:00PM 2 points [-]

The best part of sour patch kids was the white powder left over at the bottom of the wrapper.

I once had a one-pound bag of Sour Skittles, and after eating all of them, consumed the entirety of the white powder left over in the bag at once. Simply thinking about that experience is sufficient to produce a huge burst of saliva.

Comment author: Leonhart 13 January 2014 10:58:04PM 2 points [-]

THANK YOU WHY DID I NEVER THINK OF DOING THAT THIS IS GOING TO MAKE ALL JAM EDIBLE FOREVER

Comment author: IlyaShpitser 12 January 2014 07:11:25AM 8 points [-]

I write about causality sometimes.

Comment author: somervta 12 January 2014 07:59:34AM 4 points [-]

How significant/relevant is the mathematical work on causality to philosophical work/discussion? If someone was talking about causality in a philosophical setting and had never heard of the relevant math, how badly would/should that reflect on them? Does it make a difference if they've heard of it, but didn't bother to learn the math?

Comment author: IlyaShpitser 12 January 2014 08:42:04PM 5 points [-]

I am not up on my philosophical literature (trying to change this), but I think most analytic philosophers have heard of Pearl et al. by now. Not every analytic philosopher is as mathematically sophisticated as e.g. people at the CMU department. But I think that's ok!

I don't think it's a wise social move for LW to beat on philosophers.

Comment author: AlexSchell 13 January 2014 04:39:47PM 2 points [-]

Can you point out some cool/insightful applications of broadly Pearlian causality ideas to applied problems in, say, epidemiology or econometrics?

Comment author: IlyaShpitser 16 January 2014 08:19:51AM *  5 points [-]

"Pearlian causality" is sort of like "Hawkingian physics." (Not to dismiss the amazing contributions of both Pearl and Hawking to their respective fields).


I am not sure what cool or insightful is for you. What seems cool to me is that proper analysis of causality and/or missing data (these two are related) in observational data in epidemiology is now more or less routine. The use of instrumental variables for getting causal effects is also routine in econometrics.

The very fact that people think about a causal effect as a formal mathematical thing, and then use proper techniques to get it in applied/data analysis settings seems very neat to me. This is what success of analytic philosophy ought to look like!
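To give one concrete (and entirely invented) instance of the routine instrumental-variables machinery mentioned above, here is a minimal simulated sketch: with a binary instrument Z that affects the outcome Y only through the treatment X, the Wald estimator cov(Z, Y) / cov(Z, X) recovers the causal effect even when an unobserved confounder biases naive regression. All variable names and numbers are made up for illustration.

```python
# Toy instrumental-variables sketch (simulated data, illustrative only).
# Z is a valid instrument: it affects X, and affects Y only through X.
# U is an unobserved confounder that biases the naive OLS slope.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

z = rng.integers(0, 2, n).astype(float)       # binary instrument
u = rng.normal(size=n)                        # unobserved confounder
x = z + u + rng.normal(size=n)                # treatment
y = true_effect * x + u + rng.normal(size=n)  # outcome

# Naive OLS slope: cov(X, Y) / var(X) -- biased upward by U here.
ols = np.cov(x, y)[0, 1] / np.var(x)

# Wald / IV estimate: cov(Z, Y) / cov(Z, X) -- consistent.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

With this setup the OLS slope comes out noticeably above the true effect of 2, while the IV estimate lands close to it, which is the sense in which "getting causal effects" from observational-style data is now a routine exercise.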

Comment author: Anatoly_Vorobey 12 January 2014 10:21:25AM 2 points [-]

Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)

Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches? Does there exist a reasonably neutral high-level summary of the field?

Comment author: IlyaShpitser 12 January 2014 08:29:02PM *  12 points [-]

Which academic disciplines care about causality? (I'm guessing statistics, CS, philosophy... anything else?)

On some level any empirical science cares, because the empirical sciences all care about cause-effect relationships. In practice, the 'penetration rate' is path-dependent (that is, depends on the history of the field, personalities involved, etc.)

To add to your list, there are people in public health (epidemiology, biostatistics), social science, psychology, political science, economics/econometrics, computational bio/omics that care quite a bit. Very few philosophers (excepting the CMU gang, and a few other places) think about causal inference at the level of detail a statistician would. CS/ML do not care very much (even though Pearl is CS).

Is there anything like a mainstream agreement on how to model/establish causality? E.g. does more or less everyone agree that Pearl's book, which I haven't read, is the right approach? If not, is it possible to list the main competing approaches?

I think there is as much agreement as there can reasonably be for a concept such as causality (that is, a philosophically laden concept that's fun to argue about). People model it in lots of ways; I will try to give a rough taxonomy, and will tell you where Pearl lies.


Interventionist vs non-interventionist

Most modern causal inference folks are interventionists (including Pearl, Rubin, Robins, etc.). The 'Nicene creed' for interventionists is: (a) an intervention (forced assignment) is key for representing cause/effect, (b) interventions and conditioning are not the same thing, (c) you express interventions in terms of ordinary probabilities using the g-formula/truncated factorization/manipulated distribution (different names for the same thing). The concept of an intervention is old (goes back to Neyman (1920s), I think, possibly even earlier).

To me, non-interventionists fall into three categories: 'naive,' 'abstract', and 'indifferent.' Naive non-interventionists are not using interventions because they haven't thought about things hard enough, and will thus get things wrong. Some EDT folks are in this category. People who ask 'but why can't we just use conditional probabilities' are often in this set. Abstract non-interventionists are not using interventions because they have in mind some formalism that has interventions as a special case, and they have no particular need for the special case. I think David Lewis was in this camp. Joe Halpern might be in this set, I will ask him sometime. Indifferent non-interventionists operate in a field where there is little difference between conditioning and interventions (due to lack of interesting confounding), so there is no need to model interventions explicitly. Reinforcement learning people, and people who only work with RCT data are in this set.
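As a minimal worked instance of point (c) above (all numbers invented for illustration): with a single observed confounder Z, the g-formula gives P(Y | do(X = x)) = sum over z of P(z) P(Y | x, z), which is generally not the same as the ordinary conditional P(Y | X = x) = sum over z of P(z | x) P(Y | x, z).

```python
# Toy g-formula / truncated-factorization sketch (numbers invented).
# Structure: Z -> X, Z -> Y, X -> Y, so Z confounds X and Y.

p_z1 = 0.5                                  # P(Z = 1)
p_x1_given_z = {1: 0.8, 0: 0.2}             # P(X = 1 | Z = z)
p_y1_given_xz = {(1, 1): 0.9, (1, 0): 0.5,  # P(Y = 1 | X = x, Z = z)
                 (0, 1): 0.4, (0, 0): 0.1}

def p_z(z):
    return p_z1 if z == 1 else 1 - p_z1

# Ordinary conditioning: P(Y=1 | X=1) = sum_z P(z | X=1) P(Y=1 | X=1, z).
p_x1 = sum(p_x1_given_z[z] * p_z(z) for z in (0, 1))
cond = sum(p_y1_given_xz[(1, z)] * p_x1_given_z[z] * p_z(z) / p_x1
           for z in (0, 1))

# g-formula: P(Y=1 | do(X=1)) = sum_z P(z) P(Y=1 | X=1, z).
do = sum(p_y1_given_xz[(1, z)] * p_z(z) for z in (0, 1))

# Here cond = 0.82 but do = 0.70: conditioning and intervening differ,
# which is exactly interventionist point (b).
```

The gap between the two quantities is driven entirely by the confounding path through Z; if Z did not affect X, P(z | x) would equal P(z) and the two formulas would coincide, which is the "indifferent non-interventionist" situation described below.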


Counterfactualists vs non-counterfactualists

Most modern causal inference folks are counterfactualists (including Pearl, Rubin, Robins, etc.). To a counterfactualist it is important to think about a hypothetical outcome under a hypothetical intervention. Obviously all counterfactualists are interventionists. A noted non-counterfactualist interventionist is Phil Dawid. Counterfactuals are also due to Neyman, but were revived and extended by Rubin in the 70s.
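[Editor's illustration, not part of the original comment.] A minimal sketch of the Neyman-Rubin potential-outcomes idea, with made-up numbers: each unit carries two hypothetical outcomes Y(0) and Y(1), only one of which is ever observed, and a hidden trait that raises both treatment uptake and the outcome makes the naive observed difference diverge from the true average treatment effect:

```python
import random

random.seed(0)

# Each unit has two potential outcomes, Y(0) and Y(1); only the one matching
# the actual treatment X is observed. The hidden trait u both raises the
# chance of treatment and raises the outcome (confounding).
N = 200_000
ate_sum = 0.0
treated, control = [], []
for _ in range(N):
    u = random.random() < 0.5
    y0 = float(random.random() < (0.4 if u else 0.1))   # outcome if untreated
    y1 = float(random.random() < (0.8 if u else 0.5))   # outcome if treated
    ate_sum += y1 - y0
    x = random.random() < (0.8 if u else 0.2)           # confounded assignment
    (treated if x else control).append(y1 if x else y0)

true_ate = ate_sum / N     # average of Y(1) - Y(0) over all units
naive = sum(treated) / len(treated) - sum(control) / len(control)
print(true_ate, naive)
```

Here the true ATE is 0.4 by construction, but the naive difference in observed means comes out near 0.58: the same conditioning-vs-intervention gap as above, stated in counterfactual language.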


Graphical vs non-graphical

Whether you like using graphs or not. Modern causal inference is split on this point. Folks in the Rubin camp do not like graphs (for reasons that are not entirely clear -- what I heard is they find them distracting from important statistical modeling issues (??)). Folks in the Pearl/SGS/Robins/Dawid/etc. camp like graphs. You don't have to have a particular commitment to any earlier point to have an opinion on graphs (indeed lots of graphical models are not about causality at all). In the context of causality, graphs were first used by Sewall Wright for pedigree analysis (1920s). Lauritzen, Pearl, etc. gave a modern synthesis of graphical models. Spirtes/Glymour/Scheines and Pearl revived a causal interpretation of graphs in the 90s.


"Popperians" vs "non-Popperians"

Whether you restrict yourself to testable assumptions. Pearl is non-Popperian, his models make assumptions that can only be tested via a time machine or an Everett branch jumping algorithm. Rubin is also non-Popperian because of "principal stratification." People that do "mediation analysis" are generally non-Popperian. Dawid, Robins, and Richardson are Popperians -- they try to stick to testable assumptions only. I think even for Popperians, some of their assumptions must be untestable (but I think this is probably necessary for statistical inference in general). I think Dawid might claim all counterfactualists are non-Popperian in some sense.


I am "a graphical non-Popperian counterfactualist" (and thus interventionist).

Does there exist a reasonably neutral high-level summary of the field?

We are working on it.

Comment author: jaibot 13 January 2014 02:07:16PM 2 points [-]

Why?

Comment author: Eugine_Nier 12 January 2014 09:10:01PM 1 point [-]

Are you aware of any attempts to assign a causality(-like?) structure to mathematics?

There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure), but the probability-based definition of causality fails when all the probabilities are 0 or 1.

Comment author: IlyaShpitser 12 January 2014 09:12:49PM 2 points [-]

There are certainly areas of mathematics where it seems like there is an underlying causality structure (frequently orthogonal or even inverse to the proof structure)

Can you give a simple example of/pointer to what you mean?

Comment author: Sewing-Machine 14 January 2014 05:57:18PM 2 points [-]

I don't know if this is what Nier has in mind, but it reminds me of Cramér's random model for the primes. There is a 100 per cent chance that 758705024863 is prime, but it is very often useful to regard it as the output of a random process. Here's an example of the model in action.
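[Editor's illustration, not part of the original comment.] The spirit of Cramér's model can be checked in a few lines: pretend each n ≥ 2 is "prime" independently with probability 1/log n, so the expected number of primes up to N is the sum of 1/log n, and compare that to the true count from a sieve:

```python
import math

# Sieve of Eratosthenes: the actual number of primes up to N.
N = 100_000
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
actual = sum(sieve)

# Cramer's model: each n >= 2 is "prime" independently with probability
# 1/log n, so the expected count up to N is sum_{n=2}^{N} 1/log(n).
expected = sum(1 / math.log(n) for n in range(2, N + 1))

print(actual, round(expected))
```

For N = 10^5 the sieve finds 9,592 primes and the model's expectation differs from that by well under one percent, which is why treating primes as the output of this random process is so often useful.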

Comment author: CellBioGuy 12 January 2014 10:23:46PM 6 points [-]

Biology/genetics graduate student here, studying the interaction of biological oscillations with each other in yeast, quite familiar with genetic engineering due to practical experience and familiar with molecular biology in general. Fire away.

Comment author: Apprentice 12 January 2014 09:48:28AM 6 points [-]

You can ask me things if you like. At Reddit, some of the most successful AMAs are when people are asked about their occupation. I have a PhD in linguistics/philology and currently work in academia. We could talk about academic culture in the humanities if someone is interested in that.

Comment author: Anatoly_Vorobey 12 January 2014 10:27:02AM 4 points [-]

Can you talk about your specific field in linguistics/philology? What it is, what are the main challenges?

Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?

Comment author: Apprentice 12 January 2014 12:40:48PM 11 points [-]

Can you talk about your specific field in linguistics/philology?

I've mucked about here and there including in language classification (did those two extinct tribes speak related languages?), stemmatics (what is the relationship between all those manuscripts containing the same text?), non-traditional authorship attribution (who wrote this crap anyway?) and phonology (how and why do the sounds of a word "change" when it is inflected?). To preserve some anonymity (though I am not famous) I'd rather not get too specific.

what are the main challenges?

There are lots of little problems I'm interested in for their own sake, but perhaps the meta-problems are of more interest here. Those would include getting people to accept that we can actually solve problems and that we should try our best to do so. Many scholars seem to have this fatalistic view of the humanities as doomed to walk in circles and never really settle anything. And for good reason - if someone manages to establish "p" then all the nice speculation based on assuming "not p" is worthless. But many would prefer to be as free as possible to speculate about as much as possible.

Do you have a stake/an opinion in the debates about the Chomskian strain in syntax/linguistics in general?

Yes. I think the Chomskyan approach is based on a fundamentally mistaken view of cognition, akin to "good old fashioned artificial intelligence". I hope to write a top-level post on this at some point. But I'll say this for Chomsky: He's not a walk-around-in-circles obscurantist. He's a resolutely-march-ahead kind of guy. A lot of the marching was in the wrong direction, but still, I respect that.

Comment author: NancyLebovitz 12 January 2014 09:26:43AM 6 points [-]

Ask me anything. Like Vulture, I reserve the right to not answer.

Comment author: Anatoly_Vorobey 12 January 2014 10:08:12AM 3 points [-]

Is your button business really functioning, do you get a nontrivial number of orders? What do your buttons look like and why isn't there a single picture of one on your website?

Comment author: NancyLebovitz 12 January 2014 10:53:10AM 4 points [-]

It's still functioning to some extent-- I'll be at Arisia next weekend. As far as I can tell, I'm neglecting the website because of depression and inertia.

Comment author: NancyLebovitz 12 January 2014 05:00:12PM 3 points [-]
Comment author: JoshuaFox 12 January 2014 08:59:50AM *  16 points [-]

I'm a Research Associate at MIRI. I became a supporter in late 2005, then contributed to research and publication in various ways. Please, AMA.

Opinions I express here and elsewhere are mine alone, not MIRI's.

To be clear, as an Associate, I am an outsider to the MIRI team (who collaborates with them in various ways).

Comment author: James_Miller 12 January 2014 06:37:08PM 9 points [-]

When do you estimate that MIRI will start writing the code for a friendly AI?

Comment author: JoshuaFox 12 January 2014 07:06:53PM *  9 points [-]

Median estimate for when they'll start working on a serious code project (i.e., not just toy code to illustrate theorems) is 2017.

This will not necessarily be development of friendly AI -- maybe a component of friendly AI, maybe something else. (I have no strong estimates for what that other thing would be, but just as an example--a simulated-world sandbox).

Everything I say above (and elsewhere) is my opinion, not MIRI's. Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

Comment author: Eliezer_Yudkowsky 13 January 2014 10:59:05AM 9 points [-]

This is not a MIRI official estimate and you really should have disclaimed that.

Comment author: Lumifer 12 January 2014 07:34:32PM 3 points [-]

What are the error bars around these estimates?

Comment author: JoshuaFox 12 January 2014 07:41:51PM 4 points [-]

The first estimate: 50% probability between 2015 and 2020.

The second estimate: 50% probability between 2020 and 2035. (again, taking into account all the conditioning factors).

Comment author: Lumifer 13 January 2014 03:25:08AM 4 points [-]

Um.

2017

50% probability between 2015 and 2020.

The distribution is asymmetric for obvious reasons. The probability for 2014 is pretty close to zero. This means that there is a 50% probability that a serious code project will start after 2020.

This is inconsistent with 2017 being a median estimate.

Comment author: Furcas 14 January 2014 10:54:31PM *  2 points [-]

If some rich individual were to donate 100 million USD to MIRI today, how would you revise your estimate (if at all)?

Comment author: Tenoke 13 January 2014 07:26:51AM 3 points [-]

Median estimate for when they'll start working on friendly AI, if they get started with that before the Singularity, and if their direction doesn't shift away from their apparent current long-term plans to do so: 2025.

We're so screwed, aren't we?

Comment author: JoshuaFox 13 January 2014 08:53:11AM *  4 points [-]

Yes, but not because of MIRI. Along with FHI, they are doing more than anyone to improve our odds. As to whether writing code or any other strategy is the right one--I don't know, but I trust MIRI more than anyone to get that right.

Comment author: John_Maxwell_IV 12 January 2014 09:52:43PM *  8 points [-]

I've talked to a former grad student (fiddlemath, AKA Matt Elder) who worked on formal verification, and he said current methods are not anywhere near up to the task of formally verifying an FAI. Does MIRI have a formal verification research program? Do they have any plans to build programming processes like this or this?

Comment author: JoshuaFox 13 January 2014 09:09:59AM 3 points [-]

I don't know anything more about MIRI's research strategy than is publicly available, but if you look at what they are working on, it is all in the direction of formal verification.

I have spoken to experts in formal verification of chips and of other systems, and they have confirmed what you learned from fiddlemath. Formal verification is limited in its capabilities: often, you can only verify some very low-level or very specific assertions. And you have to be able to specify the assertion that you are verifying.

So, it seems that they are taking on a very difficult challenge.

Comment author: Anatoly_Vorobey 12 January 2014 07:06:43PM 4 points [-]

Your published dissertation sounds fascinating, but I swore off paper books. Can you share it in digital form?

Comment author: JoshuaFox 13 January 2014 07:25:40AM 3 points [-]

Sure, I'll send it to you. If anyone else wants it, please contact me. I always knew that Semitic Noun Patterns would be a best seller :-)

Comment author: XiXiDu 12 January 2014 11:58:17AM 2 points [-]

My question is similar to the one that Apprentice posed below. Here are my probability estimates of unfriendly and friendly AI, what are yours? And more importantly, where do you draw the line, what probability estimate would be low enough for you to drop the AI business from your consideration?

Comment author: JoshuaFox 12 January 2014 07:45:30PM *  2 points [-]

what probability estimate would be low enough for you to drop the AI business from your consideration?

Even a fairly low probability estimate would justify effort on an existential risk.

And I have to admit, a secondary, personal, reason for being involved is that the topic is fascinating and there are smart people here, though that of course does not shift the estimates of risk and of the possibilities of mitigating it.

Comment author: Apprentice 12 January 2014 10:17:41AM 2 points [-]

What probability would you assign to this statement: "UFAI will be relatively easy to create within the next 100 years. FAI is so difficult that it will be nearly impossible to create within the next 200 years."

Comment author: JoshuaFox 12 January 2014 01:57:45PM *  10 points [-]

I think that the estimates cannot be made independently. FAI and UFAI would each pre-empt the other. So I'll rephrase a little.

I estimate the chances that some AGI (in the sense of "roughly human-level AI") will be built within the next 100 years as 85%, which is shorthand for "very high, but I know that probability estimates near 100% are often overconfident; and something unexpected can come up."

And "100 years" here is shorthand for "as far off as we can make reasonable estimates/guesses about the future of humanity"; perhaps "50 years" should be used instead.

Conditional on some AGI being built, I estimate the chances that it will be unfriendly as 80%, which is shorthand for "by default it will be unfriendly, but people are working on avoiding that and they have some small chance of succeeding; or there might be some other unexpected reason that it will turn out friendly."

Comment author: Apprentice 12 January 2014 03:11:24PM *  2 points [-]

Thank you. I didn't phrase my question very well but what I was trying to get at was whether making a friendly AGI might be, by some measurement, orders of magnitude more difficult than making a non-friendly one.

Comment author: JoshuaFox 12 January 2014 07:11:43PM 3 points [-]

Yes, it is orders of magnitude more difficult. If we took a hypothetical FAI-capable team, how much less time would it take them to make a UFAI than a FAI, assuming similar levels of effort, and starting at today's knowledge levels?

One-tenth the time seems like a good estimate.

Comment author: Eliezer_Yudkowsky 13 January 2014 11:01:08AM 2 points [-]

(Problem solved, comment deleted.)

Comment author: gjm 13 January 2014 02:56:24PM 4 points [-]

Meta: I think this was an important thing to say, and to say forcefully, but it might have been worth expending a sentence or so to say it more nicely (but still as forcefully). (I don't want to derail the thread and will say no more than this unless specifically asked.)

Comment author: ephion 12 January 2014 03:46:24PM *  5 points [-]

I'm heavily interested in instrumental rationality -- that is, optimizing my life by 1) increasing my enjoyment per moment, 2) increasing the quantity of moments, and 3) decreasing the cost per moment.

I've taught myself a decent amount and improved my life with: personal finance, nutrition, exercise, interpersonal communication, basic item maintenance, music recording and production, sexuality and relationships, and cooking.

If you're interested in possible ways of improving your life, I might have direct experience to help, and I can probably point you in the right direction if not. Feel free to ask me anything!

Comment author: knb 14 January 2014 11:29:55PM 4 points [-]

Feel free to ask me (almost) anything. I'm not very interesting, but here are some possible conversation starters.

  1. I'm a licensed substance abuse counselor and a small business owner (I can't give away too many specifics about the business without making my identity easy to find, sorry about this.)
  2. I'm a transhumanist, but mostly pessimistic about the future.
  3. I support Seasteading-like movements (although I have several practical issues with the Thiel/Friedman Seasteading Institute).
  4. I'm an ex-liberal and ex-libertarian. I was involved in the anti-war movement for several years as a teenager (2003-2009). I've read a lot of "neoreactionary" writings and find their political philosophy unconvincing.
Comment author: Moss_Piglet 15 January 2014 12:04:08AM 4 points [-]

Maybe you can give some common misconceptions about how people recover from / don't recover from their addictions? That's the sort of topic you tend to hear a lot of noise about, which makes it tough to tell the good information from the bad.

Do you have any thoughts on wireheading?

Have you tried any 19th/20th century reactionary authors? Everyone should read Nietzsche anyway, and his work is really interesting if a little dense. His conception of Master/slave morality and nihilism is a much more coherent explanation for how history has turned out than the Cathedral, not to mention that the superman (I always translate it as posthuman in my head) as beyond good and evil is interesting from a transhumanist perspective.

Comment author: knb 15 January 2014 02:07:38AM 5 points [-]

Maybe you can give some common misconceptions about how people recover from / don't recover from their addictions? That's the sort of topic you tend to hear a lot of noise about, which makes it tough to tell the good information from the bad.

I'm not sure if these are misconceptions, but here are some general thoughts on recovery:

  1. Neural genetics probably matters a lot. I don't know what to do with this, but I think neuroscience and genetics will produce huge breakthroughs in treatment of addiction in the next 20 years. People like me will probably be on the sidelines for this big change.
  2. People who feel coerced into entering counseling will almost certainly relapse, and they'll relapse faster and harder compared to people who enter willingly. However...
  3. ...this doesn't make coercion totally pointless--counselors can plant the seeds of a sincere recovery attempt, and give clients the mental tools to recognize their patterns.
  4. People who willingly enter counseling still usually relapse, multiple times. The people who keep coming back after a relapse stand a much better chance of getting to a high level of functioning. People who reenter therapy every time they relapse will usually succeed eventually. (I realize this is almost a tautology.)
  5. Clients with other diagnosed disorders are much less likely to fully recover.

Do you have any thoughts on wireheading?

Wireheading is somewhat fuzzy as a term.... The extreme form (being converted into "Orgasmium") seems like it would be unappealing to practically everyone who isn't suicidally depressed (and even for them it would presumably not be the best option in a transhuman utopia in which wireheading is possible.)

I think a modest version of wireheading (changing a person's brain to raise their happiness set point) will be necessary if we want to bring everyone up to an acceptable level of happiness.

Have you tried any 19th/20th century reactionary authors?

I've read a lot of excerpts and quotes, but not many full books. I read a large part of one of Carlyle's books and one late 19th Century travelogue of the United States which Moldbug approvingly linked to. (I've read a fair amount of Nietzsche's work, but I think calling him a reactionary is a bit like calling the Marquis de Sade a "libertarian.")

Comment author: NancyLebovitz 15 January 2014 04:04:25AM 2 points [-]

Why are you pessimistic about the future?

What are your practical issues about the Seasteading Institute? My major issue is that even if everything else works, governments are unlikely to tolerate real challenges to their authority.

What political theories, if any, do you find plausible?

Comment author: knb 16 January 2014 12:17:43AM *  2 points [-]

Why are you pessimistic about the future?

I worry about a regression to the historical mean (Malthusian conditions, many people starving at the margins) and existential risk. I think extinction or return to Malthusian conditions (including Robin Hanson's hardscrabble emulation future) are the default result and I'm pessimistic about the potential of groups like MIRI.

What are your practical issues about the Seasteading Institute?

As I see it, the main problem with SI is their over-commitment to small-size seastead designs because of their commitment to the principle of "dynamic geography." The cost of small-seastead designs (in complexity, coordination problems, additional infrastructure) will be huge.

I don't think dynamic geography is what makes seasteading valuable as a concept. The ability to create new country projects by itself is the most important aspect. I think large seastead designs (or even land-building) would be more cost-effective and a better overall direction.

My major issue is that even if everything else works, governments are unlikely to tolerate real challenges to their authority.

I've always thought the risk from existing governments isn't that big. I don't think governments will consider seasteading to be a challenge until/unless governments are losing significant revenues from people defecting to seasteads. By default, governments don't seem to care very much about things that take place outside of their borders. Governments aren't very agent-y about considering things that are good for the long term interests of the government.

Seasteads would likely cost existing governments mainly by attracting revenue-producing citizens away from them, and it will take a long time before that becomes a noticeable problem. Most people who move to seasteads will still retain the citizenship of their home country (at least in the beginning), and for the US that means you must keep paying some taxes. Other than the US, there aren't a lot of countries that have the ability to shut down a sea colony in blue water. By the time the loss of revenue becomes institutionally noticeable, the seasteads are likely to be too big to easily shut down (i.e. it would require a long-term deployment and would involve a lot of news footage of crying families being forced onto transport ships).

What political theories, if any, do you find plausible?

I like the overall meta-political ethos of seasteading. I think any good political philosophy should start with accepting that there are different kinds of people and they prefer different types of governments/social arrangements. You could call this "meta-libertarianism" or "political pluralism."

Comment author: [deleted] 14 January 2014 02:59:03AM *  4 points [-]

I understand ancient Greek philosophy really well. In case that has come up. I'm a PhD student in philosophy, and I'd be happy to talk about that as well.

Comment author: blacktrance 14 January 2014 06:08:29PM 2 points [-]

What do you think of Epicurus? What do you think of Epicurean ethics?

Comment author: Douglas_Knight 14 January 2014 01:48:27PM 2 points [-]

Do you have a sense of how the proportion of philosophy varied with place and time, both the proportion written and the proportion surviving? My impression is that there was a lot more philosophy in Athens than in Alexandria.

Comment author: [deleted] 14 January 2014 03:37:43PM 2 points [-]

I'm not sure I entirely understand the question. I'll try to give a history in three stages:

1) Roughly, the earliest stages of philosophy were mathematics, and attempts at reductive, systematic accounts of the natural world. This was going on pretty broadly, and only by virtue of some surviving doxographers do we have the impression that Greece was at the forefront of this practice (I'm thinking of the pre-Socratic greek philosophers, like Thales and Anaxagoras and Pythagoras). It was everywhere, and the Greeks weren't particularly good at it. This got started with the Babylonians (very little survives), and when the Assyrian empire conquered Babylon (only to be culturally subjugated to it), they spread this practice throughout the Mediterranean and near-east. Genesis 1 is a good example of a text along these lines.

2) After the collapse of the Assyrians, locals on the frontiers of the former empire (like Greece and Israel) reasserted some intellectual control, often in the form of skeptical criticisms or radically new methodologies (like Parmenides' very important arguments against the possibility of change, or the Pythagorean claim that everything is number). Socrates engaged in a version of this by eschewing questions of the cosmos and focusing on ethics and politics as independent topics. Then came Plato, and Aristotle, who between them got the western intellectual tradition going. I won't go into how, for brevity's sake.

3) After Plato and Aristotle, a flurry of philosophical activity overwhelmed the Mediterranean (including and especially in Alexandria), largely because of the conquests of Alexander and the active spread of Greek culture (a rehash of the thing with the Assyrians). This period is a lot like ours now: widespread interest in science, mathematics, ethics, political theory, etc. Many, many people were devoted to these things, and they produced more work in a given year during this period than everything that had come before combined. But as a result of the sheer volume of this work, and as a result of the fact that it was built on the shoulders of Plato and Aristotle, very little of it really stands out. As a result, a lot was lost.

Comment author: Eugine_Nier 15 January 2014 01:53:13AM 0 points [-]

Well, with respect to mathematics, at least one difference between the Greeks and everybody else is that the Greeks provided proofs of the non-obvious results.

Comment author: Axel 12 January 2014 06:29:03PM 4 points [-]

I'm a 24-year-old guy looking for a job and have a great interest in science and game design. I read a lot of LW but I rarely feel comfortable posting. I wished there was a LW meetup group in Belgium, and when nobody seemed to want to take the initiative I set one up myself. I didn't expect anyone to show, but now, two years later, it's still going. Ask me anything you want, but I reserve the right not to answer.

Comment author: Alicorn 13 January 2014 12:19:36AM 8 points [-]

I have written various things, collected here, including what I think is the second most popular (or at least usually second-mentioned) rationalist fanfiction. I serve dinner to the Illuminati. AMA.

Comment author: TheOtherDave 14 January 2014 06:40:21PM 3 points [-]

Some LW-folks have in the past asked me questions about my stroke and recovery when it came up, and seemed interested in my answers, so it might be useful to offer to answer such questions here. Have at it! (You can ask me about other things if you want, too.)

Comment author: Daniel_Burfoot 13 January 2014 05:08:23AM *  3 points [-]

I wrote a book about a new philosophy of empirical science based on large-scale lossless data compression. I use the word "comperical" to express the idea of using the compression principle to guide an empirical inquiry. Though I developed the philosophy while thinking about computer vision (in particular the chronic, disastrous problems of evaluation in that field), I realized that it could also be applied to text. The resulting research program, which I call comperical linguistics, is something of a hybrid of linguistics and natural language processing, but (I believe) on much firmer methodological ground than either. I am now carrying out research in this area, AMA.

Comment author: edanm 12 January 2014 01:05:41PM 3 points [-]

Sure. I run a Software Dev Shop called Purple Bit, based in Tel Aviv. We specialise in building Python/Angular.js webapps, and have done consulting for a bunch of different companies, from startups to large businesses.

I'm very interested in business, especially Startups and Product Development. Many of my closest friends are running startups, I used to run a startup, and I work with and advise various startups, both technically and business-wise.

AMA, although I won't/can't necessarily answer everything.

Comment author: djm 15 January 2014 03:48:36AM 2 points [-]

In terms of custom software, what do you see as the next big thing that business will want? More specifically do you get the feeling that more people are wanting to move away from cloud services to locally managed applications?

Comment author: edanm 17 January 2014 11:10:12AM 3 points [-]

This really depends on the field. My experiences are probably only relevant to about 1% of software projects out there - there's a lot of software in the world.

That said, in terms of Cloud vs. Local - definitely not. Most large (and small!) companies we've worked with use AWS. We also highly recommend Heroku/AWS to all our customers as the easiest and least expensive way to get started on building a custom application.

Of course, there are a lot of places where cloud still doesn't make sense. We have one client who has custom software deployed in hospitals, where all of the infrastructure is of course local to their site, not in any kind of cloud. But for the majority of people who don't have such a use case, everyone understands that cloud makes everything easier.

Comment author: joaolkf 12 January 2014 12:53:05PM 3 points [-]

I'll answer anything that will not negatively affect my academic career or violate anyone's privacy but mine (I never felt like I had one). I waive my right not to answer anything else that could be useful to anyone. I'm finishing a master’s on the ethics of human enhancement in Brazil, and have just submitted an application for a doctorate at Oxford about moral enhancement.

Comment author: lmm 12 January 2014 12:25:43PM 3 points [-]

Sure, ask me if you want. Programmer/anime fan/LW reader and commenter.

Comment author: Anatoly_Vorobey 12 January 2014 10:10:10AM 3 points [-]

I work as a software engineer, married with two kids, live in Israel and blog mostly in Russian. AMA.

Comment author: Locaha 12 January 2014 06:26:21PM 0 points [-]

Why do you even waste time on lj-russians? The level of the discourse is lagging roughly two hundred years behind the western world.

Comment author: Anatoly_Vorobey 13 January 2014 11:24:34AM 5 points [-]

The quality of discourse in Russian LJ depends almost entirely on your immediate circle of readers. Incredible stupidity and mendacity happily coexist with fantastic blogs and interesting debates. The number and density of the latter have gone down over the years, but then again, so has blogging as a phenomenon.

It comes down to this: the main reason I blog on LJ in Russian is that I still have lots and lots of readers there who are smarter and more knowledgeable than me in the many different areas I'm interested in. There's no single place I can blog or write in English that would give me as much, and as useful, feedback (and that certainly includes LW).

Comment author: Viliam_Bur 12 January 2014 09:55:32AM 3 points [-]

Here I am.

Comment author: rationalnoodles 12 January 2014 07:43:41PM 3 points [-]

Why do you live in Slovakia?

Comment author: Viliam_Bur 12 January 2014 08:32:57PM 7 points [-]

I was born here, and I never lived anywhere else (longer than two weeks). I dislike travelling, and I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language). Generally, I dislike changes -- I should probably work on that, but this is where I am now.

I could also provide some rationalization... uhh, I have friends here, I am familiar with how the society works here, maybe I prefer being a fish in a smaller pond -- okay the last one is probably honest, too.

Comment author: [deleted] 13 January 2014 05:15:13PM *  6 points [-]

I feel uncomfortable speaking another language (it has a cognitive cost, so I feel I sound more stupid than I would in my language).

Speaking in a language I'm not fluent in (and in a cultural context I'm not familiar with) makes me feel like an idiot savant, because it destroys my social skills while keeping my abstract reasoning/mental arithmetic skills intact.

Comment author: RomeoStevens 12 January 2014 08:37:52AM 3 points [-]

I believe that the things I do at any given time are reasonable for me to do. AMA.

Comment author: Mark_Friedenbach 12 January 2014 06:50:19AM 3 points [-]

I don't think I'm known around here, but sure why not. Ask me anything.

Comment author: Kaj_Sotala 12 January 2014 05:39:16AM 3 points [-]

Sure, you can ask me anything.

Comment author: eurg 12 January 2014 04:35:21PM *  6 points [-]

Ask me almost anything. I'm very boring, but I have recovered from depression with the help of CBT + pills, have been a lurker since back in the OB days and know the orthodoxy here quite well, started to enjoy running (real barefoot if >7 degrees Celsius) after 29 years of no physical activity, am chairman of the local hackerspace (software dev myself, soon looking for a job again), and somehow established the acceptance of a vegan lifestyle in my conservative family (farmers).

Comment author: pinyaka 12 January 2014 08:37:16PM 4 points [-]

What steps did you take to start enjoying running?

Comment author: eurg 12 January 2014 11:21:13PM *  4 points [-]

This was surprisingly simple: I got myself to want to run, started running, and patted myself on the back every time I did it.

The want part was a bit of luck: I always thought I "should" do some sports, for physical and more importantly mental health reasons, and think that being able to do stuff is better than not being able to, ceteris paribus. So I was thinking about what kind of activity I might prefer.

I like my alone time (so team or pair sports are out), and I dislike spending money when I expect it to be wasted (like gym memberships, bikes, etc.). And I feel easily embarrassed and ashamed, and like to get myself at least somewhat up to speed on my own.

Running fits those side requirements. By chance I got hold of "Born to Run", and even after the first quarter of the book I thought that it would be great if I could just go out on a bad day and spend an hour free of shit, or if I could just reach some location a few kilometers away without any prep or machines or services.

I then decided that I would start running, and that my primary goal should be to like it and to be able to do it even in old age, should I get there. With the '*' that I give myself an easy way out in case of physical pain or unexpected hatred of the activity, but not for any weasel reasons.

I didn't start running for another one and a half years, because of my inner Schweinehund (the German "inner pig-dog" that keeps you on the couch). When my mood was getting slightly better (I was again able to do productive work), I started, with the "habit formation" mind-set. I also didn't tell anyone in the beginning. I think it helped that I already had some knowledge of how to train and run correctly, which especially in the beginning meant that I always felt like I could run further than I was "allowed" to.

And for good feedback: however it went, when I finished my training, I "said" to myself: I did good. I feel good. I feel better than before I started. I wrote every single run down on RunKeeper and Fitocracy, and always smiled at the "I'm awesome!" button of the latter. I'm also quite sure that having at least one new personal best every week helped. (Also, when you run barefoot, you get the "crazy badass" card for free, however slowly you run. I like this.)

Once started, such a feedback loop is quite powerful. When I once barely trained for a month, I was also surprised that getting back into regular running after that down-phase was so much easier. Now, after only seven months of training, I have gone from doing walk/run for 15 minutes to running 75 minutes, and have no problem with a cold-start 6% incline for the first two kilometers. I'm proud. It feels good (which is quite new to me).

Comment author: Anatoly_Vorobey 12 January 2014 05:30:43PM 3 points [-]

What's your motivation for veganism?

What do you enjoy most in software development, and why are you going to be looking for a job again soon? What's your dream SW dev job?

Comment author: eurg 12 January 2014 11:51:23PM 1 point [-]

What's your motivation for veganism?

Moral reasons. All else equal, I think that inflicting pain or death is bad, and that the ability to feel pain and the desire not to die are very widespread. I also think that the intensity of pain in simpler animals is still very strong (I don't think humans evolved large brains because pain was otherwise not strong enough). I also think that our ability to manage pain only slightly reduces the impact of our being able to suffer more strongly and with more variety. But I give, for sanity-check reasons, priority to the desires of "more complex" animals, like humans.

Due to our technical ability we can now produce supplements for micronutrients which are missing or insufficiently available in plants[1], so I see the health concerns as resolved. All the pain and death that I would inflict would then only be there for the greater enjoyment of food. Although I love the taste of meat and animal products, the comparative enjoyment is not big enough that I would kill for it. That I can enjoy plant-based foods is partly based on my not being afraid of using my kitchen, and on having a good vegan/vegetarian self-service restaurant 100m from my apartment.

And then there are the environmental reasons, the antibiotic use, etc. They count, and might even be sufficient on their own, but I'll only investigate them in case my other concerns/reasons are invalidated.

[1] There are vegan vitamin B12, vitamin D3, EPA/DHA (omega-3), and creatine supplements.

Comment author: Daniel_Burfoot 12 January 2014 08:04:28PM *  4 points [-]

I'm very boring.... somehow established the acceptance of a vegan lifestyle in my conservative familiy (farmers).

That's not boring, it is impressive and admirable. Well done.

Comment author: eurg 12 January 2014 11:25:05PM 1 point [-]

Thanks!

Comment author: ahbwramc 14 January 2014 12:18:07AM 5 points [-]

I didn't think I had anything particularly interesting to offer, but then it occurred to me that I have a relatively rare medical disorder: my body doesn't produce any testosterone naturally, so I have to have it administered by injection. As a result I went through puberty over the age range of ~16-19 years old. If you're curious feel free to AMA.

(also, bonus topic that just came to mind: every year I write/direct a Christmas play featuring all of my cousins, which is performed for the rest of the family on Christmas Eve. It's been going on for over 20 years and now has its own mythology, complete with anti-Santa. It gets more elaborate every year and now features filmed scenes, with multi-day shoots. This year the villain won, Christmas was cancelled for seven years and Santa became a bartender (I have a weird family). It's...kind of awesome? If you're looking for a fun holiday tradition to start AMA)

Comment author: XiXiDu 12 January 2014 02:24:27PM 6 points [-]

You can ask me anything.

Comment author: Apprentice 12 January 2014 04:13:42PM 6 points [-]

Okay, I'll bite. Do you think any part of what MIRI does is at all useful?

Comment author: XiXiDu 12 January 2014 04:57:08PM *  26 points [-]

Do you think any part of what MIRI does is at all useful?

It now seems like a somewhat valuable research organisation / think tank. Valuable because they now seem to output technical research that is receiving attention outside of this community. I also expect that they will force certain people to rethink their work in a positive way and raise awareness of existential risks. But there are enough caveats that I am not confident about this assessment (see below).

I never disagreed with the basic idea that research related to existential risk is underfunded. The issue is that MIRI's position is extreme.

Consider the following hypothetical and actual positions people take with respect to AI risk, in ascending order of perceived importance:

  1. Someone should actively think about the issue in their spare time.

  2. It wouldn’t be a waste of money if someone was paid to think about the issue.

  3. It would be good to have a periodic conference to evaluate the issue and reassess the risk every year.

  4. There should be a study group whose sole purpose is to think about the issue. All relevant researchers should be made aware of the issue.

  5. Relevant researchers should be actively cautious and think about the issue.

  6. There should be an academic task force that actively tries to tackle the issue.

  7. It should be actively tried to raise money to finance an academic task force to solve the issue.

  8. The general public should be made aware of the issue to gain public support.

  9. The issue is of utmost importance. Everyone should consider contributing money to a group trying to solve the issue.

  10. Relevant researchers that continue to work in their field, irrespective of any warnings, are actively endangering humanity.

  11. This is crunch time. This is crunch time for the entire human species. And it’s crunch time not just for us, it’s crunch time for the intergalactic civilization whose existence depends on us. Everyone should contribute all but their minimal living expenses in support of the issue.

Personally, most of the time, I alternate between position 3 and 4.

Some people associated with MIRI take positions that are even more extreme than position 11 and go as far as banning the discussion of outlandish thought experiments related to AI. I believe that to be crazy.

Extensive and baseless fear-mongering might very well cause MIRI's value to be overall negative.

Comment author: Locaha 12 January 2014 06:32:31PM 1 point [-]

How should I fight a basilisk?

Comment author: XiXiDu 12 January 2014 06:54:49PM *  17 points [-]

How should I fight a basilisk?

Every basilisk is different. My current personal basilisk pertains to measuring my blood pressure. I was recently hospitalized as a result of dangerously high blood pressure (220/120 mmHg). Since I left the hospital I have been advised to measure my blood pressure.

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Should I stop measuring my blood pressure because the knowledge hurts me, or should I measure anyway because knowing it means I can tell when it reaches a level dangerous enough to require a visit to the hospital?

Comment author: Lumifer 12 January 2014 07:28:36PM 26 points [-]

The problem I have is that measuring causes panic about the expected result, which increases the blood pressure. Then if the result turns out to be very high, as expected, the panic increases and the next measurement turns out even higher.

Measure every hour. Or every ten minutes. Your hormonal system can't sustain the panic state for long, plus seeing high values and realizing that you are not dead yet will desensitize you to these high values.

Comment author: fubarobfusco 12 January 2014 08:16:00PM 8 points [-]

As someone who's had both high blood pressure and excessive worrying — I second this advice.

Comment author: Dr_Manhattan 13 January 2014 02:23:14AM *  4 points [-]

I like the idea.

Here we go, things that might be interesting to people to ask about:

  • born in Kharkov, Ukraine, 1975, Jewish mother, Russian father

  • went to a great physics/math school there (for one year before moving to US), was rather average for that school but loved it. Scored 9th in the city's math contest for my age group largely due to getting lucky with geometry problems - I used to have a knack for them

  • moved to US

  • ended up in a religious high school in Seattle because I was used to having lots of Jewish friends from the math school

  • Became an orthodox Jew in high school

  • Went to a rabbinical seminary in New York

  • After 19 years, an accumulation of doubts regarding some theological issues, the Haitian disaster, and a lot of help from LW led me to quit religion

  • Mostly worked as a programmer for startups with the exception of Bloomberg, which was a big company; going back to startups (1st day at Palantir tomorrow)

  • self-taught enough machine learning/NLP to be useful as a specialist in this area

  • Married with 3 boys, the older one is a high-functioning autistic

  • Am pretty sure AI issues are important to worry about. MIRI and CFAR supporter

Comment author: Anatoly_Vorobey 13 January 2014 02:47:48PM 1 point [-]

How did your family handle your deconversion? Do you continue with the religious Jewish style of everyday life?

Do your kids speak Russian at all/fluently? If not, are you at all unhappy about that? What about Hebrew?

If you're comfortable discussing the HFA kid: at what age was he diagnosed? What kind of therapy did you consider/reject/apply? What are the most visible differences from neurotypical norm now?

Comment author: Dr_Manhattan 13 January 2014 09:50:19PM 1 point [-]

Hi Anatoly,

Initially it was a shock to my wife, but I took things very slowly as far as dropping practices. This helped a lot and basically I do whatever I want now (3.5 years later). Also transferred my kids to a good public school out of yeshiva. My wife remains nominally religious, it might take another 10 years :)

My kids don't speak Russian - my wife is American-born. I prefer English myself, so I'm not "unhappy" about them not speaking Russian in particular, although I'd prefer them to be bilingual in general. They read a bit of Hebrew.

I'm happy to discuss my HFA kid via PM.

Comment author: ITakeBets 03 February 2014 01:53:18AM 2 points [-]

I'm a 30-year-old first-year medical student on a full tuition scholarship. I was a super-forecaster in the Good Judgment Project. I plan to donate a kidney in June. I'm a married polyamorous woman.

Comment author: philh 13 January 2014 02:31:00PM *  2 points [-]

Why not.

I attended CFAR's May 2013 workshop. I was the main organizer of the London LW group during approximately Nov 2012-April 2013, and am still an occasional organizer of it. I have an undergraduate MMath. My day job is software; I'm the only full-time programmer on a team at Universal Pictures which is attempting to model the box office. AMAA.

Comment author: [deleted] 12 January 2014 08:01:38PM 2 points [-]

Insert self-deprecating observations about my knowledge, interestingness, etc. here - but I have been reading this site for a while. So, on the off chance anyone's interested: sure, why not, ask me anything.

Comment author: ChristianKl 12 January 2014 11:43:00AM 2 points [-]

In case anyone has questions for me, I'm happy to answer.

Comment author: whales 12 January 2014 08:06:16PM 4 points [-]

What is the philosophy behind your prolific commenting?

Comment author: ChristianKl 12 January 2014 09:22:34PM 6 points [-]

In general, online commenting is something I do out of habit. It has a higher return on time than completely passive media consumption such as watching TV, but it's not something I'd file under time spent for maximum returns.

I generally think that the shift to massive consumption of content via TV/radio in the 20th century was bad for the general discourse of ideas in society. Active engagement helps learning.

I also prefer it over chatting in venues such as IRC, because it provides deeper engagement with ideas and leaves more of a footprint. Created content is findable afterwards.

LessWrong is also a choice to keep me intellectually grounded. These days I spend plenty of time thinking in mental frameworks that are not based on reductionist materialism. I see value in being pretty flexible about changing the map I use to navigate the world, and I don't want to lose access to the intellectual way of thinking.

In total, however, I spend more time than optimal on LW, and frequently use it to procrastinate on some other task.

Comment author: drethelin 12 January 2014 04:33:29AM 2 points [-]

Why did you make this post Will? Wait I guess you didn't comment here volunteering to answer questions.

Anyway I guess I can answer questions but I'm pretty lazy and not very educated so ask at your own risk.

Comment author: Will_Newsome 12 January 2014 04:43:06AM 5 points [-]

You're asking me why? I did it 'cause I was bored.

I'll probably jump in if others do, otherwise it's too narcissistic as the creator of the post.

Comment author: IlyaShpitser 12 January 2014 07:12:20AM 9 points [-]

Will have you ever had an encounter with the divine?

Comment author: Leonhart 13 January 2014 10:38:11PM 4 points [-]

I upvoted you because I misread it as "Will you ever had" and thought you were making a joke about eternity, but now I suspect you just forgot the comma after "Will".

Keep the upvote, though, I want to know too.

Comment author: Vulture 12 January 2014 05:00:38AM *  4 points [-]

If anyone's interested (ha!), then sure, go ahead, ask me anything. (Of course I reserve the right not to answer if I think it would compromise my real-world identity, etc.)

N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.

Comment author: James_Miller 12 January 2014 06:45:33PM 4 points [-]

Why are you hiding your real identity? Don't you fear that in a few years programs, available to the general public, will be able to match writing patterns and identify you?

Comment author: Vulture 14 January 2014 02:46:16AM *  3 points [-]

I see it more as introducing a trivial inconvenience which keeps people I know in real life generally away from my (often frank) online postings. In some sense it's just psychological, since by nature I am a very reticent person and it makes me feel like I can jot out opinions and get feedback without having to agonize over it. (That's also why I'm not necessarily comfortable directly listing out personal details which could probably be inferred/collected from what I write.)

Comment author: Will_Newsome 13 January 2014 02:58:30AM 3 points [-]

N.B. I predict at ~75% that this thread will take off (i.e. get more than about 20 comments) iff Eliezer or another public figure decides to participate.

For what it's worth I posted this with my main account and not with a sockpuppet precisely to ensure the exclusion of Eliezer.

Comment author: Thomas 12 January 2014 11:36:37AM 3 points [-]

I am asking everybody here.

Do you have a plan of your own, to ignite the Singularity, the Intelligence explosion, or whatever you want to call it?

If so, when?

How?

Comment author: lmm 12 January 2014 12:24:02PM 4 points [-]

I have a plan. Posts here have convinced me that the singularity will most likely be a lose condition for most people. So I'll only activate my plan if I think other actors are getting close.

Comment author: djm 15 January 2014 01:53:18AM 2 points [-]

This post reminds me of Denethor saying the Ring was only to be used in utmost emergency at the bitter end

Comment author: MugaSofer 28 January 2014 05:43:21PM 1 point [-]

becomes wildly curious

Since you posted above that you're participating in the AMA, can you give some details of this plan? (Assuming step one isn't "tell people about this plan", in which case please don't end the world just because you precommitted to answering questions.)

Comment author: fubarobfusco 12 January 2014 08:05:46PM 2 points [-]

Sure, what the heck. Ask me stuff.

Professional stuff: I work in tech, but I've never worked as a developer — I have fifteen years of experience as a sysadmin and site reliability engineer. I seem to be unusually good at troubleshooting systems problems — which leaves me in the somewhat unfortunate position of being most satisfied with my job when all the shit is fucked up, which does not happen often. I've used about a dozen computer languages; these days I code mostly in Python and Go; for fun I occasionally try to learn more Haskell. I've occasionally tried teaching programming to novices, which is one incredible lesson in illusion of transparency, maybe even better than playing Zendo. I've also conducted around 200 technical interviews.

Personal stuff: I like cooking, but I don't stress about diet; I have the good fortune to prefer salad over dessert. I do container gardening. I've studied nine or ten (human) languages, but alas am only fluent in English; of those I've studied, the one I'd recommend as the most interesting is ASL. I'm polyamorous and in a settled long-term relationship. I get along pretty well with feminists — and think the stereotypes about feminists are as ridiculous as the stereotypes about libertarians. My Political Compass score floats around (1, –8) in the "weird libertarian" end of the pool. I play board games; I should probably play more Go, but am more likely to play more Magic. I was briefly a Less Wrong meetup organizer.

Comment author: David_Gerard 12 January 2014 10:23:12AM 2 points [-]

I am not interesting, but I've been here a few years.

Comment author: Apprentice 12 January 2014 10:48:54AM 4 points [-]

Back when you joined Wikipedia, in 2004, many articles on relatively basic subjects were quite deficient and easily improved by people with modest skills and knowledge. This enabled the cohort that joined then to learn a lot and gradually grow into better editors. This seems much more difficult today. Is this a problem and is there any way to fix it? Has something similar happened with LessWrong, where the whole thing was exciting and easy for beginners some years ago but is "boring and opaque" to beginners now?

Comment author: David_Gerard 12 January 2014 09:20:14PM 1 point [-]

My answer may be a bit generic :-)

Re: Wikipedia - This is pretty well-trodden ground, in terms of (a) people coming up with explanations (b) having little evidence as to which of them hold. There's all manner of obvious systemic problems with Wikipedia (maybe the easy stuff's been written, the community is frequently toxic, the community is particularly harsh to newbies, etc) but the odd thing is that the decline in editing observed since 2007 has also held for wikis that are much younger than English Wikipedia - which suggests an outside effect. We're hoping the Visual Editor helps, once it works well enough (at present it's at about the stage of quality I'd have expected; I can assure you that everyone involved fully understands that the Google+-like attempt to push everyone into using it was an utter disaster on almost every level). The Wikimedia Foundation is seriously interested in getting people involved, insofar as it can make that happen.

As for LessWrong ... it's interesting reading through every post on the site (not just the Sequences) from the beginning in chronological order - because then you get the comments. You can see some of the effect you describe. Basically, no-one had read the whole thing yet, 'cos it was just being written.

I'm not sure it was easier for beginners at all. Remember there was only "main" for the longest time - and it was very scary to write for (and still is). Right now you can write stuff in discussion, or in various open threads in discussion.

Comment author: Apprentice 12 January 2014 10:28:46PM 3 points [-]

Thank you. You brought up considerations I hadn't considered.

Comment author: Anatoly_Vorobey 12 January 2014 10:45:14AM 9 points [-]

Are there interesting reasons that some LW regulars feel disdain for RationalWiki, besides RW's unflattering opinion of LW/EY? Can you steelman that disdain into a short description of what's wrong with RW, from their point of view? (I'm asking as someone basically unfamiliar with RW).

Comment author: David_Gerard 12 January 2014 06:52:01PM *  8 points [-]

I think the main reason is that basically nobody in the wider world talks about LW, and RW is the only place that talks about LW even that much. And RW can't reasonably be called very interested in LW either (though many RW regulars find LW annoying when it comes to their attention). Also, we use the word "rational", which LW thinks of as its own - I think that's a big factor.

From my own perspective: RW has many problems. The name is a historical accident (and SkepticWiki.com/org is in the hands of a domainer). Mostly it hasn't enough people who can actually write. It's literally not run by anyone (same way Wikipedia isn't), so is not going to be fixed other than organically. Its good stuff is excellent and informative, but a lot of it isn't quite fit for referring outside fresh readers to.

It surprises me how popular it is (as in, I keep tripping over people using a particular page they like - Alexa 21,000 worldwide, 8800 US - and Snopes uses us a bit) - it turns out there's demand for something that can set out "no, actually, that's BS and here's why, point for point". Raising the sanity waterline does in fact also involve dredging the swamps and cleaning up toxic waste spills. Every time we have a fundraiser it finishes ridiculously quickly ('cos our expenses are literally a couple thousand dollars a year). We have readers who just love us.

On balance, though, I do think RW makes the world a better place rather than a worse one. (Or, of course, I wouldn't bother.)

FWIW, there's a current active discussion on What RW Is For, which I expect not to go anywhere much.

I'm not sure I could reasonably steelman LW opposition to RW as if either were a monolith and there were no crossover (which simply isn't the case). I will note that RW is piss-insignificant, and if you're spending any time whatsoever worrying what RW thinks of LW then you're wasting precious seconds.

(The discussion of RW on LW actually came up on the LW and RW Facebook groups this morning too.)

Comment author: Eugine_Nier 12 January 2014 09:35:07PM 0 points [-]

Because RW sucks at actually being rational. Rather, they seem to have confused being "rational" with supporting whatever they perceive to be the official scientific position. LW, by contrast, holds a number of contrarian positions, most notably on cryonics and the Singularity, where it is widely believed here that the mainstream position is likely wrong and the argument for it is just silly.

Comment author: David_Gerard 13 January 2014 09:44:15PM 0 points [-]

It is worth noting that Eugine's main concern is that RW has no patience with "race realism", as its proponents call it.

Comment author: ArisKatsaris 14 January 2014 11:05:09AM 0 points [-]

I'm downvoting you not because I disagree, but rather because the question was addressed to David, not you.

Comment author: blacktrance 12 January 2014 05:41:23AM *  2 points [-]

In the unlikely event that anyone is interested, sure, ask me anything.

Edit: Ethics are a particular interest of mine.

Comment author: Tuxedage 12 January 2014 08:28:13AM 15 points [-]

Would you rather fight one horse sized duck, or a hundred duck sized horses?

Comment author: blacktrance 12 January 2014 05:43:43PM 7 points [-]

Depends on the situation. Do I have to kill whatever I'm fighting, or do I just have to defend myself? If it's the former, the horse-sized duck, because duck-sized horses would be too good at running away and hiding. If it's the latter, then the duck-sized horses, because they'd be easier to scatter.

Comment author: Moss_Piglet 12 January 2014 03:47:53PM 4 points [-]

Is this a fist-fight or can blacktrance use weapons?

Comment author: shminux 13 January 2014 04:31:22AM -1 points [-]

.