Comment author: lhopegood 18 November 2012 11:16:19AM 0 points [-]

I've searched the site, but I can't find any lw meetups in my area, Tampa, FL. Does anyone know if there are and how to find them?

Comment author: AlexanderD 18 November 2012 11:49:45AM 0 points [-]

You can find upcoming meetups here.

Comment author: [deleted] 16 November 2012 04:51:21PM -1 points [-]

No one said it was genetic.

People always assume that acknowledging a trait in a person requires you to have an explanation for it. And then they note that all possible explanations are politically controversial, so they conclude that the trait does not actually exist. This is bad logic, as far as I can tell.

The fact is, race is a good predictor of things like civilization, intelligence, violence, etc. I offer no explanations.

In response to comment by [deleted] on Rationality Quotes November 2012
Comment author: AlexanderD 17 November 2012 09:01:46AM *  1 point [-]

What do you mean by "race?" I notice a lot of discussion below on this topic already, but the term is unclear to me, and I don't see how anyone can usefully disagree or agree without this information. Some people use "race" to indicate loose groupings based around skin color, whereas others mean much more strictly a specific genetic group.

Incidentally, there is no canonical "race," just generally-agreed upon loose labels that vary from person to person. Because of this, it is generally not useful for predicting anything, and should be avoided, I think. A "white person" from Sicily and a "white person" from Iceland do not have much more in common with each other than they might with a disparate other range of people, so it's not a meaningful grouping (except perhaps when speaking of historical things). It is wiser to be more exact.

There's the additional danger that you will be misunderstood, and that someone will (very reasonably) think that you are advocating simple-minded racism of a common sort. Saying "race is a good predictor of things like civilization, intelligence," etc. is a fairly specific sort of social code, and if you don't actually mean that "black people are dumb" or "Asians can't drive," (and I'm not saying that you necessarily do) then you should find another sort of phrasing.

Comment author: JaneQ 16 July 2012 02:07:09PM *  4 points [-]

I'm not sure why you think that such writings should convince a rational person that you have the relevant skill. If you were an art critic, even a very good one, that would not convince people you are a good artist.

This is not, in any way shape or form, the same skill as the ability to manage a nonprofit.

Indeed, but you are asking me to assume that the skills you display writing your articles are the same skill as the skills relevant to directing the AI effort.

edit: Furthermore, when it comes to works on rationality as 'applied math of optimization', the most obvious way to classify those writings is to look for some great success attributable to your writings - some highly successful businessmen saying how much the article on such and such fallacy helped them succeed, that sort of thing.

Comment author: AlexanderD 14 November 2012 07:27:10AM *  3 points [-]

It seems to me that the most obvious way to demonstrate the brilliance and excellent outcomes of the applied math of optimization would be to generate large sums of money, rather than seeking endorsements.

The Singularity Institute could begin this at no cost (beyond opportunity cost of staff time) by employing the techniques of rationality in a fake market, for example, if stock opportunities were the chosen venue. After a few months of fake profits, SI could set them up with $1,000. If that kept growing, then a larger investment could be considered.

This has been done, very recently. Someone on Overcoming Bias wrote of how they and some friends made about $500 each with a small investment by identifying an opportunity for arbitrage between the markets on InTrade and another prediction market, without any risk of loss.
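The arithmetic behind that kind of arbitrage is simple enough to sketch. Here is a minimal Python illustration; the prices are hypothetical (not the actual InTrade figures), and the function name is my own invention for the example:

```python
# Hypothetical sketch of arbitrage between two prediction markets
# quoting the same binary event at inconsistent prices.

def arbitrage_profit(price_yes_a, price_no_b, payout=1.0):
    """Profit per contract pair when buying YES on market A and NO on
    market B for the same binary event. Exactly one of the two sides
    pays out, so the return is `payout` regardless of the outcome."""
    cost = price_yes_a + price_no_b
    return payout - cost

# Example: market A sells YES at $0.55, market B sells NO at $0.40.
# Total cost is $0.95 per pair against a guaranteed $1.00 payout,
# locking in about $0.05 per pair whichever way the event resolves.
profit = arbitrage_profit(0.55, 0.40)
print(f"Guaranteed profit per contract pair: ${profit:.2f}")
```

The point is that whenever the YES price on one market plus the NO price on the other sums to less than the payout, the position is riskless (ignoring fees and counterparty risk, which in practice eat into the margin).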

Money can be made, according to proverb, by being faster, luckier, or smarter. It's impossible to create luck in the market, and in the era of microsecond purchases by Goldman Sachs it's very nearly impossible to be faster, but an organization (or perhaps associated organizations?) devoted to defeating internal biases and mathematically assessing the best choices in the world should be striving to be smarter.

While it seems very interesting and worthwhile to work on existential risk from UFAI directly, it seems like the smarter thing to do might be to devote a decade to making an immense pile of money for the institute and developing the associated infrastructure (hiring money managers, socking a bunch away into Berkshire Hathaway for safety, etc.). Then hire a thousand engineers and mathematicians. And what's more, you'll raise awareness of UFAI far more than you would have otherwise, plugging along as another $1-2m charity.

I'm sure this must have been addressed somewhere, of course - there is simply way too much written in too many places by too many smart people. But it is odd to me that SI's page on Strategic Insight doesn't have as #1: Become Rich. Maybe if someone notices this comment, they can point me to the argument against it?

Comment author: AlexanderD 14 November 2012 03:28:01AM 4 points [-]

Howdy. My name is Alexander. I've read a lot of LW, but only recently finally registered. I learned about LW from RationalWiki, where I am a mod. I have read most of the sequences, and many of them are insightful, although I am skeptical about the utility of such posts as the Twelve Virtues, which seeks to clothe a bit of good advice in the voluminous trappings of myth. HPMOR is also good. I don't anticipate engaging in much serious criticism of these things, however, because I have little experience in the sciences or mathematics, and often struggle to grasp things that appear easy for those accustomed to equations. The utility of Bayes' Theorem is one good example. I expect to ask questions, often.

My primary interests in LW are practical ones - discussions about AI and the singularity are interesting, but I am focused on improving my analytic ability and making good decisions.
