
Comment author: katydee 24 March 2017 01:31:27AM 0 points [-]

Something like this also happened with Event Horizon, though the metamorphosis is not yet complete...

Comment author: Vaniver 26 March 2017 06:04:19PM 0 points [-]

It looks like it's finishing soon, though.

Comment author: Vaniver 23 March 2017 08:04:05AM 2 points [-]

The front page is being reconfigured. For the moment, you can get to a page with the sidebar by going through the "read the sequences" link (not great, and if you can read this, you probably didn't need this message).

Comment author: Alicorn 19 March 2017 06:44:43AM 2 points [-]

I think this was helped along substantially by personal acquaintance with the landlord and by the landlord being an HPMOR fan, which seems hard to replicate on purpose.

Comment author: Vaniver 21 March 2017 07:18:36PM 1 point [-]

My understanding is that most landlords want the friends of their good tenants to move in, because they'll likely be equally good tenants, and because living near friends makes people less likely to move out.

Comment author: Vaniver 19 March 2017 03:53:26AM 9 points [-]

Quoting myself from Facebook:

I think identifying the neoreactionaries with LessWrong is mostly incorrect. I know NRxers who found each other on LW and made side blogs, but that's because I know many more LWers than NRxers.

In the 2014 survey, only 2% of LWers called themselves neoreactionary, and I think that's dropped as they've mostly moved off LW to other explicitly neoreactionary sites that they set up. LW had a ban on discussing politics that meant there weren't any serious debates of NRx ideas. To the best of my knowledge, Moldbug didn't post on LW. It probably is the case that debiasing pushed some people in the alt-right direction, but it's still silly to claim it's the normal result.

Comment author: Alicorn 17 March 2017 01:46:56AM 19 points [-]

If you like this idea but have nothing much to say, please comment under this comment so there can be a record of interested parties.

Comment author: Vaniver 19 March 2017 03:20:03AM 2 points [-]

I'm interested, and have been thinking for a while of how to structure it and where to put it / what properties to focus on (in Berkeley, at least). I think there's a pretty strong chance we can build a rationalist village or two (or three or...).

Comment author: rlpowell 17 March 2017 03:19:14PM 2 points [-]

I'm interested in theory, but in practice I have an attachment to living in SF proper that may be hard to overcome.

I'll mention that in South Bay there are housing complexes that have multiple nearly-adjacent units in shared space, and it might work well to just pick such a complex and progressively have like-minded people take over more and more of it. Noticeably less awesome, but also noticeably easier.

Comment author: Vaniver 19 March 2017 03:13:44AM 3 points [-]

I believe this is what happened with Godric's Hollow--a four-unit building turned, one by one, into a four-unit rationalist building.

Comment author: MrMind 15 March 2017 02:06:53PM *  2 points [-]

"Strong opinion loosely held".
I didn't know about the experiment, I'm glad to hear that they decided to show it anyway.

Comment author: Vaniver 17 March 2017 03:41:45PM 1 point [-]

While "strong opinion weakly held" is more traditional and widespread, I prefer replacing "strong" with "clear," since it points more crispy at the relevant feature.

Comment author: Fluttershy 17 February 2017 06:19:24PM 7 points [-]

There's actually a noteworthy passage on how prediction markets could fail in one of Dominic's other recent blog posts that I've been wanting to get a second opinion on for a while:

NB. Something to ponder: a) hedge funds were betting heavily on the basis of private polling [for Brexit] and b) I know at least two ‘quant’ funds had accurate data (they had said throughout the last fortnight their data showed it between 50-50 and 52-48 for Leave and their last polls were just a point off), and therefore c) they, and others in a similar position, had a strong incentive to game betting markets to increase their chances of large gains from inside knowledge. If you know the probability of X happening is much higher than markets are pricing, partly because financial markets are looking at betting markets, then there is a strong incentive to use betting markets to send false signals and give competitors an inaccurate picture. I have no idea if this happened, and nobody even hinted to me that it had, but it is worth asking: given the huge rewards to be made and the relatively trivial amounts of money needed to distort betting markets, why would intelligent well-resourced agents not do this, and therefore how much confidence should we have in betting markets as accurate signals about political events with big effects on financial markets?

Comment author: Vaniver 18 February 2017 06:30:58PM 3 points [-]

Hanson's answer is that if you know someone is doing this, there's free money to pick up, and so the incentives push against this. (You don't have to specifically know that X is out to spike the market, you just have to look at the market and say "whoa, that price is off, I should trade.") There's still the problem of linking markets of different sizes--if the prediction market is less liquid and much smaller than the stock market, but the stock market is taking signals from the prediction market, then it makes sense to lose a million on the prediction market to gain a billion on the stock market.

(The solution there is to make the prediction market more liquid and bigger, which currently doesn't happen for regulatory reasons.)
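
As a rough back-of-the-envelope sketch of that size asymmetry (all of the numbers below are invented for illustration, not taken from any real market):

```python
# Toy illustration of the size asymmetry described above (all numbers invented).
# A manipulator deliberately loses money in a small, illiquid prediction market
# to push its price off the true probability, then profits from a much larger
# stock position taken while other traders are still reading the distorted signal.

manipulation_cost = 1_000_000     # expected loss from distorting the prediction market
stock_position = 500_000_000      # size of the correlated stock position
edge_from_false_signal = 0.02     # extra return captured while the signal is wrong (2%)

expected_stock_gain = stock_position * edge_from_false_signal
net_expected_profit = expected_stock_gain - manipulation_cost

print(f"Expected gain on the stock position: ${expected_stock_gain:,.0f}")
print(f"Cost of distorting the prediction market: ${manipulation_cost:,.0f}")
print(f"Net expected profit from manipulation: ${net_expected_profit:,.0f}")
# With these made-up numbers the manipulation pays off roughly 10:1, which is
# why the fix is a prediction market large and liquid enough that distorting
# it costs more than the resulting information edge is worth.
```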

Comment author: Lumifer 10 February 2017 05:56:02PM 9 points [-]

Let's define "stupidity" as "low IQ" where IQ is measured by some standard tests.

IQ is largely hereditary (~70%, IIRC) and polygenic. This means that attempting to "cure" it by anything short of major genetic engineering will have quite limited upside.

There are cases where IQ is depressed from its "natural" level (e.g. by exposure to lead) and these are fixable or preventable. However, if you're genetically stupid, drugs or behavioral changes won't help.

we could, for instance, sequence a lot of peoples' DNA, give them all IQ tests, and do a genome-wide association study, as a start.

We could, and people do. If you're interested in IQ research, look at Greg Cochran or James Thompson or Razib Khan, etc. etc.

We could see affirmative action for stupid people. Harvard would boast about how many stupid people it admitted.

That, ahem, is exactly what's happening already :-/

Comment author: Vaniver 10 February 2017 08:13:46PM 7 points [-]

IQ is largely hereditary (~70%, IIRC) and polygenic. This means that attempting to "cure" it by anything short of major genetic engineering will have quite limited upside.

It is worth pointing out that the heritability estimates are determined from current variation, and thus only weakly constrain what interventions might be possible but not yet known. (I do expect that if there were an easy way to make improvements here, we would know about it already, but it's very possible that there are hard ways to do this.)
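
A toy simulation of that point, with made-up numbers (not from the thread): a trait can be ~70% heritable in today's population and still respond fully to an intervention that isn't part of today's variation.

```python
import numpy as np

# Toy illustration (made-up numbers) of why heritability estimated from current
# variation says little about interventions that aren't part of that variation.
rng = np.random.default_rng(0)
n = 100_000

genetic = rng.normal(0.0, 1.0, n)        # genetic contribution to the trait
environment = rng.normal(0.0, 0.65, n)   # environmental variation as it exists today
trait = genetic + environment

# "Heritability" here: share of the current variance explained by the genetic term.
print(f"Heritability in the current population: {genetic.var() / trait.var():.2f}")  # ~0.70

# Apply an intervention that lies outside today's environmental variation
# (think uniform lead removal or iodine supplementation).
trait_after = trait + 10.0
print(f"Mean shift from the intervention: {trait_after.mean() - trait.mean():.1f}")

# The heritability estimate is ~0.7 both before and after, yet the intervention
# moved everyone: high heritability only constrains interventions that already
# show up in the observed variation.
```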

Comment author: Erfeyah 09 February 2017 11:40:18PM *  0 points [-]

OK, that makes sense. These approaches are trying to add considerations such as mine into the model. Not sure I see how that can solve the issue of "the truth missing from the hypothesis space". Or how accurate modelling of the agents can be achieved at our current level of understanding. Examples of real-world applications instead of abstract formulations would be really helpful, but I will study the article on Solomonoff induction.

Comment author: Vaniver 10 February 2017 12:10:45AM *  0 points [-]

Not sure I see how that can solve the issue of "the truth missing from the hypothesis space".

Solomonoff Induction contains every possible (computable) hypothesis; so long as you're in a computable universe (and have logical omniscience), the truth is in your hypothesis space.

But this is sort of the trivial solution, because while it's guaranteed to have the right answer, it had to bring in a truly staggering number of wrong answers to get it. It looks like what people do is notice when their models are being surprisingly bad, and then explicitly attempt to generate alternative models to expand their hypothesis space.

(You can actually do this in a principled statistical way; you can track, for example, whether or not you would have converged to the right answer by now if the true answer were in your hypothesis space, and call for a halt when it becomes sufficiently unlikely.)

Most of the immediate examples that jump to mind are mathematical, but those probably don't count as concrete. If you have a doctor trying to treat patients, they might suspect that if they actually had the right set of possible conditions, they would be able to apply a short flowchart to determine the correct treatment, apply it, and then the issues would be resolved. And so when building that flowchart (i.e. the hypothesis space of what conditions the patient might have), they'll notice when they find too many patients who aren't getting better, or when it's surprisingly difficult to classify patients.

If people with disease A cough and don't have headaches, and people with disease B have headaches and don't cough, on observing a patient who both coughs and has a headache the doctor might think "hmm, I probably need to make a new cluster" instead of "Ah, someone with both A and B."
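
A minimal sketch of that "surprisingly bad fit" check, using made-up symptom probabilities for the two-disease example (this is just one way such a check could look, not a procedure from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of the "notice when your models are surprisingly bad" idea,
# with made-up numbers for the doctor example above. The hypothesis space has
# disease A (cough, rarely headache) and disease B (headache, rarely cough);
# the true condition, which produces both symptoms, is missing from the space.
hypotheses = {
    "A": {"cough": 0.9, "headache": 0.1},
    "B": {"cough": 0.1, "headache": 0.9},
}

def log_lik(patient, model):
    """Log-probability of one patient's symptoms under one hypothesis."""
    return sum(np.log(p if patient[s] else 1 - p) for s, p in model.items())

def best_fit(patients):
    """Total log-likelihood when each patient is assigned the best-fitting hypothesis."""
    return sum(max(log_lik(pt, m) for m in hypotheses.values()) for pt in patients)

def sample(symptom_probs, n):
    """Draw n patients whose symptoms follow the given probabilities."""
    return [{s: rng.random() < p for s, p in symptom_probs.items()} for _ in range(n)]

n = 50
observed = sample({"cough": 0.85, "headache": 0.85}, n)   # the unmodeled condition
observed_fit = best_fit(observed)

# Reference: how well the best hypothesis typically fits when the truth really
# is in the space (a crude posterior-predictive-style check).
reference = [best_fit(sample(hypotheses["A" if rng.random() < 0.5 else "B"], n))
             for _ in range(500)]

frac_as_bad = np.mean([r <= observed_fit for r in reference])
print(f"Observed fit: {observed_fit:.1f}; typical fit if the space were adequate: {np.mean(reference):.1f}")
if frac_as_bad < 0.01:
    print("Fit is surprisingly bad -> expand the hypothesis space (add a new cluster).")
```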
