In response to comment by VAuroch on LessWrong 2.0
Comment author: satt 09 December 2015 12:24:13AM 0 points

Is that true? How do we know?

In response to comment by satt on LessWrong 2.0
Comment author: VAuroch 11 December 2015 02:05:42AM *  2 points

Well, no posts are deleted. If you look at Main and sort chronologically, you can go through and count articles per time period and what fraction of them are math-heavy (which should be easy to check with a once-over skim).

I think this is pretty much accepted wisdom in the rationalsphere. Several people, online and in person, have said things to the effect of "Tumblr is for socializing, private blogs are for commenting on whatever the blogger writes about, and LessWrong is for math-heavy things, quotes threads, and meetup scheduling." But if you doubt it, you can absolutely check.

In response to comment by VAuroch on LessWrong 2.0
Comment author: Viliam 06 December 2015 11:25:58PM *  6 points

Back when LW was more active, there was much lower math density in posts here.

Maybe because many people are not sure whether their topics are "LW-worthy", but when they do something mathematical they feel comfortable about posting it here. If I write my opinion about something, people will most likely disagree; but if I write an equation and solve it correctly, there is nothing to disagree with.

In response to comment by Viliam on LessWrong 2.0
Comment author: VAuroch 11 December 2015 01:59:16AM 0 points

Yes, I agree completely. Honestly, I thought this line of reasoning was common knowledge in the rationalsphere, since I think I've seen it discussed a couple of times on Tumblr and in person (IIRC, both in Portland and in the Bay Area).

Comment author: helldalgo 03 December 2015 08:24:23AM *  12 points

Less Wrong has a high barrier to entry if you're at all intimidated by math, idiosyncratic language, and the idea that ONE GUY has written most of its core content. I think the diaspora is good for mainstreaming the concepts on this site. I wish I had been an active member when it was still a catalyst for motion. The book's existence is good, and HPMoR will still bring people here. This site is important for archival and educational reasons.

Less Wrong might be in a good place to mature in several different directions. If other community members branch out in the way that CFAR and MIRI have, integrating the education-without-academia principles should be a priority in their organizations. It's not a stretch: Eliezer Yudkowsky does not have a degree, and he has done excellent work from a teaching point of view. He also seems to be respected among academics for his theory work (I'm not knowledgeable enough to vet that personally).

Teaching people to use effective signaling of their competence, without resorting to Dark Arts, might be useful too.

I'm in favor of EA, but ingres is not wrong that embedding those principles could be off-putting. I don't know their personal reasons for feeling that way, but I know many people feel that utility-maximizing about human lives is "icky." To be more charitable, they believe that human life has inherent sacred properties. They also believe that assigning mathematical values to people signals that you're "cold." If someone comes to Less Wrong with those ideals, they have to a) digest a LOT of LW philosophy to be okay with EA principles, or b) stick around despite their distaste for certain core principles.

In response to comment by helldalgo on LessWrong 2.0
Comment author: VAuroch 06 December 2015 09:52:16PM 3 points

Back when LW was more active, there was much lower math density in posts here.

Comment author: IlyaShpitser 16 September 2015 08:33:19PM *  1 point

It is? How much energy are you going to need to run detailed sims of 10^100 people?

Comment author: VAuroch 17 September 2015 01:36:22AM 0 points

Point, but not a hard one to get around.

There is a theoretical lower bound on the energy per irreversible computation (the Landauer limit), but it's extremely small, and the timescale the simulations would be run on isn't specified. Also, unless Scott Aaronson's speculative consciousness-requires-quantum-entanglement-decoherence theory of identity is true, there are ways to use reversible computing to get around the lower bound and achieve theoretically limitless computation, as long as you don't need it to output results. Positing that capability adds improbability, but not much on the scale we're talking about.
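The lower bound mentioned above is Landauer's limit. For a sense of scale, here is a back-of-the-envelope sketch; the per-person bit-erasure count is an illustrative assumption, not a figure from the thread:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K, CODATA value

def landauer_limit_joules(temperature_kelvin: float) -> float:
    """Minimum energy to irreversibly erase one bit at a given temperature."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At the cosmic microwave background temperature (~2.7 K), the cheapest
# possible irreversible computing:
e_bit = landauer_limit_joules(2.7)  # ~2.6e-23 J per bit erased

# Illustrative assumption: 1e25 bit erasures per simulated human lifetime.
erasures_per_life = 1e25
people = 1e100

# Total energy in joules, computed in log10 to avoid float overflow:
log10_joules = (math.log10(e_bit) + math.log10(erasures_per_life)
                + math.log10(people))
print(f"log10(total joules) = {log10_joules:.1f}")
```

Under these assumptions the total comes out around 10^102 J, far beyond anything physically available, which is the force of the objection; the reply above is that reversible computing sidesteps the bound precisely by avoiding bit erasures.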

Comment author: IlyaShpitser 16 September 2015 05:21:18PM 3 points

I like Scott Aaronson's approach for resolving paradoxes that seemingly violate intuitions -- see if the situation makes physical sense.

Like people bring up "blockhead," a big lookup table that can hold an intelligent conversation with you for [length of time], and wonder whether this has ramifications for the Turing test. But blockhead is not really physically realizable for reasonable lengths.
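A rough counting argument makes the non-realizability of Blockhead concrete. The conversation length and alphabet size below are illustrative assumptions:

```python
import math

# Suppose the lookup table must cover every possible conversation history
# of up to 4000 characters drawn from a 27-symbol alphabet (letters + space).
alphabet_size = 27
history_length = 4000

# Number of table entries, in log10 (the raw count won't fit in a float):
log10_entries = history_length * math.log10(alphabet_size)
print(f"log10(entries) = {log10_entries:.0f}")  # prints 5725
```

Even at one atom per entry, 10^5725 entries dwarfs the roughly 10^80 atoms in the observable universe, so the table cannot exist for conversations of any reasonable length.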

Similarly for creating 10^100 happy lives, how exactly would you go about doing that in our Universe?

Comment author: VAuroch 16 September 2015 07:07:41PM 1 point

It's easy if they have access to running detailed simulations, and while the probability that someone secretly has that ability is very low, it's not nearly as low as the probabilities Kaj mentioned here.

Comment author: Cyan 31 August 2015 02:22:41AM *  1 point

you're ignoring critical information

No, in practical terms it's negligible. There's a reason that double-blind trials are the gold standard -- it's because doctors are as prone to cognitive biases as anyone else.

Let me put it this way: recently a pair of doctors looked at the available evidence and concluded (foolishly!) that putting fecal bacteria in the brains of brain cancer patients was such a promising experimental treatment that they did an end-run around the ethics review process -- and after leaving that job under a cloud, one of them was still considered a "star free agent". Well, perhaps so -- but I think this little episode illustrates very well that a doctor's unsupported opinion about the efficacy of his or her novel experimental treatment isn't worth the shit s/he wants to place inside your skull.

In response to comment by Cyan on Beautiful Probability
Comment author: VAuroch 02 September 2015 12:27:40AM 1 point

Double-blind trials aren't the gold standard, they're the best available standard. They still fail to replicate far too often, because they don't remove bias (and I'm not just referring to publication bias). That's why, when considering how to interpret a study, you look at what scientific positions the experimenter has supported in the past, and then update away from them to compensate for the bias you have good reason to expect in their data.

In the example, past results suggest that, even if the trial was double-blind, someone who is committed to achieving a good result for the treatment will get more favorable data than some other experimenter with no involvement.

And that's on top of the simple fact that someone with an interest in a successful trial is more likely to use a directionally slanted stopping rule if they have doubts about the efficacy than if they are confident it will work, though that isn't directly at issue in Eliezer's example.

In response to comment by [deleted] on Beautiful Probability
Comment author: Eliezer_Yudkowsky 15 December 2013 03:14:56PM *  -1 points

It's worth revising your intuitions if you found it surprising that a fixed physical act assigns the same likelihood to the data regardless of researcher thoughts. It is indeed possible to see the mathematical result as "obvious at a glance".
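The claim here, that a fixed sequence of observations yields the same likelihood function whatever stopping rule the researcher had in mind, can be checked directly. A minimal sketch using the classic binomial vs. negative-binomial comparison (the 6-successes-in-10-trials numbers are illustrative):

```python
from math import comb

def binomial_likelihood(p: float) -> float:
    """Likelihood of 6 successes in 10 trials if n=10 was fixed in advance."""
    return comb(10, 6) * p**6 * (1 - p)**4

def negbinomial_likelihood(p: float) -> float:
    """Likelihood of the same data under the rule "stop at the 6th success"
    (trial 10 must be a success, with 5 successes among the first 9)."""
    return comb(9, 5) * p**6 * (1 - p)**4

# The two likelihoods differ only by a constant factor in p, so any prior
# yields the identical posterior under either stopping rule:
ratios = {p: binomial_likelihood(p) / negbinomial_likelihood(p)
          for p in (0.2, 0.5, 0.8)}
print(ratios)  # every ratio equals comb(10, 6) / comb(9, 5) = 5/3
```

The reply below is compatible with this: the proportionality holds for the data actually observed, while the choice of stopping rule may still carry separate evidence about the experimenter's beliefs.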

Comment author: VAuroch 30 August 2015 08:21:21PM 0 points

You can claim that it should have the same likelihood either way, but you have to put the discrepancy somewhere. Knowing the choice of stopping rule is evidence about the experimenter's state of knowledge about the efficacy. You can say that it should be treated as a separate piece of evidence, or that knowing about the stopping rule should change your prior, but if you don't bring it in somewhere, you're ignoring critical information.

Comment author: Desrtopa 17 December 2010 04:56:49AM 2 points

I've read all of them except the Tiffany Aching ones, and Night Watch is still my favorite.

I think it's better if you're already well familiar with the Night Watch books and the setting of Ankh Morpork before you read it though.

Comment author: VAuroch 05 August 2015 09:37:42PM 1 point

Read the Tiffany Aching ones. They're not just for children, but especially read them if you have or ever expect to have children. These are the stories on which baby rationalists ought to be raised.

Comment author: Pancho_Iba 16 July 2015 04:07:14PM 0 points

I'm sorry; re-reading my comment, I think it wasn't clear. I didn't intend to ask which is better, but to raise the following question: whenever I have to decide whether the rational or the reasonable should predominate, does that decision itself entail an a priori decision for one over the other, since each criterion might point towards itself? It just seemed fun to think about.

By the way, I'm curious about the Way to which you are referring with a capital W. Is that something like rationality commandments?

Comment author: VAuroch 20 July 2015 08:45:01AM *  1 point

It's something Eliezer talks about in some posts; I associate it mainly with The Twelve Virtues and this:

Some people, I suspect, may object that curiosity is an emotion and is therefore "not rational". I label an emotion as "not rational" if it rests on mistaken beliefs, or rather, on irrational epistemic conduct: "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm."

Comment author: TheAncientGeek 28 June 2015 04:15:12PM 2 points

I can see that private health insurance companies wouldn't want to take on bad risks. They don't in some US states, and are forced to by the federal government in others. I can also see that a profit-driven GovCo would behave like a giant private insurer. But actual in-GovCo governments are incentivised to provide universal access to healthcare, as they do to the law and education. I can vouch that where you have public healthcare, any hint that some group is excluded creates a stink. Health insurance provides better incentives than piecemeal provision, but public healthcare has better incentives than both.

Comment author: VAuroch 20 July 2015 08:39:54AM 0 points

in-GovCo

un-GovCo, I believe?
