
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: morganism 29 August 2016 09:58:01PM 6 points [-]

Academic Torrents site, for large scale database transfers

http://academictorrents.com/

Comment author: James_Miller 22 August 2016 07:32:21PM 6 points [-]

Excellent. My personal theory is that the universe is fine-tuned both for life and for the Fermi paradox, with a late great filter. Across the multiverse, most lifeforms like us will exist in such universes, because without a great filter, intelligent life would quickly turn into something outside our reference class, use up all the resources of its universe, and so make that universe inhospitable to life in our reference class.

Comment author: Daniel_Burfoot 20 August 2016 02:30:57AM 6 points [-]

Note that DeepMind's two big successes (Atari and Go) come from scenarios that are perfectly simulable in a computer. That means they can generate an arbitrarily large number of data points to train their massive neural networks. Real world ML problems almost all have strict limitations on the amount of training data that is available.

Comment author: turchin 11 August 2016 09:27:28PM 6 points [-]

Are you signed up for cryonics?

Comment author: ChristianKl 08 August 2016 10:40:52AM *  6 points [-]

I don't think that gene drives are the best technology when you account for the politics, and indeed the post by Luke that you link doesn't use the term. SIT seems to be effective enough on a cost-benefit analysis and can be used in a very controlled way.

I looked into the issue a while ago and wrote an LW post about it. I think there's a fair chance that pushing for gene-drive mosquitoes to be released will mean that mosquito elimination happens later rather than sooner.

Oxitec today has the technology to produce "sterile" mosquitoes; sterile, for them, means that the mosquitoes die as larvae, so the offspring still compete in the early larval stage against other mosquito larvae. Oxitec also inserts color-coding genes to be able to prove that all of the offspring of the mosquitoes they produce really die, and that the inserted genes die out with them.

It would be worthwhile if people thought of mosquito eradication as being about releasing sterile mosquitoes, not about releasing mutant GMO mosquitoes.

If you actively want to do something on the PR front, I think it would be worthwhile to contact someone at Oxitec and ask them what they think would be helpful. Maybe invite someone from Oxitec onto your podcast and discuss the strategic implications with them?

It's worth remembering that the Obama administration was very effective at reducing mercury pollution but not very effective at reducing CO2 pollution. More public attention isn't always helpful for getting policy passed. In particular, a scenario where Trump, a Republican, advocates a gene drive might mean that you get opposition from liberals who are currently against GMOs, and that prevents real action from happening.

Comment author: Viliam 08 August 2016 10:38:57AM 4 points [-]

Make it obvious that we are targeting male mosquitoes, then the idea will also be acceptable for the other side.

Comment author: Elo 02 August 2016 10:30:06PM -1 points [-]

Note there are two seemingly conflicting strategies here. One is to solve part of the problem, focusing on it for a given time, trying to jumpstart a success spiral. But how would you differentiate this from bikeshedding? How can you be sure you're not focusing on irrelevant things?

You can't. That's where iterating comes in. Even if you spend 20 minutes fixing the leaking tap in the bathroom, which is maybe the most irrelevant problem, you've only lost 20 minutes.

I would also suggest Hamming-style questions: asking "what is the biggest problem?" a few times, not just listing out the things that are bugging me.

This strategy is not going to work the same way if you have a sharp deadline, i.e. an assignment due tomorrow. But in a problem situation with no deadlines, working on any one problem for a period of time will make progress on that one problem.

Comment author: gjm 02 August 2016 11:54:41AM *  -2 points [-]

Uncharitable paraphrase of previous post: "Here is someone who seems to have lots of problems. Please identify The Problem (tm)".

Uncharitable paraphrase of this post: "The Problem is that this person actually has not one problem but many problems."

So far as I can tell, introducing this business about "The Problem" adds nothing but obfuscation. You encountered someone who seemed to have lots of problems. You concluded that he has lots of problems. Amazing!

That doesn't mean that these posts are valueless! Even without the "The Problem (tm)" framing, readers might be tempted to look for a single underlying problem, and being cautioned away from that is useful; and your proposed way of dealing with this sort of big-pile-of-mutually-reinforcing-problems situation may well be a good one. And it's interesting to read about how you approached the situation. So I'm glad I read these. I just don't see what the business about "The Problem" was for.

Comment author: buybuydandavis 28 July 2016 12:38:28PM 6 points [-]

I considered downvoting. I opted instead to ignore after reading the preamble, which told me nothing but

I talked to a guy about solving his problem. I don't think it worked. Tell me if you have an interesting insight.

while taking 3 paragraphs to do it, with page after page after page of dialogue following.

I'm generally for letting anyone share what they have to share, but the tone of the preamble screams low-budget wannabe internet crank [The Problem (tm), among other issues of tone], and given that many have a higher signal-to-noise threshold than I do, I suspect the downvotes were responses to having their crank detectors pinged.

I struggled with responding to this, as I don't want to discourage people generally from sending in even half-baked material, but this kind of thing also makes people leave LessWrong.

Comment author: turchin 28 July 2016 10:17:41AM 6 points [-]

X-risk prevention groups are disproportionately concentrated in San Francisco and around London. They are more concentrated than the possible sources of risk themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks could be greatly reduced.

Comment author: RomeoStevens 28 July 2016 02:11:11AM *  6 points [-]

Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).

Rationalists often fail to compartmentalize, even when it would be highly useful.

Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot; also, just knowing about a bias is not enough to debias you).

Rationalists don't even lift bro.

Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)

Rationalists often presume that others are being stupidly irrational when really the other people just have significantly different values and/or operate largely in domains where there aren't strong reinforcement mechanisms for systematic thought or are stuck in a local maximum in an area where crossing a chasm is very costly.

Comment author: Elo 26 July 2016 01:10:04PM -2 points [-]

Can you describe how you came to these ideas as The Problem? What was your thought process, or your procedure for concluding these?

Comment author: Elo 24 July 2016 10:37:13PM -2 points [-]

Interesting idea; I'd like to see this guy going up and down the abstraction levels.

In the video he described one faith while talking about another. Obviously it's not easy to put both faiths on the line at the same time and compare them from inside one of them. I'd like to see this guy give away the tools to do that. Other than that, this was excellent.

Also worth noting: he could have been more generous in granting the position he was reasoning around, but not agreeing with, the status of a "true position" when describing it. He was very careful not to allow her position to be true while he was talking about it. Phrases like "you believe X" or "your belief is X" subtly impose on an unsuspecting person the perspective that their belief isn't 100% THE truth. But he was still very good at what he did.

Unfortunately, I don't think he used a convincing argument, because religion is not disprovable in the conventional scientific-evidence sense.

Comment author: The_Jaded_One 23 July 2016 12:58:00PM 6 points [-]

Just commenting to point out that I'm having a fabulous day, and have a very painless, enjoyable life. I struggle to even understand what suffering is, to be honest, so make a note of that any negative utilitarians who may be listening!

Comment author: Mac 22 July 2016 01:41:50PM *  6 points [-]

Foundational Research Institute promotes compromise with other value systems. See their work here, here, here, and the quoted section in the OP.

Rest easy, negative utilitarians aren't coming for you.

Comment author: tut 19 July 2016 03:38:59PM 6 points [-]

Many serious accidents are single car crashes (more than half here). And a lot of collisions that aren't officially your fault you can still avoid if you pay attention.

Comment author: Vaniver 18 July 2016 01:26:29PM 6 points [-]

Why is car safety not advertised as its main quality?

I think there's a major selection effect when safety comes into play; that is, there is a sizable fraction of drivers who do prioritize safety, they buy the cars that are reputed to be safest, and then those cars appear even safer in the statistics. (For example, there are some engineering differences about the Subaru compared to other cars, but the differences between Subaru drivers and the drivers of other cars are probably larger.)

Comment author: Algernoq 17 July 2016 07:33:03AM *  4 points [-]

A lot of great topics here.

Elon Musk has risked his entire fortune for you.

I am a huge fan of Elon Musk.

I suspect a big reason Mr. Musk tries to make the greatest possible positive difference for humanity is to reduce his risk of being murdered by established players. He’s pissed off a lot of powerful people, but provided benefits to many more.

He was forced out of controlling PayPal...and his vision for PayPal was to make it a “full-service financial institution”. He wanted to “convert the financial system from a series of heterogeneous insecure databases to one database.” This is threatening to the global elite in a way that going to Mars is not. Thus, he was forced out.

While he risked his personal fortune on SpaceX in 2013 when it looked like they would run out of money, he also had plans to sell a large interest in Tesla to Google in order to acquire funding for additional SpaceX launches. The story he tells about betting all of his assets with no recourse is true but under-emphasizes his backup plans for additional launches.

Paul has written 1, 2, 3, 4, 5 essays that touch on the topic of why cooperators tend to get rich in Silicon Valley rather than defectors.

I am a huge fan of Paul Graham as well.

However, his advice can mislead young technical people into thinking that a startup is going to make them rich. He says, “If you wanted to get rich, how would you do it? I think your best bet would be to start or join a startup....you can think of a startup as a way to compress your whole working life into a few years”. But in reality most startups fail and waste 5 years of the founders’ time in the process. Plus, for every founder, there are dozens of joiners/employees who work for below-market-rate salary plus a small percentage (often 0.1%) of the new company. Thus, his advice to “start or join a startup” is dangerously misleading because his target audience is young technical people without the political or sales skills to actually succeed.

Arguing for the opposition is Michael Church. I was intrigued by The 3-Ladder System of Social Class in the US (summary: college-educated technologists learned how to make wealth, not how to own it) and his VC-istan series (summary: Silicon Valley got colonized by MBA financiers who extracted all the goodwill). He alleges he deleted his blog archive and left the Valley because of threats from financiers affiliated with Paul Graham.

The Forbes 400 does not and cannot track privately-held wealth. Thus, the Forbes 400 only includes rich people who wanted to appear there.

Sean Parker on his giving philosophy...

That’s a marketing piece. Rich people often hide their wealth, but if they can’t they market themselves as hyper-successful good people, not as the driven perfectionist tyrants they often are. As the MacLeod Hierarchy explains, rank-and-file workers work best if they think they’re climbing a career ladder. Rich people climbed a different ladder, then hid it.

There’s a tradition of robber barons giving away vast wealth to manage their public image.

highly intelligent people tend to be interested in things other than sex,

I agree with this.

But it's hard to tease out exactly why.

The truth is not just politically incorrect; the truth is disgusting and offensive.

The Red Pill says it clearly: women want good genes and good resources. This means that men with good genes have the opportunity for lots of sex, men with good resources get strung along in sexless relationships, and men with neither good genes (looks) nor good resources (money/power) get nothing except shame. Women want the best they can get, so the top 10% of men have sex with the top 80% of women. Below-average men get nothing. Traditional marriage is illegal ("until death do us part" is legally unenforceable). Many women try to copy the behavior of the most attractive men -- promiscuous casual sex that they lie about -- and then settle down with someone much less attractive than their casual sex partners once they reach their late 20s. And for men: most men are weak, emotionally manipulated, directionless sheep.

This explains why rock band front-men, criminals, and selfish finance bros enjoy lots of sex despite their toxic behavior: they have looks plus power.

If you want more sexual partners, a good first step is to start working out

Yup. Will do. To be clear, the advice is to develop a ripped body that generates tingly feelings in women’s vaginas, not to “be a good person” or “make a positive difference” or even “have a job”. We deserve the coming global Apocalypse.

you'll likely feel less insecure about who your girlfriends have slept with.

So it was OK for them to lie to me? Fuck you. If that’s how it works, I’m gonna go date 5 young women at the same time by telling them lies, then blame them for being insecure when the truth comes out.

Do not mistake my righteous anger for “insecurity”. That’s what old women do when they’re trying to shame a man into marrying them.

you can work on cool stuff like decreasing existential risk.

I don’t see how this would benefit me. “Cool” is a fossilized instinct for what is powerful. I’d rather go get what will really make me powerful: a shitload of money, and skill at building alliances I control.

Research also seems to indicate that having lots of sexual partners is associated with decreased happiness.

For women, definitely. For men, the data is inconclusive.

Those dominant "defector" types are often rejected by women for longer-term relationships

Did you start acting like a non-dominant non-defector type, and get dumped soon after? Or did you become less attractive/successful/high-status over time? The struggle is real.

Psychological research, insofar as it relates to this topic, is more mixed.

The Art of Manliness is clickbait for unsuccessful beta males. A psychological survey is a hilariously inaccurate methodology for gaining insight into a biological response.

If there's a particular sort of defection you are concerned about, you can work to change society in order to disincentivize it.

I don’t have the power to make a difference.

For example, I want to make it illegal to lie about one's relationship status and sexual history. But I can't at my current power level. More specifically: I have met 3 different employees of a certain investment bank, all of whom were more sexually successful than me despite routinely lying to women to get sex. One tried to seduce my girlfriend at the time, despite having one "girlfriend" and several "casual sex partners" who were unaware of each other and with whom he implied possible long-term relationship potential. Another tried to set me up with a woman he was tired of seeing (she wanted a relationship; he just wanted sex) without disclosing that he had had sex with her. A third talked to me about startup projects while badly hiding the micro-expressions for "smugness/contempt" and "duping delight", and then predictably failed to follow up. I'm pretty sure at least 2 of these guys are into spreading genital herpes.

But I looked up the slander laws, and it's illegal for me to publicly shame these selfish men or their firm without recorded evidence (there's a presumption of innocence), and it's illegal for me to collect that evidence (two-party consent is required for recording, and they avoid using email for their games). Thus they win, and I lose, and their sex partners lose, and the people they do business with lose (their attitude carries over to their business dealings; it's all about wealth extraction). Check out Wall Street Playboys for a description of the "finance bro attitude", including advice about being attractive enough for someone in a relationship to want to cheat with. I've thought about creating some sort of morality-Leviathan app to track people's "trust graphs" over time and provide a permanent record of who burned whom, but this has the potential to go badly wrong.

Thus, I figured the best thing to do was to pull a Voldemort and go all-in on selfishness. Investing in other people and in relationships is a bad deal because the relationships inevitably end. Successful people only invest in relationships that they control. God is OK with animals violently killing each other all the time, with zero regard for suffering or fair play, and who am I to question God? The 48 Laws of Power (by Robert Greene) has some fascinating ideas about how to find common grounds to shit in.

This probably isn't the best example, but I've always wondered why we don't punish rapists (and maybe other criminals) with castration.

False convictions. “Cruel and unusual” punishments are illegal because they make people angry in a way just locking up the wrong person doesn’t. Can you imagine the rage of the Black Lives Matter movement if the US Government was routinely castrating rapists? Or, more accurately, rich people are against physical punishments because they can’t be undone (whereas a long prison sentence + enough expensive lawyers = freedom).

what we want to happen

Taking a step back here...I shouldn’t be this angry for this long with this little forward progress.

Better to choose a specific dream and make it happen.

Undisciplined flailing with no single clear goal has kept me middle-class for a decade.

Comment author: gwern 15 July 2016 05:15:00PM 6 points [-]

Their motivation is public education & outreach:

Vipul and I ultimately want to get a better sense of the value of a Wikipedia pageview (one way to measure the impact of content creation), and one way to do this is to understand how people are using Wikipedia. As we focus on getting more people to work on editing Wikipedia – thus causing more people to read the content we pay and help to create – it becomes more important to understand what people are doing on the site.

This is a topic I've wondered about myself, as I occasionally spend substantial amounts of time trying to improve Wikipedia articles; most recently GCTA, liability threshold model, result-blind peer review, missing heritability problem, Tominaga Nakamoto, & debunking urban legends (Rutherford, Kelvin, Lardner, bicycle face, Feynman IQ, MtGox). Even though I've been editing WP since 2004, it can be deeply frustrating (look at the barf all over the result-blind peer review right now) and I'm never sure if it's worth the time.

Results:

  • most people in LW/SSC/WP/the general college-educated SurveyMonkey population/Vipul Naik's social circles read WP regularly (with a skew toward reading WP a huge amount), have some preference for it in search engines, sometimes search on WP directly, and every few months are surprised by a gap in WP that could be filled (this sounds like a long tail of BLPs and foreign material, the latter being an area the English WP has always been weak in)
    • reading patterns in the total sample match aggregate page-view statistics fairly well; respondents tend to have read the most popular WP articles
  • they primarily skim articles; reading usage tends to be fairly superficial, with occasional use of citations or criticism sections but no more detailed evaluation of the page or editing process

At face value, this suggests that WP editing may not be that great a use of time. Most people do not read the articles carefully, and aggregate traffic suggests that the sort of niche topics I write on are not reaching all the people one might hope. For example, take the threshold model & GCTA traffic statistics: 74/day and 35/day respectively, or maybe 39k page views a year total. (Assuming, of course, that my contributions don't get butchered.) This is not a lot in general - I get more like 1k page views a day on gwern.net. A blog post making it to the front page of Hacker News frequently gets 20k+ page views within the first few days, for comparison.
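The back-of-the-envelope arithmetic above can be sketched in a few lines (the per-day figures are the estimates quoted in the comment, not measured totals):

```python
# Annual page-view estimate from the comment's daily figures.
threshold_daily = 74  # liability threshold model article, page views/day
gcta_daily = 35       # GCTA article, page views/day

annual_views = (threshold_daily + gcta_daily) * 365
print(annual_views)  # 39785, i.e. roughly 39k page views a year
```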

I interpret this as implying that a case for WP editing can't be made based on just the traffic numbers. I may get 1k page views a day, but relatively little of that is to pages using GCTA or threshold models even in passing. It may be that writing those articles is highly effective because when someone does need to know about GCTA, they'll look it up on WP and read it carefully (even though they don't read most WP pages carefully), and over the years, it'll have a positive effect on the world that way. This is harder to quantify in a survey, since people will hardly remember what changed their beliefs (indeed, it sounds like most people find it hard to remember how they use WP at all, it's almost like asking how people use Google searches - it's so engrained).

My belief is that WP editing can have long-term effects like that, based primarily on my experiences editing Neon Genesis Evangelion, tracking down references, and figuring out the historical context. I noticed that online discussions of NGE increasingly took on a much better-informed hue, and in particular, the misguided obsession with the Christian & Kabbalistic symbolism has died down a great deal, in part due to documenting staff quotes denying that the symbolism was important. On the downside, if you look through the edit history, you can see that a lot of terrific (and impeccably sourced) material I added to the article has been deleted over the years. So YMMV. Presumably working on scientific topics will be less risky.

Comment author: Viliam 14 July 2016 10:55:24AM 6 points [-]

past = outside view

For example, if in the past people have repeatedly suggested a plan to create a paradise on Earth, and the plan, when realized, repeatedly ended with bloodshed and poverty, and now someone suggests the same plan again... I guess that's a reason to suspect it probably wouldn't end well. At the very least, the proponent should explain why exactly the previous instances have failed and what exactly they are planning to do differently today to avoid that specific failure.

But there is a difference between using the past as an outside view, i.e. conservatism; and worshipping the "past as my modern mind imagines it", i.e. neoconservatism / neoreaction. The latter is, ironically, in some aspects similar to the progressives who are worshipping the fictional future -- similar approach to modelling society, different aesthetics (or as you called it "positive / negative affiliation with the passage of time").

Comment author: James_Miller 12 July 2016 07:01:57PM 6 points [-]
In response to comment by Riothamus on Zombies Redacted
Comment author: gjm 08 July 2016 08:56:49PM -2 points [-]

Do they lack qualia? How accurate are these simulations meant to be?

In response to comment by kilobug on Zombies Redacted
Comment author: gjm 08 July 2016 12:55:05PM -2 points [-]

My impression was that this was pretty much timujin's point: saying "imagine something atom-for-atom identical to you but with entirely different subjective experience" is like saying "imagine something atom-for-atom identical to an Oreo except that it weighs 100 tons etc.": it only seems imaginable as long as you aren't thinking about it too carefully.

In response to Zombies Redacted
Comment author: timujin 06 July 2016 02:40:54PM 5 points [-]

This argument is not going to win over the hearts and minds of its intended audience. It's clearly written for a reductionist reader, one who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like. But such a person would not have any problem with p-zombies to begin with.

If you want to persuade someone who's been persuaded by Chalmers, you should debunk the argument itself, not drag it onto your own epistemological ground where it is obviously absurd. You and the Chalmers supporter are not on the same epistemological ground, and probably never will be.

Here's how you would do that.

---- START ARGUMENT ----

Is it conceivable that the 5789312365453423234th digit of Pi is 7?

No, don't look it up just yet. Is it conceivable to you, right now, that it's 7?

For me, yes, it is. If I look it up, and it turns out to be 7, I would not be surprised at all. It's a perfectly reasonable outcome, with predictable consequences. It's not that hard for me to imagine me running a program that calculates and prints the number, and it printing out 7.

Yet, until you look it up, you don't really know whether it's 7 or not. It could be 5. That would also be a reasonable, non-surprising, and conceivable outcome.

Yet at least one of those outcomes is logically impossible. The exact value of Pi is logically determined, and, if you believe that purely logical conclusions apply universally, then one of those values of 5789312365453423234th digit of Pi is universally impossible.

And yet both are conceivable.

So logical impossibility does not imply inconceivability. This is logically equivalent to saying "conceivability does not imply logical possibility" (A->B => ~B->~A).

If conceivability does not imply logical possibility, then even if you can imagine a Zombie world, it does not mean that the Zombie world is logically possible. It may be the case that the Zombie world is logically impossible. Chalmers's argument does not rule that out. For example, it may be the case that certain atomic configurations necessarily imply consciousness. Or it may be any other case of logical impossibility. What matters is that consciousness as an additional nonphysical entity is not implied by its conceivability.

---- END ARGUMENT ----
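As an aside, the determinacy of pi's digits is easy to verify computationally for small indices; the 5789312365453423234th digit is far out of reach, but the principle is identical. Here is a minimal sketch using Machin's formula with integer arithmetic (`pi_digit` and `arctan_inv` are names chosen purely for illustration):

```python
def arctan_inv(x, digits):
    """arctan(1/x), scaled by 10**(digits + 10), via the Taylor series."""
    scale = 10 ** (digits + 10)  # 10 guard digits against rounding error
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x  # next term's magnitude: 1/x, 1/x^3, 1/x^5, ...
        n += 2
        sign = -sign
        total += sign * term // n
    return total

def pi_digit(k):
    """Return the k-th decimal digit of pi after the decimal point."""
    digits = k + 5
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi_scaled = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return int(str(pi_scaled)[k])  # str(pi_scaled) starts "31415926535..."

print(pi_digit(13))  # the 13th digit of pi is, in fact, 7
```

Before running it, both 7 and 5 are conceivable answers for, say, the 13th digit; afterwards, exactly one remains logically possible, which is the asymmetry the argument above turns on.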

Comment author: Elo 06 July 2016 11:31:22AM -2 points [-]

I (and probably Jayson) assumed you were the author, probably because of the start:

"A bit over four years ago I wrote a...

Comment author: Elo 05 July 2016 10:59:52PM -2 points [-]

It's bugging me to read this sentence so I propose a rewrite:

I don't know about you, but for me personally, it has always had the ring of a challenge. I like to think that I have a potent imagination, and non-negligible experience in the affairs of humans. Therefore, I am certainly able to imagine how you feel, am I not?

::

For me, that phrase has the ring of a challenge. I have a potent imagination, and non-negligible experience in the affairs of humans. I am certainly able to imagine how you feel, am I not?

(after looking at how I wanted to edit this sentence it would appear that I wanted to remove some/all of the hedge and crutch statements from the point of the sentence)


I would also add:

You might have heard this or similar from someone

and later

also with a less literal interpretation of her statement.

You appear to have switched genders on your imaginary friend. I would suggest you pick one gender, or a gender-neutral pronoun, for the whole piece.

Other than that, neat little concept. (Even if I do agree with Gram's alternate explanation of the event - I think you might both be right)

Comment author: GraceFu 04 July 2016 09:08:20AM 6 points [-]

Is there a version of the Sequences geared towards Instrumental Rationality? I can find (really) small pieces such as the 5 Second Level LW post and intelligence.org's Rationality Checklist, but can't find any overarching course or detailed guide to actually improving instrumental rationality.

Comment author: Elo 03 July 2016 11:10:51PM -2 points [-]

I like that this is a collection of graphs. Visual representation is excellent.

In response to Zombies Redacted
Comment author: UmamiSalami 03 July 2016 08:08:09AM *  4 points [-]

This was longer than it needed to be, and in my opinion, somewhat mistaken.

The zombie argument is not an argument for epiphenomenalism, it's an argument against physicalism. It doesn't assume that interactionist dualism is false, regardless of the fact that Chalmers happens to be an epiphenomenalist.

Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Maybe because interactionism violates the laws of physics and is somewhat at odds with everything we (think we) know about cognition. There may be other arguments as well. It has mostly fallen out of favor. I don't know the specific reasons why Chalmers rejects it.

Once you see the collision between the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive. It starts requiring you to postulate strange things.

In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives. Maybe those thoughts and discussions help us with being altruistic, or maybe they're a necessary part of our own activity. Maybe the illusion of interactionism is necessary for us to have complex cognition and decisionmaking.

Also in the epiphenomenalist view, psychophysical laws relate mental states to neurophysical aspects of our cognition. So for some reason there is a relation between acting/thinking of pain, and mental states which are painful. It's not arbitrary or coincidental because the mental reaction to pain (dislike/avoid) is a mirror of the physical reaction to pain (express dislike/do things to avoid it).

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

Chalmers isn't denying that the zombie Chalmers would write that stuff down. He's denying that its beliefs would be justified. Maybe there's a version of me in a parallel universe that doesn't know anything about philosophy but is forced to type certain combinations of letters at gunpoint - that doesn't mean that I don't have reasons to believe the same things about philosophy in this universe.

Comment author: ignoranceprior 01 July 2016 08:42:22PM 6 points [-]

Source, since you didn't link it.

Comment author: ike 30 June 2016 12:51:55PM *  6 points [-]

Seeing prices go up doesn't mean there's demand for them. If demand is low, then this isn't a market failure, it can make perfect sense that products with low demand don't get large companies producing them and so the prices don't reflect economies of scale.

So let's look at the actual sales. I've sold a bit on Amazon and know some tools that can give you good estimates of how many sales an item has had.

https://www.amazon.com/gp/product/B00REF5PM2/, the generic currently selling for ~$30, is currently ranked 246,691 in Health & Personal Care (archived: https://archive.is/5RRNF) (this number fluctuates, so might be different when you look). According to http://junglescout.com/estimator , such a rank sells less than 5 a month. Other tools I've checked have similar results, under 5 a month.

https://www.amazon.com/dp/B000GCECRO/, the Tum brand, is ranked 33,992 in Health & Personal Care (https://archive.is/cP0k4). Junglescout estimates 122 sales a month. Another source I checked says 91 a month, so 100 a month is probably close. Now, maybe it would sell more if the price was lower? Sales rank when the price was lower seems to have been in the 10-20,000 range, or 200-300 sales a month.

Let's say there are 2000 sales a year: if you offer it at $5 and make $2 profit on each, you're making $4,000 a year. That doesn't seem enough for a large company to bother with. (You should also account for sales at other locations, though. But you pointed to Amazon as proof of demand, when it can simply be proof of lack of supply and lackluster demand.)
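
The back-of-envelope estimate above can be sketched in a few lines of Python (the sales volume and per-unit margin are the rough rank-based guesses from this comment, not real figures):

```python
def annual_profit(units_per_year, margin_per_unit, fixed_costs=0):
    """Rough annual profit from a single product line."""
    return units_per_year * margin_per_unit - fixed_costs

# ~100-300 sales/month at the lower price point, call it 2000/year,
# with ~$2 margin at a $5 price:
print(annual_profit(2000, 2))  # 4000
```

Even before accounting for listing fees, compliance, and inventory costs (the `fixed_costs` parameter is a placeholder for those), a few thousand dollars a year is below what most large companies would consider worth a product line.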

I don't actually know what thresholds companies tend to have for keeping products alive. If you have better information on that it would be helpful.

Comment author: Elo 25 June 2016 09:57:43PM *  -2 points [-]
Comment author: philh 23 June 2016 02:16:49PM 6 points [-]

Ben Hoffman, Solve your problems by fantasizing

The problem with most goal-driven plans is that most goals are fake, and so are most plans. One way to fix this is to fantasize.

Comment author: Lumifer 23 June 2016 02:16:11AM 6 points [-]

Just because something is brand new, and does not have laws or regulations relating to it right now does not mean that people can simply do whatever they want.

Well, it's a bit more complicated than that.

When people say that some things (like the blockchain) are outside of the law, they don't usually mean that no one can be sued or that the courts won't try to enforce judgements. What they mean is that those things are hard for the law to reach. A court might issue a judgement but it won't be able to enforce it. The general idea is that enforcement is so difficult and expensive that it's not worth it.

For a simple example, consider piracy (of the IP kind). It is very much illegal and... so what? I can still go online and download the latest movie in a few minutes. It's not that the FBI can't bust me if it really wants to. It can. But it's inefficient and cost-prohibitive.

As to smart contracts, that's just a misnomer. They are not contracts. They are deterministic mechanisms, set up for a particular purpose. Bespoke machines, if you wish. A contract in law implies a meeting of the minds, which these algorithms cannot provide. Instead, they offer a guarantee that if you do A, B happens.

They are more akin to vending machines: you feed in some money and you get the item. It's not a contract between you and the vendor -- it's just a machine which you used.

Comment author: Elo 22 June 2016 01:51:56AM *  -2 points [-]

Saying all this without actually seeing the app

I have been trying out systems for a while now. So has Regex and various others.

The thing I have noticed introspectively, and which you mention here without clearly naming it, is the iterative development of systems. That is: you started on paper, moved to a spreadsheet, and later moved to an app (with probably several versions of each).

What makes the final version workable, despite the complexity of starting a new system (and taking that leap), is partly that you lived through the earlier versions and know why, how, and what for the different factors changed to improve the system (such is the nature of iterative system development).

HOWEVER, by publishing only your final version you publish the (probably very good) system that you are used to, and not the intermediate steps that made it possible and necessary to get here. Every latest system developed by various people (Productivity Ninja, GTD, FVP, to name a few) has good features and functionality that are neat in themselves, but without the iterative stages you don't really give people the same final system that you have come to be accustomed to.

What I am saying is; I'd like to see the whole process to how you got here in the hopes of making sense of your successes/failures of systems to do what you want them to do and following that be better able to apply it to my own systems.

On top of that; a dream app would be one that starts as a simple list (like you did), and gradually offers you to add complexity to your system (like you ended up making). But in such a way as to let people progress to the final version when they need//want it.

I will look at the app and get back to you.

Comment author: Jayson_Virissimo 19 June 2016 06:31:27PM 6 points [-]

If by graphics, you mean graphical user interfaces, then shoes is a pretty easy way to get started. If by graphics, you mean drawing shapes and designing your own games, Gosu might be a good fit. Here is a book for learning programming by creating games with Ruby and Gosu.

In response to Crazy Ideas Thread
Comment author: Daniel_Burfoot 19 June 2016 05:33:35PM *  6 points [-]

Large scale heat management: controlling or influencing temperature flows on a geographic (regional or global) scale. Heat management is one of the deep fundamental problems in life and engineering, but humans have never tried to do anything smarter or more ambitious in this area than standard HVAC stuff.

Humans like moderate temperatures, say 55-75 F, but we spend quite a lot of our time in discomfort or even pain because the actual temperature is outside this range. But the problem isn't that heat (or cold) is in short supply, it's just distributed unevenly. This fact hit home for me when I was riding in an Uber because terrible winter weather knocked out Boston's subway system, and the driver told me she had just returned from a trip to Brazil, which was mostly unpleasant because the heat made it impossible to do anything outside.

Here are some options:

  • Heat banking: store heat during the summer in large reservoirs of water. Release it during the winter.
  • Heat trade: hot regions send heat to cold regions; both sides are happier.
  • Heat sequestration: there are huge pools of cold water about 1000m underneath the ocean surface, when your city is too hot, send some heat down there.

I'm actually quite confident some version of this idea will work, because there are two vastly powerful forces working in its favor:

  • Economics: in heat trade, both parties feel that they are exchanging a good for a bad. This kind of exchange almost never happens, most normal trade relies on the parties valuing something at a different magnitude of positive or negative value, but with the same sign.
  • The Great Second Law: Humans suffer from temperature unevenness but Nature actually prefers temperatures to equilibrate. We just have to help Nature do what it already wants to do.

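
For a sense of scale on the heat-banking option, a quick back-of-envelope calculation (the 50 GJ winter heating demand and the 30 K usable temperature swing are illustrative assumptions, not engineering figures):

```python
# Q = m * c * dT  ->  m = Q / (c * dT)
c_water = 4186          # specific heat of water, J/(kg*K)
delta_t = 30            # usable temperature swing of the reservoir, K
winter_demand = 50e9    # assumed heating demand of one home, J (~50 GJ)

mass_kg = winter_demand / (c_water * delta_t)
volume_m3 = mass_kg / 1000  # water density ~1000 kg/m^3
print(round(volume_m3))     # ~398 m^3, roughly a 7 m cube per household
```

A few hundred cubic meters of water per household is large but not absurd, which is consistent with the claim that some version of seasonal heat banking could work.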
In response to comment by Lumifer on Buying happiness
Comment author: gjm 17 June 2016 10:45:17PM 0 points [-]

I took his meaning to be: it's hard to figure out what causes what, but if X is associated with greater happiness and is easy to do, I might as well do X even though it might turn out it's a consequence rather than a cause.

In response to comment by Lumifer on Buying happiness
Comment author: gjm 17 June 2016 10:42:38PM -1 points [-]

to at least Jesus

I bet (p=0.8) there's something along those lines predating Jesus in the Buddhist tradition.

Yes, different people will be made happy by different things. The studies DGW cite were mostly done by other people rather than by DGW. I share your suspicion that too many of the subjects were young healthy well-off well-educated Western psychology students or the like.

Comment author: gjm 17 June 2016 02:42:24PM -2 points [-]

I think it's meant as a rationality test for those who say voting is pointless. If you consider voting pointless but value being 10c richer at least a little bit, then on Liron's premises you should maybe be willing to vote whichever way gets you the 10c.

(I am unconvinced, for reasons I've given in another comment on the OP. Also because people may reasonably value voting for "internal" reasons: it makes them feel like fuller participants in their society, or something.)

Comment author: NancyLebovitz 13 June 2016 04:13:59PM 6 points [-]

https://www.newscientist.com/article/dn25458-blood-of-worlds-oldest-woman-hints-at-limits-of-life

Possible clues about limits of lifespan but also how to possibly get around those limits and how people might stay healthier longer.

Comment author: ete 12 June 2016 07:25:45PM *  6 points [-]

Excellent post. Agree with all major points.

I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportional number of critics in the stands grew ever larger.

I'd think it was primarily not the proportional number of critics, but lower quality of criticism and great users getting tired of replying to/downvoting it. Most of the old crowd of lesswrongers welcomed well thought out criticism, but when people on the other side of an inferential distance gap try to imitate those high-criticism norms it is annoying to deal with, so they end up leaving. Especially if the lower quality users are loud and more willing to use downvotes as punishment for things they don't understand.

Comment author: Lumifer 10 June 2016 03:38:26PM 6 points [-]

Oh, dear. Psychology, can you please get your act together and stop being an embarrassing mess?

Researchers have fixed a number of papers after mistakenly reporting that people who hold conservative political beliefs are more likely to exhibit traits associated with psychoticism, such as authoritarianism and tough-mindedness.

As one of the notices specifies, now it appears that liberal political beliefs are linked with psychoticism. That paper also swapped ideologies when reporting on people higher in neuroticism and social desirability (falsely claiming that you have socially desirable qualities); the original paper said those traits are linked with liberal beliefs, but they are more common among people with conservative values.

Comment author: Viliam 10 June 2016 02:55:22PM *  6 points [-]

Just some anecdotal evidence:

For some people, jobs provide a scaffolding of their daily time. It's what makes them wake up in the morning. Take the job away from these people and here is what happens: wake up at 10AM, slowly make a breakfast, eat it, watch the TV a bit, make a lunch and eat it, now it's 3PM, feels like too late to do anything meaningful, so just watch the TV, make a dinner and eat it, watch some more TV, go to sleep, repeat over and over again. It seems like people without jobs should logically have more free time, but some of them actually manage to achieve less while unemployed.

For some old people, jobs provide social opportunity that is hard to replace when they retire. Suddenly, instead of spending their day interacting with a dozen people they know, they spend their day alone at home, complaining that their children don't visit them more often.

Without routine, some people's lives "fall apart". Humans are not automatically strategic. In theory, a life without job (with basic income, or retirement money, or just savings that allow you to take a really long vacation) should be better, but in practice sometimes it isn't. Think about all the superstimuli around us: for some people, their job may be the only thing that gets them offline most of the days.

(Disclaimer: This all said, I would still prefer to have a basic income and get extra 8 hours of freedom every day. There is a risk my life would become less amazing than I imagine it, but I would gladly take the risk.)

There is a possible counter-argument that the effects of losing the job might be only temporary. Like, if people have been conditioned for years to organize their lives around their job (and school), of course it gets them out of balance when the job suddenly disappears, because they never had the opportunity to learn how to organize their lives for themselves. But given enough time, they might develop the skill.

Comment author: casebash 10 June 2016 04:14:50AM 6 points [-]

Great to see some collaboration occurring between these institutes.

Comment author: OrphanWilde 07 June 2016 02:23:14PM 4 points [-]

You've encrypted a brain, and maybe salted it a bit to boot. You're still running the brain's "consciousness" program, it's just encrypted, and the brain is still experiencing exactly the same things, on account of it is running exactly the same program it would otherwise. The fact that the brain is cryptographically entangled with other data doesn't make the brain not exist.

Comment author: Lumifer 02 June 2016 02:42:26PM *  5 points [-]

Nope. You continue to be wrong.

You are mostly familiar with Graeco-Roman mythology and less familiar with the literature of that period. But that literature certainly existed, and I don't know on what basis you make assertions about "most of their stories".

Take Apuleius' Golden Ass -- a story about the misadventures of a man who (spoilers!) manages to turn himself into a donkey. You think most people took it as true?

In any case, which characters are fictional is irrelevant to the original issue of spending empathy. What matters is whether the character you're feeling empathy for is someone you could meet in real life and form a relationship with. If the story, for example, concerns some illustrious ancestors who might well have been real, you're still "wasting" empathy on them because in the zero-sum game postulated by the OP this takes away from the empathy available for you to feel for your neighbours.

Comment author: Viliam 30 May 2016 08:51:24AM 5 points [-]

It's almost three months since a mysterious benefactor offered to donate to MIRI but insisted on doing it through other LW members contacted via private messages.

So, I'm curious... Did anyone cooperate? Is there a story to share?

Comment author: Lumifer 26 May 2016 03:13:52PM 6 points [-]

You're ignoring risk aversion. Just maximising expected utility does not take it into account and humans do care about risk. Your consciousness just discovered that your gut cares :-)

Comment author: Vaniver 26 May 2016 01:27:51PM *  6 points [-]

Utility is approximately the logarithm of money. Pretend otherwise, and you will get results that go against the intuition, duh.

To be clearer, utility is approximately the logarithm of your wealth, not of the change to your wealth. So there's a hidden number lurking in each of those questions--if you have $100k (5) of wealth, then option A brings it up to $100240 (5.00104) and option B brings it up to either $101000 (5.00432) with 25% probability and leaves it where it is with 75% probability, which works out to a weighted average log wealth of 5.00108, which is higher, so go with B.

But if your wealth is $1k (3), then option A brings you up to a weighted average of 3.09 and B brings you up to a weighted average of 3.07. So go with A!

(The breakeven point for this particular option is a starting wealth of $8800.)
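
The calculation above can be reproduced in a few lines of Python (a sketch of the same log-wealth comparison; option A is a sure $240, option B a 25% chance of $1000):

```python
from math import log10

def expected_log_wealth(wealth, outcomes):
    """outcomes: list of (probability, dollar gain) pairs."""
    return sum(p * log10(wealth + gain) for p, gain in outcomes)

A = [(1.00, 240)]                # sure thing
B = [(0.25, 1000), (0.75, 0)]    # gamble

print(expected_log_wealth(100_000, A))  # ~5.00104
print(expected_log_wealth(100_000, B))  # ~5.00108 -> B wins when wealthy
print(expected_log_wealth(1_000, A))    # ~3.093
print(expected_log_wealth(1_000, B))    # ~3.075 -> A wins when poor

# Bisect for the breakeven starting wealth:
lo, hi = 1_000, 100_000
while hi - lo > 1:
    mid = (lo + hi) // 2
    if expected_log_wealth(mid, A) > expected_log_wealth(mid, B):
        lo = mid
    else:
        hi = mid
print(lo)  # ~8800
```

The bisection confirms the crossover near $8,800: below it the sure thing has higher expected log wealth, above it the gamble does.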

Comment author: Viliam 26 May 2016 10:21:55AM *  5 points [-]

Utility is approximately the logarithm of money. Pretend otherwise, and you will get results that go against the intuition, duh.

Utility is linear in money only if we take such a small part of the logarithmic curve that it is more or less linear on the given interval. But this is something you cannot extrapolate to situations where the relevant part of the logarithmic curve is significantly curved. Two examples of linearity:

1) You are a millionaire, so you more or less don't give a fuck about getting or not getting $1000. In such case you can treat small money as linear and choose B. If you are not a millionaire, imagine that it is about certainty of 24¢ versus 25% chance of $1.

2) You are an effective altruist and you want to donate all the money to a charity that saves human lives. If $1000 is very small compared with the charity budget, we can treat the number of human lives saved as a linear function of extra money given. (See: Circular Altruism.)

Comment author: ShardPhoenix 26 May 2016 07:06:35AM *  6 points [-]

Consider a situation where Mary is so dexterous that she is able to perform fine-grained brain surgery on herself. In that case, she could look at what an example of a brain that has seen red looks like, and manually copy any relevant differences into her own brain. In that case, while she still never would have actually seen red through her eyes, it seems like she would know what it is like to see red as well as anyone else.

I think this demonstrates that the Mary's room thought experiment is about the limitations of human senses/means of learning, and that the apparent sense of mystery it has comes mainly from the vagueness of what it means to "know all about" something. (Not saying it was a useless idea - it can be quite valuable to be forced to break down some vague or ambiguous idea that we usually take for granted.)

Comment author: gwern 25 May 2016 08:27:03PM *  6 points [-]

I think you need to read up a little more on behavioral genetics. To point out the obvious, besides adoption studies (you might benefit from learning to use Google Scholar) and more recent variants like using sperm donors (a design I just learned about yesterday), your classic twin study design and most any 'within-family' design does control for parental actions, because the subjects have the same parents. E.g., if a trait were solely due to parental actions, then monozygotic twins should have exactly the same concordance as dizygotic twins despite their very different genetic overlaps, because they're born at the same time to the same parents and raised the same.

More importantly, the point of GCTA is that by using unrelated strangers, they are also affected by unrelated parents and unrelated environments. So I'm not sure what objection you seem to have in mind.
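
The twin-design logic can be illustrated with Falconer's classic formula, which estimates heritability from the gap between monozygotic and dizygotic twin correlations (a simplified sketch; the correlations below are made-up illustrative values, not results from any study):

```python
def falconer(r_mz, r_dz):
    """Crude ACE decomposition from twin correlations."""
    h2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared (parental/family) environment
    e2 = 1 - r_mz            # non-shared environment + noise
    return round(h2, 2), round(c2, 2), round(e2, 2)

# If a trait were driven solely by parenting, MZ and DZ twins would be
# equally concordant, so heritability comes out zero:
print(falconer(0.5, 0.5))  # (0.0, 0.5, 0.5)

# Typical pattern for a substantially heritable trait:
print(falconer(0.8, 0.5))  # (0.6, 0.2, 0.2)
```

This is exactly why equal MZ/DZ concordance would be the signature of a purely parental effect, and why the observed MZ > DZ gap for most traits points to genetics.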

Comment author: Elo 24 May 2016 11:06:15PM -1 points [-]

I am a little bothered by the scale you used - on a scale from 0-5 where:

0: no, and don't want to sign up.
1: no, still considering it.
2: no, would like to but can't afford it.
etc., towards more interested in cryonics.

If we take an ordinary human who has barely even heard that cryonics is a real thing, their entry point to the scale is somewhere between 0 and 1 on the 6-point scale. Which means that while we have detailed data on states above 1, we don't have detailed data on states below 1. So we potentially only recorded half the story, and with that, we have unrepresentative data that skews positively towards cryonics.

Comment author: gwern 16 May 2016 12:25:35AM *  6 points [-]

Some people here may be amused by my recent series of insomnia-inspired tweets, what if light novel titles were about statistics and children's series vs SF/horror movies.

Comment author: Morgrim 14 May 2016 08:19:40AM 6 points [-]

When I was doing the survey I found the 'Highest Education Credential Earning' question difficult because the credentials listed don't match those in my home country, Australia. For example, we have a system of "technical certificates" that fall in between High School and Bachelor's degrees. (I think I chose '2 year degree' as the closest approximation, even though mine only took 1 year to complete.) And I know that doing a Bachelors in some areas is the functional equivalent of doing a Masters in others.

Would a question asking for how many years of post-schooling study one has completed be more or less useful? The wording could be tricky, since then there is ambiguity about whether to list time spent if one is part way through a qualification. If the majority of respondents are from places that match the listed options then mucking about with the question may not be of much value either.

Comment author: ChristianKl 13 May 2016 08:49:40PM 6 points [-]

The 1920s didn't have the same idea of science that we have today. Maybe you mean General Semantics?

Comment author: gjm 12 May 2016 10:58:09PM -1 points [-]

Wow, did I really write that? It's the exact opposite of what I meant. Will fix.

Comment author: OrphanWilde 12 May 2016 05:25:05PM 6 points [-]

3^^^3 dust specks in everybody's eye?

So basically we're talking about turning all sentient life into black holes, or torturing everybody?

I mean, it depends on how severe the torture we're talking about is, and how long it will last. If it's permanent and unchanging, eventually people will get used to it/evolve past it and move on. If it's short-term, eventually people will get past it. So in either of those cases, torture is the obvious choice.

If, on the other hand, it's permanent and adaptive such that all life is completely and totally miserable for perpetuity, and there is nothing remotely good about living, oblivion seems the obvious choice.

Comment author: gjm 11 May 2016 05:17:16PM -1 points [-]

abolishing all labor law would vastly increase the size of the economy

[citation needed], as the saying goes.

Why does your list not embrace whatever political policies induce the fastest economic growth?

I agree that the list should include something like "Pursuing rapid economic growth". But (1) it would probably be a mistake for the list to pick specific economic policies on the basis that they produce the fastest economic growth, since then the discussion would be in danger of being politicized by, say, an advocate of some particular economic/political policy that happens to differ from the one assumed in the list. (Of course that would never happen if the OP declined to pick favourites in this fashion.) And (2) fastest economic growth should not be the only criterion, unless that really is the only thing that influences robustness, which it may well not be. E.g., a policy might produce faster growth but also greater danger of violent and destructive revolution. Or it might produce faster growth but also introduce more single points of failure where one asteroid strike or serious outbreak of illness or terrorist act could bring everything down.

To take an example you already gave: laws restricting how unpleasant employers can make their employees' lives may reduce economic growth but also make it less likely that there's a violent uprising by workers fed up of their unpleasant lives.

one major civilization that fell due to its own laws [...] Why does your list not include that kind of threat to civilization?

As I understand it, RyanCarey is interested in threats to human civilization as a whole rather than to individual human civilizations. Human civilization as a whole doesn't have laws, regulations, taxation, etc. If one nation collapses under the weight of its own regulatory burden then others will presumably take note.

(How widely held, and how well supported, is the theory that the Roman empire failed because of overregulation and overtaxation? It's not a claim I've heard before, but I am about as far from being an expert in late Roman history as it is possible to be. In particular, how widely accepted is this theory outside circles in which everything is blamed on overregulation and overtaxation?)

Comment author: Lumifer 10 May 2016 03:58:43PM 6 points [-]

Could machine learning be used to fruitfully classify academic articles?

The word "fruitfully" is doing all the heavy lifting here.

It is, of course, possible to throw an ML algorithm at a corpus of academic articles. Will the results be useful? That entirely depends on what do you consider useful. You will certainly get some results.
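
For what "you will certainly get some results" can look like, here is a minimal sketch of supervised article classification using bag-of-words nearest centroids in pure Python (the labels and toy abstracts are invented for illustration; a real attempt would use an actual corpus and a library such as scikit-learn):

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy labeled "abstracts" (hypothetical):
train = [
    ("neural network gradient descent training loss", "ml"),
    ("deep learning convolutional image classification", "ml"),
    ("protein folding enzyme binding cell membrane", "bio"),
    ("gene expression dna sequencing genome", "bio"),
]

def centroids(data):
    cents = {}
    for text, label in data:
        cents.setdefault(label, Counter()).update(vectorize(text))
    return cents

def classify(text, cents):
    v = vectorize(text)
    return max(cents, key=lambda lab: cosine(v, cents[lab]))

cents = centroids(train)
print(classify("training a deep neural network", cents))  # ml
```

Whether output like this is "fruitful" still depends entirely on whether the labels you train on correspond to a distinction anyone cares about.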

Comment author: gjm 10 May 2016 09:50:12AM -1 points [-]
Comment author: Error 09 May 2016 02:48:29PM 6 points [-]

I dunno, I liked it better as Open Threat. ;-)

Comment author: Lumifer 09 May 2016 01:07:39AM 6 points [-]

"What are some important, but uncomfortable truths that many people learn when transitioning into adulthood?"

This long list needs a post scriptum: Very few people manage to accomplish this transition :-/

Comment author: gjm 06 May 2016 08:36:55PM 1 point [-]

What reason is there to think that Allen is correct when he says that the "contemplative, logocentric approach" is a poor match for understanding the relationship between knowledge and technology? In the passage you quote, he makes a number of claims that seem (at best) extremely doubtful. Does he justify them elsewhere?

(Perhaps he -- or you -- might consider this a fruitlessly contemplative and logocentric question, too much concerned with evidence, warrant and justification. Too bad.)

Comment author: WalterL 06 May 2016 07:56:23PM 5 points [-]

False hopes are more dangerous than fears.

J. R. R. TOLKIEN, The Children of Hurin

Comment author: Viliam 06 May 2016 04:08:17PM *  6 points [-]

The art of human rationality may have not been much developed because its practitioners lack a sense that vastly more is possible.

And because we are taught that imagining "vastly more" is naive. Wise people know that we live in the best possible universe because if something could be improved, someone else would have already improved it. (Who are you to believe that you can improve something that people higher-status than you didn't?)

This is a "reversed stupidity" against people who believe that vast improvements are trivial. (Usually because they completely ignore coordination problems, perverse incentives, or even the obvious negative side-effects of their improvements. "Why don't we just abolish money and kill/reeducate all the bad people? Then the world would be a nice place without scarcity.") But the opposite of "trivial" isn't "impossible", and great improvements happen regularly, for example the computers and internet.

Then probably the next answer is that improvement is possible, but only by institutions. Which ignores the fact that (1) the institutions don't grow on trees, they are created by people, and (2) even the institutions can benefit from having competent people at the right place, so the fact that we need institutions is not an argument against personal competence.

How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational.

So, how far did we get? I think we don't even have the "rationality dojos" competing against each other, and I'd say it's time we made some. Maybe local meetups could be considered such. So, how specifically can we make them "fight" against each other? What would be the tournament about? Maybe something general like "you have two months to do an impressive thing, then report about it, and the jury will choose the most impressive winner".

"the nonconformist cluster", seems to be stunningly bad at coordinating group projects.

Anyone willing to countersignal, and make a well-cooperating rationalist group just to spite the others? ;)

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

This makes me realize in near mode how long a way there still is ahead of us. Keeping this website interesting and friendly is already a difficult problem; and it's incomparable with running a country and facing a war. But maybe that's a problem specific to online environments, and a real country, or a town, or a village, or even a large house would have a different dynamic, because people could meet in person and cooperate on real-life projects (which could teach them cooperation skills, and select competent people by things other than posturing).

By the way, what is the largest known group of aspiring rationalists living together?

Practical advice is genuinely much, much more useful when it's backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted.

This means that for practical purposes, knowledge of cognitive biases in general is not enough on its own. One also needs knowledge of the domains where one wants to achieve something. Using the division of labor, someone can research a topic for the others, and then tell them only as much of the theory as they need to deeply understand the advice.

Which means that a website (or other communication tool) of a rationalist community that actually does something would contain articles on things beyond cognitive biases and math. If you want to build a house for rationalists to live together, you need to research something about house-building. You can delegate the whole decision to one person, if you trust their expertise enough, but if you want to have a community discussion, you need to educate the community about the basics, so they can participate in the debate meaningfully. (Perhaps the lesson should not be called "rational house-building", but as long as the rationalists want to debate house-building, some lesson is required. Of course, lessons produced outside of the community can be used, no need to reinvent the wheel, unless you believe you can communicate the same insights more efficiently.)

When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result.

And when many people do this, underconfidence becomes a standard signal of wisdom. ("No, I can't do anything, because I am not 100% sure it's the right action, and I am too clever to do a possibly wrong thing." "Great thinking! Anyone who does anything is clearly not smart enough to realize this.") Perverse incentives: in many situations, failure is obvious and humiliating, while not taking action is considered normal. It is good to take a step back and realize that in the long term the failures are likely to be forgotten, while the successes can become stepping stones to further victories. Even if the failures are more frequent, as long as they are not too costly.

Comment author: Viliam 06 May 2016 12:15:53PM 5 points [-]

Probably saying the obvious, but anyway:

What is the advantage of nice communication in a rationalist forum? Isn't the content of the message the only important thing?

Imagine a situation where many people, even highly intelligent, make the same mistake talking about some topic, because... well, I guess I shouldn't have to explain on this website what "cognitive bias" means... everyone here has read the Sequences, right? ;)

But one person happens to be a domain expert in an unusual domain, or happened to talk with a domain expert, or happened to read a book by a domain expert... and something clicked and they realized the mistake.

I think that at this moment the communication style on the website has a big impact on whether the person will come and share their insight with the rest of the website. Because it predicts the response they get. On a forum with a "snarky" debating culture, the predictable reaction is everyone making fun and not even considering the issue seriously, because that's simply how the debate is done there. Of course, predicting this reaction, the person is more likely to just avoid the whole topic, and discuss something else.

Of course -- yes, I can already predict the reactions this comment will inevitably get -- this has to be balanced against people saying stupid things, etc. Of course. I know already, okay? Thanks.

Comment author: Dagon 05 May 2016 03:37:06PM 5 points [-]

I suspect you and I have different voting standards. A post does not need to be stupid or unspeakable to downvote, merely uninteresting and unhelpful.

The underlying question (exit vs voice in ownership as compared to other kinds of membership) might be interesting, but the post doesn't bring that out very well.

And there's no way to go forward from this post. It doesn't say anything concrete enough to agree or disagree with, and it doesn't ask narrow enough questions that people can suggest ways to behave or how to further the research and thought.

fundamentally, I downvoted because I didn't like it.

Comment author: username2 02 May 2016 02:54:56PM *  6 points [-]

I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4-20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson's Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.

Some examples that mostly match what I want, in roughly descending order:

How do I go about finding more feeds like that? I have already tried the obvious, such as googling "allintext: egtheory jeremykun", and found a couple of OPML files (including gwern's), but they didn't contain anything close. The obvious blogrolls weren't helpful either (most of them were endless lists of conference announcements and calls for papers). I've also grepped a few relevant subreddits for *.wordpress.*, *.blogspot.* and *.github.io submissions (only finding what I already have in my RSS feeds — I suspect the less established blogs just haven't gotten enough upvotes).
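That grep step can be sketched in Python. This is a hypothetical reconstruction, not the commenter's actual script; it assumes the submission URLs have already been pulled (e.g. from a subreddit's JSON listing) and simply filters them down to unique hosts on the common blog platforms mentioned above:

```python
import re
from urllib.parse import urlparse

# Hosts ending in .wordpress.com, .blogspot.<tld>, or .github.io
BLOG_HOST_PATTERN = re.compile(
    r"\.(wordpress\.com|blogspot\.[a-z.]+)$|\.github\.io$"
)

def blog_hosts(urls):
    """Return the set of unique blog hostnames among submission URLs."""
    hosts = set()
    for url in urls:
        host = urlparse(url).netloc.lower()
        if BLOG_HOST_PATTERN.search(host):
            hosts.add(host)
    return hosts
```

Each host found this way can then be checked against one's existing OPML subscriptions to see whether it is new.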

Comment author: gjm 02 May 2016 02:41:01PM -1 points [-]

Possibly just random? There's a feedback effect where if LW is quiet one day, there's less to respond to the next day so it is likely to remain quiet -- so I think smallish random fluctuations can easily produce week-long droughts or gluts.

Comment author: gwern 30 April 2016 12:56:55AM 6 points [-]

Eliezer has conceded the bet & paid me $5.

Comment author: Brillyant 27 April 2016 03:31:04PM 3 points [-]

Well, to be fair, per the Eugine Tsipursky Theory™, Gleb is using Eugine's super rude sock puppet army, which he invented and maintains using high-level trolling techniques, to manufacture an enemy of LW... an enemy that he will later destroy using his own super polite meat puppet army. Then he will use the goodwill he has earned to take over the universe. Or at least get a good amount of karma here on LW.

At any rate, Gleb and Eugine appearing to be very different is essential to the ruse. If the ETT is correct, you're falling right into his trap.

Comment author: MrMind 27 April 2016 07:27:15AM *  5 points [-]

Eugine Noir's Sock Puppet Army

It's ironic that 'Eugine' comes from the Greek for 'good spirit'.

Gleb and his international team of Meat Puppets

Whoa, can we put together the two, one against the other? A post-apocalyptic fight between sock puppets and meat puppets for the destiny of rationality on Earth?
Stitchpunk at its best!

EDIT: it's already happening!

Comment author: Daniel_Burfoot 26 April 2016 06:50:23PM 1 point [-]

Regarding the EugeneNier fiasco, what if we were to require new users to submit a link to a Facebook/LinkedIn account or have a current active user vouch that the new account is real?

Comment author: Viliam 25 April 2016 10:28:39AM 6 points [-]

Okay, so it's not "everyone", but it's a majority.

So you have a majority of voters wanting something, a politician saying he will deliver, the politician has wide support... and this all makes the voters irrational because... uhm... because neuroscience explains that the voters actually have emotions, oh the horrors!

I suppose next time I care about something, I should vote for a politician who promises the exact opposite, so that no neuroscientist can suspect me of being a helpless victim of my own opinions. /s

Comment author: Dustin 24 April 2016 03:12:50PM *  1 point [-]

the elite are against it while everyone else is for it

You're living in a filter bubble if you think this.

Comment author: NancyLebovitz 24 April 2016 06:53:39AM *  3 points [-]

I start twitching when emphatic statements are made about the motivations of large numbers of people. How can you check on whether you're right?

This being said, Sanders is also appealing to anger and fear, and both candidates are also appealing to hope.

Comment author: SquirrelInHell 23 April 2016 02:49:56AM 6 points [-]

This comment on Scott Adams' blog gives some suggestions:

World War II? Decimation of major fisheries? Genocide of North American natives? Sort of by definition, we haven't wiped out the whole human race yet, but we have endured some seriously bad disasters that people saw coming - and a certain amount of "that wasn't so bad" comes from the perspective of the survivors and the children who never knew how good things were before the disaster hit. Maybe loss of the chestnut trees isn't a big deal to people today, but if chestnuts were important to you - things certainly aren't looking so good in the North American chestnut department anymore.

Comment author: Matthew_Opitz 21 April 2016 10:19:55PM 5 points [-]

One argument could be that many social scientists are being led down a blind alley: in trying to find environmental causes for all sorts of differences, they are predisposed to read such causes into their data more strongly than the evidence warrants. This leads to incorrect conclusions and to policy recommendations that will not actually change things for the better, because the recommendations end up not addressing the vast majority of the root of the problem (genetics, in this case).
