
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: Elo 19 June 2016 12:00:44PM -2 points [-]

You should certainly get good at wage negotiations. For 5-10 minutes of what some people feel is the most uncomfortable conversation in their industry, you can increase your earnings by $5-20k or more, plus bonuses etc. out of a good negotiation.

Comment author: ArgleBlargle 14 May 2016 06:28:10AM 11 points [-]

Thanks for doing this.

In response to May Outreach Thread
Comment author: tut 07 May 2016 07:28:29AM 10 points [-]

Ok, this is something I have been thinking every time I see an Outreach Thread, and now I can't resist asking it:

When did LW become a proselytizing community?

And are we sure that it is a good idea to do a lot of outreach when the majority of discussion on the site is about why LW sucks?

Comment author: James_Miller 05 May 2016 12:32:03AM *  12 points [-]

Very little published academic literature exists on the consequences of divestment.

This is because most people who study finance would put a very high probability on the consequences being zero. If my college refuses to buy from a firm it hurts that firm a little, but if it refuses to buy stock in a firm it does that firm zero harm. The best evidence is that while firms frequently advertise to get people to buy their products, they almost never advertise to get people to buy their stock. The value of a firm's stock is determined by what the big players in the market think are the long-term fundamentals of this stock.

Comment author: gjm 01 May 2016 07:32:07PM 0 points [-]

Asking scientists to keep their paper titles hedge-drift-resistant means (1) asking each individual scientist to do something that will reduce the visibility of their work relative to others', for the sake of a global benefit -- a class of policy that for obvious reasons doesn't have a great track record -- and (2) asking them to give their papers titles that are boring and wordy.

I agree that the world might be a better place if scientists consistently did this. But it doesn't seem very likely to happen.

(Also, here's what might happen if they almost consistently did this: the better, more conscientious scientists all write carefully hedged articles with carefully hedged titles, and journalists ignore all of them because they all sound like "Correlational analysis of OCEAN traits weakly suggests slight association between conscientiousness and Y-chromosome haplogroup O3". A few less careful scientists write lower-quality papers that, among other things, have titles like "The Chinese work harder: correlational analysis of OCEAN traits and genotype", and those are the ones that the journalists pick up. These are also the ones without the careful hedging in the actual analysis, without serious attempts to correct for multiple correlations, etc. So we end up with worse stuff in the press.)

In response to Gratitude Thread :-)
Comment author: username2 19 April 2016 02:16:01PM 12 points [-]

I'm very grateful that I've had a number of pain-free days this past week. I was able to summon up the energy to work on some dormant projects and overall feel much better about life than I did a week ago.

Comment author: James_Miller 21 March 2016 03:45:16PM *  12 points [-]

Not according to Razib Khan who writes in part:

"A new paper which has some results on life satisfaction, intelligence and the number of social interactions one has has generated some mainstream buzz....The figure above shows the interaction effect between intelligence, life satisfaction, and number of times you meet up with friends over the week. What you see is that among the less intelligent more interactions means more life satisfaction and among the more intelligent you see the reverse...But take a look at the y-axis...The effect here is very small....These are not actionable results for anyone."

Comment author: cousin_it 14 March 2016 03:37:03PM *  12 points [-]

It could be worse. Rationality essays could be attracting a self-selected group of people whose bottleneck isn't rationality. Actually I think that's true. Here's a three-step program that might help a "stereotypical LWer" more than reading LW:

1) Gym every day

2) Drink more alcohol

3) Watch more football

Only slightly tongue in cheek ;-)

Comment author: Kaj_Sotala 11 March 2016 11:30:08AM 12 points [-]

'Yeah, we could maybe have AlphaGo learn everything totally from scratch and reach a superhuman level of knowledge just by playing itself, not using any human games for training material. Of course, reinventing everything that humanity has figured out while playing Go for the last 2,500 years, that's going to take quite a bit of time. Like a few months or so.'

Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing. It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months. But we think it’s possible to ground it all the way to pure learning.

http://www.theverge.com/2016/3/10/11192774/demis-hassabis-interview-alphago-google-deepmind-ai

Comment author: WalterL 10 March 2016 04:07:07PM 11 points [-]

He did...but...like, you can't really trust that. He'd have said that (or similar) no matter what. It isn't game commentary, it's signalling.

There's a sort of humblebrag attitude that permeates all of Go. Every press conference is the same. Your opponent was very strong, you were fortunate, you have deep respect for your opponent and thank him for the opportunity.

In the game commentary you get the real dish. They stop using names and use "White/Black" to talk about either side. There things are much more honest.

Comment author: Kaj_Sotala 09 March 2016 06:41:42PM *  12 points [-]

I found this interesting: AlphaGo's internal statistics predicted victory with high confidence at about three hours into the game (Lee Sedol resigned at about three and a half hours):

For me, the key moment came when I saw Hassabis passing his iPhone to other Google executives in our VIP room, some three hours into the game. From their smiles, you knew straight away that they were pretty sure they were winning – although the experts providing live public commentary on the match weren’t clear on the matter, and remained confused up to the end of the game just before Lee resigned.

Hassabis’s certainty came from Google’s technical team, who pore over AlphaGo’s evaluation of its position, information that isn’t publicly available. I’d been asking Silver how AlphaGo saw the game going, and he’d already whispered back: “It’s looking good”.

And I realised I had a lump in my throat. From that point on, it was crushing for me to watch Lee’s struggle.

Towards the end of the match, Michael Redmond, an American commentator who is the only westerner to reach the top rank of 9 dan pro, said the game was still “very close”. But Hassabis was frowning and shaking his head – he knew that AlphaGo was definitely winning. And then Lee resigned, three and a half hours in.

Also this bit, suggesting that Lee might still win some matches:

Silver said that – judging from the statistics he’d seen when sitting in Google’s technical room – “Lee Sedol pushed AlphaGo to its limits”.

Comment author: cousin_it 09 March 2016 12:56:20PM *  12 points [-]

When I started hearing about the latest wave of results from neural networks, I thought to myself that Eliezer was probably wrong to bet against them. Should MIRI rethink its approach to friendliness?

Comment author: Lumifer 28 February 2016 11:41:52PM 4 points [-]

What is this asking for permission via a poll thing?

Make a thread and watch its karma. It will tell you all you need to know.

Comment author: Lumifer 22 February 2016 08:42:37PM 11 points [-]

LW might find that interesting:

I'm becoming a Christian, not just one who occasionally went to church as a kid, but a real one that believes in Christ, loving God with all my heart, etc.

Most ex-atheists who become deists turn to Buddhism, so I thought I'd be clear why they are all wrong (Robert Wright!). I'd like to thank Mencius Moldbug, Dierdre McCloskey, Mike Behe, Tim Keller (four names probably never listed in sequence ever), and hundreds more...Below are snippets (top and bottom) from my Christian apology: I came to Christ via rational inference, not a personal crisis.

Comment author: SolveIt 12 February 2016 05:36:12PM 12 points [-]

The actual effectiveness of MIRI

Comment author: OrphanWilde 08 February 2016 08:03:24PM 10 points [-]

I'm not really sure what you're hoping to accomplish here. The fable isn't framed in a way that accurately represents reality. The sympathetic arguments you're making could be made without euphemism. The story falsely equates refusing sex with maliciously refusing to save someone's life.

Given that the author has, in other comments, mentioned suicidal tendencies... I'd suggest the equivalence might be real to them.

Shrug I dunno. I find this poorly written and poorly thought out, and it fails to touch much at all in me; granted, my moments of compassion are few and far between.

But the hostile response is disproportionate to what was actually written, to the point where I must conclude that this piece has successfully made its readers feel deeply uncomfortable, and the hostility is a rationalization to cover that discomfort.

Comment author: entirelyuseless 08 February 2016 04:07:55PM *  9 points [-]

The burning is the unsatisfied desire for sex, and lifting the branch is offering sex. At the end of the story, the boy goes to prison for attempted rape. I presume you were joking in saying that you did not recognize this, or that you simply intended to say that you consider it a bad analogy.

In any case, I agree that such an analogy is pointless, and that is why I downvoted the post.

Comment author: WhyAsk 19 January 2016 03:43:58PM *  12 points [-]

LW should make this unique thread widely known. Many couples facing similar decisions can be helped.

I am sorry for your loss.

EDIT: This association to your post won't leave me alone, so here it is: APACHE II software gives the odds of an adult leaving an ICU alive. Perhaps there is, or will soon be, an intrauterine version of this using blood values & other metrics that can prompt preventive measures early in a pregnancy.

Comment author: RichardKennaway 13 January 2016 11:19:04AM 12 points [-]

Can you think of any good reason to consult any so called psychic?

I can think of a good reason for anything. I ask my brain "conditional upon it being a good idea, what might the situation be?" and the virtual outcome pump effortlessly generates scenarios. A professional fiction writer could produce a flood of them. Try it! For any X whatever, you can come up with answers to the question "what might the world look like, conditional upon X being a good idea?" For extreme X's, I recommend not publishing them. If you find yourself being persuaded by the stories you make up, repeat the exercise for not-X, and learn from this the deceptively persuasive power of stories.

Why consult a psychic? Because I have seen reason to think that this one is the real deal. To humour a friend who believes in this stuff. For entertainment. To expose the psychic as a fraud. To observe and learn from their cold reading technique. To audition them for a stage act. Because they're offering a free consultation and I think, why not? (Don't worry, my virtual outcome pump can generate reasons why not just as easily as reasons why.)

What is the real question here?

Comment author: James_Miller 06 January 2016 02:55:06PM *  10 points [-]

Usul, I'm one of the galactic overlords in charge of earth. I have some very bad news for you. Every night when you (or any other human) go to sleep your brain and body are disintegrated, and a reasonably accurate copy is put in their place. But, I have a deal for you. For the next month we won't do this to you, however after a month has passed we will again destroy your body and brain but then won't make any more copies so you will die in the traditional sense. (There is no afterlife for humans.) Do you agree?

Comment author: Jayson_Virissimo 05 January 2016 05:30:11PM *  12 points [-]

...the common practice of taking down Chesterton fences is a process which seems well established and has a decent track record...

How are you measuring this?

Comment author: username2 31 December 2015 03:32:00AM -1 points [-]

So basically what you're saying is that you're emotionally unqualified to be a moderator.

BTW, you're doing a remarkably good job of demonstrating advancedatheist's claim that women can't be trusted with positions of power.

Comment author: gjm 07 December 2015 05:52:12PM 12 points [-]

In so far as the Fermi paradox implies we're in great danger, it also suggests that exciting newly-possible things we might try could be more dangerous than they look. Perhaps some strange feedback loop involving intelligence enhancement is part of the danger. (The usual intelligence-enhancement feedback loop people worry about around here involves AI, of course, but perhaps that's not the only one that's scary.)

Comment author: Lumifer 02 December 2015 05:00:53PM 12 points [-]

Using personal preference or personal intuitions as priors instead of some objective measure along the lines of Solomonoff Induction

You can't do Solomonoff induction in real life. That by itself seems to be a good reason to look for something else.

As to objective/subjective priors, note that the classic Bayesian understanding of probability is as "subjective belief" which is what drives frequentists hopping mad. If you accept this concept of probability then there shouldn't be any problem with using subjective (="personal") priors.

Comment author: John_Maxwell_IV 24 November 2015 04:48:59AM 12 points [-]

MealSquares (the company I'm starting with fellow LW user RomeoStevens) is searching for nutrition experts to join our advisory team. The ideal person has a combination of formally recognized nutrition expertise & also at least a casual interest in things like study methodology and effect sizes (this unfortunately seems to be a rare combination). Advising us will be an opportunity to improve the diets of many people, it should not be much work, you'll get a small stake in our company, and you'll help us earn money for effective giving. Please get in touch with us (ideally using this page) if you or someone you know might be interested!

Comment author: VoiceOfRa 23 November 2015 11:41:23PM -2 points [-]

How are you currently helping the movement?

By calling out idiots such as yourself who are attempting to associate it with bad reasoning.

Comment author: Vaniver 23 November 2015 08:15:39PM 12 points [-]

I'm going to guess that English language proficiency is far higher in Europe than it is in China. But Asian Americans seem underrepresented on LW relative to the fields that LW draws heavily from, so that seems unlikely to be a complete explanation.

Comment author: Gleb_Tsipursky 20 November 2015 04:56:52AM 11 points [-]

I really appreciate you sharing your concerns. It helps me and others involved in the project learn more about what to avoid going forward and how to optimize our methods. Thank you for laying them out so clearly! I think this comment will be something that I will come back to in the future as I and others create content.

I want to see if I can address some of the concerns you expressed.

In my writing for venues like Lifehack, I do not speak of rationality explicitly as something we are promoting. As in this post, I talk about growing mentally stronger or being intentional - euphemisms that do not associate rationality as such with what we're doing. I only incidentally mention rationality, such as when I speak of Rationality Dojo as a noun. I also generally do not talk of cognitive biases, and use other euphemistic language, such as referring to thinking errors, as in this article for Salon. So this gets at the point of watering down rationality.

I would question the point about arguing from authority. One of the goals of Intentional Insights is to convey what "science-based" itself means. For example, in this article, I specifically discuss research studies as a key way of validating truth claims. Recall that we are all suffering from the curse of knowledge on this point. How can we expect to teach people who do not know what "science-based" means without teaching it to them in the first place? Do you remember when you were at a stage when you did not know the value of scientific studies, and then came to learn about them as a useful way of validating evidence? This is what I'm doing in that article above. Hope this helps address some of the concerns about arguing from authority.

I hear you about the inauthentic feeling writing style. As I told Lumifer in my comment below, I cringed at that when I was learning how to write that way, too. You can't believe how weird that feels to an academic. My Elephant kicks and screams and tries to throw off my Rider whenever I do that. It's very ughy. This writing style is much more natural for me. So is this.

However, this inauthentic-feeling writing style is the writing style needed to get into Lifehack. I have been trying to change my writing style to get into venues like that for the last year and a half, and only succeeded in changing my writing style in the last couple of months sufficiently to be published in Lifehack. Unfortunately, when trying to spread good ideas to the kind of people who read Lifehack, it's necessary to use the language and genre and format that they want to read, and that the editors publish. Believe me, I also had my struggles with editors there who cut out more complex points and links to any scientific papers as too complex for their audience.

This gets at the broader point of who reads these articles. I want to quote a comment that Tem42 made in response to Lumifer:

Unless you mean simply the site that it is posted on smells of snake oil. In that case I agree, but at the same time, so what? The people that read articles on that site don't smell snake oil, whether they should or not. If the site provides its own filter for its audience, that only makes it easier for us to present more highly targeted cognitive altruism.

Indeed, the site itself provides a filter. The people who read that site are not like you and me. Don't fall for the typical mind fallacy here. They have complete cognitive ease with this content. They like to read it. They like to share it. This is the stuff they go for. My articles are meant to go higher than their average, such as this or this, conveying both research-based tactics applicable to daily life and frameworks of thinking conducive to moving toward rationality (without using the word, as I mentioned above). Hope this helps address the concerns about the writing style and the immunization of people to good ideas, since the readers of this content are specifically looking for this kind of writing style.

Does this cause any updating in decreasing the likelihood of nightmare scenarios like the one you described?

Comment author: gwern 19 November 2015 07:29:13PM *  12 points [-]

I think you're missing the broader point I was making: writing your own articles is like changing the oil in your own car. It's what you do when you are poor, unimportant, have low value of time, or it's your hobby.

Once you become important, you start outsourcing work to research assistants, personal assistants, secretaries, PR employees, vice presidents, grad students, etc. Musk is a billionaire and a very busy one at that and doesn't write his own books because it makes more sense for him to bring in someone like WBW to talk to for a few hours and have his staff show them around and brief them, and then they go off and a while later ghostwrite what Musk wanted to say. Zuckerberg is a billionaire and busy and he doesn't write all his own stuff either, he tells his PR people 'I want to predictably waste $100m on a splashy donation; write up the press release and a package for the press etc and send me a final draft'. Jobs didn't write his own autobiography, that's what Isaacson was for. Memoirs or books by famous politicians or pundits - well, if they're retired they may have written most or all of it themselves, but if they're active...? Less famously, superstar academics will often have written little or none of the papers or books published under their names; examples here would be invidious, but I will say I've sometimes looked at acknowledgements sections and wondered how much of the book the author could have written themselves. (If you wonder how it's possible for a single person to write scores of high-quality papers and books and opeds, sometimes the answer is that they are a freak of nature blessed with shortsleeping genes & endless willpower; and sometimes the answer is simply that it's not a single person.) And this is just the written channels; if you have access to the corridors of power, your time may well be better spent networking and having in-person meetings and dinners. (See the Clinton Foundation for an example of the rhizomatic nature of power.)

I'm not trying to pass judgment on whether these are appropriate ways for the rich and powerful to express their views and influence society, but it is very naive to say that just because you cannot go to the bookstore and buy a book with Musk's name on it as author, that he must not be actively spreading his views and trying to influence people.

Comment author: OrphanWilde 18 November 2015 05:51:36PM 10 points [-]

I'll talk about marketing, actually, because part of the problem is that, bluntly, most of you are kind of inept in this department. By "kind of" I mean "have no idea what you're talking about but are smarter than marketers and it can't be nearly that complex so you're going to talk about it anyways".

Clickbait has come up a few times. The problem is that that isn't marketing, at least not in the sense that people here seem to think. If you're all for promoting marketing, quit promoting shit marketing because your ego is entangled in complex ways with the idea and you feel you have to defend that clickbait.

GEICO has good marketing, which doesn't sell you on their product at all. Indeed, the most prominent "marketing" element of their marketing - the "Saves you 15% or more" bit - mostly serves to distract you from the real marketing, which utilizes the halo effect, among other things, to get you to feel positively about them. (Name recognition, too.) The best elements of their marketing don't get noticed as marketing, indeed don't get noticed at all.

The issue with this entire conversation is that everybody seems to think marketing is noticed, and uses the examples they notice as examples of good marketing. Those are -terrible- examples, as demonstrated by the fact that you think of them when you think of marketing - and anybody you market to will, too. And then you justify these examples of marketing by relying on an unrealistically low opinion of average people - which many average people share.

Do you think somebody clicking on a "One Weird Trick" tries it out? No, they click on clickbait to see what it says, then move on, which is exactly its goal - be attractive enough to get someone's attention, entertaining enough to keep them interested, and no more. Clickbait doesn't impart anything - its goal isn't to be remembered or to change minds or to sell anything except itself, because its goal is to serve up ads to a steady stream of readers.

And if you click on Clickbait to see what stupid people are being tricked into believing - guess what, you're the "stupid person". You were the target audience, which is anybody they can get to click on their stuff, for any reason at all. The author of "This One Weird Trick" doesn't want to convince you to use it, they want you to add a little bit of traffic to the site, and if they can do that by crafting an article and headline that makes intelligent people want to click to see what gullible morons will buy into, they'll do it.

Clickbait isn't the answer. "Rationalist's One Weird Trick To a Happy Life" isn't the answer - indeed, it's the opposite of the answer, because it's deliberately setting rationality up as a sideshow to sell tickets to so people can laugh at what gullible morons buy into.

Comment author: Viliam 18 November 2015 08:35:19AM 11 points [-]

Is he wrong though? Sometimes I feel I'm getting tired of humanity, because it makes everything about status.

Comment author: [deleted] 17 November 2015 02:50:31AM 12 points [-]

I think we should get rid of "main" and "promoted" .

Right now there's four tiers: open thread, discussion, main, and main promoted.

At least once a week I see a comment that says "this should be in main," "this shouldn't be in main," "this should be in the open thread," or "this shouldn't be in the open thread, it should be its own post".

I think the two tier system of open thread/discussion would suffice, and the upvote downvote mechanism could take care of the rest.

Comment author: James_Miller 16 November 2015 09:34:58PM *  8 points [-]

Get a new girlfriend. (Probably easier than getting your current girlfriend to get a new cat.)

Comment author: CellBioGuy 04 October 2016 10:00:49PM *  11 points [-]

Advice solicited. Topics of interest I have lined up for upcoming posts include:

  • The history of life on Earth and its important developments
  • The nature of the last universal common ancestor (REALLY good new research on this just came out)
  • The origin of life and the different schools of thought on it
  • Another exploration of time, in which I go over a paper from this summer that did essentially what I did a few months earlier in my "Space and Time Part II" calculations of our point in star and planet order - which showed we are not early, and are right around when you would expect to find the average biosphere - but extended it to types of stars and their lifetimes in a way I think I can improve upon.
  • My thoughts on how and why SETI has been sidetracked away from activities that are more likely to be productive towards activities that are all but doomed to fail, with a few theoretical case studies
  • My thoughts on how the Fermi paradox / 'great filter' is an ill-posed concept
  • Interesting recent research on the apparent evolutionary prerequisites for primate intelligence

Any thoughts on which of these are of particular interest, or other ideas to delve into?

Comment author: WhySpace 27 September 2016 02:06:25AM *  10 points [-]

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: helldalgo 12 September 2016 06:09:38PM 11 points [-]

I occasionally just forget that I can change things about my environment. If my clothes are uncomfortable, I can change. If there are annoying sounds, I can wear earplugs.

Comment author: Elo 04 August 2016 11:20:15AM -2 points [-]

http://www.abc.net.au/triplej/programs/hack/hack-thursday/7674406

I had the opportunity to be on national radio talking about cryonics - my segment starts at 17:30. The media will always cut and paste what you say, but they did enjoy my tagline, "you only live twice", and gave it an overall positive spin.

Article also here: http://www.abc.net.au/triplej/programs/hack/cheating-death-with-cryonics/7662164

Comment author: gwern 18 June 2016 06:11:48PM *  11 points [-]

First, they are solving real world problems. But as usual, companies talk a lot more about the research than the trade secrets. Google uses it heavily, even for the crown jewel of Search. The Deepmind post yesterday mentions DQN is being used for recommender systems internally; I had never seen that mentioned anywhere before and I don't know how DQN would even work for that (if you treat every eg YT video as a different 'action' whose Q-value is being estimated, that can't possibly scale; but I'm not sure how else recommending a particular video could be encoded into the DQN architecture). Google Translate will be, is, or has already been rolling out the encoder-decoder RNN framework delivering way better translation quality (media reports and mentions in talks make it hard for me to figure out what exactly). The TensorFlow promotional materials mention in passing that TF and trained models are being used by something like 500 non-research groups inside Google (what for? of course, they don't say). Google is already rolling out deep learning as a cloud service in beta to make better use of all their existing infrastructure like TPU. Deepmind recently managed to optimize Google's already hyper-optimized data centers to reduce cooling electricity consumption by 40% (!) but we're still waiting on the paper to be published to see the details. The recent Facebook piece quotes them as saying that FB considers their two AI labs to have already paid for themselves many times over (how?); their puff piece blog about their text framework implies that it's being used all over Facebook in a myriad of ways (which don't get explained). Baidu is using their RNN work for voice recognition on smartphones in the Chinese market, apparently with a lot of success; given the language barrier there and Baidu's similarly comprehensive aspect as Google and Facebook, they are doubtless using NNs for many other things. 
Tesla is already (somewhat recklessly) rolling out self-driving cars powered by Mobileye; Mobileye doesn't use a pure end-to-end CNN framework like Geohotz and some others, but they do acknowledge using NNs in their pipelines and are actively producing NN research. People involved say that companies are spending staggering sums.

Second, in the initial stages of a Singularity, why would you expect a systematic bias towards all the initial results being reported as deployed commercial services with no known academic precedents? I would expect it the other way around: even when corporations do groundbreaking R&D, it's more typical for it to be published first and then start having real-world effects. (eg Bell Labs - things like Unix were written about long before AT&T started selling it commercially.)

In response to Crazy Ideas Thread
Comment author: James_Miller 18 June 2016 12:34:45AM 10 points [-]

Someone should create a free speech Twitter that doesn't censor anything protected by the U.S. 1st amendment.

In response to comment by Lumifer on Buying happiness
Comment author: gjm 17 June 2016 01:33:38PM 0 points [-]

I agree with your suspicions about material versus experiential purchases, for similar reasons. (It's a similar objection to the one I raised about their buying-for-self versus buying-for-others arguments.)

What you'd really need to do to assess this is give a few hundred people $1000, tell half of them to spend it on something material and half to spend it on something experiential, and then do experience-sampling on them all for the following 5 years or so to assess their happiness. Oh, and for the previous year or so too. It isn't hard to see why no one has done that particular piece of research.

I'm not so convinced they're wrong about "buy now, consume later" (though "plan now, buy later" seems like it might be better), or by "follow the herd" or "don't comparison-shop", at least if they're taken to mean "follow the herd more" and "comparison-shop less". (Yup, those might make things better for retailers too. But, y'know, the whole point of trade is that it isn't zero-sum.)

In response to Avoiding strawmen
Comment author: gjm 17 June 2016 10:10:25AM -2 points [-]

George Bernard Shaw wrote that "the single biggest problem in communication is the illusion that it has taken place".

I have grave difficulty believing that GBS ever wrote anything containing the words "the single biggest problem in communication". Not his style. And, indeed, it doesn't look like he did.

Comment author: gjm 05 May 2016 12:43:26AM -2 points [-]

I wonder why they're trying this on brain-dead patients first.

Nothing to lose.

If your brain is gradually degenerating, a treatment that might fix it or might (say) give you cancer and kill you horribly in short order might well seem like a good deal on balance, but I expect you'd have to think about it. But if you're already brain-dead, nothing this treatment does to you can make things worse.

The question I'd be asking instead is: Why haven't they published their results showing success in (say) rats? Except actually I probably wouldn't bother asking because the chance of this being anything other than bullshit seems so very very small.

Comment author: RainbowSpacedancer 19 April 2016 01:38:38AM *  11 points [-]

Until LessWrong 2.0 comes out, this is how I've been staying in touch with the Rationalist Diaspora. It took about an hour to set up and I can now see almost everything in the one place.

I've been using an RSS reader (I use feedly) to collate RSS feeds from these lists,

Rationalist Blogs,

https://wiki.lesswrong.com/wiki/List_of_Blogs

https://www.reddit.com/r/RationalistDiaspora/

Effective Altruist Blogs,

http://www.stafforini.com/blog/effective-altruism-blogs/

Rationalist Tumblrs,

http://yxoque.tumblr.com/RationalistMasterlist

And using this twitter to RSS tool for these LessWrong Twitters,

http://lesswrong.com/lw/d92/less_wrong_on_twitter/

This system is unsatisfying in a number of ways, the two most obvious to me being 1) I don't know of any way to integrate the Rationalists on Facebook into this system and 2) Upvotes from places that use them, like LW or r/rational, aren't displayed. Nevertheless it is still much simpler for me to be notified of new material. If anyone has suggestions on improvements or wants to share how they follow the Diaspora, that'd be most welcome.
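The collation step this setup relies on can be sketched in a few lines: merge items from several RSS feeds into one stream sorted newest-first. The feed contents below are invented for illustration; a real reader like feedly does this for you.

```python
# Sketch: merge RSS items from multiple feeds into one chronological list.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def merge_feeds(xml_docs):
    items = []
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            items.append({
                "title": item.findtext("title"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest entries first, regardless of which blog they came from.
    return sorted(items, key=lambda i: i["date"], reverse=True)

FEED_A = """<rss><channel>
<item><title>Post from blog A</title>
<pubDate>Mon, 18 Apr 2016 10:00:00 +0000</pubDate></item>
</channel></rss>"""

FEED_B = """<rss><channel>
<item><title>Post from blog B</title>
<pubDate>Tue, 19 Apr 2016 09:00:00 +0000</pubDate></item>
</channel></rss>"""

merged = merge_feeds([FEED_A, FEED_B])
print([i["title"] for i in merged])
```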

Comment author: Lumifer 12 April 2016 05:04:02PM *  10 points [-]

While I don't play Rust, my impression is that the devs are being dicks (heh) for what looks to be ideological reasons. They say:

"Technically nothing has changed, since half the population was already living with those feelings. The only difference is that whether you feel like this is now decided by your SteamID instead of your real life gender."

They are wrong, of course, what changed was that there was no choice possible and now there is a choice (which they dangle in front of you and then deny to you).

Comment author: Lumifer 04 April 2016 02:46:59PM *  8 points [-]

You're getting dramatic for no good reason. I don't think that in reality people didn't submit patches because they were too busy with Eugine. Just didn't happen.

You are literally killing LW

Nope. Availability bias is a fallacy. Eugine is a very minor problem for LW.

Comment author: Elo 04 April 2016 08:46:23AM *  2 points [-]

user account: "Lamp" is banned for being eugine_nier. This is an update in case anyone was wondering.

so far accounts have been:

  • Eugine_Nier
  • Azazoth123
  • The_Lion
  • The_Lion2
  • Old_Gold
  • Lamp

(that I know of, I think there were more in between too that I forgot.)

If I could send this guy a message it would be this: You are quite literally wasting our time. And by "our" I mean; the moderators and the people who could be spending their time improving the place, coding and implementing a better place; instead are spending their time getting rid of you over and over. DON'T COME BACK. You are literally killing LW.

I don't want to get into the community's time or the time of the people you debate with; or the time of anyone who reads this post here. That time also adds up. Seriously.

Comment author: gjm 21 March 2016 06:44:23PM 11 points [-]
Comment author: Lumifer 15 March 2016 04:49:41PM *  10 points [-]

This post by Eric Raymond should be interesting to LW :-) Extended quoting:

There’s a link between autism and genius says a popular-press summary of recent research. If you follow this sort of thing (and I do) most of what follows doesn’t come as much of a surprise. We get the usual thumbnail case studies about autistic savants. There’s an interesting thread about how child prodigies who are not autists rely on autism-like facilities for pattern recognition and hyperconcentration. There’s a sketch of research suggesting that non-autistic child-prodigies, like autists, tend to have exceptionally large working memories. Often, they have autistic relatives. Money quote: “Recent study led by a University of Edinburgh researcher found that in non-autistic adults, having more autism-linked genetic variants was associated with better cognitive function.”

But then I got to this: “In a way, this link to autism only deepens the prodigy mystery.” And my instant reaction was: “Mystery? There’s a mystery here? What?” Rereading, it seems that the authors (and other researchers) are mystified by the question of exactly how autism-like traits promote genius-level capabilities.

At which point I blinked and thought: “Eh? It’s right in front of you! How obvious does it have to get before you’ll see it?”

... Yes, there is an enabling superpower that autists have through damage and accident, but non-autists like me have to cultivate: not giving a shit about monkey social rituals.

Neurotypicals spend most of their cognitive bandwidth on mutual grooming and status-maintenance activity. They have great difficulty sustaining interest in anything that won't yield a near-immediate social reward. By an autist's standards (or mine) they're almost always running in a hamster wheel as fast as they can, not getting anywhere.

The neurotypical human mind is designed to compete at this monkey status grind and has zero or only a vanishingly small amount of bandwidth to spare for anything else. Autists escape this trap by lacking the circuitry required to fully solve the other-minds problem; thus, even if their total processing capacity is average or subnormal, they have a lot more of it to spend on what neurotypicals interpret as weird savant talents.

Non-autists have it tougher. To do the genius thing, they have to be either so bright that they can do the monkey status grind with a tiny fraction of their cognitive capability, or train themselves into indifference so they basically don’t care if they lose the neurotypical social game.

Once you realize this it’s easy to understand why the incidence of socially-inept nerdiness doesn’t peak at the extreme high end of the IQ bell curve, but rather in the gifted-to-low-end-genius region closer to the median. I had my nose memorably rubbed in this one time when I was a guest speaker at the Institute for Advanced Study. Afternoon tea was not a nerdfest; it was a roomful of people who are good at the social game because they are good at just about anything they choose to pay attention to and the monkey status grind just isn’t very difficult. Not compared to, say, solving tensor equations.

Comment author: Daniel_Burfoot 29 February 2016 08:38:59AM 11 points [-]

PSA: if you have chronic, low-level respiratory/sinus problems, there is a good chance it is caused by poor air quality in your place of residence. There are a lot of possible causes, including mold, bad carpeting, and poorly maintained HVAC systems.

I may be especially vulnerable, but I've experienced significant respiratory problems caused by bad air in several different apartments over the years. A couple of years ago, I was going through a particularly bad episode, and I thought I had simply developed adult-onset allergies to standard allergens (pollen, etc) but I noticed that when I spent a week away on a family visit, the symptoms almost went away. Then I moved out of the place with bad air, and the problems disappeared in a matter of weeks.

The good news is: it should be easy to tell if your apartment is causing problems, by staying elsewhere for a couple of days and noticing changes in the symptoms. The bad news is, if you'd prefer to fix the problem instead of moving to a different place, it might be hard to do, or even to identify the problem exactly.

Comment author: James_Miller 28 February 2016 10:51:18PM *  9 points [-]

It's one of the most important and surprising events of our time and much of the discussion is anti-rational, i.e. bad people support Trump so Trump is bad; many are claiming that electing Trump would be catastrophic and discussing potential catastrophes is supposed to be one of the purposes of LW.

Comment author: enfascination 23 February 2016 05:52:30AM 11 points [-]

I think a lot about signal detection theory, and I think that's still the best I can come up with for this question. There are false positives, there are false negatives, they are both important to keep in mind, the cost of reducing one is an increase in the other, humans and human systems will always have both.

So, for example, even the most over-generous public welfare system will have deserving people off the dole and even the most stingy system will have undeserving recipients (by whatever definition), so the question (for a welfare system, say) isn't how do we prevent abuse, but how many abusers are we willing to tolerate for every 100 deserving recipients we reject? Also useful in lots of medical discussions, legal discussions, pop science discussions, etc.
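The tradeoff in the welfare example can be made concrete with a toy threshold model, where the "deservingness" scores are invented for illustration: moving the cutoff down rejects fewer deserving applicants (fewer false negatives) at the price of admitting more abusers (more false positives).

```python
# Toy signal detection: a single cutoff trades false negatives
# against false positives; you can shift the balance but not
# eliminate both.
deserving = [0.9, 0.8, 0.6, 0.4]  # applicants we'd like to accept
abusers   = [0.7, 0.5, 0.3, 0.1]  # applicants we'd like to reject

def rates(threshold):
    false_neg = sum(s < threshold for s in deserving) / len(deserving)
    false_pos = sum(s >= threshold for s in abusers) / len(abusers)
    return false_neg, false_pos

strict = rates(0.75)   # stingy system: no abusers, many rejected deserving
lenient = rates(0.35)  # generous system: no rejections, many abusers
print(strict, lenient)
```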

Comment author: Old_Gold 14 February 2016 04:09:07AM 1 point [-]

I know that there are women who don't participate on the LW forum but who do participate in meetups. Reinventing LW2.0 means shifting LW into being more welcoming to those people.

Would they contribute anything besides starting witch hunts? If the very existence of a single post at -19 is enough to drive them away, things don't speak in their favor.

As far as truth goes, it's irrational to think that the actions in a single case determine who someone happens to be.

"I only murdered someone once, I'm not a murderer."

Comment author: username2 14 February 2016 12:34:00AM 11 points [-]

Moral realism, apparently

Comment author: NancyLebovitz 11 February 2016 05:35:22PM 8 points [-]

Thank you for your reply. This is not at all what I expected.

I think there's a rule for allegories that the symbols shouldn't be too much like the thing symbolized (in this case an allegory about sex shouldn't use real world genders). I also recommend updating about people's ability to interpret (especially about a fraught subject like sex) rather than complaining that they didn't understand things the way you hoped.

This being said, I agree with you about prostitution, though more from a libertarian /sympathy for the prostitutes who should be allowed to do their work in peace than sympathy for people who have trouble finding sexual partners.

I'm not sure what the emotional journey is supposed to be. Maybe going from thinking of something as a personal problem to realizing that there's a systemic problem?

Comment author: Lumifer 10 February 2016 04:53:01PM 11 points [-]

No, because you haven't demonstrated that you successfully preserved it, where "successfully" means "able to revive in an intact and working shape".

Comment author: TheAltar 08 February 2016 09:51:49PM *  9 points [-]

I made one reply to this, and later deleted it. Then, I made another reply, and deleted that one as well.

I feel mind-killed and I can't tell who else is mind-killed. I'm just going to take this in stride as a time-appropriate refresher course on why we don't discuss politics.

Comment author: James_Miller 08 February 2016 08:26:07PM 11 points [-]

Worm

Comment author: Jiro 08 February 2016 05:38:15PM 8 points [-]

The metaphor doesn't even make sense, assuming it's about sex. If the burning branch represents virginity, then it would be possible to pay a girl to free the boy from the branch, but it would not be possible for another girl to put him under again. If the branch represents "having regular sex", then it would be possible for a girl to put him under the branch again, but it would also mean that the girl given the gold nugget has to be given a continuous stream of gold nuggets or she would also put the boy under the branch again.

Also, dragging someone under the burning branch to free yourself doesn't make sense even as rape. Rapists do not turn other people into virgins.

In response to Upcoming LW Changes
Comment author: Brillyant 03 February 2016 08:16:04PM *  10 points [-]

Count me as equal parts hopeful and skeptical.

I think the best part of LW was the content—articles by EY and the dude who writes at SSC being on the top of that list. Oh, and Luke wrote some cool stuff, too. There have been others, but the main consistent top posters are out as far as I can tell. If you can find good content, you will win in this LW reboot mission, even if no other changes are made.

Otherwise, I think you'll need huge changes to Make LW Great Again™. It's basically a good rationality/math-y reddit sub with an AI and EA focus. There's nothing wrong with that, but it's not terribly novel or special either.

The pure cynic in me says almost nothing has demonstrably changed in the 3+ years (maybe more?) I've been reading here (other than the decline in good content) and I've no reason to believe this effort will yield anything.

Anyway, sincere kudos to you for your efforts. I like LW and support common sense efforts to improve it.

Couple ideas off the top of my head:

  • Come up with a 2.0 karma system. Reddit-style karma is cool and functional, but I bet the finest minds at LW could come up with something that fosters even more rational discussion. Maybe a drop-down box with a few broad categories for "Why did you vote this way?" on every upvote or downvote, so the commenter receives better feedback? It seems to me the system, while it has its value, is still often times just "yay!" or "boo!" buttons. Maybe you could devise something better?

  • How about an extensive blogroll system with some sort of interactive component ("vote" via radio button, or an upvote/downvote) to indicate which articles/blogs LWers found worth reading? LW could have some value as a meta-level "hub" site for the rationality blog universe.

  • Get rid of the Main/Discussion dichotomy altogether. It's super broken. Or maybe just organically let posts that get north of X upvotes be marked with a "This is good stuff" status star and place them in a more prominent spot.

Good luck. I look forward to the coming changes!

Comment author: WithAThousandFaces 02 February 2016 09:04:54AM *  11 points [-]

Harsh, but this does have two HuffPo-like traits: first, he uses his opening line to make a point that's grossly misleading, and repackages his generic pitch for EA as something relevant to an upcoming holiday. "Hey, you know what's the most romantic thing to do? Turns out that it's the same thing we recommend doing all the time. What a coincidence!"

Second, his factoids about the psychology of generosity are as misleading as HuffPo-tier science reporting. Generally speaking, the psych/neuropsych studies I've read don't really support the conclusions that EAs seem to want them to, including those studies that they cite as evidence. Specifically speaking, in this case, the studies don't seem to indicate that charitable giving is special, broadly or vis-a-vis the activity that this post is contrasting it with. I.e., neither of the articles provides evidence that giving to charity has a particular advantage in making people feel good over other forms of generous behavior, including the conventional Valentine's Day one of giving something nice and romantic to someone you love. Indeed, most of the research I've seen on the subject indicates that a wide range of actions taken on behalf of others produce neurological rewards.

I'd find it very strange if actions toward other people you didn't know produced greater psychological rewards than those you knew and loved, and I've yet to see any evidence that it's true. Anecdotally, it seems vastly more likely that the opposite is true: that if you're trying to maximize your own happiness, being generous to the people you love is the best way to push this psychological button.

Comment author: philh 28 January 2016 05:15:45PM 11 points [-]

I'm not going to argue that you should pay attention to EY. His arguments convince me, but if they don't convince you, I'm not gonna do any better.

What I'm trying to get at is, when you ask "is there any evidence that will result in EY ceasing to urgently ask for your money?"... I mean, I'm sure there is such evidence, but I don't wish to speak for him. But it feels to me that by asking that question, you possibly also think of EY as the sort of person who says: "this is evidence that AI risk is near! And this is evidence that AI risk is near! Everything is evidence that AI risk is near!" And I'm pointing out that no, that's not how he acts.

While we're at it, this exchange between us seems relevant. ("Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design." "Well, what a relief!") You seem surprised, and I'm not sure what about it was surprising to you, but I don't think you should have been surprised.

Basically, even if you're right that he's wrong, I feel like you're wrong about how he's wrong. You seem to have a model of him which is very different from my model of him.

(Btw, his opinion seems to be that AlphaGo's methods are what makes it more of a leap than a self-driving car or than Deep Blue, not the results. Not sure that affects your position.)

Comment author: Kaj_Sotala 25 January 2016 09:41:17PM *  11 points [-]

If you haven't heard of it yet, I recommend the novel Crystal Society (freely available here, also $5 Kindle version.)

You could accurately describe it as "what Inside Out would have been if it looked inside the mind of an AI rather than a human girl, and if the society of mind had been composed of essentially sociopathic subagents that still came across as surprisingly sympathetic and co-operated with each other due to game theoretic and economic reasons, all the while trying to navigate the demands of human scientists building the AI system".

Brienne also had a good review of it here.

Comment author: gjm 20 January 2016 10:01:36AM 11 points [-]

I am trying, and failing, to think of any possible situation in which I would want to use one of these gestures.

If you don't know the person you're facing is another EA, the likely outcome is that you make a weird gesture and the other party is offended or confused, thinks you're creepy and weird, and avoids interacting with you in the future. Extra bonus points if they ask why you just did that weird thing, you explain it's an identity symbol for effective altruists, and now they think effective altruism is a sinister cult with weird hand gestures.

If you do know the person you're facing is another EA, why do you need a special gesture to identify you both as such?

Maybe it might be used in a situation other than one-to-one interaction like, er, that Nazi salute? But it's not like EA rallies are particularly common, and if they were you'd need this gesture to be known by everyone there, which ... doesn't seem likely to happen.

What other instances are there of this kind of gesture being used? You get them sometimes in secret societies like the Freemasons with their special handshakes, or maybe the early Christians supposedly scratching fish symbols in the dirt. But EA isn't a secret society nor (so far as I know) does it aspire to be one; effective altruists aren't persecuted and have no need to hide, and the available evidence suggests that openness about generous giving helps to encourage others to give more.

What am I missing? When would anyone use this and why? I think I need some concrete examples. The OP says things like "for various purposes" (what purposes?) and "to help us identify each other" (in what situations?) and "for the positive psychological effects" (of doing what, exactly?). I just don't get it.

Comment author: Vaniver 18 January 2016 07:47:27PM 11 points [-]

Temple Grandin has some work that's relevant, and argues for quantitative measures. One of the easy metrics to use now are bodily integrity things, like the percentage of animals who are lame when they make it to the slaughterhouse. A lame animal is unlikely to be a happy or well-treated animal, and it seems easy to measure and compare.

Comment author: Risto_Saarelma 06 January 2016 09:20:40PM *  11 points [-]

I see the pattern identity theory, where uploads make sense, as one that takes it as a starting point that you have an unambiguous past but no unambiguous future. You have moments of consciousness where you remember your past, which gives you identity, and lets you associate your past moments of consciousness to your current one. But there's no way, objective or subjective, to associate your present moment of consciousness to a specific future moment of consciousness, if there are multiple such moments, such as a high-fidelity upload and the original person, who remember the same past identity equally well. A continuity identity theorist thinks that a person who gets uploaded and then dies is dead. A pattern identity theorist thinks that people die in that sense several times a second and have just gotten used to it. There are physical processes that correspond to moments of consciousness, but there's no physical process for linking two consecutive moments of consciousness as the same consciousness, other than regular old long and short term memories.

There's no question that the upload and the original will diverge. If I have a non-destructive upload done on me, I expect to get up from the upload couch, not wake up in the matrix, old habits and all that. And there is going to be a future me who will experience exactly that. But if the upload was successful, there's also going to be a future me who will be very surprised to wake up staring at some fluorescent polygons, having expected to wake up on the upload coach. This is where the "no unambiguous future selves" stops being sophistry and starts paying rent for the pattern identity theorist. "Which one is the real me" is a meaningless question. All we have to go with are memories, and both of me will have my memories.

If you want to argue a pattern identity theorist out of it, you'll want to argue why there has to necessarily be more than just memory going on with producing the sense of moment-to-moment personal continuity, and why the physically unconnected moments of consciousness model can't be sufficient.

Comment author: The_Lion 06 January 2016 04:43:48AM 1 point [-]

Rosa Parks didn't start her own non-racist bus service. She helped to create a climate in which the existing bus service providers couldn't get away with telling black people where to sit.

So, how did the movement she started work out for black people?

Hmm, it appears that half a century afterwards most blacks live in crime-filled hell-holes where more of them get killed in a single year (by other blacks) than were lynched during the entire century of Jim Crow.

Comment author: RichardKennaway 05 January 2016 01:16:09PM 11 points [-]

There is some confusion in the comments over what utility is.

kithpendragon writes:

the maximum utility that it could conceivably expect to use

and Usul writes:

goes out to spend his utility on blackjack and hookers

Utility is not a resource. It is not something that you can acquire and then use, or save up and then spend. It is not that sort of thing. It is nothing more than a numerical measure of the value you ascribe to some outcome or state of affairs. The blackjack and hookers, if that's what you're into, are the things that you would be specifically seeking by seeking the highest utility, not something you would afterwards get in exchange for some acquired quantity of utility.
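The distinction can be put in a few lines of toy code (the outcomes and numbers are invented): a utility function just scores outcomes, and an agent seeks the highest-scoring outcome directly; nothing is ever "spent".

```python
# Utility as a score over outcomes, not a currency.
utility = {"blackjack": 10, "hookers": 8, "stay_home": 1}

def best_outcome(utility):
    # The agent picks the outcome with the highest number; the number
    # only ranks outcomes, it isn't a resource exchanged for them.
    return max(utility, key=utility.get)

print(best_outcome(utility))
```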

Comment author: gjm 04 January 2016 09:58:07PM 11 points [-]

I will be most interested to find out what it is that requires a sockpuppet but doesn't require it to be secret that it's a sockpuppet or even whose sockpuppet.

Comment author: Viliam 29 December 2015 09:51:33PM 11 points [-]

From my experience the most "value added per book" in psychology is reading Games People Play. Just read the "games" and ignore all the psychoanalytical classifications attached to them -- psychoanalysis is a highly dubious field, but the examples of the "games" come from real life, and many readers are shocked to find out that some of their life-long problems are actually instances of quite trivial scenarios. Sometimes there is advice about how to quit playing the "game".

I know it's not exactly the kind of book you wanted, but it probably has more everyday applications than anything else. And it is really easy to read (when you skip the psychoanalytical classifications, which are provided separately).

Comment author: [deleted] 28 December 2015 02:33:10PM 11 points [-]

How to handle feeling low status? I mean the feeling that people don't respect you, and don't consider what you're doing or saying important or worthy. When I was young, I used to feel this way all the time. Now there are groups in which I don't feel this, but I still feel it occasionally, especially if I'm in new social situations. This is the worst feeling for me, and usually the number one reason why I sometimes lose motivation to do things.

The simple solution is to acquire more status, but I'm not really asking about that because you have to be able to handle being low status before you can become high-status. The easiest way I've found for acquiring status in groups is this:

  1. Pick a group
  2. Become accustomed to the norms of that group
  3. Signal knowledge, experience, and talent in the areas of interest of that group. Have the right opinions and interests and follow fashion as those interests and popular opinions change. Make the right lifestyle choices. Do impressive things based on those norms. It's not good to be too obvious about these things because explicitly seeking approval signals low status in many groups. There's room for freedom in most of these areas because of countersignaling reasons.

Then there are generally impressive things like having a Ph.D, a high-paying job, or being really skilled in some area which are high status in many groups.

I've noticed that some people who are very intelligent, and especially those who are socially intelligent, can often make people respect them even in new groups because they always find interesting and relevant things to say. I'm not that kind of person.

In response to comment by [deleted] on Open Thread, Dec. 28 - Jan. 3, 2016
Comment author: Viliam 28 December 2015 01:20:13PM *  11 points [-]

You seem to assume that the management consulting companies are paid for making the correct decision based on the data... as opposed to giving the answer someone important in the management (the person who made the decision to hire them) wanted to hear, while providing this person plausible deniability ("it wasn't my idea; it's what the world-renown experts told us to do; are you going to doubt them?").

Depending on which view is correct, there may or may not be a market demand for your solution.

Comment author: Clarity 27 December 2015 04:43:23PM *  9 points [-]

It's Monday the 28th in Australia.

Different LessWrongers have different time zones...

Then again, only Clarity would somehow get his open thread downvoted to a negative balance...

Comment author: username2 25 December 2015 06:55:52PM *  6 points [-]

Would you apply the same logic to actions by an actual government? That is, argue that the news coverage of trials shows they are distracting and it would be better to just have suspects vanished by the secret police in the dead of night?

Or since this is a forum, how about having problematic posters quietly karmassassinated, oh wait.

Comment author: username2 25 December 2015 02:58:36PM 11 points [-]

Please don't ban the anonymous account; there are at least a couple of people who regularly use it. It is rare that anyone uses it for voting. This was the first time I logged in and noticed so much upvoting and downvoting in a single thread; sometimes I find a couple of votes in a thread, and I often revert them, but that's it. Maybe there were previous incidents in the past that I haven't noticed. Of course, things like this rely on goodwill. If someone started abusing it, there would be no choice except to ban it.

By the way, thank you Nancy. You do a job that is often unpleasant, but necessary.

Comment author: Kaj_Sotala 24 December 2015 12:06:16PM *  11 points [-]

If we're doing the virtue ethical banning, then as long as we agree that the people in question deserved a ban, the specific reasons given for the ban aren't very important. The moderator may be reacting to a pattern that's clearly ban-worthy, but nonetheless hard to verbalize exactly, and thus misreport their real reason. Verbal reporting is hard.

Comment author: jsteinhardt 24 December 2015 06:16:47AM 10 points [-]

I'm dubious that that constitutes abusing her power; AdvancedAtheist was highly and consistently downvoted for a long period of time before being banned.

Comment author: Panorama 16 December 2015 11:23:55PM *  11 points [-]

Guess the correlation

The aim of the game is simple: try to guess how correlated the two variables in a scatter plot are. The closer your guess is to the true correlation, the better.
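The "true correlation" being guessed is just Pearson's r; a quick sketch of computing it from scratch (the data points are invented for illustration):

```python
# Pearson correlation coefficient, computed from first principles.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

xs = [1, 2, 3, 4, 5]
ys = [2, 4, 5, 4, 5]
r = pearson_r(xs, ys)
print(round(r, 3))
```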

Comment author: James_Miller 07 December 2015 05:21:47PM 11 points [-]

I asked Steve Hsu (an expert) "How long do you think it will probably take for someone to create babies who will grow up to be significantly smarter than any non-genetically engineered human has ever been? Is the answer closer to 10 or 30 years?"

He said it might be technologically possible in 10 years but " who will have the guts to try it? There could easily be a decade or two lag between when it first becomes possible and when it is actually attempted."

In, say, five years someone should start a transhumanist dating service that matches people who want to genetically enhance the intelligence of their future children. Although this is certainly risky, my view is that the Fermi paradox implies we are in great danger and so should take the chance to increase the odds that we figure out a way through the great filter.

Comment author: Lumifer 25 November 2015 03:17:02AM 11 points [-]

By "transient" I mean that you mention a topic once and then never show any interest in it again. By "noise" I mean random pieces of text which neither contain useful information nor are interesting.
