All of Rain's Comments + Replies

Rain1-1

I thought it was funny when Derek said, "I can explain it without jargon."

It seems to be conflating 'morality' with 'success'. Being able to predict the future consequences of an act is only half the moral equation - the other half is empathy. Human emotion, as programmed by evolution, is the core of humanity, and yet seems derided by the author.

5jessicata
Why do you think the author (me?) is deriding empathy? On SimplexAI-m's view, empathy is a form of cognition that is helpful, though not sufficient, for morality; knowing what others are feeling doesn't automatically imply treating them well (consider that predators tend to know what their prey are feeling); there's an additional component that has to do with respecting moral symmetries, e.g. not stealing from them if you wouldn't want them to steal from you.
Answer by Rain41

The novel After Life by Simon Funk has quite a few flashbacks to the world prior to humanity's end, though it takes more than a year. I find it one of the more hopeful stories in the genre.

Rain120

Your periodic reminder that in 1947, New York City vaccinated ~6.35 million people (80% of their population) for smallpox in less than a month. If you do not think we can do this, what changed to make it impossible?

What changed? We started looking for every possible negative consequence of rolling out vaccines that quickly, and then working to mitigate each and every one.

Rain60

Neat. I work for DLA. Thanks for the update.

Rain50

Thank you very much for the insightful news. I consider these posts essential reading.

Rain100

Once again, thank you for these incredibly informative posts.

Rain110

Thank you for all this useful information and analysis.

Rain220

Thank you very much for posting these.

3Theodore Ehrenborg
Seconded
Rain30

I agree it fits well here. However, it has a very different tone from other posts on the MIRI blog, where it has also been posted.

Rain70

Laziness. Though I note Stuart_Armstrong had the same opinion as me, and offered even fewer means of improvement, and got upvoted. I should have also said I agree with all points contained herein, and that the message is an important one. That would have reduced the bite.

6Ben Pace
Just as a data point, you're right, your comment felt to me as though it had more 'bite' and felt a little more aggressive than Stuart's, which is why I downvoted yours and not his, even though I almost downvoted his too.
Rain00

This article is very heavy with Yudkowsky-isms and repeats of material he's posted before; it needs a good summary and editing to pare it down. I'm surprised they posted it to the MIRI blog in its current form.

Edit: As stated below, I agree with all the points of the article, and consider it an important message.

5dxu
There is constructive criticism, and there is non-constructive criticism. My personal heuristic for determining whether a given critic is being constructive is to look at (a) how specific they are about the issues they perceive, and (b) whether they provide any specific suggestions as to how to address those issues. The parent comment does poorly on both fronts, and that in conjunction with the heavily aggressive tone therein are sufficient to convince me that it was very much written in bad faith. Please strive to do better.

I'll agree that it's more than a little redundant, especially when I understood the point he was getting at in the first part. But how much of that is the fault of his writing here and how much of it is the fault of the fact that he's written about the issue before? And, more importantly, if you were to hand this article to someone who knows nothing about Yudkowsky or Less Wrong, would that extra length help them? I'd argue that a lot of the article's length comes from trying to avoid some of his most common problems - instead of r

... (read more)
2Vaniver
https://www.lesserwrong.com/feed.xml is the primary one; more customization is coming soon.
4Gunnar_Zarncke
There are other big deals. The MS ImageNet win also contained frightening progress on the training meta level. -- extracted from a very readable summary at Wired: http://www.wired.com/2016/01/microsoft-neural-net-shows-deep-learning-can-get-way-deeper/

Thanks. Key quote:

What this indicates is not that deep learning in particular is going to be the Game Over algorithm. Rather, the background variables are looking more like "Human neural intelligence is not that complicated and current algorithms are touching on keystone, foundational aspects of it." What's alarming is not this particular breakthrough, but what it implies about the general background settings of the computational universe.

Rain00

Even in that case, whichever actor has the most processors would have the largest "AI farm", with commensurate power projection.

Rain80

That interview is indeed worrying. I'm surprised by some of the answers.

3Viliam
Like this? The first one is a non-answer, the second one suggests that a proper response to Dr. Evil making a machine that transforms the planet into a grey goo is Anonymous creating another machine which... transforms the grey goo into a nicer color of goo, I guess?
Rain110

More likely, he also "always thought that way," and the extreme story was written to provide additional drama.

2Adam Zerner
Perhaps. My best guess is that he did always think that way... but that the experience also gave him a notable boost (how could it not?!). My reasoning is that tons of people have similarly painful experiences, but don't become behavioral economists afterwards.
Rain60

Thank you for replicating the experiment!

Rain480

Somewhat upper middle class job; low cost of living, inexpensive hobbies, making donations a priority.

Rain740

I donated $5000 today and continue my $1000 monthly donations.

[anonymous]130

Where are you getting that much money?

So8res150

Woah. Thanks!

Rain40

So MIRI and LW are no longer a focus for you going forward?

3JRMayne
Oh, for pity's sake. You want to repeatedly ad hominem attack XiXiDu for being a "biased source." What of Yudkowsky? He's a biased source - but perhaps we should engage his arguments, possibly by collecting them in one place. "Lacking context and positive examples"? This doesn't engage the issue at all. If you want to automatically say this to all of XiXiDu's comments, you're not helping.
5XiXiDu
I have been a member for more than 5 years now. So I am probably as much part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of. To quote one of my posts: I even defended LessWrong against RationalWiki previously. The difference is that I also highlight the crazy and outrageous stuff that can be found on LessWrong. And I also don't bother offending the many fanboys who have a problem with this.
XiXiDu190

Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.

I have been a member for more than 5 years now. So I am probably as much part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.

To quote one of my posts:

I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.

I even defended LessWrong again... (read more)

Rain50

Skin reacts to light, too.

0Lumifer
In the visible part of the spectrum (that is, not UV)?
4ChristianKl
Actually the post says: Don't buy index funds because the efficient market hypothesis isn't true.
2Viliam_Bur
That's the first 1/3 of the article.
Rain20

The FAQ addresses Crohn's Disease: "more data needed".

https://faq.soylent.me/hc/en-us/articles/200838449-Will-Soylent-help-my-Crohns-or-IBS-

It also has a full list of ingredients.

https://faq.soylent.me/hc/en-us/articles/200789315-Soylent-1-0-Nutrition

One thing from the link above that I didn't previously know: "The Soylent recipe is based on the recommendations of the Institute of Medicine (IOM) and is approved as a food by the Food and Drug Administration (FDA)." (emphasis theirs)

1Lumifer
That triggers my bullshit detector. FDA does not "approve as food". It has a list of substances which have been approved as food additives or are GRAS (generally recognized as safe, basically a grandfathering clause). I'm willing to believe that Soylent ingredients all come from that FDA-approved list. That does not mean that the FDA approved Soylent as food.
3Mizue
Is "approved as a food" like those fake star naming companies which claim that the star names are in the Library of Congress? The FDA approving it as a food doesn't mean the FDA approves of it being consumed in a specific way. I'm pretty sure ketchup is approved as a food too, but that doesn't mean you can drink a bottle of it for lunch each day and stay healthy.
1[anonymous]
Rain, thanks for the link. I'm impressed they factored in the question of Crohns/IBS. Most people tend to forget the issue. I hope to get a chance to talk to my GI about it and some other supplements soon, when I have more time to judge his reliability (new guy so the verdict's still out), just so I can have his opinion on the subject.
Rain60

No agreement. It's a polarizing topic, even here.

Rain110

No reason to apologize. It's a good time for another thread, since it's actually out now.

8Adam Zerner
I should have thought to check for previous threads. I just heard of Soylent and thought of it as some obscure product so I didn't, but I should have just checked anyway. I'm reading through them now, but if you wouldn't mind saving me some time, is there some sort of general agreement as to the safety of Soylent and similar meal replacement drinks/bars?
Rain140

Here's my review of Soylent and a taskification of how I use it.

Pros:

  • Much easier than cooking or even fast food, when transportation costs are taken into account
  • Much more nutritionally complete than fast food or processed sugar-foods
  • Relatively cheap
  • Tastes neutral or slightly sweet

Cons:

  • Sometimes sticks to the back of my throat
  • Can give foul smelling gas
  • Can cause headaches
  • Can cause nausea
  • Texture of high pulp orange juice
  • Doesn't have the daily allowance of sodium

Preparation Process:

  • Place Takeya pitcher on counter with top off
  • Rip off top of S
... (read more)
Rain360

I pledged to continue donating $1,000 per month.

I also convinced a friend to donate for the first time.

2lukeprog
Awesome, thanks!
Rain160

Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc. ?

XiXiDu cares about every potential Eliezer mistake.

2gwillen
I like to taboo the word 'drama', as meaning "conflict, with some connotations of insignificance snuck in". Forum conflict may or may not be noise. If there's a problem with moderation on the site, it's valuable to me to know that, especially if the moderation doesn't announce itself (as seems to be the case here) so that such complaints are the only way for me to even know it's happening. So I am interested in more information about this issue.
Rain80

I didn't realize the grand prize was based on daily unique donors until I got the 'urgent' email. I got my dad to chip in $10, too. Looks like the other leading organization has more friends and family.

Rain00

My apologies, I won't be able to make it. Work unexpectedly kept me up until 3am, and my body punished me with sleep.

0arundelo
Too bad, we missed you.
Rain30

Jon's what I call normal-smart. He spends most of his time watching TV, mainly US news programs, and they're quite destructive to rational thinking, even if the purpose is for comedic fodder and to discover hypocrisy. He's very tech averse, letting the guests he has on the show come in with information he might use, trusting (quite good) intuition to fit things into reality. As such, I like to use him as an example of what more normal people feel about tech / geek issues.

Every time he has one of these debates, I really want to sit down as moderator so I ca... (read more)

Rain00
[This comment is no longer endorsed by its author]
9wedrifid
What is the intended lesson or rationality insight here?
Rain00

Immediate thoughts, before reading comments: One-box. I had started to think more deeply until I read the part about being run over for factoring, and for some reason my brain applied it to reasoning about this topic as a whole and spit out a final answer.

Intuitively, it seemed one boxing would get me a million, as per standard Newcomb. The lottery two million seemed like gravy above that (diminishing marginal utility of money), with a potential for 3 million total. Since they're independent, the word "separately" and its description made it seem like the lottery was unable to be affected by my actions at all. Thus, take box B, and hope for a lottery win. Definitely don't over think it, or risk a trolley encounter.

Rain60

Glad to hear. It is interesting data that you managed to bring in 3 big name trolls for a single thread, considering their previous dispersion and lack of interest.

Rain00

AMF/GiveWell charities to keep GiveWell and the EA movement growing while actors like GiveWell, Paul Christiano, Nick Beckstead and others at FHI, investigate the intervention options and cause prioritization, followed by organization-by-organization analysis of the GiveWell variety, laying the groundwork for massive support for the top far future charities and organizations identified by said processes

Cool, if MIRI keeps going, they might be able to show FAI as top focus with adequate evidence by the time all of this comes together.

0lukeprog
Well, in collaboration with FHI. As soon as Bostrom's Superintelligence is released, we'll probably be building on and around that to make whatever cases we think are reasonable to make.
Rain10

Build up general altruistic capacities through things like the effective altruist movement or GiveWell's investigation of catastrophic risks

I read every blog post they put out.

Invest money in an investment fund for the future which can invest more [...] when there are better opportunities

I figure I can use my retirement savings for this.

(recalling that most of the value of MIRI in your model comes from major institutions being collectively foolish or ignorant regarding AI going forward)

I thought it came from them being collectively foolish or ig... (read more)

1lukeprog
I believe the correct term is "ass-pull number." :)
Rain30

The method is even more important (practice vs. perfect practice, philanthropy vs. givewell). I believe in the mission, not MIRI per se. If Eliezer decided that magic was the best way to achieve FAI and started searching for the right wand and hand gestures rather than math and decision theory, I would look elsewhere.

Rain40

I subscribe to the view that AGI is bad by default, and don't see anyone else working on the friendliness problem.

Rain00

I'm not sure which fallacy you're invoking, but saying (to paraphrase), 'superintelligence is likely difficult to aim' and 'MIRI's work may not have an impact' are certainly possible, and already contribute to my estimates.

0Peter Wildeford
I think a fair amount of people argue that because a cause is important, anyone working on that cause must be doing important work.
Rain20

Could you clarify your definition of success?

From MIRI's mission statement: "the creation of smarter-than-human intelligence has a positive impact."

I see smarter-than-human intelligence as required to overcome the combined threat of existential risks in the long run.

6Pablo
The full sentence reads: "MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact." (emphasis added) Clearly, if smarter-than-human intelligence ends up having a positive impact independently of (or in spite of) MIRI's efforts, that would count as a success only in a Pickwickian sort of sense. To succeed in the sense obviously intended by the authors of the mission statement, MIRI would have to be at least partially causally implicated in the process leading to the creation of FAI. So the question remains: on what grounds do you believe that, if smarter-than-human intelligence ends up having a positive impact, this will necessarily be at least partly due to MIRI's efforts? I find that view implausible, and instead agree with Carl Shulman that "the impact of MIRI in particular has to be far smaller subset of the expected impact of the cause as a whole," for the reasons he mentions.