Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: jyan 23 April 2017 01:37:26PM 0 points [-]

If a new non-profit AI research company were to be built from scratch, which regions or countries would be best for the safety of humanity?

Comment author: MrCogmor 23 April 2017 12:36:05PM 0 points [-]

Nutrition is taught in colleges so that people can become accredited dieticians. You should be able to find a decent undergrad textbook on Amazon. If you get a used copy one edition behind the current one, it should be cheap as well.


Comment author: Thomas 23 April 2017 09:57:22AM *  0 points [-]

"def" isn't a word in the English language

But the English statement:

A Python program which begins with the letter "d"; followed by the letter "e";....

is an English statement. Especially when it is a bit improved grammatically.
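As a toy illustration of the construction (the function name and exact phrasing are my own invention, not anything from the thread), such a description can be generated mechanically:

```python
def describe(program: str) -> str:
    """Spell out a Python listing, character by character, as an English sentence."""
    names = {"\n": "a newline", " ": "a space", "\t": "a tab"}
    parts = [names.get(ch, f'the character "{ch}"') for ch in program]
    return "A Python program which begins with " + "; followed by ".join(parts) + "."

print(describe("def f(): return 1"))
```

The output is an (unwieldy but grammatical) English sentence that uniquely determines the program.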

Comment author: ChristianKl 23 April 2017 09:41:08AM *  0 points [-]

"def" isn't a word in the English language. __init__ isn't either.

Comment author: ESRogs 23 April 2017 07:50:30AM 0 points [-]

Then, as the Open Philanthropy Project explored active funding in more areas, its estimate of its own effectiveness grew. After all, it was funding more speculative, hard-to-measure programs...

If I start funding a speculative project because I think it has higher EV than what I'm funding now, then isn't it rational for me to think my effectiveness has gone up? It seems like you're implying it's wrong of them to think that.

but a multi-billion-dollar donor, which was largely relying on the Open Philanthropy Project's opinions to assess efficacy (including its own efficacy), continued to trust it.

I worry that this might paint a misleading picture to readers who aren't aware of the close relationship between Good Ventures and GiveWell. This reads to me like the multi-billion-dollar donor is at arm's length, blindly trusting Open Phil, when in reality Open Phil is a joint venture of GiveWell and Good Ventures (the donor), and they share an office.

Comment author: ChristianKl 23 April 2017 06:30:01AM 0 points [-]

You think people don't expect it from GiveWell?

Comment author: Benquo 23 April 2017 06:24:19AM 0 points [-]

This seems a little unfair to Charles Ponzi. He was emulating the practices of Banco Zarossi, the bank where he got his first good job. Maybe it seemed like a normal accepted business practice to him.

He told his investors the money would come from postal stamp arbitrage. He'd really found an arbitrage opportunity, albeit one that was hard to cash out. Maybe he really thought he'd be able to make those kinds of returns, and then just never went back to check once the money started rolling in.

It's not obvious to me that he consciously formed an intent to deceive. Maybe he was fooling himself too.

Comment author: Thomas 23 April 2017 06:12:47AM *  0 points [-]

English is computable. You can describe, letter by letter, any Python program you want.

Not only its listing, but also its execution, step by step. So yes, English is computable.

In response to comment by gilch on Cheating Omega
Comment author: WalterL 23 April 2017 06:04:25AM 0 points [-]

I'm simplifying, but I don't think it's really strawmanning.

There exists no procedure that the Chooser can perform after Omega sets down the box and before they open it that will cause Omega to reward a two boxer or fail to reward a one boxer. Not X-raying the boxes, not pulling a TRUE RANDOMIZER out of a portable hole. Omega is defined as part of the problem, and fighting the hypothetical doesn't change anything.

He correctly rewards your actions in exactly the same way that the law in Prisoner's Dilemma hands you your points. Writing long articles about how you could use a spoon to tunnel through and overhear the other prisoner, and that if anyone doesn't have spoons in their answers they are doing something wrong...isn't even wrong, it's solving the wrong problem.

What you are fighting, Omega's defined perfection, doesn't exist. Sinking effort into fighting it is dumb. The idea that people need to 'take seriously' your shadow boxing is even more silly.

Like, say we all agree that Omega can't handle 'quantum coin flips', or, heck, dice. You can just re-pose the problem with Omega2, who alters reality such that nothing that interferes with his experiment can work. Or walls that are unspoonable, to drive the point home.
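To make the point concrete, here is a toy expected-value calculation (the dollar amounts are the standard ones from Newcomb's problem; the function itself is just a sketch):

```python
def expected_payoff(one_box: bool, accuracy: float = 1.0) -> float:
    """Expected winnings against an Omega that predicts with the given accuracy.

    Omega puts $1,000,000 in the opaque box iff it predicts one-boxing;
    the transparent box always holds $1,000.
    """
    p_predicted_one_box = accuracy if one_box else 1 - accuracy
    opaque = 1_000_000 * p_predicted_one_box
    transparent = 0 if one_box else 1_000
    return opaque + transparent

# With a perfect, defined-into-the-problem Omega, one-boxing dominates:
print(expected_payoff(one_box=True))   # 1000000.0
print(expected_payoff(one_box=False))  # 1000.0
```

No procedure performed after the boxes are set changes these numbers; that is what "Omega is defined as part of the problem" means.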

Comment author: Benquo 23 April 2017 06:01:43AM 1 point [-]

Is the idea that someone might think that current managers are wrongly failing to listen to them, but if forced to listen, would accept good ideas and reject bad ones? That seems plausible, though the more irrational you think the current managers are in the relevant ways, the more you should expect your influence to be through control rather than contributing to the discourse. Overall this seems like a decent alternative hypothesis.

Comment author: scarcegreengrass 23 April 2017 03:23:04AM 0 points [-]

Ah, the comments too! Okay, now I understand.

Comment author: David_Gerard 23 April 2017 01:29:45AM 0 points [-]

but in the context of Wikipedia, you should after all keep in mind that I am an NSA shill.

Comment author: David_Gerard 23 April 2017 01:26:36AM *  0 points [-]

(More generally, as a Wikipedia editor I find myself perennially amazed at advocates for some minor cause who seem to seriously think that Wikipedia articles on their minor cause should only be edited by advocates, and that all edits by people who aren't advocates must somehow be wrong and bad and against the rules. Even though the relevant rules (a) are quite simple conceptually and (b) say nothing of the sort. You'd almost think they don't have the slightest understanding of what Wikipedia is about, and only care about advocating their cause and bugger the encyclopedia.)

Comment author: David_Gerard 23 April 2017 01:06:43AM *  0 points [-]

This isn't what "conflict of interest" means at Wikipedia. You probably want to review WP:COI, and I mean "review" it in a manner where you try to understand what it's getting at rather than looking for loopholes that you think will let you do the antisocial thing you're contemplating. Your posited approach is the same one that didn't work for the cryptocurrency advocates either. (And "RationalWiki is a competing website therefore his edits must be COI" has failed for many cranks, because it's trivially obvious that their true rejection is that I edited at all and disagreed with them, much as that's your true rejection.) Being an advocate who's written a post specifically setting out a plan, your comment above would, in any serious Wikipedia dispute on the topic, be prima facie evidence that you were attempting to brigade Wikipedia for the benefit of your own conflict of interest. But, y'know, knock yourself out in the best of faith, we're writing an encyclopedia here after all and every bit helps. HTH!

If you really want to make the article better, the guideline you want to take to heart is WP:RS, and a whacking dose of WP:NOR. Advocacy editing like you've just mapped out a detailed plan for is a good way to get reverted, and blocked if you persist.

Comment author: tukabel 22 April 2017 10:54:15PM 2 points [-]

Welcome to the world of Memetic Supercivilization of Intelligence... living on top of the humanimal substrate.

It appears in maybe less than a percent of the population and produces all these ideas/science and subsequent inventions/technologies. This usually happens in a completely counter-evolutionary way, as the individuals responsible usually get very little profit (or even recognition) from it and would do much better (in evolutionary terms) to use their abilities a bit more "practically". Even the motivation is usually completely memetic: typically it goes along lines like "it is interesting" to study something, think about this and that, or research some phenomenon or mystery.

Worse, they give stuff more or less for free and without any control to the ignorant mass of humanimals (especially those in power), empowering them far beyond their means, in particular their abilities to control and use these powers "wisely"... since they are governed by their DeepAnimal brain core and resulting reward functions (that's why humanimal societies function the same way for thousands and thousands of years - politico-oligarchical predators living off the herd of mental herbivores, with the help of mindfcukers, from ancient shamans, through the stone age religions like the catholibanic one, to the currently popular socialist religion).

AI is not a problem, humanimals are.

Our sole purpose in the Grand Theatre of the Evolution of Intelligence is to create our (first nonbio) successor before we manage to self-destruct. Already nukes were too much, and once nanobots arrive, it's over (worse than DIY nuclear grenade for a dollar any teenager or terrorist can assemble in a garage).

Singularity should hurry up; there are maybe just a few decades left.

Do you really want to "align" AI with humanimal "values"? Especially if nobody knows what we are really talking about when using this magic word? Not to mention defining it.

Comment author: casebash 22 April 2017 10:51:14PM 0 points [-]

Because people expect this from funds.

Comment author: Kallandras 22 April 2017 10:45:57PM *  0 points [-]

My perspective is that religious folk have not been prepping the party. Scientists have been trying to get some instruments together to make some music, but the religious people keep grabbing guitars, smashing them, and calling it music. Then, when the music finally starts up despite all the smashed instruments, religious folks say "oh hey, that's what we were trying to do, you're welcome everybody."

As soon as something conveniently fits the religious narrative (appropriately tortured beyond its original construction), it gets incorporated. I find that frustrating, as it should instead shatter the narrative and reveal it for the useless pile of dogma that it is.

Comment author: gjm 22 April 2017 10:41:18PM 0 points [-]

Did he?

I may be misremembering. I thought that was where I first saw the C&H pony cartoon being used in the way we're talking about, but I could very well be wrong -- or of course I could be right about that but wrong to think that was its first influential emergence. Here is a 2004 example, not from CT itself, but from one of its leading contributors.

Comment author: tristanm 22 April 2017 10:21:35PM 0 points [-]

I haven't seen that either your approach or Paul's necessarily conflicts with MIRI's. There may be some difference of opinion on which is more likely to be feasible, but seeing as Paul works closely with MIRI researchers and they seem to have a favorable opinion of him, I would be surprised if it were really true that OpenPhil's technical advisors were that pessimistic about MIRI's prospects. If they aren't that pessimistic, then it would imply that Holden is acting somewhat against the advice of his advisors, or that he has strong priors against MIRI that were not overcome by the information he was receiving from them.

Comment author: Lumifer 22 April 2017 09:32:15PM *  0 points [-]

I suppose you're making fun

Mea culpa, I do that :-)

showing that they weren't or that they were right to be

You do realize this is a casual discussion on the 'net, not an academic text intended to be used as a reference with all the i's dotted and t's crossed?

got from someone at Crooked Timber

Did he? I don't read Crooked Timber regularly, but I don't remember them being excited about ponies.

I'm getting a funny sense of deja vu here; how about you?

Oh, I just see a mulberry bush :-P

Comment author: Lumifer 22 April 2017 09:28:45PM 0 points [-]

It's possible to provide someone useful help by giving them information about their weaknesses but still be treated negatively as a result.

Sure, so? You just have to figure out whether it's worth it.

Comment author: Lumifer 22 April 2017 09:27:56PM 0 points [-]

tl;dr Talk to professionals; don't take advice on mental health from an internet forum.

Comment author: username2 22 April 2017 07:25:03PM 0 points [-]

Furthermore, Aragorn et al. specifically saw this conflict as a one-shot dilemma that had to be definitively resolved with the absolute destruction of Sauron. They already knew what negotiated peace with the enemy looked like (Saruman) and were not willing to risk that outcome, or any other outcome that would result in the rise of Sauron again. This is why they risk everything by making a frontal assault on Mordor against overwhelming odds. Killing the messenger / burning bridges is perfectly in line with the character motivations here, and is actually a point where the original source material fails.

Comment author: username2 22 April 2017 07:18:16PM 0 points [-]

It's a bit ironic to say that on a website with a large contingent of people that are purposefully child-free until the control problem is solved.

Comment author: username2 22 April 2017 07:16:14PM 0 points [-]

I don't think the rise of humanity has been very beneficial to monkeys, overall. Indeed the species we most directly evolved from, which at one point coexisted with modern humans, are now all extinct.

Comment author: whpearson 22 April 2017 06:53:27PM *  0 points [-]

Also if a singular cortex tries to do something too out of alignment with the monkey brain, various other cortices that are aligned with their monkey brains will tend to put them in jail or a mental asylum.

You can see the monkey brain as the first aligned system, aligned to genetic needs. However, if we ever upload ourselves, the genetic system will have completely lost. So in this situation it takes 3+ layers of indirect alignment to completely lose.

self-promotion I am trying to make a normal computer system with this kind of weak alignment built in. self-promotion

Comment author: ChristianKl 22 April 2017 05:03:25PM 0 points [-]

Why are the fund managers going to report on the success of their investments when an organisation like GiveWell doesn't do this (as per the example in the OP)?

Comment author: casebash 22 April 2017 04:18:37PM *  0 points [-]

To what extent is it expected that EAs will be the primary donors to these funds?

If you want to outsource your donation decisions, it makes sense to outsource to someone with similar values. That is, someone who at least has the same goals as you. For EAs, this is EAs.

Comment author: casebash 22 April 2017 03:59:12PM 0 points [-]

No, because the fund managers will report on the success or failure of their investments. If the funds don't perform, then their donations will fall.

Comment author: casebash 22 April 2017 03:55:59PM 2 points [-]

Wanting a board seat does not mean assuming that you know better than the current managers - only that you have distinct and worthwhile views that will add to the discussion that takes place in board meetings. This may be true even if you know worse than the current managers.

Comment author: gjm 22 April 2017 03:43:18PM 0 points [-]

You may have any finite number of additional words.

The difficulty is not simply that the vocabulary is not perfectly well defined; it's that "English" really means, in practice, something like "whatever someone considered an English speaker finds themselves able to understand as English" and yes, that's circular. Languages evolve, writers deliberately push the boundaries, etc. Is the property of being English even computable? Probably (though I wouldn't want my life or my job to depend on being able to provide even a sketchy proof) for finite sentences; for the infinite "sentences" you want to allow, though, not so much. It's not even clear what "computable" would mean there; it certainly can't be anything that involves feeding the "sentence" to a machine that does a finite amount of computation and then returns a verdict.

Comment author: gjm 22 April 2017 03:39:29PM 1 point [-]

Oi! Stop giving away endings.

Comment author: gjm 22 April 2017 03:34:51PM 0 points [-]

so I just have to throw all my work away

Or write something like "sorry, no time to give a real answer to this question" for each of the long nebulous ones. Probably almost as easy as giving up, and more useful-in-expectation to the people who made the questionnaire. (If there's some reward for filling it in, this may reduce the probability that you're eligible, but I doubt you're filling in questionnaires just for the sake of the rewards offered.)

Comment author: lahwran 22 April 2017 03:24:50PM 0 points [-]

this seems a bit oversold, but basically represents what I think is actually possible

Comment author: entirelyuseless 22 April 2017 03:19:14PM *  0 points [-]

This is a good example of what I think will actually happen with AI.

Comment author: gjm 22 April 2017 03:08:17PM 0 points [-]

spherical cows [...] vacuum

An interesting rhetorical tactic. I suggest you're being simplistic, and you respond not by showing that you weren't being, nor by admitting that you were being, but by ... well, I'm not sure, actually. I suppose you're making fun of the idea that anyone might think your earlier comments were simplistic. That's certainly easier than showing that they weren't or that they were right to be, and easier on the ego than admitting they were.

Are you familiar with Popehat?

Yup. And with the pony trope more generally, which I think Ken got from someone at Crooked Timber, who of course got it from Calvin and Hobbes. But laughing at something is not actually the same thing as demonstrating that it deserves only laughing at.

where no one has much in the way of coercive powers

Gosh, if only there were non-market mechanisms other than coercion. ... I'm getting a funny sense of deja vu here; how about you?

Comment author: tristanm 22 April 2017 01:44:27PM 2 points [-]

I know that an organization isn't a "superintelligence" (maybe it's more of a scaled-up human intelligence), but I think this kind of thing is a useful metaphor for what happens when a powerful intelligence maximizes a simple utility function. In this case, the utility function that was specified was "do the most good in the world." There was no (as far as I can tell) malicious intent anywhere in the process. But even such a goal can result in manipulative or secretive behavior if the agent pursuing that goal follows it without regard to any other restrictions on behavior. My fear is that we'll attribute these problems to corruption or malicious intent, whereas I don't think that's the case here, and that's one of the reasons I don't feel the beginning of your essay, involving Ponzi schemes, is super appropriate.

Comment author: Benito 22 April 2017 01:40:24PM 2 points [-]

I'd like to register disagreement: I found the opening walked me through the analogy at a very helpful pace. Actually, I didn't quite know how Ponzi schemes work. Perhaps I'm the odd one out.

Comment author: Kaj_Sotala 22 April 2017 11:37:58AM *  3 points [-]

Good criticisms and I think I'm in rough agreement with many of them, but I'd suggest cutting/shortening the beginning. ~everyone already knows what Ponzi schemes are, and the whole extended "confidence game" introduction frames your post in a more hostile way than I think you intended, by leading your readers to think that you're about to accuse EA of being intentionally fraudulent.

Comment author: ChristianKl 22 April 2017 08:34:15AM 0 points [-]

I don't feel any despair from reading Terry Pratchett's work. Rather, I feel pleasure reading it and laughing at its various jokes.

I don't think the solution you are seeking is found in fiction. If professional mental health services aren't easily available to you, how about CBT workbooks like David Burns's The Feeling Good Handbook?

Comment author: ChristianKl 22 April 2017 08:11:29AM *  0 points [-]

It's possible to provide someone useful help by giving them information about their weaknesses but still be treated negatively as a result.

Telling someone to use more deodorant when they are smelly is useful help. The person might still hate you for it even if they actually use more deodorant as a result.

The social act of offering help also has an emotional aspect. A shy person can estimate that they could provide help and care about providing help and still not offer to help as a result of their shyness.

Comment author: lmn 22 April 2017 07:52:03AM 1 point [-]

Of course, there is a game-theoretic reason to shoot the messenger. The whole point of doing so is to burn a bridge. The original meaning of the term is:

Originally in military sense of intentionally cutting off one's own retreat (burning a bridge one has crossed) to commit oneself to a course of action

Ancient battles, and probably to a large extent modern battles as well, were won or lost on morale. When a large part of your army panicked and ran, your side was almost certain to lose. Furthermore, whoever was the last to run would be the first one killed when the enemy overran your position. Thus, if you were afraid the soldier next to you would run, you were likely to run as well. Burning the bridge behind you was one way to resolve the game-theoretic dilemma. Running cannot save your life, so you might as well hold the line.

Metaphorically burning a bridge by killing the messenger serves the same purpose. By publicly killing Sauron's messenger Aragorn is reassuring his allies that he's not going to betray them by cutting a deal with Sauron that leaves them out to dry.
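The dominance flip behind this logic can be sketched with toy payoffs (the numbers here are arbitrary illustrations; only their ordering matters):

```python
def best_response(others_hold: bool, bridge_burned: bool) -> str:
    """A soldier's best reply under toy payoffs illustrating bridge-burning."""
    # Holding pays off well if the line holds, badly if you're left alone.
    hold = 3 if others_hold else 1
    # Running saves your life only while the bridge still stands.
    run = 0 if bridge_burned else 2
    return "hold" if hold > run else "run"

# Bridge intact: fear of a rout is self-fulfilling.
print(best_response(others_hold=False, bridge_burned=False))  # run
# Bridge burned: holding dominates regardless of what you expect others to do.
print(best_response(others_hold=False, bridge_burned=True))   # hold
print(best_response(others_hold=True, bridge_burned=True))    # hold
```

With the bridge intact, your best move depends on what you expect the others to do; with it burned, holding dominates, which is exactly the commitment effect.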

Comment author: chaosmage 22 April 2017 06:01:28AM 3 points [-]

Very very interesting. I have nothing to add except: This would get more readers and comments if the summary was at the top, not the bottom.

Comment author: Ritalin 22 April 2017 05:48:56AM *  0 points [-]

I don't think you're getting this. You are a meat sack of chemicals. "Being depressed by the realization" means that your meatsack chemistry shifted.

Well, assuming that said shift was long lasting, I want to shift it back into something more conducive to a productive and enjoyable life. Being miserable feels miserable, and, worst of all, it's boring.

the problem is that your ability to consume and digest that happiness is impaired.

On the contrary, I consume and digest the happiness way too fast. It helps me for a short while, and I feel gladness and joy and merriment and flow... and then I'm hungry again. I'm like an insatiable happiness sinkhole.

Comment author: UmamiSalami 22 April 2017 05:08:27AM *  1 point [-]

Why do you expect that to be true?

Because they generally emphasize these values and practices when others don't, and because they are part of a common tribe.

How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it's just about people's philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple of additional years of relevant experience or schooling.

I do think that if you had a research-focused organization where everyone was an EA, it would be better to hire outsiders at the margin, because of the problems associated with homogeneity. (This wouldn't be the case for community-focused organizations.) I guess it just depends on where they are right now, which I'm not too sure about. If you're only going to have one person doing the work, e.g. with an EA fund, then it's better for it to be done by an EA.

Comment author: Benquo 22 April 2017 05:00:44AM 0 points [-]

Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people.

Why do you expect that to be true? How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

Comment author: UmamiSalami 22 April 2017 04:53:24AM *  0 points [-]

I bet that most of the people who donated to Givewell's top charities were, for all intents and purposes, assuming their effectiveness in the first place. From the donor end, there were assumptions being made either way (and there must be; it's impractical to do all kinds of evaluation on one's own).

Comment author: UmamiSalami 22 April 2017 04:40:19AM *  1 point [-]

I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It's a good culture to spread, which involves more individual responsibility and more philosophical clarity. Right now it's embryonic enough that everything is tied closely together. I tentatively agree that that is not desirable. But ideally, growth of thoroughly EA institutions should lead to specialization and independence. This will lead to a much more interesting ecosystem than if the intellectual work is largely outsourced.

Comment author: UmamiSalami 22 April 2017 04:25:52AM 6 points [-]

It seems to me that Givewell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there's lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry on the accuracy of organizations' decisions.

Comment author: Sandi 22 April 2017 03:11:04AM 1 point [-]

The SSC article about omega-6 surplus causing criminality brought to my attention the physiological aspect of mental health, and health in general. Up until now, I prioritized mind over body. I've been ignoring the whole "eat well" thing because 1) it's hard, 2) I didn't know how important it was and 3) there's a LOT of bullshit literature. But since I want to live a long life and I don't want my stomach screwing with my head, the reasonable thing to do would be to read up. I need book (or any other format, really) recommendations on nutrition 101. Something practical, the do's and don'ts of food and research citations to back it up. On a broader note, I want to learn more about biodeterminism, also from a practical perspective. There might be conditions in my environment causing me issues that I don't even know of. It goes beyond nutrition.

Comment author: juliawise 22 April 2017 01:27:27AM 1 point [-]

Yeah, I remember around 2007 a friend saying her parents weren't sure whether it was right for them to have children circa 1983, because they thought nuclear war was very likely to destroy the world soon. I thought that was so weird and had never heard of anyone having that viewpoint before, and definitely considered myself living in a time when we no longer had to worry about apocalypse.

Comment author: Fluttershy 22 April 2017 12:16:35AM 2 points [-]

Well, that's embarrassing for me. You're entirely right; it does become visible again when I log out, and I hadn't even considered that as a possibility. I guess I'll amend the paragraph of my above comment that incorrectly stated that the thread had been hidden on the EA Forum; at least I didn't accuse anyone of anything in that part of my reply. I do still stand by my criticisms, though knowing what I do now, I would say that it wasn't necessary of me to post this here if my original comment and the original post on the EA Forum are still publicly visible.

Comment author: jsteinhardt 21 April 2017 11:35:42PM 4 points [-]

When you downvote something on the EA forum, it becomes hidden. Have you tried viewing it while not logged in to your account? It's still visible to me.

Comment author: Raemon 21 April 2017 11:21:21PM 0 points [-]

Side note: author is female.

Comment author: gilch 21 April 2017 11:00:27PM 0 points [-]

Interesting. Why the equal mass? Omega would need Schrodinger's box, that is, basically no interaction with the outside world lest it decohere. I'm not sure you could weigh it. Still, quantum entanglement and superposition are real effects that may have real-world consequences for a decision theory.

We can inflate a quantum event to macroscopic scales, as with Schrodinger's cat. You have a vial of something reactive in the box to destroy the money, and a hammer triggered by a quantum event.

But isn't that altering The Deal? If Omega is allowed to change the contents of the box after your choice, then it's no longer a Newcomblike problem and just an obvious quid pro quo that any of the usual decision theories could handle.

I'm not sure I understand the setup. Can you cause entanglement with the coins in advance just by knowing about them? I thought it required interaction. I don't think Omega is allowed that access, or you could just as easily argue that Omega could interact with the Chooser's brain to cause the predicted choice. Then it's no longer a decision; it's just Omega doing stuff.

Comment author: morganism 21 April 2017 10:33:20PM 0 points [-]

Sorta related again, being charged for the "services" of the courts, and sometimes reparations, plus 12% interest. Ouch


"Today, in Washington state (where Harris conducted her research), you can be charged not only for victim’s restitution, but for bench warrants, clerks, court-appointed attorneys, lab analyses, juries, drug funds, incarcerations, emergency responses, extraditions, convictions, collections, drug and alcohol assessments and treatments, supervisions, house arrests, and, of course, interest. These charges are levied even on minors and, in some cases, defendants found innocent."

Comment author: Fluttershy 21 April 2017 10:32:03PM *  2 points [-]

Some troubling relevant updates on EA Funds from the past few hours:

  • On April 20th, Kerry Vaughan from CEA published an update on EA Funds on the EA Forum. His post quotes the previous post in which he introduced the launch of EA Funds, which said:

We only want to focus on the Effective Altruism Funds if the community believes it will improve the effectiveness of their donations and that it will provide substantial value to the EA community. Accordingly, we plan to run the project for the next 3 months and then reassess whether the project should continue and if so, in what form.

  • In short, it was promised that a certain level of community support would be required to justify the continuation of EA Funds beyond the first three months of the project. In an effort to communicate that such a level of support existed, Kerry commented:

Where we’ve received criticism it has mostly been around how we can improve the website and our communication about EA Funds as opposed to criticism about the core concept.

  • Around 11 hours ago, I pointed out that this claim was patently false.
  • (I stand corrected by the reply to this comment which addressed this bullet point: the original post on which I had commented wasn't hidden from the EA Forum; I just needed to log out of my account on the EA Forum to see it after having downvoted it.)
  • Given that the EA Funds project has taken significant criticism, failed to implement a plan to address it, acted as if its continuation were justified on the basis of having not received any such criticism, and signaled its openness to being deceptive in the future by doing all of this in a way that wasn't plausibly deniable, my personal opinion is that there is not sufficient reason to allow the EA Funds to continue to operate past their three-month trial period, and additionally, that I have less reason to trust other projects run by CEA in light of this debacle.
Comment author: morganism 21 April 2017 10:26:18PM 0 points [-]

This looks like a great database conversion tool.


"The aim of FlowHeater is to offer a simple and uniform way to transfer data from one place to another, providing a simple graphical user interface to define the modifications specific to each data target. No programming knowledge is required to use FlowHeater."

The Fitter automatically undertakes many necessary modifications according to changes of data environment, so there is no need to worry about such conversions; this is especially useful when the data source and target are for different locales (e.g. German date formats are converted to American date formats).

Of course, FlowHeater can also handle a wide variety of character-encoding conversions, which can be combined in any way desired; e.g. the TextFileAdapter reads codepage 10000 (Macintosh, western European) and everything must be converted to codepage 20773 (IBM mainframe EBCDIC). Admittedly such a requirement might only arise on rare occasions, but FlowHeater makes it really simple to achieve. Naturally FlowHeater supports all the more commonly encountered codepage groups, including those for MS-DOS, UNIX, Unicode (utf7, utf8, utf16, utf32), and so on.
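As an illustration of what such a codepage round trip involves (this is a hypothetical sketch, not FlowHeater's own code): Python's built-in codecs can decode the Macintosh bytes to Unicode and re-encode for an EBCDIC target. Python ships no codec for codepage 20773, so cp500, another IBM EBCDIC codepage, stands in here.

```python
# Sketch of a codepage conversion like the one described above.
# "mac_roman" is codepage 10000 (Macintosh, western European);
# cp500 (IBM EBCDIC International) stands in for codepage 20773,
# which Python's standard codecs don't include.
source_bytes = "Grüße".encode("mac_roman")   # bytes as read from the source file
text = source_bytes.decode("mac_roman")      # decode to Unicode first
ebcdic_bytes = text.encode("cp500")          # re-encode for the EBCDIC target

# The round trip preserves the original text.
assert ebcdic_bytes.decode("cp500") == "Grüße"
```

The key point the tool hides from the user is that any such conversion goes through Unicode as the intermediate representation, rather than mapping bytes to bytes directly.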

Comment author: Fluttershy 21 April 2017 09:39:24PM 7 points [-]

GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program.

This seems particularly horrifying; if everyone already knows that you're incentivized to play up the effectiveness of the charities you're recommending, then deciding not to check back on a charity you've recommended, explicitly because you know you're unable to show that something went well when you predicted it would, is a very bad sign. That should be a reason to do the exact opposite thing, i.e. to go back and actually publish an after-the-fact retrospective of long-run results. If anyone was looking for more evidence on whether or not to take GiveWell's recommendations seriously, then, well, here it is.

In response to comment by Lumifer on Cheating Omega
Comment author: gilch 21 April 2017 09:22:19PM *  0 points [-]

And the winner is

you use words in a careless and imprecise manner

(The pot calls the kettle black.) Natural languages like English are informal. Some ambiguity can't be helped. We do the best we can and ask clarifying questions. Was there a question in there?

guaranteed to be no empirical differences

Assuming Omega's near-omniscience, we just found one! Omega can reliably predict the outcome of a quantum coin flip in a Copenhagen universe (since he knows the future), but can't "predict" which branch we'll end up in given a Many Worlds multiverse, since we'll be in both. (He knows the futures, but it doesn't help.)

So let's not assume that. Now we can both agree Omega is unrealistic, and only useful as a limiting case for real-world predictors. Since we know there's no empirical difference between interpretations, it follows that any physical approximation of near-omniscience can't predict the outcome of quantum coin flips. My strategy still works.

Comment author: gilch 21 April 2017 08:54:31PM 0 points [-]

I think that does follow, but you're altering The Deal. This is a different game.

The only thing Omega is allowed to do is fill the box, or not, in advance. As established in the OP, however, Omega can reduce the expected value by predicting less accurately. But over multiple games, this tarnishes Omega's record and makes future Choosers more likely to two-box.

Comment author: gilch 21 April 2017 08:49:31PM *  0 points [-]

Many Worlds is deterministic. What relevant information is hidden? Omega can predict with certainty that both outcomes happen in the event of a quantum coin flip, in different Everett branches. This is only "random" from a subjective point of view, after the split. Yet given the rules of The Deal, Omega can only fill the box, or not, in advance of the split.

In response to comment by gilch on Cheating Omega
Comment author: Lumifer 21 April 2017 08:40:02PM *  0 points [-]

I am trying to say that you use words in a careless and imprecise manner.

I also don't "believe" in Many Worlds, though since there are guaranteed to be no empirical differences between the MWI and Copenhagen, I don't care much about that belief: it pays no rent.

Comment author: Lumifer 21 April 2017 08:35:59PM *  0 points [-]

there's a difference between being depressed by the realization, or finding it depressing because there's something wrong with your meatsack chemistry.

I don't think you're getting this. You are a meat sack of chemicals. "Being depressed by the realization" means that your meatsack chemistry shifted.

do you know media

How many IQ points are you willing to pay? X-D

But if you want a real answer, clinical depression isn't cured by happy movies. The problem isn't that the outside world provides too little happiness for you, the problem is that your ability to consume and digest that happiness is impaired.

Comment author: Benquo 21 April 2017 08:29:40PM *  0 points [-]

Hmm. There are enough partially overlapping things called the EA Newsletter that I'm raising my prior that I'm just confused and conflating things. I'll just retract that bit entirely - it's not crucial to my point anyway. But, sorry for bringing 80K in where I shouldn't have.

Comment author: Ritalin 21 April 2017 08:23:35PM *  0 points [-]

Right, but there's a difference between being depressed by the realization, or finding it depressing because there's something wrong with your meatsack chemistry.

I wish to believe that which is true, but getting tested and diagnosed for depression is expensive, and so are the chemicals often prescribed to treat it, both in money and in side effects.

Forgive me if I seem a little impatient, but I'd rather focus on the stated purpose of this thread: media that will help me feel better about myself and the world and foster in me a sense of curiosity, hope, and discipline.

In response to comment by WalterL on Cheating Omega
Comment author: gilch 21 April 2017 08:20:06PM 0 points [-]

Consider that down voted. You're totally strawmanning. You're not taking this seriously, and you're not listening, because you're not responding to what I actually said. Did you even read the OP? What are you even talking about?

In response to comment by Lumifer on Cheating Omega
Comment author: gilch 21 April 2017 08:16:12PM 0 points [-]

Consider that down voted. It's too ambiguous. I can't tell what you're trying to say. Are you just nitpicking that both worlds have the same value on the t axis? Are you just signaling that you don't believe in many worlds? Is there some subtlety of quantum mechanics I missed, you'd like to elaborate on? Are you just saying there's no such thing as randomness?

Comment author: Benquo 21 April 2017 08:13:52PM *  1 point [-]

I got something from CEA but maybe it wasn't 80K. Will correct. Thanks for catching this.

Comment author: Lumifer 21 April 2017 08:09:20PM 0 points [-]

As I said, it all entirely depends on the context. It can be a status transaction. It can also not be a status transaction.

I would also remark again that if the point is to assert status, calling that aggression "fake" is probably not quite right.

Comment author: Lumifer 21 April 2017 08:05:30PM 0 points [-]

Realizing you are a meat sack full of chemicals is philosophical, existentialist despair. Almost at the Rick & Morty level.

Comment author: Ritalin 21 April 2017 08:02:30PM 0 points [-]

I like to think it's not some chemical imbalance, but a philosophical, existentialist despair. Think Saturday Morning Breakfast Cereal, Rick & Morty, Douglas Adams and Terry Pratchett's work... "THERE IS NO JUSTICE. THERE IS JUST US."

Comment author: RobertWiblin 21 April 2017 08:01:20PM 4 points [-]

"After they were launched, I got a marketing email from 80,000 Hours saying something like, "Now, a more effective way to give." (I’ve lost the exact email, so I might be misremembering the wording.) This is not a response to demand, it is an attempt to create demand by using 80,000 Hours’s authority, telling people that the funds are better than what they're doing already. "

I write the 80,000 Hours newsletter and it hasn't yet mentioned EA Funds. It would be good if you could correct that.

Comment author: Benquo 21 April 2017 07:52:07PM 0 points [-]

So, relative status at the end of the interaction depends on how someone responds to "fake" aggression, and one possible outcome is that it's the same as it started.

This isn't the same thing at all as not being a status transaction.

Comment author: Lumifer 21 April 2017 07:43:11PM *  1 point [-]

I'm not sure that they would welcome my help

Part of trying to actually help is figuring out what kind of help will be useful (in this case: accepted).

Comment author: Lumifer 21 April 2017 07:41:24PM 0 points [-]

sapped my optimism and energy to the point that I'm not sure why I get up in the morning, or why bother making any kind of effort beyond ensuring survival when everything is absurd and pointless.

That's called depression. Unfortunately, it's not rare.

Comment author: ChristianKl 21 April 2017 07:37:13PM 1 point [-]

"Ravenclaw wanted to have better communities, so they wrote a book about it and other Ravenclaws all separately read the book, and nothing further happened" sounds like such a probable way for this to fail that it'd be almost as funny as it would be sad. I assume Raemon has foreseen this possibility and has some plan to counteract this.

Part of the plan seems to be "Get people together for an unconference to talk".

Comment author: Ritalin 21 April 2017 07:35:40PM *  0 points [-]

Hi! I'm an electrical engineering student close to finishing my MSc. These days I feel really, really tired and disenchanted with my work, in spite of it leading to one of my childhood dreams of working on green energies and/or electric transportation.

The same happened when I went to see a couple of museums involving Norway's naval history, Amundsen's arctic expeditions, and the epic journeys of the Kon-Tiki and the Ra. Despite all the pain and hardship those stories portrayed, I left full of energy and determination.

Over the most recent years, most of my media consumption, both fiction and non-fiction, involved delving deep into the complexities and flaws of human nature, both on an individual and societal level. While that has helped me become somewhat more socially functional, it has also sapped my optimism and energy to the point that I'm not sure why I get up in the morning, or why bother making any kind of effort beyond ensuring survival when everything is absurd and pointless, and everyone, myself included, is irredeemably stupid and evil in ways that cannot be fixed, only mitigated.

I want to feel hopeful, optimistic, interested, engaged, and growing. I want to learn shit that makes me want to strive and thrive.

Comment author: Ritalin 21 April 2017 07:35:32PM 0 points [-]

Cosmos-like works: for inspiration and fuzzies

The other day, I was watching NDT's Cosmos, and even though it taught me absolutely nothing new, it was so gorgeous and beautiful and inspiring that I couldn't help but feel reinvigorated, and tackle my hard, painful, frustrating work with renewed zest and zeal! I'd like to know of more works like that, especially in audiobook format, to listen to while bothering with the mundane daily tasks that don't let me hold a book or a computer in my hands while doing them.

Comment author: ChristianKl 21 April 2017 07:29:37PM 2 points [-]

I don't think the two are the only concerns.

If I hear a friend having a problem I often notice and I do care but I'm not sure that they would welcome my help.

Offering and receiving help for big emotional issues isn't easy.

View more: Next