All of RobertWiblin's Comments + Replies

1: Our epistemic rationality has probably gotten way ahead of our instrumental rationality

I would defend the instrumental rationality of having a rule of thumb that unless you're quite wealthy, you don't bother looking into anything that appears to be a 'get rich quick' scheme, or seek to invest in high-risk high-return projects you can't evaluate.

Yes, sometimes it will fail big - if you miss the boat on bitcoin, or Facebook, or whatever. Every strategy fails in some scenarios. Sometimes betting it all on 23 red will have been the rig... (read more)

From a selfish point of view, I don't think most rationalists would benefit significantly from a bit of extra money, so it doesn't make much sense to be dedicating their truly precious resource (time and attention) to identifying high-risk high-return investments like bitcoin and in this case figuring out how to buy/store them safely. And I'm someone who bought bitcoin for the sake of entertainment.

From an altruistic point of view, yes I expect hundreds of millions of dollars to be donated, and the current flow is consistent with that - I kn... (read more)

Collectively, the community has made hundreds of millions from crypto. But it did so by getting a few wealthy people to buy many bitcoin, rather than many people to buy a few bitcoin. This is a more efficient model because it avoids big fixed costs for each individual.

It also avoids everyone in the community having to dedicate some of their attention to thinking about what outstanding investment opportunities might be available today.

Due to declining marginal returns, hundreds of millions does a substantial fraction as much good as billions would. So I think we did alright.
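To make that intuition concrete, here is a minimal sketch, assuming (purely for illustration) that the good done by a pool of donations grows logarithmically with its size - the log model and the dollar figures are assumptions for the example, not anything stated above:

```python
import math

# Illustrative assumption: good done grows like log(dollars donated).
# The specific figures are invented for the example.
hundreds_of_millions = 3e8
billions = 3e9

ratio = math.log(hundreds_of_millions) / math.log(billions)
print(f"{ratio:.0%}")  # ~89% - most of the log-value of 10x the money
```

Under that assumed model, ten times the money buys only about 11% more value, which is the sense in which hundreds of millions is "a substantial fraction as good" as billions.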

gjm220

If you're considering the welfare of the individuals concerned: "a few wealthy people make multiple millions" is not as good an outcome as "dozens of not-so-wealthy people make hundreds of thousands" because of diminishing marginal returns.

If you're considering cryptocurrency gains only as fuel for effective altruism or something, then "a few wealthy people make multiple millions" might be as good an outcome, but then it's no longer so plausible that "hundreds of millions is a substantial fraction as good a... (read more)

"After they were launched, I got a marketing email from 80,000 Hours saying something like, "Now, a more effective way to give." (I’ve lost the exact email, so I might be misremembering the wording.) This is not a response to demand, it is an attempt to create demand by using 80,000 Hours’s authority, telling people that the funds are better than what they're doing already. "

I write the 80,000 Hours newsletter and it hasn't yet mentioned EA Funds. It would be good if you could correct that.

5Benquo
Hmm. There are enough partially overlapping things called the EA Newsletter that I'm raising my prior that I'm just confused and conflating things. I'll just retract that bit entirely - it's not crucial to my point anyway. But, sorry for bringing 80K in where I shouldn't have.
2Benquo
I got something from CEA but maybe it wasn't 80K. Will correct. Thanks for catching this.

"If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people"

Amusingly, our suggestion of these two charities is entirely syndicated from a blog post put up by Holden Karnofsky himself: http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016

Thanks for your interest in our work.

As we say in the post, on this and most problem areas 80,000 Hours defers charity recommendations to experts on that particular cause (see: What resources did we draw on?). In this case our suggestion is based entirely on the suggestion of Chloe Cockburn, the Program Officer for Criminal Justice Reform at the Open Philanthropy Project, who works full time on that particular problem area and knows much more than any of us about what is likely to work.

To questions like "does 80,000 Hours have view X that would make s... (read more)

4The_Jaded_One
Upvoted, and I encourage others to upvote for visibility.
3Lumifer
Ah. Well then.

Yes, thanks so much to everyone who contributed! :)

This fundraiser has been promoted on the Effective Altruism Forum already, so you may find your questions answered on the thread:

http://effective-altruism.com/ea/hz/please_support_giving_what_we_can_this_spring/

http://effective-altruism.com/ea/j9/giving_what_we_can_needs_your_help/

I'll re-post this comment as well:

"If I was going to add another I think it would be

  1. Have fun

Talking to people who really disagree with you can be a very enjoyable intellectual exploration if you approach it the right way. Detach yourself from your own opinions, circumstances and feelings, and instead view the conversation as a neutral observer just encountering the debate for the first time. Appreciate the time the other person is putting into expressing their points. Reflect on how wrong most people have been throughout history and ... (read more)

1[anonymous]
I think "have fun" deserves to be in the original post!! I look at disagreements as fun challenges. When I consider myself more rational or knowledgeable than the other people involved, I give myself a lot more responsibility for the opening/changing of their minds than I give them. If it doesn't work, I don't think "ugh, this person is an idiot"... I think "I wonder what I could have done to communicate more effectively with this person in particular." For example, I just deconverted from Christianity, and lots of people who unfortunately think I'm going to hell want to talk about it. In these conversations, I don't always say what I want to say. Instead, I say what I think will be most likely to make the person uncomfortable enough to stop and think. For many people, especially more emotional ones, this means going on the defensive instead of the offensive. I'll tell them about how I yearned and prayed for years for a stronger faith. I'll say that if God were going to answer any prayer, that would seem like a good one to answer. I'll show them an MBTI study that shows all the "T" types are more likely to deconvert than the "F" types and say glumly that if they ended up being right after all, culture/upbringing wasn't the only way God seemed to play favorites. This turns out to be a good strategy for people who believe pure childlike faith is a virtue, since it makes them uncomfortable that I really wanted a strong faith too, but it was more of a struggle for me. For people who are more interested in actual facts, I'll be more likely to ask their opinion on the Census of Quirinius or share how I just learned how the least complex fossils are found in the oldest, deepest layers of rock, and ask why they think that might be. This is a better strategy for the (few) people who think that their faith is totally backed up by evidence. I've gotten almost everyone to stop and think a LOT and have somehow managed to stay on good terms with almost everyone. In any disag

When someone has an incurable and lethal respiratory illness, I think we do require them to stay in quarantine and this is broadly accepted. The reason this doesn't apply to HIV and other such diseases is that they are barely contagious.

Well, I wasn't proposing a strict quarantine or limits on travel - merely preventing people from coming into close contact with colleagues at work, where the risk of contagion is highest, and ensuring they have the option to reschedule their (expensive) travel. People are already familiar and comfortable with regulations in workplaces and aviation.

If I were proposing a thoroughgoing quarantine, I expect people wouldn't be nearly as enthusiastic.

Thanks for the feedback.

Note it was also the most popular post on the Facebook group (as measured by likes) in almost two weeks, so clearly some other members thought this was a sensible proposal.

I can see how it could come across as 'hero worship', except that Bostrom is indeed a widely-recognised world-leading academic at the highest ranked philosophy department in the world. There are sound reasons to be respectful of his work.

"sexual innuendo"

I can assure you the intended level of sexual innuendo in this ad is less than zero.

3seez
In case I wasn't clear, I see nothing wrong with seeking a personal assistant for Bostrom amongst EAs and LessWrongers. Obviously, many people in those groups might be very interested in the job. I'm sure it will be an interesting opportunity for whoever gets it.

My objection was to the tone. I'm glad if people didn't find it as alarming as I did, but I was aware of some additional controversy, expressed both publicly in the comments and privately. And of course, there is no downvote on Facebook.

Yes, by all means, be respectful of Bostrom's work! Luckily, there's a lot of room between "worshipful" and "respectful" to aim at. Lots of hero worship, perhaps even most of it, is directed at people who are legitimately awesome (as I personally believe Bostrom is)! And since LW has already been accused of excessive hero worship, with varying degrees of thoughtfulness, to an extent that has turned people off the site, I think it's worth considering extolling the virtues of those we admire in a more conventional manner, especially in large public forums.

I completely believe that no innuendo was intended. That's why I thought it would be helpful to let you know that at least to some people, it came off that way.

As we have not secured funding yet it would be premature to do either of these things. We can negotiate a salary later on in the process depending on the person's qualifications.

I think it'll be faster to get a sense of that from a personal conversation.

Exactly - if anything I am trying to make the job seem less appealing than it will be, so we attract only the right kind of person.

8Cthulhoo
I see people are highly upvoting the post, even correcting for Bostrom's halo effect, so I'm updating a bit in the direction of you being right. I also see that you've followed Lachouette's suggestion, and I like it. I would be genuinely curious to see whether it works as intended in the end; it might change the way I conduct job interviews a bit (I obviously realize that this is an irrelevant request that will probably not be met). Best of luck with the recruiting.

I was just giving what would be sufficient conditions, but they aren't all necessarily necessary.

If you can't otherwise improve their lives, the death is painless, and murder isn't independently bad.

0Lumifer
Well, not quite. If you think being dead has positive utility for this creature, this positive utility is not necessarily small. If so, you need to weight the issues in killing against that positive utility. For example, let's take "death is painless" -- actually, if the negative utility of the painful death is not as great as the positive utility of dying, you would still be justified and obligated to impose that painful death upon the creature as the net result is positive utility.

"Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living"

No, this makes perfect sense. 1. They decide animals are objects of moral concern. 2. Look into the conditions they live in, and decide that in some cases they are worse than not being alive. 3. Decide it's wrong to fund expansion of a system that holds animals in conditions that are worse than not being alive at all.

1Lumifer
Isn't a direct consequence of (2) that those animals are better off dead than alive, and so, if the opportunity to (relatively costlessly) kill some of them arises, one should do so?

For what it's worth, I've found being vegetarian almost no effort at all. Being vegan is a noticeable inconvenience, especially cutting out the last bits of dairy (and that shows up in your examples, which are both about dairy).

Hey, this doesn't seem like the best location for it. Is there a post on the 80,000 Hours or EA blogs related to your criticism you could use?

"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."

Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.
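A toy expected-value model shows how defecting can maximise expected good once you weight money in your own hands more highly - every number below is an assumption invented for illustration, not anything from the survey:

```python
# Toy model - all parameters are illustrative assumptions.
n = 100        # respondents in the raffle
prize = 500.0  # base prize in dollars
delta = 5.0    # how much one person's choice raises/lowers the prize
boost = 2.0    # multiplier on your win chance if you defect
m = 5.0        # how much more good the EA thinks they'd do per dollar won

# Expected good: the EA's dollars count m times, everyone else's count once.
ev_cooperate = (prize + delta) * ((1 / n) * m + (1 - 1 / n))
ev_defect = (prize - delta) * ((boost / n) * m + (1 - boost / n))
print(ev_cooperate, ev_defect)  # 525.2 vs 534.6 - defecting wins here
```

With these made-up parameters, the bigger slice of a smaller pie does more expected good by the EA's own lights, which is why the cooperate/defect question is a weak measure of altruism.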

"Public declarations would only be signaling, having little to do with maximizing good outcomes."

On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.

"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."

a) Less Wrong doesn't contain the best content on this topic. b) Most of the posts disputing whether animal suffering matters are wr... (read more)

0[anonymous]
This is an interesting argument, but it seems a bit truncated. Could you go into more detail?
0Said Achmiz
Where is the best content on this topic, in your opinion? Eh? Unpack this, please.

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

-2Said Achmiz
That has very little to do with whether Eliezer should make public declarations of things. Are you of the opinion that Eliezer does not share your view on this matter? (I don't know whether he does, personally.) If so, you should be attempting to convince him, I guess. If you think that he already agrees with you, your work is done. Public declarations would only be signaling, having little to do with maximizing good outcomes. As for the other thing — I should think the fact that we're having some disagreement in the comments on this very post, about whether animal suffering is important, would be evidence that it's not quite as uncontroversial as you imply. I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one. Perhaps you should write one? I'd be interested in reading it.

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.

-2Said Achmiz
I have to disagree on two points:

1. I don't think that we should take this thesis ("suffering (and pleasure) are important wherever they occur, whether in humans or mice") to be well-established and uncontroversial, even among the transhumanist/singularitarian/lesswrongian crowd.

2. More importantly, I don't think Eliezer or people like him have any obligation to "lead the way", set examples, or be a role model, except insofar as it's necessary for him to display certain positive character traits in order for people to e.g. donate to MIRI, work for MIRI, etc. (For the record, I think Eliezer already does this; he seems, as near as I can tell, to be a pretty decent and honest guy.)

It's really not necessary for him to make any public declarations or demonstrations; let's not encourage signaling for signaling's sake.

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.
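A minimal sketch of what sub-linear weighting could look like, assuming a power-law weight with exponent below one - the exponent is an arbitrary illustrative choice, and the neuron counts are rough published figures:

```python
# Sketch: moral weight as a sub-linear power of neuron count.
# alpha < 1 is the key assumption; 0.5 is an arbitrary illustrative value.
def moral_weight(neurons, alpha=0.5):
    return neurons ** alpha

human = 86e9     # rough human neuron count
chicken = 2.2e8  # rough chicken neuron count

print(moral_weight(human) / moral_weight(chicken))  # ~20x
print(human / chicken)                              # ~390x if linear
```

The choice of alpha does most of the work: the closer it is to zero, the closer simple and complex brains come to counting equally.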

3CarlShulman
I agree on that effect; I left out various complications. A flip side to that would be the number of cortex neurons (and equivalents). These decrease rapidly in simpler nervous systems. We don't object nearly as much to our own pains that we are not conscious of and don't notice or know about, so weighting by consciousness of pain, rather than pain/nociception itself, is a possibility (I think that Brian Tomasik is into this).

Thanks for doing this. I found it very memorable when it first aired years ago.

To Larks and Shminux - I am twisting the idea of arbitrage to be more like 'economic profit' or 'being sure to beat the market rate of return on investment/altruism'. Maybe I should stop using the term arbitrage.

"Isn't your point basically just that consumer surplus can be unusually high for individuals with unusual demand functions because the supply (of chances to do good) is fixed so lower demand => lower price?"

Yes, though the supply curve just slopes upwards - it isn't vertical.

I could re-write the principle as 'when supply curves slope ... (read more)

0Larks
Yes - but I think the two provisos you mention are very important. Proving it will be more difficult (measuring profit is easy; measuring Delta(QALY) is hard), and fewer people care about efficiency.

I lurked until I read something I really disagreed with.

I lurked until a few weeks back, when I read something I really disagreed with.

Possibly doing nothing is a good idea for hunter-gatherers in case of starvation, but that seems worth checking in the anthropology research. If starvation were a frequent risk, lethargy would surely have been prompted by insufficient food intake, which is rare for humans today. We wouldn't just be lazy for that reason all the time; during times of abundance you ought to gather and store as much food as possible.

Apparently hunter-gatherer bands were egalitarian, so it's unlikely people would have been beaten up by (non-existent) leaders just for hunting and gat... (read more)

I am skeptical of the evolutionary explanation he poses for inactivity.

I don't believe large numbers of people were typically thrown out of hunter-gatherer bands for incompetence, and surely not more than inactive people (http://books.google.com.au/books?sitesec=reviews&id=ljxS8gUlgqgC). And in how many crisis situations is doing nothing really the best option? Hiding from a predator would surely be one of only a few.

3pjeby
The number of people who have fears of being discovered as incompetent (e.g. "impostor syndrome") strongly suggests a biological explanation. In any case, my model is slightly broader than Kaj's summary implies -- this sort of fear-of-discovery applies to any acts or personal qualities that, if known, would lead to lower tribal status. You don't have to be actually thrown out of your tribe in order for discovery of a negative quality to alter your reproductive opportunities (or your probability of surviving long enough to have some).
4NancyLebovitz
In any case, I'm not convinced we just have paleolithic brains. The fact that we've got a lot more paleolithic ancestors doesn't mean there hasn't been selection and change since then. And we don't know that much about how they lived.

I believe we have paleolithic adaptations (smallish group living, need for small-grained variation (going barefoot on rough ground), need for movement (not exercise!), lowish-carb eating, possibly a need to eat insects which is sadly neglected in the modern world), primitive farming adaptations (comfort in hierarchies, tolerance of drudgery, willingness to forgo gratification, tolerance of grains and perhaps a need for them), and urban adaptations (comfort with strangers, strong immune systems, tolerance of noise and crowding, enjoyment of novelty, ability to handle strong drink).

None of these lists are expected to be complete or entirely accurate, but it's worth noting that to some extent, they're pulling in different directions. Is people being unwilling to move away from smoke best explained in terms of paleolithic motivations, or as a result of civilized, and perhaps cultural, overlays?
5RHollerith
Uh, whenever getting food is expected to yield less of some necessary nutrient than is consumed by getting the food is a time when doing very little is the best option -- at least if the nutrient situation will probably get better in a few months.

Also, if you are male, whenever you are seen as effective sexual competition by a male or coalition of males who are in a position to kill you or permanently damage you with little risk of bodily or reputational harm to themselves is a time when doing nothing impressive is the best option -- and in the EEA simply doing well at hunting, farming or making tools was probably seen as impressive by the women. Heck, I've gotten significant signs of interest from women just for sitting in a cafe in San Francisco and looking like I was productively engaged in writing things down on pieces of paper when the woman had no way to know what I am writing. Many writers say that industriousness is attractive to women. And, heck, in junior high school, I remember being attacked and my study supplies kicked around on the street by another male for appearing industrious.

In summary, it seems to me that one of the most profound differences between the EEA and modern life (at least modern life in the relatively well-run jurisdictions) is that in modern life, there is no reason not to spend a significant part of every day in effortful activity, either physical or mental -- and note that effortful mental activity consumes many times more calories than "subsistence" mental activity does (I think: I should actually research that; personal experience is my main reason for believing it) and that mental activity accounts for 25% of the calories consumed by a human being (and I'm fairly certain of that last point).

And if getting thrown out of the band were not a large source of loss of reproductive fitness in the EEA, then please explain the natural human fascination with the theme as evidenced by the mass appeal of shows like Survivor (plot: every w

That's because there seem to be many ways for us to go extinct. If we are necessarily held back from space for thousands of years, it's very unlikely we would last that long just here on Earth.

1timtyler
Usually, the people who think this sort of thing count there not being any humans around as us "going extinct". That is all very well if you are fixated on the human form - but it is not a valid concept in the context of the Fermi paradox - since our post-human offspring are likely to be far more capable of executing the task of forming a galactic civilisation than we are.

If you think that there are many chances for civilization to go extinct, then I beg to differ. The chances of civilization going extinct are pretty minuscule - in my estimation. Discussion of that happening usually has far more to do with memes, mind viruses, manipulation, exploitation and paranoia than it does to do with actual risk.

I am using (total) preference utilitarianism to mean: "we should act so as to maximise the number of beings' preferences that are satisfied anywhere at any time".

"As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that?"

Because they are not selfish and they are concerned about the welfare of that being in proportion to its ability to have experiences?

"Further, why would something like that be better at seizing resources?"

That's a weakness, but at some point... (read more)

I can understand valuing oneself more than others (simple selfishness is unsurprising), but I think Eliezer is saying cryonics is a positive good, not just that it benefits some people at the equal expense of others.

If uploading pans out, as is probably required for people to be woken up, the future will be able to make minds as diverse as they like.

2Paul Crowley
I don't think so; when people say "shouldn't you argue that people give the money to SIAI", he says "why does this come out of our saving the world budget, and not your curry budget?" I think this is a very weak point and the far future will probably be able to make whatever kinds of minds they like, but we could have scanning/WBE long before we know enough about minds to diversify them.

What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?

0[anonymous]
Only if they find a way to turn me into this

OK, I'm new to this.

Why is it worse to die (and people cryonically frozen don't avoid the pain of death anyway) than to never have been born? Assuming the process of dying isn't painful, they seem the same to me.

Alicorn170
  1. Once a person exists, they can form preferences, including a preference not to die. These preferences have real weight. These preferences can also adjust, although not eliminate, the pain of death. If I were to die with a cryonics team standing over me ready to give me a chance at waking up again, I would be more emotionally comfortable than if I were to die on the expectation of ceasing to exist. Someone who does not exist yet does not have such preferences.

  2. People do not all die at the same time. Although an impermanent death is, like a permanent

... (read more)

Probably the proportion of 'kill all humans' AIs that are friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.

0gregconen
That depends on your definition of Friendly, which in turn depends on your values.
0Vladimir_Nesov
Maybe the probability you estimate for that to happen is high, but "proportion" doesn't make sense, since FAI is defined as an agent acting for a specific preference, so FAIs have to agree on what to do.

Sorry, this may be a stupid question, but why is it good for people to get cryonically frozen? Obviously if they don't they won't make it to the future - but other people will be born or duplicated in the future and the total number of people will be the same.

Why care more about people who live now than future potential people?

0wintermute92
In addition to the very valid counterpoints listed here, I think it's worth noting the false dichotomy of the question. If the initial assumption is that population is capped: that hasn't been borne out yet and, assuming we eventually leave Earth in a sustainable-habitats manner, never has to hold true. If population-capping isn't the basis for your statement, then I don't see anything suggesting that the total number of people will be the same with and without cryonics. We are not choosing between ourselves and future potential people - at the moment, we are simply choosing between possible-ourselves and definitely-not-ourselves existing in the future.
Alicorn170

Because we exist already, and they don't. Our loss is death; theirs is birth control.

6Paul Crowley
I care more about myself than future potential people. More seriously, I value a diversity of minds, and if the future does too they may be glad to have us along.

Silly to worry only about the preferences of your present self - you should also act to change your preferences to make them easier to satisfy. Your potential future self matters as much as your present self does.

6Vladimir_Nesov
Irony? I gather if the "future self" is a rock, which is a state of existence that is easier to satisfy, this rock doesn't matter as much as your present self.

What are they then?

Utility as I care about it is probably the result of information processing. It's not clear why information should only be able to be processed in that way by human-type minds, let alone fleshy ones.

You will be gone and something which does want to be a big integer will replace you and use your resources more effectively. Both hedonistic and preference utilitarianism demand it.

1[anonymous]
Preference utilitarianism as I understand it implies nothing more than using utility to rank universe states. That doesn't imply anything about what the most efficient use of matter is. As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that? Further, why would something like that be better at seizing resources?
1thomblake
For reference, the "utilitarians" 'round these parts tend to be neither of those.

Needn't be total - average would suggest creating one single extremely happy being - probably not human.

Needn't only include hedonic pleasure - a preference utilitarian might support eliminating humans and replacing them with beings whose preferences are cheap to satisfy (hedonic pleasure being one cheap preference). Or you could want multiple kinds of pleasure, but see hedonic as always more efficient to deliver as proposed in the post.

Who cares about humans exactly? I care about utility. If the AI thinks humans aren't an efficient way of generating utility, we should be eliminated.

3gregconen
That's a defensible position, if you care about the utility of beings that don't currently exist, to the extent that you trade the utility of currently existing beings to create new, happier ones. The point is that the result of total utility maximization is unlikely to be something we'd recognize as people, even wireheads or Super Happy People.
2tut
That is nonsense. Utility is usefulness to people. If there are no humans there is no utility. An AI that could become convinced that "humans are not an efficient way to generate utility" would be what is referred to as a paperclipper. This is why I don't like the utile jargon. It makes it sound as though utility were something that could be measured independently of human emotions. Perhaps some kind of substance. But if statements about utility are not translated back to statements about human action or goals, then they are completely meaningless.
1thomblake
Starting with the assumption of utilitarianism, I believe you're correct. I think the folks working on this stuff assign a low probability to "kill all humans" being Friendly. But I'm pretty sure people aren't supposed to speculate about the output of CEV.