All of Gedusa's Comments + Replies

I found it really helpful to have a list of places where Eliezer and Paul agree. It's interesting to see that there is a lot of similarity on big picture stuff like AI being extremely dangerous.

A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?

Do you still think that people interested in alignment research should apply to work at OpenAI?

I think alignment is a lot better if there are strong teams trying to apply best practices to align state of the art models, who have been learning about what it actually takes to do that in practice and building social capital. Basically that seems good because (i) I think there's a reasonable chance that we fail not because alignment is super-hard but because we just don't do a very good job during crunch time, and I think such teams are the best intervention f... (read more)

A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share?

My own departure was driven largely by my desire to work on more conceptual/theoretical issues in alignment. I've generally expected to transition back to this work eventually and I think there are a variety of reasons that OpenAI isn't the best for it. (I would likely have moved earlier if Geoffrey Irving's departure hadn't left me managing the alignment team.)

I'm pretty hesitant to speak on behalf of other people who left.... (read more)

In the case of reducing mutational load to near zero, you might be doing targeted changes to huge numbers of genes. There is presumably some point at which it's easier to create a genome from scratch.

I agree it's an open question though!

2ChristianKl
I don't see why that should be the case. I see no principled reason why you shouldn't be able to scale up targeted changes to do 20,000 changes (~the number of human genes).

An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.

4ChristianKl
While it's plausible that there will be a future where that's cheaper, it currently costs nine figures to synthesize a human genome from scratch. Whether there will be a time when that's cheaper than more targeted modifications is very much an open question.

I would find this more useful if you spelled out a bit more about your scoring method. You say:

They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy.

Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)

And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)

a dog which is as close to being a wolf as one

... (read more)
4Callmesalticidae
Health is the most important characteristic of all, because I care about whether it is good to be that dog. Next is intelligence, mostly because I'm an ape who got where I am because my ancestors were clever enough to figure out nifty things like "fire-starting" and "germ theory." This probably biases me a little, but I still can't help but feel that a dog that is dumber than a wolf has lost something.

Loyalty and affection (or at least tolerance) for humans are important because dogs are wolves, but they're domesticated wolves, and if a breed is mean and vicious and unfriendly then it's failing super hard at "being domesticated." To be hardworking and to have dignity are mostly aesthetic concerns for me. To the extent that the former matters at all, it's because I associate work ethic with purposiveness, and aimless breeds whose only task is to provide companionship have a tendency toward neuroticism and especially separation anxiety, in my experience. Just as I'm judging dogs as *domesticated* wolves, I'm also judging them as domesticated *wolves*.

That said, don't take any of this too seriously. It'd be good to keep the health stuff in mind before getting a dog, but my personal favs are greyhounds and I don't think they're going to get three stars, let alone the #1 Top Dog ribbon. The real objective rating, of course, is "the best dog is whichever breed best suits your personal circumstances, and also isn't an inbred freak like a pug."

Vanguard has a UK website; I use them and it works well.

Monevator also has a good guide to investment firms in the UK, along with a bunch of UK-specific advice.

OpenPhil gave Carl Shulman $5m to re-grant

I didn't realise this was happening. Is there somewhere we can read about grants from this fund when/if they occur?

Would this approach have any advantages vs brain uploading? I would assume brain uploading to be much easier than running a realistic evolution simulation, and we would have to worry less about alignment.

You'd only do this if it was cheaper than uploading.

I filled in the survey! Like many people I didn't have a ruler to use for the digit ratio question.

Also, I'm torn about how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.

Or verifying a deal he made with Voldemort, though that might not make as much sense with Snape's character.

4Alsadius
What, as in he made a deal not to hurt Lily, but killing's okay? Snape's messed-up, but I don't think he's quite inhuman enough to treat that as a deal honoured. The earlier part of the events, perhaps, but not the "Lily died without pain, then?".

Slightly off topic, but I'm very interested in the "policy impact" that FHI has had - I had heard nothing about it before and assumed that it wasn't having very much. Do you have more information on that? If it were significant, it would increase the odds that giving to FHI was a great option.

-3mytyde
Unfortunately, the impact of information is often too closely tied to the funding poured into its propagation. Look at the way American media networks are basically billboards for the rich.

We get to talk to government and military people quite a bit, attending seminars and giving them presentations, and they nod wisely and ask pertinent questions which we answer. We're not sure how much this has translated into actual policy differences at the end of the day, but there does seem to be a class of people in government willing to listen to these ideas (informally, it seems that the military is more interested than the standard civil servants and politicians).

There are other policy achievements, but Nick and Anders would know more...

2John_Maxwell
I happened to see this on the FHI website, but don't know of anything beyond that.

Possible consideration: meta-charities like GWWC and 80k move donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest-value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about was small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about).

A bigger possible problem would be if I too... (read more)

[This comment is no longer endorsed by its author]

This probably sounds horrible, but "saving human lives" in some contexts is an applause light. We should be able to think beyond that.

As a textbook example, saving Hitler's life at a specific moment in the history of an alternate universe would create more harm than good, regardless of how much or how little money it would cost.

Even if we value all human lives as intrinsically equal, we can still ask what the expected consequences of saving this specific human will be. Is he or she more likely to help other people, or perhaps to harm them? Because that ... (read more)

5Giles
I think the poor meat eater problem is a legitimate concern, and it's something that would benefit from research - we may not be able to establish the relative value of human/nonhuman life to everyone's satisfaction, but in principle we can do empirical research to find out the size of the effect that poverty reduction has on factory farming. To me this would be a point in favour of "meta" in general, but not necessarily GWWC/80K in particular, as they don't seem currently focused on this kind of research. A good concrete step you could take would be to get in touch with Effective Animal Activism (an 80K spinoff) and see if you can get the poor meat eater problem onto their research agenda. If there's already research in this area (I haven't looked), they may be able to point you towards it.
9MTGandP
GWWC in particular does not recommend any animal welfare charities, which makes me especially reluctant to donate to them or even support them at all. It seems much too specifically focused on global poverty. From the GWWC homepage: This seems excessively limiting given that good animal welfare charities are orders of magnitude more efficient than even the best human charities; and it becomes especially concerning when we consider the poor meat-eater problem. Effective Animal Activism is a meta-charity that evaluates animal welfare charities. They do not accept donations and instead recommend that you give directly to their top charities.

Hey,

80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, if not the most important one. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.

We did a little bit of impact-assessment for 80k (again, wit... (read more)

2juliawise
Will, I remember you saying that new 80K members tend to be interested in x-risk, so that expanding 80K could be a good way to increase x-risk funding. Is that right?

Something on singletons: desirability, plausibility, paths to various kinds (strongly relates to stable attractors)

"Hell Futures - When is it better to be extinct?" (not entirely serious)

4wedrifid
Why (not serious)?

Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios were to be successful? We could see if ems of very intelligent people run at very high speeds could convince a dedicated gatekeeper to let them out of the box. It would at least give us some mild evidence for or against AIs-in-boxes being feasible.

And maybe we could use certain ems as gatekeepers - the AI wouldn't have a speed advantage anymore, and we could try to make alterations to the em to make it less likely to let the AI out.

Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).

I was the one who asked that question!

I was slightly disappointed by his answer - surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.

I guess what I'm really thinking is that it's pretty unlikely that the two charities are equally optimal.

3XiXiDu
It seems that argument applies primarily to well-defined goals. Do you necessarily have to view the SI and FHI as two charities? The SI is currently pursuing a wide range of sub-goals, e.g. rationality camps. I perceive the FHI to be mainly about researching existential risks in general. Clearly you should do your own research and then decide which x-risk is the most urgent one and then support its mitigation. Yet you should also reassess your decision from time to time. And here I think it might be justified to contribute part of your money to the FHI. By doing so you can externalize the review of existential risks. You concentrate most of your effort on the risk that the FHI deems most urgent until it does revise its opinion. In other words, view the SI and FHI as one charity with different departments and your ability to contribute separately as a way to weight different sub-goals aimed at the same overall big problem, saving humanity.

Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0's - maybe without mentioning exotic technologies? And would they change their charitable behavior?

I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).

3katydee
I agree with your estimates/answers. There are certainly SL0 existential risks (most people in the US understand nuclear war), but I think the issue in question is that the risks most targeted by the "x-risks community" are above those levels-- asteroid strikes are SL2, nanotech is SL3, AI-foom is SL4. I think most people understand that x-risks are important in an abstract sense but have very limited understanding of what the risks the community is targeting actually represent.

I thought this article was for SL0 people - that would give it the widest audience possible, which I thought was the point?

If it's aimed at the SL0's, then we'd be wanting to go for an SL1 image.

8katydee
SL0 people think "hacker" refers to a special type of dangerous criminal and don't know or have extremely confused ideas of what synthetic biology, nanotechnology, and artificial intelligence are.

Whilst I really, really like the last picture, it seems a little odd to include it in the article.

Isn't this meant to seem like a hard-nosed introduction to non-transhumanist/sci-fi people? And doesn't the picture sort of act against that - by being slightly sci-fi and weird?

-3juliawise
The squid-shaped dingbats are pretty bad, too.
2MichaelAnissimov
I'm in favor of including the last picture as part of the article, because it shows the possible world we gain by averting existential risk. I don't believe that "context" is necessary; the image is self-explanatory. Nitpicking on ringworld vs. Stanford torus is not relevant, or interesting. The overall connotations and message are clear.

"Sci-fi" of today becomes "reality" of tomorrow. Non-transhumanists ought to open up their eyes to the potential of the light cone, and introducing them to that potential, whether directly or indirectly, is one of the biggest tasks that we have. Otherwise people are just stuck with what they see right in front of their eyes. For a big picture issue like existential risk, it fits that one would want to also introduce a vague sketch of the possibilities of the big picture future.

Suggesting that the Earth picture itself doesn't belong in the post shows some kind of general bias against visuals, or something. You think that a picture about saving human life on earth isn't appropriately paired with a picture of the Earth? What image could be more appropriate than that?
3Bongo
Also, I'd say both of those pictures seem to have the effect of inducing far mode.
8katydee
Agreed, especially since it is presented with no explanation or context. If the aim was "here's a picture of what we might achieve," I would personally aim for more of a Shock Level 2 image rather than an SL3 one-- presuming, of course, that this is being written for someone around SL1 (which seems likely). That said, I might omit it altogether.
7gjm
Not only is the picture slightly sci-fi and weird, it's also wrong. I mean, my thought processes on seeing it went something like this: "Oh, hey, it's a ringworld. Presumably this is meant to hint at the glorious future that might be ahead of us if we don't get wiped out, and therefore the importance of not getting wiped ou ... no, wait a moment, it's kinda like a ringworld but it's really really really small. Much smaller than the earth. What the hell's the point of that?"

Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it'd read more smoothly if it was smaller, along the lines of the header images in "Philosophy by Humans" or (as an extreme on the high end) "The Cognitive Science of Rationality".

7[anonymous]
Agreed on this. The ringworld thing comes out of nowhere and doesn't clearly follow from the content of the article. Unless the point is to wink-wink-nudge-nudge at the idea that we might have to do some weird-looking and weird-sounding things in order to save the world... in which case I still don't like the picture.

Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.

I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.

2Kaj_Sotala
The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery - it's not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it's the idea of colonizers in general. Simply the fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (What value you attach to that "somewhat" depends on whether you think it's reasonable to presume that we've already passed the Great Filter.)
0JoshuaZ
One important thing to keep in mind is that although Katja emphasizes this argument in a context of anthropics, the argument goes through even if one hasn't ever heard of anthropic arguments at all simply in terms of the Great Filter.
0timtyler
Katja's blog post on the topic is here. The claim that the argument there is significant depends strongly on this - where I made some critical comments.

What initiatives is the Singularity Institute taking or planning to take to increase its funding to whatever the optimal level of funding is?

I'm guessing they mean a university affiliated person doing a formal philosophy degree of some kind.

I didn't think of that - given that a huge chunk of people here have probably taken such tests, if Yvain allowed such an estimation, it would be very helpful.

excluded-middle bias

Yes! That's what I was thinking of :)

5pragmatist
I've never taken an IQ test, so when I responded to the survey I considered estimating my IQ based on my SAT and GRE scores. The result, according to the site torekp linked to, is surprisingly high (150+). I think I'm smart, but not that smart. Anyone have any idea if these estimators should be trusted at all?

This is great! I hope there's a big response.

It seems likely you're going to get skewed answers for the IQ question. Mostly it's the really intelligent and the below average who get (professional) IQ tests - average people seem less likely to get them.

I predict a high average IQ but a low response rate on the IQ question, which will give bad results. Can you tell us how many people respond to that question this time? (The number of responses isn't reported for the previous survey.)

5quentin
I was wondering if the IQ-calibration question was referring to reported or actual IQ. It seems to be the latter, but the former would be much more fun to think about. Also, are so many LWers comfortable estimating with high confidence that they are in the 99.9th percentile? Or even higher? Is this community really that smart? I mean, I know I'm smarter than the majority of people I meet, but 999 out of every 1000? Or am I just being overly enthusiastic in correcting for cognitive bias?
1kilobug
For myself, I used my result from the Mensa online pre-test, which I took for the purpose of calibrating myself a few years ago. It's not a fully professional test (and not done under test conditions), but I consider it valid enough to be more than pure noise.

I think it would be more informative to ask people to take one specific online test, now, and report their score. With everyone taking the same test, even if it's miscalibrated, people could at least see how they compare to other LWers. Asking people to remember a score they were given years ago is just going to produce a ridiculous amount of bias.

8torekp
Are we encouraged to estimate IQ from SAT tests and the like? That's what I did. That could reduce the excluded-middle bias that Gedusa mentions.
4[anonymous]
I predict with 70% certainty that we will get an average IQ in the range of 140-145 again, though I think it will be a bit lower than last time. I'll be very surprised if it's outside 130-150. (Also took the survey. Would like more "other" options so I can ramble about my totally different opinions on many issues, but whatever.)

The obvious solution is to stop eating all those kinds of animal/animal products. That would satisfy CO2 concerns and killing concerns.

Of course, it might not satisfy things like fun of eating meat, ease of eating meat, health etc.

3Bongo
It was probably that, but note that that page is not concerned with minimizing killing, but minimizing the suffering-adjusted days of life that went into your food. (Which I think is a good idea; I've used that page's stats to choose my animal products for a year now.)
2Normal_Anomaly
Unfortunately, it seems that the best choices for which animals to eat are opposite depending on whether your goal is killing fewer animals or minimizing your carbon footprint.

I take it that you partially changed "my mistakes" to include nicotine. I enjoyed your article on it - but how are you using it?

Are you rotating it with other stimulants on a regular basis, using it when you like, using it to promote habit formation, etc.?

1gwern
See http://www.gwern.net/Nootropics#nicotine I haven't been doing anything systematically with it because I'm still getting a handle on dosage and effects - it is, unfortunately, not as powerful as modafinil or Adderall, so it can be hard for me to get a lock on how long it lasts, how to administer, etc to the point where I could try rotation or double-blinds.

Would anyone care to comment on the recent Mt Gox hack n' crash?

Personally, I'm thinking that this is very bad. The currency won't look as good to the mainstream, and I'm anticipating panic sells as soon as the exchanges get up and running again. I'm agnostic as to whether Bitcoin will die or not though...

4Clippy
I have a comment. It's bad, like unbending paperclips.

I had set up a Mt Gox account so I could finally have access to the USD-based part of the financial system. I received an internet email telling me that my account was compromised. Before even seeing that internet email, "Google" made me switch to a more secure password because of "suspicious activity" on my internet email account. The Mt Gox email said to change and strengthen my passwords, so I did so on the internet websites that I have accounts for, including this one.

Fortunately, I didn't have any USD or bitcoins in Mt Gox because I have been saving them to trade to User:Kevin near par for completion of our deal. In any case, I have devoted more cognition to protecting bitcoin resources, such as by encrypting my wallet.dat file with GPG. I'm also not giving away the private key needed to decrypt it.

The obvious extra question is:

"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.

3gwern
Obviously in this grim future dystopia, sales has been taken over by tireless machines!
0kpreid
She wishes to make sure everyone has the opportunity to enjoy...oh, right.

The obvious extra question is:

"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.

In the least convenient world, the answer is: "I can't afford it until I make enough money by working in sales." Or alternatively, "I have a rare genetic defect which makes the machine not work for me."

And if you do assume "fiat money is doomed, doomed!" then why wouldn't something like bitcoin become the world's reserve currency?

Okay, I'm willing to grant that if the dollar/fiat money in general is doomed then something along the lines of bitcoin would probably take over. But I don't assume this. I guess it is rational to put lots of money into bitcoin if you do take this premise though.

I agree that the dollar becoming effectively worthless would be pretty bad to put it mildly!

0David_Gerard
Even if the USD goes down really badly in value, there are still three hundred million people, a large proportion of whom are workers, to give it some utility as currency.

Weirdly, though I think that bitcoins will succeed (and accordingly have some), I don't think Calacanis' article is well-founded. To focus just on the points I feel I can judge with some merit:

Bitcoin is unstoppable without end-user prosecution.

I don't think this is true. Shutting down all legitimate currency exchanges would tend to increase the barrier to investment by legitimate investors and would likely decrease interest in Bitcoin. Anecdote: I would get less interested in bitcoins if this happened. Also, a focused government campaign against it... (read more)

0David_Gerard
It struck me also as a rather shallow and hand-wavy article, and I was wondering what people who liked Bitcoin would think of it. (I am quite used to the feeling of wishing a particular advocate for a view of mine wasn't on my side.) I am not a fan of Calacanis; I respect his skills, but I don't like his work or the quality of his ideas.
0mutterc
Lots of people (fans of bitcoin or not) take that as a given. And if you do assume "fiat money is doomed, doomed!" then why wouldn't something like bitcoin become the world's reserve currency? The euro has its problems, but if the dollar ever becomes worthless then the only useful currencies will be ammunition, canned food, and sex.

Hi Less Wrong!

Decided to register after seeing this comment and wanting to post to give a free $10 to a cause I value highly.

I got pulled into Less Wrong by being interested in transhumanist stuff for a few years, and finally decided to read here after realizing that this was the best place to discuss this sort of stuff and actually end up being right, as opposed to just making wild predictions with absolutely no merit. I'm an 18-year-old male living in the UK. I don't have a background in maths or computer sci as a lot of people here do (though I'm thinking of le... (read more)

Your right action is most excellent!