A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share? Do you still think that people interested in alignment research should apply to work at OpenAI?
Do you still think that people interested in alignment research should apply to work at OpenAI?
I think alignment is a lot better if there are strong teams trying to apply best practices to align state-of-the-art models, teams that have been learning about what it actually takes to do that in practice and building social capital. Basically that seems good because (i) I think there's a reasonable chance that we fail not because alignment is super-hard but because we just don't do a very good job during crunch time, and I think such teams are the best intervention f...
A number of people seem to have departed OpenAI at around the same time as you. Is there a particular reason for that which you can share?
My own departure was driven largely by my desire to work on more conceptual/theoretical issues in alignment. I've generally expected to transition back to this work eventually, and I think there are a variety of reasons that OpenAI isn't the best place for it. (I would likely have moved earlier if Geoffrey Irving's departure hadn't left me managing the alignment team.)
I'm pretty hesitant to speak on behalf of other people who left....
An alternative to editing many genes individually is to synthesise the whole genome from scratch, which is plausibly cheaper and more accurate.
I would find this more useful if you spelled out a bit more about your scoring method. You say:
They must be loyal, intelligent, and hardworking, they must have a sense of dignity, they must like humans, and above all they must be healthy.
Which of these do you think are the most important? Why do these traits matter? (for example, hardworking dogs are not really necessary in the modern world)
And why these traits and not others? (for example: size, cleanliness, appearance, getting along with other animals)
...a dog which is as close to being a wolf as one
Also, I'm torn about how to interpret Snape's last question - my first thought was that he was verifying the truth of a story he had been told ("Your master tortured her, now join the light side already!" being the most likely), but upon rereading, I wonder if he was worried that she had been used as Horcrux fuel.
Or verifying a deal he made with Voldemort, though that might not make as much sense with Snape's character.
We get to talk to government and military people quite a bit, attending seminars and giving them presentations, and they nod wisely and ask pertinent questions which we answer. We're not sure how much this has translated into actual policy differences at the end of the day, but there does seem to be a class of people in government willing to listen to these ideas (informally, it seems that the military is more interested than the standard civil servants and politicians).
There are other policy achievements, but Nick and Anders would know more...
Possible consideration: meta-charities like GWWC and 80k direct donations to causes that one might not think are particularly important. E.g. I think x-risk research is the highest-value intervention, but most of the money moved by GWWC and 80k goes to global poverty or animal welfare interventions. So if the proportion of money moved to causes I cared about was small enough, or the meta-charity didn't multiply my money much anyway, then I should give directly (or start a new meta-charity in the area I care about); a rough break-even sketch follows below.
A bigger possible problem would be if I too...
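To put that first consideration in break-even terms (an illustrative sketch; the symbols and example numbers are mine, not the commenter's): let $m$ be the dollars of donations the meta-charity moves per dollar given to it, and $p$ the fraction of that moved money going to causes you value. Giving through the meta-charity then beats giving directly only if

$$ m \cdot p > 1 $$

so, for instance, a multiplier of $m = 3$ with only $p = 0.1$ going to your preferred cause area gives $0.3$, and direct giving wins on this simple account.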
This probably sounds horrible, but "saving human lives" in some contexts is an applause light. We should be able to think beyond that.
As a textbook example, saving Hitler's life at a specific moment of history in an alternate universe would create more harm than good, regardless of how much or how little money it would cost.
Even if we value all human lives as intrinsically equal, we can still ask what the expected consequences of saving this specific human will be. Is he or she more likely to help other people, or perhaps to harm them? Because that ...
Hey,
80k members give to a variety of causes. When we surveyed, 34% were intending to give to x-risk, and it seems fairly common for people who start thinking about effective altruism to ultimately conclude that x-risk mitigation is one of the most important cause areas, or the most important one. As for how this pans out with additional members, we'll have to wait and see. But I'd expect $1 given to 80k to generate significantly more than $1's worth of value even for existential risk mitigation alone. It certainly has done so far.
We did a little bit of impact-assessment for 80k (again, wit...
Maybe some kinds of ems could tell us how likely Oracle/AI-in-a-box scenarios were to be successful? We could see if ems of very intelligent people run at very high speeds could convince a dedicated gatekeeper to let them out of the box. It would at least give us some mild evidence for or against AIs-in-boxes being feasible.
And maybe we could use certain ems as gatekeepers - the AI wouldn't have a speed advantage anymore, and we could try to make alterations to the em to make it less likely to let the AI out.
Minor bad incidents involving ems might make people more cautious about full-blown AGI (unlikely, but I might as well mention it).
I was the one who asked that question!
I was slightly disappointed by his answer - surely there can only be one optimal charity to give to? The only donation strategy he recommended was giving to whichever one was about to go under.
I guess what I'm really thinking is that it's pretty unlikely that the two charities are equally optimal.
Point taken. This post seems unlikely to reach those people. Is it possible to communicate the importance of x-risks in such a short space to SL0s - maybe without mentioning exotic technologies? And would they change their charitable behavior?
I suspect the first answer is yes and the second is no (not without lots of other bits of explanation).
Actually, both that and the Earth image at the beginning of the article seem a little out of place. At least the latter would fit well into a print article (where you can devote half a page or a page to thematic images and still have plenty of text for your eyes to seek to), but online it forces scrolling on mid-sized windows before you can read comfortably. I think it'd read more smoothly if it was smaller, along the lines of the header images in "Philosophy by Humans" or (as an extreme on the high end) "The Cognitive Science of Rationality".
Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
I view this as one of the single best arguments against risks from paperclippers. I'm a little concerned that it hasn't been dealt with properly by SIAI folks - aside from a few comments by Carl Shulman on Katja's blog.
I suspect the answer may be something to do with anthropics - but I'm not really certain of exactly what it is.
This is great! I hope there's a big response.
It seems likely you're going to get skewed answers for the IQ question. Mostly it's the really intelligent and the below average who get (professional) IQ tests - average people seem less likely to get them.
I predict a high average IQ but a low response rate on the IQ question, which will give bad results. Can you tell us how many people respond to that question this time? (The number of responses wasn't recorded for the previous survey.)
I think it would be more informative to ask people to take one specific online test, now, and report their score. With everyone taking the same test, even if it's miscalibrated, people could at least see how they compare to other LWers. Asking people to remember a score they were given years ago is just going to produce a ridiculous amount of bias.
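Since the worry here is essentially a selection/nonresponse-bias argument, here is a toy simulation of it (a hedged sketch: the response model and every number in it are hypothetical, not taken from any survey):

```python
import random

# Toy simulation of the worry above (all numbers are hypothetical, not survey data):
# people with professionally measured IQs skew away from the average, and higher
# scorers are assumed more likely to report a score, so the reported mean
# overstates the true mean.
random.seed(0)

population = [random.gauss(100, 15) for _ in range(100_000)]

def reports_score(iq: float) -> bool:
    # Hypothetical response model: probability of reporting rises linearly with IQ,
    # from 0 at IQ 85 up to 1 at IQ 145.
    return random.random() < min(1.0, max(0.0, (iq - 85) / 60))

reported = [iq for iq in population if reports_score(iq)]

print(f"true mean IQ:     {sum(population) / len(population):.1f}")
print(f"reported mean IQ: {sum(reported) / len(reported):.1f} "
      f"(n = {len(reported)} of {len(population)})")
```

With these made-up parameters the reported mean lands well above 100 even though the simulated population is average by construction, which is the shape of bias being predicted for the survey results.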
Would anyone care to comment on the recent Mt Gox hack n' crash?
Personally, I'm thinking that this is very bad. The currency won't look as good to the mainstream, and I'm anticipating panic selling as soon as the exchanges get up and running again. I'm agnostic as to whether Bitcoin will die or not, though...
The obvious extra question is:
"If you think it's so great, how come you're not using it?" Unless the sales girl's enjoyable life includes selling the machine she's in to disinterested customers.
In the least convenient world, the answer is: "I can't afford it until I make enough money by working in sales." Or alternatively, "I have a rare genetic defect which makes the machine not work for me."
And if you do assume "fiat money is doomed, doomed!" then why wouldn't something like bitcoin become the world's reserve currency?
Okay, I'm willing to grant that if the dollar/fiat money in general is doomed then something along the lines of bitcoin would probably take over. But I don't assume this. I guess it is rational to put lots of money into bitcoin if you do take this premise though.
I agree that the dollar becoming effectively worthless would be pretty bad to put it mildly!
Weirdly, though I think that bitcoins will succeed (and accordingly have some), I don't think Calacanis' article is well-founded. To focus just on the points I feel I can actually judge:
Bitcoin is unstoppable without end-user prosecution.
I don't think this is true. Shutting down all legitimate currency exchanges would raise the barrier to investment for legitimate investors and would likely decrease interest in Bitcoin. Anecdote: I would get less interested in bitcoins if this happened. Also, a focused government campaign against it...
Hi Less Wrong!
Decided to register after seeing this comment and wanting to give a free $10 to a cause I value highly.
I got pulled into Less Wrong after being interested in transhumanist stuff for a few years; I finally decided to read here after realizing that this was the best place to discuss this sort of thing and actually end up being right, as opposed to just making wild predictions with absolutely no merit. I'm an 18-year-old male living in the UK. I don't have a background in maths or computer sci as a lot of people here do (though I'm thinking of le...
I found it really helpful to have a list of places where Eliezer and Paul agree. It's interesting to see that there is a lot of similarity on big picture stuff like AI being extremely dangerous.