[Contains No HPMOR Spoilers]

[http://hpmor.com/notes/119/](http://hpmor.com/notes/119/)

I was at first confused by Eliezer's requests at the end of Ch. 119. I missed his Author's Notes, which explain his rationale behind them. I thought I would share in case others missed them too, especially because readers on LessWrong may have broader or more elite networks to help Eliezer achieve his new goals.

  • Eliezer is exploring the possibility of gaining J. K. Rowling's permission to publish HPMoR in some form, with the profits donated to a U.K. charity. To this end, he wants to contact someone who can put him in touch with either J. K. Rowling or Daniel Radcliffe.
  • If you're attending or supporting Worldcon through 2015-17, and think Harry Potter and the Methods of Rationality is worthy, please nominate it for Best Novel in the 2016 Hugo Awards.
  • The Machine Intelligence Research Institute is looking to hire at least one new executive, and very competent mathematicians for its research.
  • Eliezer is thinking about what fiction to write next. He would re-edit an old script of his if and only if someone contacts him about making either a movie with high-quality special effects or an anime. Otherwise, he is seeking to work with the Collective Intelligence (the problem-solving force composed of the HPMoR readership). Check it out if you consider yourself part of the CI.
  • Eliezer is seeking to contact the hedge-fund manager John Paulson, to represent a financier as their angel investor, and to explore other investment opportunities.

Eliezer has several other projects he might be interested in. Learn more by clicking the link.

 

66 comments

Obstacle #2 to my writing more fiction is that my writing so far has had negative, as well as positive, consequences for public relations. My writing tends to be controversial and stomp all over certain sorts of minefields. Worse, there is some quality of it that seems to attract a certain sort of Sneer mindset – not just social-media sneertrolls, but the seething pools of corruption that are mainstream journalists.

This is something that has made me feel rather conflicted about HPMOR - on the one hand, I've really enjoyed reading it, but on the other I fear it makes a whole range of important beliefs look ridiculous by association.

But most of the damage that can be done already has been done. The ways to minimise the damage are fairly straightforward:

(1) Make it obvious that HPMOR is not a day job, so that people can't say "MIRI gave EY time off work to work on HPMOR! People donate large amounts of money to MIRI to produce fanfiction!"

(2) Stay out of politics. There are mines that don't actually need to be stomped on. This means removing small sections of writing which do not advance the plot but make a political point, such as the detail in Three Worlds Collide that rape is legal (I struggle to imagine how this could work), or the section of HPMOR where it is explained that not believing in open borders (which the average person does not believe in) makes you as bad as Voldemort, and that if you don't want someone in your country, you think they are not even worth spitting on.

Does he realise that there are non-racist reasons for wanting closed borders?

EY wrote 'politics is the mindkiller', which makes it even stranger when he mindkills his readers.

This came out more critical than I would prefer. But I just don't like to see pointless landmine-stomping in otherwise pretty awesome fiction.

But most of the damage that can be done already has been done.

In my experience, this is an amazingly good assumption to avoid ever making.

Worse, there is some quality of it that seems to attract a certain sort of Sneer mindset – not just social-media sneertrolls, but the seething pools of corruption that are mainstream journalists.

There's a certain irony in phrasing it this way.

I think part of it is that EY writes for fun, rather than solely with some direct aim in mind, and he really likes stomping on landmines. (And who can blame him? Stomping on landmines is fun!) The fic includes a fair number of things that his average audience member dislikes, but which are there because he likes them, and he would likely be unable to write as much if he kept restricting himself. There's a reason there's a significant word count devoted to anime references, and I believe it's mostly related to what produces hedons for him.

I'm not saying he should remove the anime references, although they do go straight over my head. I'm saying that getting rid of the legalised rape would involve cutting one or two sentences - a tiny amount of the story which probably generates a hugely disproportionate amount of criticism.

For what it's worth, when I write fiction, I just write whatever inspires me, and then go back over it later and remove the bits which no one else will get.

Well, it also seems to me that external editors exist for this reason, among others, and that he can write all the landmine stomping he wants, because publishing it is the PR mistake.

The bit on legalized rape is an important way of conveying that the future will seem weird and surprising and immoral to us, just like 2015 would seem weird and surprising and immoral to someone from a few centuries ago. I want my science fiction to show how weird things are likely to be (even if the specific kind of weirdness is of course likely to be very wrong); I don't want it to be a bowdlerized soap opera with robots and lasers in the background.

And if people can't understand that and read any kind of far-off weirdness through the lens of this decade's petty tribal politics, then basically, fuck 'em. I don't want Eliezer or anybody else to bend backwards to avoid being misread by idiots.

And sure, it's bad PR, but it's a bit of a self-fulfilling prophecy, a bit like how "openly criticizing the Government" is a bad career move for a Chinese citizen.


Any number of examples could have been chosen however. So why pick one which is legitimately a hot button issue for anyone who has been personally affected by it, a depressingly large portion of the population?

Yeah, I get the whole weirdtopia thing. But like Mark says, it's probably not the best weird thing to have chosen.

And if people can't understand that and read any kind of far-off weirdness through the lens of this decade's petty tribal politics, then basically, fuck 'em.

In one way, I think this attitude is commendable - intellectual and artistic integrity, and not having to kowtow to people who are offended. But at the same time, 'anyone who disagrees can just fuck off'... it's not the best PR. And I don't think 'not being scared of rape' is an important criterion for rationalists.

I've read a similar idea to legalised rape before, in the context of a future where it was considered extremely bad manners to refuse sex. I can kinda imagine this could work. But legalised violent rape...

What I imagine would happen is that one person would try to rape another, they would fight back, their friends would intervene, a full-blown bar fight would ensue, someone would smash a bottle, and people would end up in the hospital, or the mental asylum, or the morgue.

Or are they not allowed to fight back? Is it maybe just date rape which is legal, or can you, for instance, kidnap and rape someone who is on their way to an important business meeting? Do people say "First on the agenda, Mrs Brown and Mr Black give their apologies that they are unable to attend the emergency meeting on disaster relief for Proxima Centauri Alpha, as Mrs Brown has contracted space measles and Mr Black is otherwise engaged in being anally gang raped"?

I mean, even if everyone in the future is the sort of ultra-kinky person who enjoys being raped, and everyone is bisexual to avoid the problem of being raped by the wrong gender, it still doesn't make sense.

It might be worth rereading the passage in question:

The Confessor held up a hand. "I mean it, my lord Akon. It is not polite idealism. We ancients can't steer. We remember too much disaster. We're too cautious to dare the bold path forward. Do you know there was a time when nonconsensual sex was illegal?"

Akon wasn't sure whether to smile or grimace. "The Prohibition, right? During the first century pre-Net? I expect everyone was glad to have that law taken off the books. I can't imagine how boring your sex lives must have been up until then - flirting with a woman, teasing her, leading her on, knowing the whole time that you were perfectly safe because she couldn't take matters into her own hands if you went a little too far -"

"You need a history refresher, my Lord Administrator. At some suitably abstract level. What I'm trying to tell you - and this is not public knowledge - is that we nearly tried to overthrow your government."

"What?" said Akon. "The Confessors?"

"No, us. The ones who remembered the ancient world. Back then we still had our hands on a large share of the capital and tremendous influence in the grant committees. When our children legalized rape, we thought that the Future had gone wrong."

Akon's mouth hung open. "You were that prude?"

The Confessor shook his head. "There aren't any words," the Confessor said, "there aren't any words at all, by which I ever could explain to you. No, it wasn't prudery. It was a memory of disaster."

"Um," Akon said. He was trying not to smile. "I'm trying to visualize what sort of disaster could have been caused by too much nonconsensual sex -"

"Give it up, my lord," the Confessor said. He was finally laughing, but there was an undertone of pain to it. "Without, shall we say, personal experience, you can't possibly imagine, and there's no point in trying."

The passage is very clearly about value dissonance, about how very different cultures can fail to understand each other (which is a major theme of the story). They don't go into details because the only reason characters bring it up is to show how values have changed.

And sticking to a less controversial example would have defeated the point. For illustrating it, I much prefer this approach (meta talk between characters about how much things have changed) to one that would go into the details of how the new system worked.

Oh yes I understand the value dissonance and controversy.

But... babyeating is certainly controversial, and yet I think it does not alienate people in the same way that rape will, largely because far more people have traumatic memories of rape than of infant cannibalism.

At the end of the day, I personally prefer the controversial writing, but it's a trade-off against PR. I would certainly prefer that the really controversial bits get edited out rather than EY stopping writing because of negative PR.

I think the problem is that it's a scenario of being raped by someone you find attractive in some sense, which is necessarily how rape fantasies inside one's own head go. Even if it's a degradation fantasy, you're still running it.

I don't see how such rules can be made to be a generally good experience in the real world for all involved, unless there's some extreme improvement in people's ability to read each other for "this will be fun" and willingness to not override other people's real consent.

I think there are some, ah, highly unusual people who want to be raped by unattractive people because it's even more degrading.

But anyway, the way that rules could be made to work for everyone would be to institute a code like 'everyone wearing a red hanky wants to be raped'. With smartphones this could be made more sophisticated, and you could set statuses such as only wanting to be raped by people who are rated at least 3/5 on looks.

But this is still a way of giving prior consent in general, rather than legalised rape.

See The Just City by Jo Walton for some descriptions of how unpleasant obligatory sex can be, even with consent. I think it's reasonable to frame it as system 2 consents, but system 1 doesn't.

There's plenty else going on in the book-- it's about an effort to create Plato's Republic.

In context, I thought that it was only date rape. More specifically, you could only rape somebody who had been leading you on. And they regularly discuss paying for sex, so it's not just free for the taking. ETA: Not to mention the rape in the epilogue, which is described as horrible.

Hmm. Well, if any date can end in rape, and this is somehow enforceable, then this is a lot more practical, although it still requires a 100% ultra-kinky population; otherwise the non-kinky people would not be able to interact with the opposite sex.

Did they embark on some sort of mass genetic or social engineering program to make everyone kinky?

I don't know; that whole world is pretty kinky by our standards, but that might just be Eliezer's attempt to show that the future will have values strange to us, rather than something with a specific reason.

This may be a stupid question, but is that mosquito laser drone thing really the best way to solve the problem of... what problem is it even solving? "Too many mosquitoes"? "Malaria"?

Your confusion is a clever ruse, but your username gives away your true motives!

Curses! I am undone!


There's a much cheaper and much older flying platform for mosquito elimination. It's called a bat.

EDIT: or perhaps the bred/genetically modified sterile mosquitoes that can wipe out populations in large areas?

Self-perpetuating area-wide techniques like mass release of modified mosquitoes with gene-drive systems are very probably a superior answer if the problem is "there are too many (i.e. any) human-feeding mosquitoes".

If the problem is rather "what is the coolest-sounding possible way to wipe out mosquitoes", then drone-mounted lasers are in the running.

Wiki says the idea has been suggested in earnest as one of the forms a mosquito laser could take, and was rejected in favor of a better one.

I don't think there is any quadcopter that can fly for more than 30 minutes on one battery charge - and that's without mosquito recognition and zapping systems drawing on that same battery.

Also, having quadcopters flying around zapping insects is at least going to be visually distracting.

Not to mention, the leading cause of propeller-induced face laceration syndrome...

Right, this sort of thing is only practical given fully automated battery replacement.
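
For a rough sense of why ~30 minutes is about the ceiling, here is a back-of-envelope estimate; every figure below is an illustrative assumption about a small hobby-class quadcopter, not a measurement of any real drone:

```python
# Back-of-envelope hover-time estimate. All figures are assumed for
# illustration; they are not specs of any particular quadcopter.
battery_capacity_wh = 50.0   # assumed ~50 Wh lithium battery (roughly 350 g)
hover_power_w = 100.0        # assumed draw just to keep a ~1 kg craft hovering
payload_power_w = 20.0       # assumed extra draw for cameras/laser electronics

flight_time_min = battery_capacity_wh / (hover_power_w + payload_power_w) * 60
print(f"Estimated flight time: {flight_time_min:.0f} minutes")  # ~25 minutes
```

Under those assumptions, the zapping payload alone shaves several minutes off an already short flight, which is why fully automated battery replacement (or a stationary installation, as suggested elsewhere in the thread) matters for continuous coverage.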

Some perspective:

Instead of closed offices, most people worked or learned under the sunlight, shielded by a glass screen overhead that kept out the rain and ultraviolet, with your own space concealed by curtains that could be opened or shut to indicate botherability. If you needed silence for concentration, you used earplugs. People who needed to have loud conversations without disturbing others would have enclosed rooms with doors and glass ceilings and air conditioning. If you showed the serious people a world where most people never saw the sun while they worked, they’d flip out and then correct the problem. Skyscrapers weren’t much built in dath ilan until we had extremely bright artificial light that could mostly substitute for sunlight, and they were all put in locations where skyscrapers were explicitly allowed. Blocking out someone else’s sun would be a serious transgression, and symbolic.

We had laser zappers and other measures that destroyed bugs and mosquitos and wasps and bees - these were considered far more annoying in dath ilan than Earth, and our civilization put a lot of effort and technology into rooting them out, or preventing them from getting a foothold within the great city. On the “beware of trivial inconveniences” scale, I suspect that an absence of little flying bugs, to say nothing of bugs that bit and stung and made noises, might be part of why people did their daily work beneath sunlight, in open air. I think there was a variety of butterfly that was bred to pollinate flowers and such within cities, in place of bees - at least I know that we weren’t supposed to crush butterflies.

From here. Basically, Eliezer thinks people should work outside but don't because of insect problems (among other things).

So... people should work inside greenhouses? I can see more than one problem with this.

Air-conditioned greenhouses.

Not quite greenhouses. It seems like Eliezer is saying it would be a glass canopy without enclosing walls (so you would still get natural fresh air flow).

This might be a good idea if there were some way to stop screen glare.


E-ink.

Laser-armed mosquito-exterminating drones seem an uncommon and difficult-to-engineer enough approach that if anyone were serious or successful enough even to launch such an endeavor, complete with a public-facing interface to raise awareness or gauge interest, I figure they'd be both impressive and rare enough that Eliezer would want to at least get in touch with them. Either that, or he is trolling us with all his projects near the end, or is partially trolling us by interjecting fake interest in ridiculously ambitious projects between his real interest in other ridiculously ambitious projects.

Another source of perspective is the fact that Eliezer has turned research into engineering safety mechanisms for advanced machine agents into a seriously substantial movement, and he did this by blogging about rationality for two years, and then writing a Harry Potter fanfiction over the course of the succeeding five years. From Eliezer's perspective, the rationality of himself and his network, combined with the gusto of the surrounding community, may be enough to achieve very ambitious projects which start from seemingly ridiculous premises.


Another source of perspective is the fact that Eliezer has turned research into engineering safety mechanisms for advanced machine agents into a seriously substantial movement, and he did this by blogging about rationality for two years, and then writing a Harry Potter fanfiction over the course of the succeeding five years.

Yet another source of perspective is reading the documents he wrote around the turn of the millennium, circa the founding of SIAI, on the subject. They can only be described as 'hilarious'. There were specs for a programming language that would by its design 'do what I mean' (which make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in, to bring about the Singularity by 2010 so as to avoid the apocalyptic Nanowar that was coming.

Well, Eliezer's about... what, 35? Not far off that, anyway. I'm sure I wrote some stuff that was at least that embarrassing when I was 20, though it wouldn't have been under my own name, or wouldn't have had any public exposure to speak of, or both.

I just want to note we're not discussing laser-armed mosquito-terminating drones anymore. That's fine. Anyway, I'm a bit older than Eliezer was when he founded the Singularity Institute for Artificial Intelligence. While starting a non-profit organization at that age seems impressive to me, once it's been legally incorporated I'd guess one can slap just about any name they like on it. The SIAI doesn't seem to have achieved much in the first few years of its operation.

Based on their history from the Wikipedia page on the Machine Intelligence Research Institute, it seems to me that the notability of the organization's achievements is commensurate with the time it's been around. For several years, as the Singularity Institute, they also ran the Singularity Summit, which they eventually sold as a property to Singularity University for one million dollars. Eliezer Yudkowsky contributed two chapters to Global Catastrophic Risks in 2008, at the age of 28, without having completed either secondary school or a university education.

On the other hand, the MIRI has made great mistakes in operations, research, and outreach in their history. Eliezer Yudkowsky is obviously an impressive person for various reasons. I think the conclusion is that Eliezer sometimes assumes he's enough of a 'rationalist' that he can get away with being lazy with how he plans or portrays his ideas. He seems like he's not much of a communications consequentialist, and seems reluctant to declare mea culpa when he makes those sorts of mistakes. All things being equal, especially if we haven't tallied Eliezer's track record, we should remain skeptical of his plans when they rest on shoddy grounds. I too don't believe we should take his bonus requests and ideas at the end of the post seriously.

There were specs for a programming language that would by its design 'do what I mean' (which make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in, to bring about the Singularity by 2010 so as to avoid the apocalyptic Nanowar that was coming.

Flare (the language) didn't sound that dumb to me - my impression wasn't that it would inherently 'do what I mean' but that it would somehow be both machine- and human-readable, so that it would be easy to run advanced optimising compilers over it, and later would provide a natural basis for an AI that could rewrite its own source code.

Looking back on it, this is way too much of a free lunch, and since an AI capable of understanding AI theory would probably also be able to parse the meaning of code written in conventional languages, it's rather redundant. I still expect that 'do what I mean' languages will appear; for instance, the language could detect 'obvious' mistakes, correct them, and inform the user.

e.g. "x * y=z does not work because the dimensions do not match. Nor does x' * y=z, but x * y'=z does, so I have taken the liberty of changing your code to x * y'=z"

or "'inutaliseation' is not a function or variable. I assume you meant 'initialization', which is a function, and I corrected this mistake"

Eventually, it might evolve into a natural language to code translator.
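
Purely to illustrate the kind of 'do what I mean' behaviour sketched above (a hypothetical toy in Python, not anything from Flare's actual spec; the function name and messages are invented), the dimension-mismatch correction could look roughly like this:

```python
import numpy as np

def dwim_matmul(x, y):
    """Try x @ y; if the dimensions do not match, try the 'obvious'
    transposed variants and tell the user which correction was applied."""
    candidates = [("x @ y", x, y), ("x' @ y", x.T, y), ("x @ y'", x, y.T)]
    for label, a, b in candidates:
        if a.shape[-1] == b.shape[0]:          # inner dimensions agree
            if label != "x @ y":
                print(f"x @ y does not work for shapes {x.shape} and {y.shape}; "
                      f"I have taken the liberty of using {label} instead.")
            return a @ b
    raise ValueError("No obvious transposition makes the dimensions match.")

x, y = np.ones((3, 4)), np.ones((3, 2))
z = dwim_matmul(x, y)   # reports that it used x' @ y and returns a (4, 2) array
```

A real compiler hook would presumably also need a way to reject the correction, since silently 'fixing' code is exactly the kind of free lunch being doubted above.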

But yes, expecting a Nanowar by 2010 wasn't the smartest idea.

Another source of perspective is the fact that Eliezer has turned research into engineering safety mechanisms for advanced machine agents into a seriously substantial movement, and he did this by blogging about rationality for two years, and then writing a Harry Potter fanfiction over the course of the succeeding five years.

SIAI started before the rationality blogging. Vernor Vinge warned about AI causing the end of the human race back in 1993.

I have difficulty accepting that a substantial portion of FAI researchers were drawn to the subject by HPMOR.

(Of course, FAI researchers, LWers and HPMOR fans are distinct groups of people)

Information on the history of the MIRI from 2002 through 2006 is sparse, as gleaned from the Wikipedia page on the organization. As the SIAI in 2006, they successfully raised $200,000 as part of a donation campaign, with $100,000 matched as a donation by Peter Thiel. In the years since, the MIRI seems to have held fundraisers at least once annually that turn out just as successful. "The Sequences" were scarcely started in 2006, so I don't know if Peter Thiel got wind of Eliezer's ideas and organization on SL4, or Overcoming Bias, or what. Anyway, while Vinge, and earlier, I. J. Good, warned against the dangers of machine superintelligence, Eliezer founded a research organization aimed at solving this problem, formulated the mission for doing so, and popularized this through his meetings. I'm using metrics such as the raised profile of risks from machine intelligence, and the amount of vocal support and donations the MIRI receives, as a proxy for how much they and Eliezer specifically have raised the profile of this field of inquiry and concern. I assume others would not have done so much for the MIRI if they didn't believe in its mission. Most of the recent coverage should probably be attributed to Nick Bostrom and his recent book, though.

At the 2014 Effective Altruism Summit, Eliezer reported there are only four full-time FAI researchers in the world: himself, Nate Soares, and Benja Fallenstein of the MIRI, and Stuart Armstrong of the FHI. I was incredulous, and guessed Eliezer's definition of 'FAI researcher' was more stringent than most sensible people would use. I asked Luke Muehlhauser for clarification. He remarked that beyond those four, Paul Christiano might count as 'half a FAI researcher', because he spends a portion of his time as a mathematician at UCB working on mathematics in line with the MIRI's research agenda. The MIRI has since hired Patrick LaVictoire, and perhaps others.

The point is, the MIRI itself thinks there are fewer than a dozen FAI researchers. For all we know, all FAI researchers might be users of LessWrong, and HPMoR fans. I could ask all of the known "FAI researchers" whether they were first introduced to these research ideas through LessWrong or through HPMoR. That indeed might be a "substantial portion". You or I might qualify "FAI researcher" differently, but Eliezer by his own admission believes writing more HPMoR is, surprisingly, one of the best ways to draw more attention from Math Olympiad contestants to their research, as does the MIRI.

Vernor Vinge warned about AI causing the end of the human race back in 1993.

I.J. Good also warned about it in the 1960s.

Indeed, although since this was before the internet, it didn't start any sort of movement.

That may not be the only reason that didn't get off the ground as a movement. Movements have existed before the internet. However, in a different way the internet may matter: a world with internet and modern computers may make something like a superintelligent AI more viscerally plausible as a possibility.

Movements certainly have existed before the net, but generally where there is a high enough density of potential members to organise via word of mouth and print media. With the possible exception of a few places such as Silicon Valley, I don't think that exists in this case.

I do agree with you that in many ways superintelligence seems more plausible given modern technology, but OTOH people are cautious after the AI winters.


In what way is it a seriously substantial movement?

I guess those are pretty vague words. It's a (set of) research projects followed by thousands, if not tens of thousands, of people. Among these people are philanthropists and entrepreneurs who have donated millions of dollars to the cause, and seem to be on track to donate even more money. It's received attention and support from major scientists, and some world-famous people, including Stephen Hawking, Elon Musk, and very recently Bill Gates. Eliezer has been published alongside academics from the Future of Humanity Institute, and his work has merited the respect of prominent thinkers in fields related to artificial intelligence. When his work has attracted derision, it has also been because his ideas attract enough attention for other prominent academics and thinkers to see fit to criticize him. If we evaluate the success of a movement on the basis of memetics alone, this last observation might also count.

The idea of dangers from superintelligence was debated in Aeon Magazine last year. Much of the effort and work to raise the profile of the issue and increase focus upon it has been done by Nick Bostrom and the Future of Humanity Institute, the Future of Life Institute, and even the rest of the Machine Intelligence Research Institute aside from Eliezer himself. Still, though, he initiated several theses on solving the problem, and communicated them to the public.

This is gonna be maybe uncomfortably blunt, but: Eliezer seems to be playing a role in getting AI risk research off the ground similar to the role of Aubrey de Gray in getting life extension research off the ground. Namely, he's the embarrassing crank with the facial hair that will not shut up, but who's smart enough and informed enough to be making arguments that aren't trivially dismissed. No one with real power wants to have that guy in the room, and so they don't usually end up as the person giving TV interviews and going to White House dinners and such when it does turn into a viable intellectual current. But if you need to get a really weird concept off the ground, you need to have such a person pushing it until it stops being really weird and starts being merely weird, because that's when it becomes possible for Traditional Public Intellectuals to score some points by becoming early adopters without totally screwing up their credibility.

I wouldn't use the word "crank" myself to describe either Yudkowsky or de Grey, but I perceive there may be a grain of truth in this interpretation. Eliezer does say or write embarrassing things from time to time. I wouldn't be surprised if the majority of the embarrassing speech attributed to him is not related to machine intelligence. I don't know enough about de Grey to have an opinion about how embarrassing he may or may not be. Nick Bostrom seems the sort of person who gets TV interviews. If not him, Stephen Hawking. Even if Stephen Hawking doesn't get invited to White House dinners, I imagine Elon Musk or Bill Gates could easily get invited.

These men haven't totally screwed up their credibility, but neither does it seem they've scored lots of points for speaking up about potential dangers from machine superintelligence. With his $10 million donation to the Future of Life Institute, Musk might have gained points; however, he gains points for almost everything he does these days. Anyway, if Eliezer as Embarrassing Crank was necessary, it could be argued his role was just as important because he had the will and courage to become the Embarrassing Crank. Eliezer believes he's playing a major part in saving the world, which he actually takes seriously, and which he probably considers more important than public relations management. The mindset Eliezer has cultivated over a dozen years, of being above caring about status games compared to saving the world, might explain well why he doesn't worry about expressing himself poorly, seeming ridiculous, or getting into tiffs with the media.

Well, at this point I think Eliezer's basically succeeded in that role, and my evidence for that is that people like Hawking and Musk and Gates (the "Traditional Public Intellectuals" of my post, though only Hawking really fits that label well) have started picking up the AI safety theme; they won't be getting credit for it until it goes truly mainstream, but that's how the early adopter thing works in this context. I don't know much about Nick Bostrom on a strategic level, but from what I've read of his publications he seems to be taking a complementary approach.

But if we ignore petty stuff like exactly what labels to use, I think we largely agree. The main thing I'm trying to get across is that you need a highly specific personality to bootstrap something like FAI research into the edges of the intellectual Overton window, and that while I (strongly!) sympathize with the people frustrated by e.g. the malaria drone thing or the infamous utopian Facebook post, I think it's important to recognize that it comes from the same place the Sequences did.

That has implications in both directions, of course.

This is the message I failed to infer from your original reply. Yes, I concur we're in agreement.


I wouldn't use the word "crank" myself to describe either Yudkowsky or de Grey

I would.


Thousands, if not tens of thousands? Try a few dozen, maybe.

By "research projects followed by", I was again being vague. I didn't mean there are thousands of people reading each and every publication that comes out from the MIRI, or is even linked to by its website as related to its research. I meant there are people interested in the problem, whether through exposure from LessWrong, the MIRI, the Singularity Summit and similar events, who will return to think of the problem in future years. "Tens of thousands" means "at least twenty thousand", which I doubt is true. The 2014 LessWrong survey had 1506 participants, most of who I'd guess "are aware of the MIRI's ongoing work". As this sample is representative of a larger group of LessWrong users, along with the other sources I mentioned, I wouldn't be surprised if there are couple or few thousand people paying attention to the MIRI or related research in at least a cursory way. If it was actually ten thousand, that might surprise me.

Wouldn't a stationary laser system be simpler? At least as an initial minimum viable product?

I would also suggest that if you can introduce Eliezer to Jonathan or Christopher Nolan, you should. Probably by passing along Three Worlds Collide. (This mainly has to do with point 4.)

I think this would have a far higher chance of a positive outcome than any of the meetings Eliezer asked to arrange.

I think EY has a better chance of getting a Hugo in the Fan Writer category:

https://en.wikipedia.org/wiki/Hugo_Award_for_Best_Fan_Writer

" I would try to get in touch with J. K. Rowling to see if HPMOR could be published in book form,..., I’m not getting my hopes up, but I do have a rule telling me to try rather than automatically giving up and assuming something can’t be done."

More specifically, if you need permission from someone, just ask.

Regarding the three possible options Eliezer gave for his next work:

  • Try to write a more traditional-format novel, the sort that gets consumed in one sitting.

  • Dust off an old movie script of mine, and revise it to make the protagonist more agenty and intelligent in accordance with my current standards. I won’t do this unless some reader writes me with (a) an offer to possibly make a movie that would involve a pretty high level of special effects in the background, or (b) an offer to possibly produce an anime movie.

  • Try to produce a reader-choice-driven novel with readers bidding on options and my selecting whatever option I like (but having an incentive to pick higher-bid options). This would require software support, but it looks like that software might be something that can be made to exist. I liked the power of the Collective Intelligence of /r/hpmor, and I would like to interact with the CI again in a context where the character can actually do whatever the CI thinks is a good idea, where there’s few enough open plot parentheses that I can suddenly decide that yes they are still inside the Mirror of Vec. Does anyone have a recommendation for reader-driven story software that beats anonkun.com?

Which of the above would you prefer?

[pollid:841]

A moderator fixed it, but thanks for pointing it out regardless.

I am supremely interested in the plots Eliezer is running.

A lot of the stuff he mentions at the end is similar to things he mentioned in his "controversial" April Fool's Day post. I remember the anti-mosquito thing and the mobile/modular house thing in particular.


Oh boy...

[This comment is no longer endorsed by its author]