
Comment author: g_pepper 22 September 2017 01:47:30AM *  0 points [-]

I'm a two-boxer. My rationale is:

  1. As originally formulated by Nozick, Omega is not necessarily omniscient and does not necessarily have anything like divine foreknowledge. All that is said about this is that you have "enormous confidence" in Omega's power to predict your choices, and that this being has "often correctly predicted your choices in the past (and has never, as far as you know made an incorrect prediction about your choices)", and that the being has "often correctly predicted the choices of other people, many who are similar to you". So, all I really know about Omega is that it has a really good track record.

  2. So, nothing in Nozick rules out the possibility of the outcome "b" or "c" listed above.

  3. At the time that you make your choice, Omega has already irrevocably either put $1M in box 2 or put nothing in box 2

  4. If Omega has put $1M in box 2, your payoff will be $1M if you 1-box or 1.001M if you 2-box.

  5. If Omega has put nothing in box 2, your payoff will be $0 if you 1-box or $1K if you 2-box.

  6. So, whatever Omega has already done, you are better off 2-boxing. And, your choice now cannot change what Omega has already done.

  7. So, you are better off 2-boxing.

So, basically, I agree with your assessment that "two-boxers believe that all 4 are possible" (or at least I believe that all 4 are possible). Why do I believe that all 4 are possible? Because nothing in the problem statement says otherwise.

ETA:

Also, I agree with your assessment that "one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)". But, in thinking this way, one-boxers are reading something into the problem beyond what is actually stated or implied by Nozick.
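
For concreteness, here is a rough Python sketch of the arithmetic in steps 3-7, together with the expected-value calculation a one-boxer might run if Omega merely predicts your actual choice with some probability p. The dollar amounts are the standard $1,000 / $1,000,000 figures; the accuracy values are arbitrary examples.

```python
# Payoff table for the four outcomes (a-d) in the Wikipedia formulation,
# using the usual $1,000 / $1,000,000 amounts.
PAYOFF = {  # (Omega's prediction, player's choice) -> payout in dollars
    ("two-box", "two-box"): 1_000,       # outcome a
    ("two-box", "one-box"): 0,           # outcome b
    ("one-box", "two-box"): 1_001_000,   # outcome c
    ("one-box", "one-box"): 1_000_000,   # outcome d
}

# Steps 3-7: whichever prediction Omega has already made,
# two-boxing pays exactly $1,000 more than one-boxing.
for prediction in ("one-box", "two-box"):
    gain = PAYOFF[(prediction, "two-box")] - PAYOFF[(prediction, "one-box")]
    print(f"Omega predicted {prediction}: two-boxing gains {gain:+,} over one-boxing")

# The one-boxer's framing: Omega predicts your actual choice with probability p.
def expected_value(choice: str, p: float) -> float:
    other = "one-box" if choice == "two-box" else "two-box"
    return p * PAYOFF[(choice, choice)] + (1 - p) * PAYOFF[(other, choice)]

for p in (0.5, 0.9, 0.999):  # arbitrary example accuracies
    print(f"p={p}: one-box EV={expected_value('one-box', p):,.0f}, "
          f"two-box EV={expected_value('two-box', p):,.0f}")
```

The dominance comparison always favours two-boxing, while the expected-value comparison flips toward one-boxing once the assumed accuracy is high enough; that difference in framing is exactly what the two camps disagree about.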

Comment author: Dagon 22 September 2017 01:02:11AM 1 point [-]

Actually, it would be interesting to break down the list of reasons people might have for two-boxing, even if we haven't polled for reasons, only decisions. From https://en.wikipedia.org/wiki/Newcomb%27s_paradox, the outcomes are:

  • a: Omega predicts two-box, player two-boxes, payout $1000
  • b: Omega predicts two-box, player one-boxes, payout $0
  • c: Omega predicts one-box, player two-boxes, payout $1001000
  • d: Omega predicts one-box, player one-boxes, payout $1000000

I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing), and reason that d > a. And further I think that two-boxers believe that all 4 are possible (b and c being "tricking Omega") and reason that c > d and a > b, so two-boxing dominates one-boxing.

Aside from "lizard man", what are the other reasons that lead to two-boxing?

Comment author: Benito 21 September 2017 11:29:23PM 0 points [-]

Woop!

Comment author: NancyLebovitz 21 September 2017 10:51:58PM 0 points [-]

I've done that. Still haven't gotten an email. I've checked my spam folder.

Comment author: Alicorn 21 September 2017 10:42:53PM 0 points [-]

Now that I'm looking at it more closely: Quoted text in comments does not seem sufficiently set off. It's slightly bigger and indented but it would be easy on casual inspection to mistake it for part of the same comment.

Comment author: Alicorn 21 September 2017 10:40:26PM 1 point [-]

That worked!

Comment author: Benito 21 September 2017 10:20:03PM 0 points [-]

I've just sent an email to the new address. Let me know if that works or not (very occasionally it goes into spam btw).

Comment author: ChristianKl 21 September 2017 08:54:43PM 0 points [-]

That might not be the only reason. The Reddit code base might very well do hashing in a complex way that they simply don't want to carry over.

Comment author: Caspar42 21 September 2017 08:37:22PM 0 points [-]

Yeah, I also think the "fooling Omega" idea is a common response. Note, however, that two-boxing is more common among academic decision theorists, all of whom understand that Newcomb's problem is set up such that you can't fool Omega. I also doubt that the fooling-Omega idea is the only (or even the main) cause of two-boxing among non-decision theorists.

Comment author: Alicorn 21 September 2017 08:20:01PM 0 points [-]

I did that yesterday and again today. PMing.

Comment author: gjm 21 September 2017 08:19:30PM 0 points [-]

PM sent. Thanks!

(Though I think the email address Lesser Wrong has for me is already the right one...)

Comment author: Benito 21 September 2017 07:22:10PM *  0 points [-]

Hi, I think the comment above was slightly inaccurate earlier - when you're logging in, hit 'forgot password' to get the email.

If you did that and still don't receive an email, PM me with the email you want to get it sent to, and I'll do it manually.

Comment author: Screwtape 21 September 2017 07:20:00PM 0 points [-]

The synopsis of The Wicked + The Divine does look like my kind of tale. Both "Mortal becomes god" and "Mortal kills god" show up with weird frequency in my favourite stories. I'll likely check that one out first :)

I thoroughly enjoyed last year's solstice. I'm hoping to be able to take a three day weekend for it, since I know there were some meetups before and after that I had to miss since I was just in town for the one night. Do you happen to know the best spot to watch for details on the solstice or adjacent activities, once things are more organized?

Comment author: Habryka 21 September 2017 07:16:06PM 0 points [-]

Sorry, there was a miscommunication at an earlier point. We did not send out password-reset emails to everyone, however you can request a password-reset email in the login form on the new LessWrong, which should work well.

Comment author: Habryka 21 September 2017 07:14:57PM 0 points [-]

Hmm, maybe you had a different email registered than the one you are checking? Can you send me a PM with your preferred email? I am happy to change it to that then.

Comment author: WalterL 21 September 2017 06:55:49PM 0 points [-]

"Or is it so obvious no one bothers to talk about it?"

Well, that's not it.

Comment author: Jiro 21 September 2017 05:14:08PM 0 points [-]

It seems to have been a cookie problem so I got it working.

However, I ended up with two logins here. One I never used much, and the other is this one. Naturally, lesserwrong decided that the one that it was going to associate with my email address is the one that I never used much.

I'd like to get "Jiro" on lesserwrong, but I can't, since changing password is a per-email thing and it changes the password of the other login. Could you please fix this?

Comment author: Dagon 21 September 2017 05:11:50PM 0 points [-]

My own response varies based on the presentation of the problem, as do those of most people I've informally discussed it with. What conclusions would anyone be able to draw from a blend of such polls? The right answer is clearly "one-box unless you think you can fool Omega", and most formulations of the question can be taken as "do you think you can fool Omega?".

Now that I think about it, I've only seen it discussed here in the context of acausal decision theory, showing that in the perfect-information case, one-boxing is simply correct. What do we learn from any polls that don't specify the mechanism that closely?

What should I learn from polls showing that 40% of some demographic think they can fool Omega, 60% of some other demographic think they can, and 4% of most polls vote for lizard man?

Comment author: gjm 21 September 2017 05:02:03PM 0 points [-]

Man, US healthcare is ridiculous.

(There's lots not to like about the National Health Service here in the UK, but if I had an episode like yours I would expect to be seen by a medical professional within a day, and either told "oh yes, that's a thing that happens and it isn't dangerous" or brain-scanned in short order, and it wouldn't cost me a penny[1].)

[1] Of course my taxes are higher in order to support such things; my point isn't that we magically get decent healthcare for free but that having this sort of thing done free-at-point-of-use sets up incentives that are better for everyone than the US system, where either you have private insurance and get over-tested and over-treated for everything or else you have no insurance and don't get examined at all even when you might have suffered some exciting brain malfunction.

Comment author: NancyLebovitz 21 September 2017 04:48:21PM 0 points [-]

I didn't get the password reset email.

Comment author: wearsshoes 21 September 2017 04:38:06PM *  0 points [-]

From recent releases, I really like Tillie Walden's ultrasoft scifi On a Sunbeam (2015-2017), and Kieron Gillen's The Wicked + The Divine (2014-ongoing), which has a lot of similarity to American Gods.

For something rationalist-adjacent, I'd recommend Blue Delliquanti's O Human Star (2012-ongoing), which deals with LGBTQ issues in the context of FAI and transhumanism.

Would love to have you in attendance!

Comment author: gjm 21 September 2017 04:25:22PM 1 point [-]

On the other hand, I think this might come across as too "cute" and feel insincere.

Comment author: Kaj_Sotala 21 September 2017 03:42:09PM 0 points [-]

I think the font feels okay (though not great) when it's "normal" writing, but text in italics gets hard to read.

Comment author: Kaj_Sotala 21 September 2017 03:39:38PM *  0 points [-]

Calling ourselves "Wrong" or "Wrongers" would also fix the problem of "rationalist" sounding like we'd claim to be totally rational!

Comment author: Screwtape 21 September 2017 03:14:53PM 0 points [-]

I don't know what the best algorithm is, but what I did was something like the following.

Step 1. Make a list of the things you enjoy doing. Attempt to be specific where possible- you want to get at the activity that's actually enjoyable, so "making up stories" is more accurate for me than "writing" is, since it's the storytelling part that's fun for me instead of the sitting down and typing specifically. Sort the list in the order that you most enjoy doing the thing, with an eye towards things you can do a lot of. (I do like chocolate, but there's a sharp limit in the amount of chocolate I can eat before it stops being fun.) There's no exact length you need, but 10~15 seems to be the sweet spot.

Step 2. Line up the things you enjoy doing with jobs that do them a lot. Make a list of those jobs, putting under each job the different things you would like about them along with things you know you'd dislike about doing the job. Talking to people in that field, reading interviews with them, and good old fashioned googling are good steps here. Sort the jobs by how many of your favourite things to do are in them and how few things you don't want to do are in them.

Step 3. Take the list of jobs, and look up how much money each job makes, along with how much demand there is for that job and how many qualifications you'd need to earn to reasonably expect to get the job. Hours worked per week and health risks are also good things to think about. (Note: Sitting at a computer for nine hours straight should really count as a health risk. I'm not joking.)

Step 4. You now have a good notion of enjoyment vs practicality. If there's a standout winner in both of them, do that. If not, then consider your tradeoffs carefully. You will probably enjoy things less when you have to wake up every morning and do them, but it also caught me by surprise how little time it feels like I have to work on personal projects after eight or nine hours plus commuting.

Step 5. Think about UBI and cry a little, then dedicate a side project towards ushering in the glorious post-scarcity future.
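
To make steps 1-4 concrete, here is a minimal sketch of one way to score the tradeoff; every activity, job, salary, and demand figure below is invented purely for illustration.

```python
# Illustrative sketch of steps 1-4. All data below is made up for the example;
# substitute your own lists.
enjoyed = ["making up stories", "explaining things", "solving puzzles"]  # step 1, in order

jobs = {
    # job: (activities it involves, known dislikes, rough salary, demand out of 5)
    "technical writer": ({"making up stories", "explaining things"},
                         {"tight deadlines"}, 70_000, 3),
    "data analyst":     ({"solving puzzles"},
                         {"sitting at a computer for nine hours"}, 80_000, 4),
    "teacher":          ({"explaining things"},
                         {"grading"}, 50_000, 4),
}

def enjoyment_score(activities, dislikes):
    # Step 2: earlier items on the enjoyed list count for more; dislikes subtract.
    score = sum(len(enjoyed) - i for i, a in enumerate(enjoyed) if a in activities)
    return score - len(dislikes)

# Steps 3-4: put enjoyment next to practicality and eyeball the tradeoff.
for job, (activities, dislikes, salary, demand) in jobs.items():
    print(f"{job}: enjoyment={enjoyment_score(activities, dislikes)}, "
          f"salary=${salary:,}, demand={demand}/5")
```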

Comment author: g_pepper 21 September 2017 01:56:55PM 0 points [-]

Yep.

And, in the Maps of Meaning lecture series, Peterson gives a shout-out to Rowling's Harry Potter series as being an excellent example of a retelling of an archetypal myth. So, it was a good choice of material for Yudkowsky to use as he did.

Comment author: g_pepper 21 September 2017 01:36:29PM 0 points [-]

Using mythology to illustrate philosophical points has a lengthy tradition prior to Sartre. Achilles would have been a mythological figure by the time Zeno of Elea demonstrated the impossibility of motion by imagining a race between Achilles and a tortoise. And, in Phaedrus, Plato imagines a conversation between Thoth (from Egyptian mythology) and the Egyptian king Thamus to make a point about literacy.

Comment author: casebash 21 September 2017 01:29:58PM 0 points [-]

It works now.

Comment author: Brillyant 21 September 2017 01:29:54PM 0 points [-]

Plenty of evidence.

Any that you find particularly clear and compelling?

Comment author: plethora 21 September 2017 12:48:09PM 0 points [-]

I'd be surprised if Yudkowsky has read Sartre. But it's a natural thing to do. Harry Potter is (unfortunately) the closest thing we have to a national epic these days... well, an Anglosphere epic, but you get the idea.

If this is the sort of thing you're interested in, you might want to read Benedict Anderson's book Imagined Communities.

Comment author: kvas 21 September 2017 12:35:04PM 2 points [-]

I took the survey. It was long but fun. Thanks for the work you've put into designing it and processing the results.

Comment author: gjm 21 September 2017 10:51:10AM 1 point [-]

Summary:

Caspar Oesterheld and Johannes Treutlein, who are researchers at the Foundational Research Institute working on decision theory from a Less-Wrong-ish perspective, looked at all the polls and surveys they could find indicating people's preferred decision in the Newcomb problem.

They found, in line with conventional wisdom, that polls of professional philosophers, especially ones specializing in decision theory, tend to yield a substantial but not overwhelming majority in favour of two-boxing and that polls of other populations mostly yield results closer to 50:50 but tending to prefer one-boxing. ... Well, except that it looks to me as if those polls in fact tend to give results about as much in favour of one-boxing as the philosophers are in favour of two-boxing.

The surveys with the largest populations sampled give the nearest-to-50:50 results.

Two of their polls were annual LW surveys. Those yielded a very large majority in favour of one-boxing. Some of the others did likewise; they look to me as if they sample quite LW-like populations, but I don't have a strong opinion on whether it's more likely that LW has influence on those populations' ideas about Newcomb, or that LW-like people tend to prefer one-boxing in any case.

Comment author: plethora 21 September 2017 09:54:16AM 2 points [-]

I have taken the survey.

Comment author: cousin_it 21 September 2017 08:39:28AM *  0 points [-]

I'm puzzled by the FDT paper: it claims to be a generalization of UDT, but it seems less general, the difference being this.

As to your first question, we already have several writeups that fit in the context of decision theory literature (TDT, FDT, ADT) but they omit many ideas that would fit better in a different context, the intersection of game theory and computation (like the paper on program equilibrium by Tennenholtz). Thinking back, I played a large part in developing these ideas, and writing them up was probably my responsibility which I flunked :-( Wei's reluctance to publish also played a role though, see the thread "Writing up the UDT paper" on the workshop list in Sep 2011.

Comment author: jkadlubo 21 September 2017 08:24:30AM 4 points [-]

I've taken the survey. Possibly my first activity here this year.

Comment author: gjm 21 September 2017 07:50:01AM 0 points [-]

I requested another password reset email. Once again, I didn't get one.

(Replying here rather than via Intercom because I'm currently in one place and will soon be leaving for another, so if I contact you via Intercom then it will be hours before I see your response and can tell you, or try, anything more.)

Comment author: Alicorn 21 September 2017 05:31:32AM 0 points [-]

I don't seem to have received the password reset email either. (Also, you might want to have this information on the website itself somewhere accessible from the login box.)

Comment author: Evan_Gaensbauer 21 September 2017 04:34:16AM 0 points [-]

The Future of Humanity Institute recently hosted a workshop on the focus of Dr. Denkenberger's research called ALLFED.

Comment author: Habryka 21 September 2017 02:19:11AM 0 points [-]

Hmm, is there anything in particular that is not working? We fixed a few bugs over the last few hours, but the page should have been functional since 4PM.

Comment author: Habryka 21 September 2017 02:14:52AM *  0 points [-]

I apologize!

I noticed a bug with your user account in particular in our logs, though I am not exactly sure what caused it. I have fixed it now. Sorry for the inconvenience. Requesting another password-reset email now should work. And if anything else goes wrong, always feel free to ping us on Intercom in the bottom right corner; we are currently on high alert and so aim to respond within 5 minutes (and usually respond within the half hour).

Comment author: gjm 21 September 2017 01:31:08AM *  0 points [-]

I attempted to sign up using my LW 1.0 username and a newly generated password. I was told that an account already existed. I then said I'd forgotten my password and was told that a new one was being emailed to me. Some considerable time later, I have not received any such email. I do not believe any such email arrived and was binned as spam.

Is this a known problem? Is there any way to find out whether I did actually get sent a password-reset email, and if so whether it bounced?

[EDITED to add:] Nor did I receive any sort of password-reset email before doing the above.

Comment author: casebash 20 September 2017 11:18:22PM 0 points [-]

It does not seem to be working.

In response to Rational Feed
Comment author: Viliam 20 September 2017 11:14:22PM *  0 points [-]

"Contra Yudkowsky On Quidditch And A Meta Point"

Automatically assumes that an opinion expressed by a story character must be an opinion of the author.

Comment author: Elo 20 September 2017 10:47:57PM 0 points [-]

Edit made for formatting.

Comment author: Viliam 20 September 2017 10:38:32PM 1 point [-]

If LW2 remembers who read what, I guess "a list of articles you haven't read yet, ordered by highest karma, and secondarily by most recent" would be a nice feature that would scale automatically.

Comment author: Elo 20 September 2017 09:41:05PM 1 point [-]

You might like to read Maps of Meaning by Jordan Peterson. He proposes that meaning sometimes comes from the stories that we tell; the stories help to form the meaning that we make for ourselves. All stories help us with meaning.

Comment author: denkenberger 20 September 2017 09:39:25PM 0 points [-]

This could potentially help many decades in the future, but energy costs would need to fall by an order of magnitude or more for this to produce a lot of food. And I am particularly concerned about one of these catastrophes happening in the next decade.

Comment author: Elo 20 September 2017 09:33:05PM 0 points [-]

For that subset of the demographic there may be use in posts on relevant topics. For example, we have higher (double) depression rates than the normal population, so a post on depression may be relevant to them.

Comment author: denkenberger 20 September 2017 09:29:35PM 0 points [-]

Grains are all from the same family: grass. It is conceivable that a malicious actor could design a pathogen (or pathogens) that kills all grains. Or maybe it would become an endemic disease that permanently decreases the vigor of the plants. I'm not arguing that any of these non-recovery scenarios is particularly likely. However, if together they represent a 10% probability, and if there is a 10% probability of the sun being blocked this century, and a 10% probability of civilization collapsing if the sun is blocked, this would be a one-in-1000 chance of an existential catastrophe from agricultural catastrophes this century. That is worth some effort to reduce.
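
Multiplied out, the chain of 10% figures above gives the one-in-1000 estimate:

```python
# The three 10% figures quoted above, multiplied together.
p_permanent_loss = 0.10  # grains (or other staples) never recovered
p_sun_blocked    = 0.10  # sun-blocking catastrophe this century
p_collapse       = 0.10  # civilization collapses given the sun is blocked
print(p_permanent_loss * p_sun_blocked * p_collapse)  # 0.001, i.e. 1 in 1000
```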

Comment author: denkenberger 20 September 2017 09:19:12PM 1 point [-]

Sorry for my voice recognition software error; I have now fixed it. It turns out that if you want to store enough food to feed 7 billion people for five years, it would cost tens of trillions of dollars. What I am proposing is spending tens of millions of dollars on targeted research, development, and planning. The idea is that we would not have to spend a lot of money on emergency-use-only machinery. I use the example of the United States before World War II: it hardly produced any airplanes, but once it entered the war, it retrofitted the car manufacturing plants to produce airplanes very quickly. I am targeting food sources that could be ramped up very quickly with not very much preparation (in months; see the graph here).

The easiest killed leaves to collect for human food would be agricultural residues, using existing farm equipment. For leaves shed naturally (leaf litter), we could release cows into forests. I also analyze logistics in the book, and it would be technically feasible. Note that these catastrophes would only destroy regional infrastructure. The big assumption, however, is that there would still be international cooperation. Without these alternative food sources most people would die, so it would likely be in the best interest of many countries to initiate conflicts. However, if countries knew that they could actually benefit by cooperating and trading, and ideally feed everyone, cooperation is more likely (though of course not guaranteed). So you could think of this as a peace project.
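
As a back-of-the-envelope check on the "tens of trillions" figure, here is a sketch; the per-person daily cost of storable food is an assumed round number, not a figure from the book:

```python
# Back-of-the-envelope version of the "tens of trillions" figure.
people       = 7e9      # world population
years        = 5
cost_per_day = 2.0      # dollars of storable food per person per day (assumption)

total = people * years * 365 * cost_per_day
print(f"~${total / 1e12:.0f} trillion")   # roughly $26 trillion
```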

Comment author: Elo 20 September 2017 09:04:02PM 0 points [-]

There is a reason people say 80k. And it's because they did the research already.

If not 80k, read Deep Work, So Good They Can't Ignore You, and maybe other books that suggest a "strategy" for employment. (Short version: get a job in an area on purpose. I.e., if you are a vampire, get a job in a factory making garlic-free whole foods.)

Ask people around you (maybe 10) why they chose their career, and whether they like it. Ignore their answers and double-check by observing them work.

Comment author: J_Thomas_Moros 20 September 2017 07:56:28PM 1 point [-]

What you label "implicit utility function" sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.

I'm not familiar with the pig that wants to be eaten, but I'm not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires their death, then there is no conflict; that is the optimal choice. Though I think humans who believe they have such a utility function are usually mistaken; that is a much more involved discussion.

Not sure what the point of a dynamic utility function is. Your values really shouldn't change. I feel like you may be focused on instrumental goals that can and should change and thinking those are part of the utility function when they are not.

Comment author: Benito 20 September 2017 07:55:24PM *  0 points [-]

Hi! I just checked on Firefox, and the login dialog box opened for me. If you still have this issue, next time you try to log in (open beta will happen by 4pm today) please ping us in the intercom (bottom right-hand corner of the lesserwrong page), and let us know what browser version you're using.

If your intercom doesn't work, let me know here.

Comment author: Thomas 20 September 2017 07:52:41PM *  0 points [-]

Interesting line of inference... I am quite aware of how dense primes are, but that might not be enough.

I have counted all these 4x4 (decimal) crossprimes. There are 913,425,530 of them if leading zeros are allowed. But only 406,721,511 without leading zeros.

If leading zeros ARE allowed, then there are certainly arbitrarily large crossprimes out there. But if leading zeros aren't allowed, I am not that sure. I have no proof, of course.
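
To pin down the definition in code, here is a brute-force sketch for the much smaller 2x2 case (every row and every column of the digit grid, read in decimal, must be prime). The 4x4 counts above would need the same idea plus much more optimisation; this snippet makes no attempt to reproduce them.

```python
from itertools import product

def isprime(n: int) -> bool:
    """Trial-division primality test; fine for two-digit numbers."""
    if n < 2:
        return False
    return all(n % k for k in range(2, int(n ** 0.5) + 1))

def count_2x2_crossprimes(allow_leading_zeros: bool) -> int:
    """Count 2x2 digit grids whose rows and columns are all prime."""
    count = 0
    for a, b, c, d in product(range(10), repeat=4):
        # Digit 'a' leads row 1 and column 1, 'b' leads column 2, 'c' leads row 2.
        if not allow_leading_zeros and 0 in (a, b, c):
            continue
        rows_and_cols = (10 * a + b, 10 * c + d, 10 * a + c, 10 * b + d)
        if all(isprime(x) for x in rows_and_cols):
            count += 1
    return count

print(count_2x2_crossprimes(allow_leading_zeros=True))
print(count_2x2_crossprimes(allow_leading_zeros=False))
```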

Comment author: Jiro 20 September 2017 07:35:05PM 0 points [-]

And the expected behavior when using IE or Firefox is that you can't even get to the login screen? I find that unlikely.

Comment author: DragonGod 20 September 2017 07:32:43PM 0 points [-]

I'll add the point you raise about downvotes to the "cons" of my argument.

Comment author: Raemon 20 September 2017 07:32:11PM 0 points [-]

Thanks, I attempted to read this and felt like I was missing enough context that doing so was annoying. Appreciate the summary.

Comment author: wMattDodd 20 September 2017 07:29:08PM *  0 points [-]

I've finally been able to put words to some things I've been pondering for a while, and a Google search on the most sensible terms (to me) for these things turned up nothing. I'm looking to see whether there's already a body of writing on these topics under different terms, since my ignorance of it would lead to me just re-inventing the wheel in my ponderings. If these are NOT discussed topics for some reason, I'll post my thoughts, because I think they could be critically important to the development of Friendly AI.

implicit utility function ('survive' is an implicit utility function because regardless of what your explicit utility function is, you can't progress it if you're dead)

conflicted utility function (a utility function that requires your death for optimal value is conflicted, as in the famous Pig That Wants to be Eaten)

dynamic utility function (a static utility function is a major effectiveness handicap, probably a fatal one on a long enough time scale)

meta utility function (a utility function that takes the existence of itself into account)

Comment author: Raemon 20 September 2017 07:00:56PM 0 points [-]

Not 100% sure I understand your description, but currently the expected behavior when you attempt to log in (if not already a part of the beta) is nothing happening when you click "submit" (although in the browser console there'll be an error message).

This is simply because we haven't gotten to that yet, but it should be something we make sure to fix before the open-beta launch later today so people have a clear sense of whether it's working.

Comment author: Habryka 20 September 2017 06:51:02PM *  2 points [-]

Update: Open beta will happen today by 4pm Pacific time. At that point you will be able to sign up / log in with your LW 1.0 accounts (if the latter, you should request a password-reset email, as we did not copy over your passwords).

Comment author: Stuart_Armstrong 20 September 2017 06:49:29PM 1 point [-]

What would be required for UDT to be written up fully? And what is missing between FDT (in the Death in Damascus problem) and UDT?

Comment author: SaidAchmiz 20 September 2017 05:58:38PM 0 points [-]

Good point!

In that case, I'm not sure what the problem is (though I, too, see a problem similar to yours, now that I've tried it in a different browser (Firefox 55.0.3, Mac OS 10.9) than my usual one (Chrome)). I suspect, as another commenter said, that login just isn't fully developed yet.

Comment author: IlyaShpitser 20 September 2017 05:38:09PM 1 point [-]

Yes, US football and boxing are very bad for the brain. Plenty of evidence.

Comment author: Brillyant 20 September 2017 05:35:52PM 0 points [-]

Anyone following the role American football may play in long-term brain injuries? Subconcussive hits to the head accumulating to cause problems?

Anyone have thoughts?

Comment author: cousin_it 20 September 2017 05:25:36PM *  3 points [-]

Congratulations!

Just a quick note on another possible way to present this idea. A few years ago I realized that the simple subset of UDT can be formulated as a certain kind of single player game. It seems like the most natural way to connect UDT to standard terminology, and it's very crisp mathematically. Then one can graduate to the modal version which goes a little deeper and is just as crisp, and decidable to boot. That's the path my dream paper would take, if I didn't have a job and a million other responsibilities :-/

Comment author: ChristianKl 20 September 2017 04:44:22PM 0 points [-]

The person who created the last thread didn't bother to create a new one. If you think there should be a new one, there's no reason not to start it.

Comment author: ChristianKl 20 September 2017 04:22:57PM 0 points [-]

That sounds like giving feedback when there's a bad username/password just isn't a development priority.

Comment author: ChristianKl 20 September 2017 04:17:41PM 0 points [-]

I'm also uncertain about the gathering-leaves plan.

On the other hand, I could imagine solutions that are easily scalable. If, for example, you had an edible fungus that you could feed with lumber, that might be very valuable, and you wouldn't need to spend billions.

Comment author: ChristianKl 20 September 2017 04:09:43PM 0 points [-]

At our LessWrong community camp the keynote was given by Josh Hall, who talked about why we don't have flying cars. He made a convincing case that the problem is that in the 50s, when people predicted flying cars, energy costs had been getting cheaper every year. Since the 70s they haven't, and thus the energy required for flying cars is too expensive.

He then went on to say that the same goes for underwater cities.

If we had cheap energy, we would have no problem growing food indoors with LEDs. Currently that only makes economic sense for marijuana and some algae that produce high-quality nutrients. Indoor growing has the advantage that you need fewer pesticides when you can control the environment better.

It seems to me that next-generation nuclear power, which has the potential to produce more energy at a cheaper price, would help make us independent of the sun.

It meshes well with Peter Thiel, Bill Gates and Sam Altman all having invested money into nuclear solutions.

Comment author: Jiro 20 September 2017 04:02:41PM 0 points [-]

That can't explain it, unless the private beta is accessed by going somewhere other than lesserwrong.com. The site isn't going to know that someone is a participant in the private beta until they've logged in. And the problems I described happen prior to logging in.

Comment author: Lumifer 20 September 2017 03:50:31PM *  0 points [-]

Governments do it all the time -- see e.g. this. Also, in this context feasibility is relative -- how politically feasible is it to construct emergency-use-only machinery to gather and process leaves from a forest?

Comment author: ChristianKl 20 September 2017 03:47:18PM 0 points [-]

Why should there be a permanent loss of grains? It seems to me that reserve seeds are stored in many different places, with some of those places getting forgotten during a catastrophe and people rediscovering the contents later.

Comment author: ChristianKl 20 September 2017 03:42:11PM 0 points [-]

To me it seems politically unfeasible to pay for the creation of a multi-year storage of non-perishable food.

Comment author: Lumifer 20 September 2017 02:39:29PM *  0 points [-]

"sun being blocked by comments impact"

<grin>

"Extracting human edible calories from leaves would only work for those leaves that were green when the catastrophe happened. They could provide about half a year of food for everyone"

What kind of industrial base do you expect to continue functioning in the catastrophe's aftermath and be able to collect and process these green leaves while they are still green -- on the time scale of weeks, I assume?

And what is the advantage over having large stores of non-perishables?

Also, it's my impression that the biggest problem with avoiding famines is not food production, but rather logistics -- storage, transportation, and distribution. Right now the world has more than enough food for everyone, but food shortages in the third world, notably Africa, are common.

In the catastrophe scenario you have to assume political unrest, breakdown of transportation networks, etc.

Comment author: Lumifer 20 September 2017 02:23:56PM 0 points [-]

Figure out what satisfies the three criteria:

  • You like doing this
  • You are good at doing this
  • Other people value this (aka will pay you money for doing this)
Comment author: tenthkrige 20 September 2017 01:55:40PM 7 points [-]

I have also taken the survey.

Comment author: ChristianKl 20 September 2017 01:20:29PM 1 point [-]

I don't think downvotes should be costly. On StackExchange mediocre content can get a high score if it relates to a popular topic.

Given that this website has the goal of filtering content in a way that allows people who only want to read a subset to read the high-quality posts, downvotes of mediocre content are useful information.

Comment author: ChristianKl 20 September 2017 01:09:04PM 1 point [-]

When deciding whether to publish content, it seems to me important whether the content is welcome or not. Unclarity about the policy can hold people back from contributing.

Comment author: denkenberger 20 September 2017 01:07:59PM *  2 points [-]

In the case of the sun being blocked by comet impact, super volcanic eruption, or full-scale nuclear war with the burning of cities, there would be local devastation, but the majority of global industry would function. Most of our energy is not dependent on the sun. So it turns out the biggest problem is food, and arable land would not be valuable. Extracting human edible calories from leaves would only work for those leaves that were green when the catastrophe happened. They could provide about half a year of food for everyone, or more realistically 10% of food for five years.

I also work on the catastrophes that could disrupt electricity globally, such as an extreme solar storm, multiple high-altitude detonations of nuclear weapons around the world creating electromagnetic pulses (EMPs), and a super computer virus. Since nearly everything is dependent on electricity, this means we lose fossil fuel production and industry. In this case, energy is critical, but there are ways of dealing with it. So the food problem still turns out to be quite important (the sun is still shining, but we don't have fossil fuel based tractors, fertilizers and pesticides), though there are solutions for that.

Comment author: Dagon 20 September 2017 01:06:40PM 0 points [-]

Thomas probably had the right idea. Trying to deconstruct the "business is not human" confusion or "public interest is distinct from other behaviors" weirdness requires a lot more effort than I'm likely to put in.
