All of NineDimensions's Comments + Replies

I lost 25kg in 9 months using a very similar method. Some suggestions to help with the hunger/willpower:

1. Brush your teeth straight after dinner. It adds a point of friction between you and more food ("I'd have to brush my teeth again"). Then drink herbal teas in the evening.

2. You don't have to feel hungry all the time. Choose when you consume your calories so that you're hungry at the least inconvenient times and for the fewest waking hours. I usually eat a decent breakfast as late as I can, because I'm not as hungry in the mornings. Then I eat two half... (read more)

In some cases something like this might work:

"The plumber says it's fixed, so hopefully it is"

Or

"The plumber says it's fixed, so it probably is"

Which I think conveys "there's an assumption I'm making here, but I'm just putting a flag in the ground to return to if things don't play out as expected".

I don't share your intuition here. I think many people would see blue as the "band together" option and would have confidence that others will do the same. For the average responder, the question would reduce to "choose blue to signal trust in humanity, choose red to signal selfish cowardice".

"Innate faith in human compassion, especially in a crisis" is the co-ordination mechanism, and I think there is pretty strong support for that notion if you look at how we respond to crises in real life and how we depict them in fiction. That is the narrative we tell ourselves at least, but narrative is what's important here.

I would be surprised if blue was less than 30%, and would predict around 60%.

I'm not quite sure what you mean by that.

Unless I expect the pool of responders to be 100% rational and to all choose red, I should expect some to choose blue. Since I (and presumably other responders) do expect some to choose blue, that makes >50% blue the preferred outcome. Universal red is just not a realistic outcome.

Whether or not I choose blue then depends on factors like how I value the lives of others compared to my own, the number of responders, etc. - as in the equations in your post.
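For concreteness, here's a minimal toy version of that calculation (my own simplification with made-up numbers; the post's actual equations may differ). It assumes the usual rules: blue-pill takers die unless blue reaches the 50% threshold, red-pill takers always survive.

```python
# Toy expected-cost model for the pill poll. Assumed rules: blue-pill
# takers die unless the blue share reaches 50%; red-pill takers always live.

def expected_cost(choice, p_blue_wins, expected_blue_others, altruism):
    """Expected cost of my choice, in units of 'my own life'.

    p_blue_wins:          my credence that blue reaches the 50% threshold
    expected_blue_others: expected number of *other* responders choosing blue
    altruism:             value I place on one stranger's life vs my own
    """
    # If blue falls short, every blue-chooser dies; if it wins, nobody does.
    cost = (1 - p_blue_wins) * expected_blue_others * altruism
    if choice == "blue":
        cost += 1 - p_blue_wins  # my own risk of dying
    return cost

# Hypothetical numbers: 400 other responders expected to pick blue, a 30%
# chance blue reaches the threshold, strangers valued at 1% of my own life.
for choice in ("blue", "red"):
    print(choice, round(expected_cost(choice, 0.30, 400, 0.01), 2))
# blue 3.5 / red 2.8: red wins with these numbers, but raise altruism or
# p_blue_wins and blue overtakes it. A fuller model would also let my own
# pick nudge p_blue_wins upward, since responders' choices correlate.
```

The point is just that neither pick dominates once you admit nonzero blue-choosers; the answer really does hinge on those parameters.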

Empirically, as GeneSmith points out, something is wrong with Wal... (read more)

5dr_s
Both all red and all blue are rational if I can expect everyone else to follow the same logic as me. Which one you prefer depends only on the amount of disagreement you expect and the value you place on other lives compared to your own. In any world that goes "I am perfectly rational, everyone else is too, and thus they will do the same as me", it's irrelevant what you pick.

A LOT depends on how you model the counterfactual of this poll being real and having consequences.  I STRONGLY predict that 90+% of people who are given the poll, along with enough evidence that they believe the consequences are real, will pick red.  Personal safety aligns with back-of-the-envelope calculations here - unless you can be pretty sure of meeting the blue threshold, you're basically committing suicide by picking blue.  And if it's well over 50% blue without you, you may as well choose red then, too.

There IS a superrationality arg... (read more)

No matter what the game theory says, a non-zero number of people will choose blue and thus die under this equilibrium. This fact - that getting to >50% blue is the only way to save absolutely everyone - is enough for me to consider choosing blue and to hope that others reason the same way (which, in a self-fulfilling way, strengthens the case for choosing blue).

2Ege Erdil
That would be questioning the assumption that your cost function as an altruist should be linear in the number of lives lost. I'm not sure why you would question this assumption, though; it seems rather unnatural to make this a concave function, which is what you would need for your logic to work.
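To make the shape of that assumption concrete, here is a toy illustration with made-up numbers: compare a certain 50 deaths against a 50/50 gamble between 0 and 100 deaths. A linear cost function is indifferent between the two; only a concave one prefers the gamble, which is the shape the "go for >50% blue to save absolutely everyone" reasoning needs.

```python
import math

lottery = [(0.5, 0), (0.5, 100)]  # 50/50: save everyone, or lose 100 lives
certain = 50                      # guaranteed 50 deaths

linear  = lambda d: d             # cost linear in lives lost
concave = lambda d: math.sqrt(d)  # concave: going from 1 death to 0 counts
                                  # for more than any other single life

for name, cost in (("linear", linear), ("concave", concave)):
    gamble = sum(p * cost(d) for p, d in lottery)
    print(f"{name}: gamble {gamble:.2f} vs certain {cost(certain):.2f}")
# linear:  gamble 50.00 vs certain 50.00 (indifferent; only the mean matters)
# concave: gamble  5.00 vs certain  7.07 (the shot at zero deaths wins)
```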

Edit - This turned out pretty long, so in short:

What reason do we have to believe that humans aren't already close to maxing out the gains one can achieve from intelligence, or at least having those gains in our sights?



One crux of the ASI risk argument is that ASI will be unfathomably powerful thanks to its intelligence. One oft-given analogy for this is that the difference in intelligence between humans and chimps is enough that we are able to completely dominate the earth while they become endangered as an unintended side effect. And we should expect the... (read more)

4AnthonyC
It's a good question. There's a lot of different responses that all contribute to my own understanding here, so I'll just list out a few reasons I personally do not think humans have anywhere near maxed out the gains achievable through intelligence.

1. Over time the forces that have pushed humanity's capabilities forward have been knowledge-based (construed broadly, including science, technology, engineering, law, culture, etc.).
2. Due to the nature of the human life cycle, humans spend a large fraction of our lives learning a small fraction of humanity's accumulated knowledge, which we exploit in concert with other humans with different knowledge in order to do all the stuff we need to do.
3. No one has a clear view of every aspect of the overall system, leading to very easily identifiable flaws, inefficiencies, and should-be-avoidable systemic failures.
4. The boundaries of knowledge tend to get pushed forward either by those near the forefront of their particular field of expertise, or by polymaths with some expertise across multiple disciplines relevant to a problem. These are the people able to accumulate existing knowledge quickly in order to reach the frontiers, aka people who are unusually intelligent in the forms of intelligence needed for their chosen field(s).
5. Because of the way human minds work, we are subject to many obvious limitations. Limited working memory. Limited computational clock speed. Limited fidelity and capacity for information storage and recollection. Limited ability to direct our own thoughts and attention. No direct access to the underlying structure of our own minds/brains. Limited ability to coordinate behavior and motivation between individuals. Limited ability to share knowledge and skills.
6. Many of these restrictions would not apply to an AI. An AI can think orders of magnitude faster than a human, about many things at once, without ever forgetting anything or getting tired/distracted/bored. It can copy itself, includ... (read more)

Humans still get advantages from more or less smarts within the human range - we can use this to estimate the slope of returns on intelligence near human intelligence.

This doesn't rule out that maybe at the equivalent of IQ 250, all the opportunities for being smart get used up and there's no more benefits to be had - for that, maybe try to think about some equivalent of intelligence for corporations, nations, or civilizations - are there projects that a "smarter" corporation (maybe one that's hired 2x as many researchers) could do that a "dumber" corporat... (read more)

1Domenic
I am sympathetic to this viewpoint. However, I think there are large-enough gains to be had from "just" an AI that: matches genius-level humans; has N times larger working memory; thinks N times faster; has "tools" (like calculators or Mathematica or Wikipedia) integrated directly into its "brain"; and is infinitely copyable. That gets you to https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message territory, which is quite x-risky.

Working memory is a particularly powerful intuition pump here, I think. Given that you can hold 8 items in working memory, it feels clear that something which can hold 800 or 8 million such items would be qualitatively impressive, even if it's just a quantitative scale-up.

You can then layer on more speculative things from the ability to edit the neural net. E.g., for humans, we are all familiar with flow states where we are at our best, most well-rested, most insightful. It's possible that if your brain is artificial, reproducing that on demand is easier. The closest thing we have to this today is prompt engineering, which is a very crude way of steering AIs to use the "smarter" paths when navigating through their stack of transformer layers. But the existence of such paths likely means we, or an introspective AI, could consciously promote their use.
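One back-of-the-envelope way to see why that scale-up could be qualitative rather than merely quantitative (a toy illustration, using the item counts from the intuition pump above): the number of pairwise relations you can hold in mind at once grows quadratically with the number of items.

```python
from math import comb

# Pairwise relations among n items held in working memory: C(n, 2).
for n in (8, 800, 8_000_000):
    print(f"{n:>9,} items -> {comb(n, 2):>20,} simultaneous pairwise relations")
# 8         items ->                   28
# 800       items ->              319,600
# 8,000,000 items ->   31,999,996,000,000
```

A 100x jump in items buys roughly a 10,000x jump in relational structure, before even counting higher-order combinations.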