Most of your post is good, but you're too eager to describe trends as mysterious.
Also, your link to "a previous post" is broken.
Moore's law appears to be a special case of Wright's Law. I.e. it seems well explained by experience curve effects (or possibly economies of scale).
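To spell out the experience-curve point (a minimal sketch; the exponent b and the growth rate g below are illustrative placeholders, not fitted values): Wright's Law says unit cost falls as a power of cumulative production x,

    C(x) = C_1 * x^(-b),

and if cumulative production happens to grow roughly exponentially in time, x(t) = x_0 * e^(g*t), then

    C(t) = C_1 * x_0^(-b) * e^(-b*g*t),

i.e. cost falls exponentially in calendar time, which is the Moore's-law pattern. So a Moore's-law trend doesn't need a separate mysterious cause beyond learning-by-doing plus growing production.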
...Secondly, we have strong reasons to suspect that there won't be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These pheno
I'm sometimes able to distinguish different types of feeling tired, based on what my system 1 wants me to do differently: sleep more, use specific muscles less, exercise more slowly, do less of a specific type of work, etc.
Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence.
If I try to apply this to protein folding instead of intelligence, it sounds really strange.
Most people who make useful progress at protein folding appear to use a relatively tool-boxy approach. And they all appear to believe that quantum mechanics provides a very good theory of protein folding. Or at least it would be, given unbounded computing power.
Why is something similar not true for intelligence?
I agree with most of what you said. But in addition to changing the community atmosphere, we can also change how guarded we feel in reaction to a given environment.
CFAR has helped me be more aware of when I'm feeling guarded (againstness), and has helped me understand that those feelings are often unnecessary and fixable.
Authentic relating events (e.g. Aletheia) have helped train my subconscious to feel safer about being less guarded in contexts such as LW meetups.
There's probably some sense in which I've lowered my standards, but that's mostly be...
It isn't designed to describe the orthodox view. I think the ideas it describes are moderately popular among mainstream experts, but probably some experts dispute them.
I enjoyed Shadow Syndromes, which is moderately close to what you asked for.
Henrich's The Secret of our Success isn't exactly about storytelling, but it provides a good enough understanding of human evolution that it would feel surprising to me if humans didn't tell stories.
I'd guess the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most but not all). The difference was more that in an EA context, people worried that he would shift money away from EA-aligned charities, but on LW he only wasted people's time.
Some of what a CFAR workshop does is convince our system 1's that it's socially safe to be honest about having some unflattering motives.
Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.
Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to sol...
You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.
From http://blog.givewell.org/2012/03/26/villagereach-update/:
We are also more deeply examining the original evidence of effectiveness for VillageReach’s pilot project. Our standards for evidence continue to rise, and our re-examination has raised significant questions that we intend to pursue in the coming months.
I had donated to VillageReach due to GiveWell's endorsement, and I found it moderately easy to notice that they had changed more than just the room for funding conclusion.
That update does seem straightforward, thanks for finding it. I see how people following the GiveWell blog at the time would have a good chance of noticing this. I wish it had been easier to find for people trying to do retrospectives.
how much should I use this as an outside view for other activities of MIRI?
I'm unsure whether you should think of it as a MIRI activity, but to the extent you should, then it seems like moderate evidence that MIRI will try many uncertain approaches, and be somewhat sensible about abandoning the ones that reach a dead end.
I think your conclusion might be roughly correct, but I'm confused by the way your argument seems to switch between claiming that an intelligence explosion will eventually reach limits, and claiming that recalcitrance will be high when AGI is at human levels of intelligence. Bostrom presumably believes there's more low-hanging fruit than you do.
See Rosati et al., The Evolutionary Origins of Human Patience: Temporal Preferences in Chimpanzees, Bonobos, and Human Adults, Current Biology (2007). Similar to the marshmallow test.
My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.
Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.
My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.
I expect that MIRI would mostly disagree with claim 6.
Can you suggest something specific that MIRI should change about their agenda?
When I try to imagine problems for which imperfect value loading suggests different plans from perfectionist value loading, I come up with things like "don't worry about whether we use the right set of beings when creating a CEV". But MIRI gives that kind of problem low enough priority that they're acting as if they agreed with imperfect value loading.
No, mainly because Elon Musk's concern about AI risk added more prestige than Thiel had.
There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics).
When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right?
Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident.
Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.
See Seasteading. No good book on it yet, but one will be published in March (by Joe Quirk and LWer Patri Friedman).
I suggest reading Henrich's book The Secret of our Success. It describes a path to increased altruism that doesn't depend on any interesting mutation. It involves selection pressures acting on culture.
There used to be important differences between stocks and futures (back when futures exchanges used open outcry) that (I think) enabled futures brokers to delay decisions about which customer got which trade price.
It has nearly the opposite effects for ideas I haven't yet bet on but might feel tempted or obligated to bet on.
The bad effects are weaker if I can get out of the bet easily (as is the case on a high-volume prediction market).
Peer pressure matters, and younger people are less able to select rationalist-compatible peers (due to less control over who their peers are).
I suspect younger people have short enough time horizons that they're less able to appreciate some of CFAR's ideas that take time to show benefits. I suspect I have more intuitions along these lines that I haven't figured out how to articulate.
Maybe CFAR needs better follow-ups to their workshops, but I get the impression that the people for whom the workshops are most effective learn (without much follow-up) to generalize CFAR's ideas in ways that make additional advice from CFAR unimportant.
I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.
Another factor to consider: If AGI is 30+ years away, we're likely to have another "AI winter". Saving money to donate during that winter has some value.
I've felt that lack of curiosity a fair amount over the past 5-10 years. I suspect the biggest change that reduced my curiosity was becoming financially secure. Or maybe some other changes which made me feel more secure.
I doubt that I ever sought knowledge for the sake of knowledge, even when it felt like I was doing that. It seems more plausible that I had hidden motives such as the desire to impress people with the breadth or sophistication of my knowledge.
LessWrong attitudes toward politics may have reduced some aspects of my curiosity by making it clea...
For Omnivores:
The level is healthy for individuals. But that includes way too much meat that has been processed dangerously (bacon, sausage), and not enough minimally processed seafood.
It's not good for the planet. I want to deal with that by uploading my mind. Some large changes of that nature will make current meat production problems irrelevant in a few decades.
Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).
It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).
The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.
One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio to imply fewer wars. See http://earthops.org/immigration/Mesquida_Wiener99.pdf.
Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.
Some of the discussion has moved to CFAR, although that involves more focus on how to get better cooperation between System 1 and System 2, and less on avoiding specific biases.
Maybe the most rational people don't find time to take surveys?
Signing up didn't bring me peace of mind, except for brief relief at not having the paperwork on my to-do list.
I've heard other cryonicists report feeling something like peace of mind as a result of signing up, but they appear to be a minority.
In Chinese grocery stores and restaurants, I see about as much veggie fish/shrimp as veggie beef/chicken, and it tastes about as good. But the veggie fish and shrimp taste less like real fish/shrimp than veggie beef/chicken taste like real beef/chicken. So it may be that similar effort went into each, and many cultures were less satisfied with the results for fish.
See discussions of utility monsters. Don't assume that many people here support pure utilitarianism.
Crickets at $38/pound dry weight are close to being competitive with salmon (more than 3 pounds needed to get the equivalent nutrition). Or $23/pound in Thailand (before high shipping fees), suggesting the cost in the U.S. will drop a bit as increased popularity causes more competition and economies of scale.
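As a rough back-of-the-envelope check (the salmon price here is my assumption for illustration, not a figure from the comparison above): if salmon runs around $12-13 per pound, then

    3 lb salmon x ~$12.5/lb ≈ $38,

which is roughly the $38 per pound (dry weight) quoted for crickets, hence "close to being competitive".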
What evidence do we have about whether cryonics will work for those who die of Alzheimer's?
In many wars, those who fight get a much higher reputation than those who were expected to fight but refused. This has often translated into a reproductive advantage for those who fought. It's not obviously irrational to want that reproductive advantage or something associated with it.
I started alternate day calorie restriction last month. I expect it to be one of the best lifestyle changes for increasing my life expectancy.
I've become comfortable enough with it that it no longer requires significant willpower to continue. I think I have slightly more mental energy than before I started (but for the first 17 days, I had drastically lower mental energy).
I have a longer post about this on my blog.
Ralph Merkle's cryonics page is a good place to start. His 1994 paper on The Molecular Repair of the Brain seems to be the most technical explanation of why it looks feasible.
Since whole brain emulation is expected to use many of the same techniques, that roadmap (long pdf) is worth looking at.
I'm unclear on how the probability distribution over utility functions would be implemented. A complete specification of how to evaluate evidence seems hard to do right. Also, why should we expect we can produce a pool of utility functions that includes an adequate one?
If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.
If we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we're uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.
This post is definitely an attempt to answer the question 'What should I eat?', not "What's the best thing I can do about multipolar takeoff?". I didn't mean to imply that my concerns over multipolar takeoff are the only reason for my change in diet. I focused on that because others have given it too little attention.
I would certainly like to do more to increase respect for property rights, but the obvious approaches involve partisan politics that already attract lots of effort on both sides.
I suggest Geoffrey Miller's book The Mating Mind. Or search for sexual selection.
There's something about reading the new style that makes me uncomfortable, and prompts me to skim some posts that I would have read more carefully on the old site. I'm not too clear on what causes that effect. I'm guessing that some of it is the excessive amount of white, causing modest sensory overload.
Some of it could be the fact that less of a post fits on a single screenful: I probably form initial guesses about a post's value based on the first screenful, and putting less substance on that first screenful leads me to guess that the post has less subst...