All of pcm's Comments + Replies

pcm10

There's something about reading the new style that makes me uncomfortable, and prompts me to skim some posts that I would have read more carefully on the old site. I'm not too clear on what causes that effect. I'm guessing that some of it is the excessive amount of white, causing modest sensory overload.

Some of it could be the fact that less of a post fits on a single screenful: I probably form initial guesses about a post's value based on the first screenful, and putting less substance on that first screenful leads me to guess that the post has less subst... (read more)

0PECOS-9
I think your point about less information per screen identifies what has been bothering me. It makes it much harder to skim or to refer back to previous paragraphs.
pcm20

Most of your post is good, but you're too eager to describe trends as mysterious.

Also, your link to "a previous post" is broken.

Moore's law appears to be a special case of Wright's Law. I.e. it seems well explained by experience curve effects (or possibly economies of scale).
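To make the "special case" claim concrete, here is a minimal sketch (my formulation and symbols, not the comment's): under Wright's Law, unit cost falls as a power of cumulative production, and if cumulative production grows exponentially in time, the cost decline becomes exponential in time, which is the Moore's-Law form.

```latex
% A sketch, not from the original comment: Wright's Law plus exponential
% production growth implies a Moore's-Law-style exponential cost decline.
% Wright's Law: unit cost as a power law of cumulative production x,
\[ C(x) = C_0 \, x^{-b} \]
% Assume cumulative production grows exponentially in time:
\[ x(t) = x_0 \, e^{g t} \]
% Substituting gives an exponential decline in cost over time:
\[ C(t) = C_0 \, x_0^{-b} \, e^{-b g t} \]
```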

Secondly, we have strong reasons to suspect that there won't be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These pheno

... (read more)
pcm10

I'm sometimes able to distinguish different types of feeling tired, based on what my system 1 wants me to do differently: sleep more, use specific muscles less, exercise more slowly, do less of a specific type of work, etc.

pcm10

Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence.

If I try to apply this to protein folding instead of intelligence, it sounds really strange.

Most people who make useful progress at protein folding appear to use a relatively tool-boxy approach. And they all appear to believe that quantum mechanics provides a very good theory of protein folding. Or at least it would be, given unbounded computing power.

Why is something similar not true for intelligence?

pcm00

I agree with most of what you said. But in addition to changing the community atmosphere, we can also change how guarded we feel in reaction to a given environment.

CFAR has helped me be more aware of when I'm feeling guarded (againstness), and has helped me understand that those feelings are often unnecessary and fixable.

Authentic relating events (e.g. Aletheia) have helped to train my subconscious to feel more safe about feeling less guarded in contexts such as LW meetups.

There's probably some sense in which I've lowered my standards, but that's mostly be... (read more)

pcm00

It isn't designed to describe the orthodox view. I think the ideas it describes are moderately popular among mainstream experts, but probably some experts dispute them.

pcm00

I enjoyed Shadow Syndromes, which is moderately close to what you asked for.

0ChristianKl
I'm seeking a book that lays out the orthodox mainstream view; is that the case for the book you recommend? (I generally don't have a problem with unorthodox views, but in this case I seek to develop clear knowledge of the orthodox view.)
pcm00

Henrich's The Secret of our Success isn't exactly about storytelling, but it provides a good enough understanding of human evolution that it would feel surprising to me if humans didn't tell stories.

0simbyotic
Already read it :) If you liked Henrich you will probably enjoy Kevin Laland's newest which gives a better picture of how humans evolved the capacities that Henrich talks about, and the extent to which some of those capacities are present in other animals as well.
pcm20

I'd guess the same fraction of people reacted disrespectfully to Gleb in each community (i.e. most but not all). The difference was more that in an EA context, people worried that he would shift money away from EA-aligned charities, but on LW he only wasted people's time.

pcm60

Some of what a CFAR workshop does is convince our system 1's that it's socially safe to be honest about having some unflattering motives.

Most attempts at doing that in written form would at most only convince our system 2. The benefits of CFAR workshops depend heavily on changing system 1.

Your question about prepping for CFAR sounds focused on preparing system 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing system 1 - minimize outside distractions, and have a list of problems with your life that you might want to sol... (read more)

pcm10

You write about its importance, yet I suspect EAs mostly avoid it due to doubts about tractability and neglectedness.

pcm100

From http://blog.givewell.org/2012/03/26/villagereach-update/:

We are also more deeply examining the original evidence of effectiveness for VillageReach’s pilot project. Our standards for evidence continue to rise, and our re-examination has raised significant questions that we intend to pursue in the coming months.

I had donated to VillageReach due to GiveWell's endorsement, and I found it moderately easy to notice that they had changed more than just the room for funding conclusion.

Benquo100

That update does seem straightforward, thanks for finding it. I see how people following the GiveWell blog at the time would have a good chance of noticing this. I wish it had been easier to find for people trying to do retrospectives.

pcm50

how much should I use this as an outside view for other activities of MIRI?

I'm unsure whether you should think of it as a MIRI activity, but to the extent you should, then it seems like moderate evidence that MIRI will try many uncertain approaches, and be somewhat sensible about abandoning the ones that reach a dead end.

pcm00

I think your conclusion might be roughly correct, but I'm confused by the way your argument seems to switch between claiming that an intelligence explosion will eventually reach limits, and claiming that recalcitrance will be high when AGI is at human levels of intelligence. Bostrom presumably believes there's more low-hanging fruit than you do.

pcm20

I subsidized some InTrade contracts in 2008. See here, here and here.

pcm10

See Rosati et al., The Evolutionary Origins of Human Patience: Temporal Preferences in Chimpanzees, Bonobos, and Human Adults, Current Biology (2007). Similar to the marshmallow test.

pcm40

See ontological crisis for an idea of why it might be hard to preserve a value function.

pcm20

My equivalent of this document focused more on the risks of unreasonable delays in uploading me. Cryonics organizations have been designed to focus on preservation, which seems likely to bias them toward indefinite delays. This might be especially undesirable in an "Age of Em" scenario.

Instead of your request for a "neutral third-party", I listed several specific people, who I know are comfortable with the idea of uploading, as people whose approval would be evidence that the technology is adequate to upload me. I'm unclear on how hard it would be to find a genuinely neutral third party.

My document is 20 years old now, and I don't have a copy handy. I suppose I should update it soon.

pcm00

I expect that MIRI would mostly disagree with claim 6.

Can you suggest something specific that MIRI should change about their agenda?

When I try to imagine problems for which imperfect value loading suggests different plans from perfectionist value loading, I come up with things like "don't worry about whether we use the right set of beings when creating a CEV". But MIRI gives that kind of problem low enough priority that they're acting as if they agreed with imperfect value loading.

0WhySpace
I'm pretty sure I also mostly disagree with claim 6. (See my other reply below.) The only specific concrete change that comes to mind is that it may be easier to take one person's CEV than aggregate everyone's CEV. However, this is likely to be trivially true, if the aggregation method is something like averaging. If that's 1 or 2 more lines of code, then obviously it doesn't really make sense to try and put those lines in last to get FAI 10 seconds sooner, except in a sort of spherical cow in a vacuum sort of sense.

However, if "solving the aggregation problem" is a couple years' worth of work, maybe it does make sense to prioritize other things first in order to get FAI a little sooner. This is especially true in the event of an AI arms race.

I'm especially curious whether anyone else can come up with scenarios where a maxipok strategy might actually be useful. For instance, is there any work being done on CEV which is purely on the extrapolation procedure or procedures for determining coherence? It seems like if only half our values can easily be made coherent, and we can load them into an AI, that might generate an okay outcome.
pcm80

No, mainly because Elon Musk's concern about AI risk added more prestige than Thiel had.

pcm20

There's no particular reason to believe all of his predictions. But that's also true of anyone else who makes as many predictions as the book does (on similar topics).

When you say "anticipate the future the way he does", are you asking whether you should believe there's a 10% chance of his scenario being basically right?

Nobody should have much confidence in such predictions, and when Robin talks explicitly about his confidence, he doesn't sound very confident.

Good forecasters consider multiple models before making predictions (see Tetlock's work). Reading the book is a better way for most people to develop an additional model of how the future might be than reading new LW comments.

0MrMind
If your model doesn't even get to 10%, then I say: unless you have hundreds of competing models in your mind (and who does?), don't even bother. Your comment helped me reach the conclusion that reading AoE would be a waste of time.
pcm10

See Seasteading. No good book on it yet, but one will be published in March (by Joe Quirk and LWer Patri Friedman).

pcm-20

I suggest reading Henrich's book The Secret of our Success. It describes a path to increased altruism that doesn't depend on any interesting mutation. It involves selection pressures acting on culture.

pcm00

There used to be important differences between stocks and futures (back when futures exchanges used open outcry) that (I think) enabled futures brokers to delay decisions about which customer got which trade price.

pcm10

It has nearly the opposite effects for ideas I haven't yet bet on but might feel tempted or obligated to bet on.

The bad effects are weaker if I can get out of the bet easily (as is the case on a high-volume prediction market).

pcm00

Peer pressure matters, and younger people are less able to select rationalist-compatible peers (due to less control over who their peers are).

I suspect younger people have short enough time horizons that they're less able to appreciate some of CFAR's ideas that take time to show benefits. I suspect I have more intuitions along these lines that I haven't figured out how to articulate.

Maybe CFAR needs better follow-ups to their workshops, but I get the impression that with people for whom the workshops are most effective, they learn (without much follow-up) to generalize CFAR's ideas in ways that make additional advice from CFAR unimportant.

pcm00

I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.

0Squark
Hi Peter! I am Vadim; we met at a LW meetup in CFAR's office last May. You might be right that SPARC is important, but I really want to hear from the horse's mouth what their strategy is in this regard. I'm inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess that establishing a continuous education programme would have much more impact than a two-week summer workshop. It's not obvious what the optimal distribution of resources is (many two-week workshops for many people or one long program for fewer people), but I haven't seen such an analysis by CFAR.
pcm30

Another factor to consider: If AGI is 30+ years away, we're likely to have another "AI winter". Saving money to donate during that winter has some value.

pcm10

I've felt that lack of curiosity a fair amount over the past 5-10 years. I suspect the biggest change that reduced my curiosity was becoming financially secure. Or maybe some other changes which made me feel more secure.

I doubt that I ever sought knowledge for the sake of knowledge, even when it felt like I was doing that. It seems more plausible that I had hidden motives such as the desire to impress people with the breadth or sophistication of my knowledge.

LessWrong attitudes toward politics may have reduced some aspects of my curiosity by making it clea... (read more)

0timujin
I am definitely not better off without what I lost. Genuine curiosity had a tremendously powerful effect on my learning.
pcm10

For Omnivores:

  • Do you think the level of meat consumption in America is healthy for individuals? Do you think it's healthy for the planet?

The level is healthy for individuals. But that includes way too much meat that has been processed dangerously (bacon, sausage), and not enough minimally processed seafood.

It's not good for the planet. I want to deal with that by uploading my mind. Some large changes of that nature will make current meat production problems irrelevant in a few decades.

  • How do you feel about factory farming? Would you pay twice as much
... (read more)
pcm20

Yes, for strategies with low enough transaction costs (i.e. for most buy-and-hold like strategies, but not day-trading).

It will be somewhat hard for ordinary investors to implement the inverse strategies, since brokers that cater to them restrict which stocks they can sell short (professional investors usually don't face this problem).

The EMH is only a loose approximation to reality, so it's not hard to find strategies that underperform on average by something like 5% per year.

pcm80

One of the stronger factors influencing the frequency of wars is the ratio of young men to older men. Life extension would change that ratio to imply fewer wars. See http://earthops.org/immigration/Mesquida_Wiener99.pdf.

Stable regimes seem to have less need for oppression than unstable ones. So while I see some risk that mild oppression will be more common with life extension, I find it hard to see how that would increase existential risks.

3knb
But why do young men cause wars (assuming they do)? If everyone remains biologically 22 forever, are they psychologically more similar to actual 22-year-olds or to whatever their chronological age is? If younger men are more aggressive due to higher testosterone levels (or whatever), agelessness might actually have the opposite effect, increasing the percentage of the male population which is aggressive.
4G0W51
Oppression could cause an existential catastrophe if the oppressive regime is never ended.
pcm00

Some of the discussion has moved to CFAR, although that involves more focus on how to get better cooperation between System 1 and System 2, and less on avoiding specific biases.

Maybe the most rational people don't find time to take surveys?

pcm60

Signing up didn't bring me peace of mind, except for brief relief at not having the paperwork on my to-do list.

I've heard other cryonicists report feeling something like peace of mind as a result of signing up, but they appear to be a minority.

pcm40

In Chinese grocery stores and restaurants, I see about as much veggie fish/shrimp as veggie beef/chicken, and it tastes about as good. But the veggie fish and shrimp taste less like real fish/shrimp than veggie beef/chicken taste like real beef/chicken. So it may be that similar effort went into each, and many cultures were less satisfied with the results for fish.

0NancyLebovitz
It may be possible to do better vegetarian "fish" with modern technology, but I haven't heard of anyone working on it.
pcm00

See discussions of utility monsters. Don't assume that many people here support pure utilitarianism.

0Jookidook
Thanks for the link, and sorry for the presumption. The question occurred to me and this was the first place I thought to ask.
pcm70

Crickets at $38/pound dry weight are close to being competitive with salmon (more than 3 pounds needed to get the equivalent nutrition). Or $23/pound in Thailand (before high shipping fees), suggesting the cost in the U.S. will drop a bit as increased popularity causes more competition and economies of scale.
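A back-of-the-envelope version of that comparison, as a sketch only: the salmon price and the direction of the "more than 3 pounds" nutrition ratio below are my assumptions, not figures from the comment.

```python
# Illustrative only: compares the cost of 1 lb of dried crickets with the
# cost of the salmon assumed to provide equivalent nutrition. The cricket
# price comes from the comment; the salmon price and the ~3:1 ratio are
# hypothetical assumptions.
cricket_usd_per_dry_lb = 38.0     # U.S. price, from the comment
salmon_usd_per_lb = 12.0          # assumed retail salmon price
salmon_lb_per_cricket_lb = 3.0    # assumed nutrition-equivalence ratio

salmon_equivalent_cost = salmon_usd_per_lb * salmon_lb_per_cricket_lb
print(f"1 lb dried crickets (U.S.): ${cricket_usd_per_dry_lb:.0f}")
print(f"~{salmon_lb_per_cricket_lb:.0f} lb salmon (assumed equivalent): ${salmon_equivalent_cost:.0f}")
```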

1Omid
Could you breed crickets at home?
pcm10

It is sometimes possible to die by refusing to eat/drink. Ben Best has some conflicting claims about how feasible that is with Alzheimer's here and here.

pcm30

What evidence do we have about whether cryonics will work for those who die of Alzheimer's?

3Baughn
If you have Alzheimer's, and you want to use cryonics, you should do your very best to get frozen well before you die of the disease. This is problematic in all jurisdictions I can think of. Even where euthanasia is legal, I don't know of any cryonics organisations taking advantage, and there might be problems for them if they do. I'd very much like to be proven wrong in this.
3JoshuaZ
Decidedly mixed. In the very late stages of Alzheimer's large sections of brain tissue are literally gone. See e.g. here. On the other hand, even with fairly late stage patients they do have better and worse days where they remember more or less, which suggests that some memories are still present. We also know that in some animal models treatment can apparently restore some amount of memory. See for example here (which may be behind a paywall). That last link is to some very recent research suggesting a form of high powered ultrasound may actually help Alzheimer's in mouse models, and there's decent reason to believe that this will work in humans.
pcm20

In many wars, those who fight get a much higher reputation than those who were expected to fight but refused. This has often translated into a reproductive advantage for those who fought. It's not obviously irrational to want that reproductive advantage or something associated with it.

pcm50

I started alternate day calorie restriction last month. I expect it to be one of the best lifestyle changes for increasing my life expectancy.

I've become comfortable enough with it that it no longer requires significant willpower to continue. I think I have slightly more mental energy than before I started (but for the first 17 days, I had drastically lower mental energy).

I have a longer post about this on my blog.

pcm00

Ralph Merkle's cryonics page is a good place to start. His 1994 paper on The Molecular Repair of the Brain seems to be the most technical explanation of why it looks feasible.

Since whole brain emulation is expected to use many of the same techniques, that roadmap (long pdf) is worth looking at.

pcm30

I'm unclear on how the probability distribution over utility functions would be implemented. A complete specification of how to evaluate evidence seems hard to do right. Also, why should we expect we can produce a pool of utility functions that includes an adequate one?

pcm10

If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.

If we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we're uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.

pcm20

This post is definitely an attempt to answer the question "What should I eat?", not "What's the best thing I can do about multipolar takeoff?". I didn't mean to imply that my concerns over multipolar takeoff are the only reason for my change in diet. I focused on that because others have given it too little attention.

I would certainly like to do more to increase respect for property rights, but the obvious approaches involve partisan politics that already attract lots of effort on both sides.

pcm00

I suggest Geoffrey Miller's book The Mating Mind. Or search for sexual selection.
