Comment author: NancyLebovitz 12 April 2015 06:10:55AM 1 point [-]

Does anyone know why a lot of work has gone in to vegetable-based imitation beef and chicken, but not into good imitation fish?

Comment author: pcm 12 April 2015 07:23:10PM 2 points [-]

In Chinese grocery stores and restaurants, I see about as much veggie fish/shrimp as veggie beef/chicken, and it tastes about as good. But the veggie fish and shrimp taste less like real fish/shrimp than veggie beef/chicken taste like real beef/chicken. So it may be that similar effort went into each, and many cultures were less satisfied with the results for fish.

Comment author: Jookidook 29 March 2015 04:33:09PM 1 point [-]

Let's say we have the capability to create living creatures, and some bored scientist makes one that is relatively intelligent (enough to be considered a person in a meaningful, human sense), capable of language, requiring little sustenance, capable of reproduction, and completely and utterly happy except under the most terrible circumstances. Would the utilitarian view of the situation be to convert all usable resources to create habitats for these critters? Would the moral thing be to give the world over to them because they're better at not making each other's lives terrible/are happier for the same amount of resources?

I'm pretty new around here so please forgive the sheer newbishness of my question.

Comment author: pcm 30 March 2015 03:55:55PM 0 points [-]

See discussions of utility monsters. Don't assume that many people here support pure utilitarianism.

Comment author: pcm 14 March 2015 09:08:40PM 5 points [-]

Crickets at $38/pound dry weight are close to being competitive with salmon (more than 3 pounds of salmon are needed to match the nutrition in a pound of dried crickets). Or $23/pound in Thailand (before high shipping fees), suggesting the cost in the U.S. will drop a bit as increased popularity causes more competition and economies of scale.
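A back-of-the-envelope sketch of what "competitive" means here, using only the figures above (the break-even salmon prices are derived, not sourced):

```python
# One pound of dry-weight crickets ($38 in the U.S., $23 in Thailand)
# replaces more than 3 pounds of salmon nutritionally, so crickets win
# on price whenever salmon costs more than the break-even figure below.
cricket_price_us = 38.0    # USD per pound, dry weight
cricket_price_thai = 23.0  # USD per pound, before shipping
salmon_pounds = 3.0        # pounds of salmon for equivalent nutrition

breakeven_us = cricket_price_us / salmon_pounds
breakeven_thai = cricket_price_thai / salmon_pounds
print(f"U.S.: salmon above ${breakeven_us:.2f}/lb makes crickets cheaper")
print(f"Thai: salmon above ${breakeven_thai:.2f}/lb makes crickets cheaper")
```

Since "more than 3 pounds" is a lower bound, the true break-even salmon prices are somewhat below ~$12.67 and ~$7.67 per pound respectively.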

Comment author: Baughn 13 March 2015 01:23:08AM 2 points [-]

If you have Alzheimer's, and you want to use cryonics, you should do your very best to get frozen well before you die of the disease.

This is problematic in all jurisdictions I can think of. Even where euthanasia is legal, I don't know of any cryonics organisations taking advantage, and there might be problems for them if they do. I'd very much like to be proven wrong in this.

Comment author: pcm 14 March 2015 08:46:12PM 1 point [-]

It is sometimes possible to die by refusing to eat/drink. Ben Best has some conflicting claims about how feasible that is with Alzheimer's here and here.

Comment author: pcm 12 March 2015 07:43:05PM 2 points [-]

What evidence do we have about whether cryonics will work for those who die of Alzheimer's?

Comment author: G0W51 24 February 2015 12:29:40AM 0 points [-]

I thought of a situation in which individuals seem to act irrationally, but I don't know of any cognitive bias that would cause it. Some individuals seem willing to fight in wars to "help out," despite having a small risk of being killed. E.g. some are willing to accept a 1/100 chance of being killed for a 1/100,000 chance of causing their nation to win the war, meaning that if they decided not to join the war, their nation would be 1/100,000 more likely to lose. However, people seem much less willing to accept certain death (a 1/1 chance of being killed) for a 1/1000 chance of causing their nation to win the war. Assuming one's utility is a weighted linear sum of whether they die (with 1 meaning they died and 0 meaning they lived) and whether their nation loses the war (with 1 meaning lose and 0 meaning win), I don't know of any weights that would make fighting worthwhile in the former scenario but not in the latter. Are people just acting irrationally, or is my model wrong? If they are acting irrationally, what bias is causing them to do so?
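The inconsistency in the linear model can be checked directly: with disutility a·P(die) + b·ΔP(lose), fighting in the first scenario requires b/100,000 ≥ a/100, i.e. b ≥ 1000a, while refusing in the second requires a > b/1000, i.e. b < 1000a, so no positive weights satisfy both. A quick sketch (exact arithmetic via `fractions` to avoid floating-point edge cases):

```python
from fractions import Fraction as F

def worth_fighting(p_die, delta_p_win, a, b):
    # Fighting is rational under the linear model when the expected
    # shift in the war outcome outweighs the expected cost of dying.
    return b * delta_p_win >= a * p_die

# Scenario 1: 1/100 chance of dying, 1/100,000 shift toward winning.
# Scenario 2: certain death, 1/1000 shift toward winning.
# Search weight ratios b/a for any that make scenario 1 worth fighting
# but scenario 2 not (only the ratio matters, so fix a = 1).
hits = [
    ratio
    for ratio in range(1, 10001)
    if worth_fighting(F(1, 100), F(1, 100000), 1, ratio)
    and not worth_fighting(F(1, 1), F(1, 1000), 1, ratio)
]
print(hits)  # [] -- no linear weights justify both choices
```

This confirms the comment's point: any bias explanation has to go outside a model that is linear in death and in the war's outcome.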

Comment author: pcm 24 February 2015 04:20:54PM 2 points [-]

In many wars, those who fight get a much higher reputation than those who were expected to fight but refused. This has often translated into a reproductive advantage for those who fought. It's not obviously irrational to want that reproductive advantage or something associated with it.

Comment author: pcm 23 February 2015 05:38:37PM 4 points [-]

I started alternate day calorie restriction last month. I expect it to be one of the best lifestyle changes for increasing my life expectancy.

I've become comfortable enough with it that it no longer requires significant willpower to continue. I think I have slightly more mental energy than before I started (but for the first 17 days, I had drastically lower mental energy).

I have a longer post about this on my blog.

Comment author: Capla 20 February 2015 06:38:48PM 5 points [-]

I want to spend a few weeks seriously looking into cryonics: how it works, the costs, the theory about revival, the changes in the technology in the past 60 years, the options that are available.

I want to become an expert in cryonics to the extent that I can answer, in depth, the questions that people typically have when they hear about this "crazy idea" for the first time. {Hmm...That sounds a little like bottom-line reasoning, trying to prepare for objections, instead of ferreting out the truth. I'll have to be careful of that. To be fair, I will need to overcome objections to get my family to sign up. Still, be careful of looking for data just to affirm my naive presumption.}

What should I read?

Comment author: pcm 22 February 2015 03:37:28PM 0 points [-]

Ralph Merkle's cryonics page is a good place to start. His 1994 paper on The Molecular Repair of the Brain seems to be the most technical explanation of why it looks feasible.

Since whole brain emulation is expected to use many of the same techniques, that roadmap (long pdf) is worth looking at.

Comment author: KatjaGrace 03 February 2015 12:53:42AM 1 point [-]

What do you think of Dewey's proposal?

Comment author: pcm 05 February 2015 07:41:18PM 2 points [-]

I'm unclear on how the probability distribution over utility functions would be implemented. A complete specification of how to evaluate evidence seems hard to do right. Also, why should we expect we can produce a pool of utility functions that includes an adequate one?

In response to Ethical Diets
Comment author: MrMind 13 January 2015 08:04:29AM 4 points [-]

If the way an AGI treats us would depend upon the way we treat animals, the problem of a Friendly AI would already be partially solved. But there's no reason to think it will: if you don't want an AI to treat you the way you treat a cow, <easier said than done> then don't program it that way. </easier said than done>

In response to comment by MrMind on Ethical Diets
Comment author: pcm 13 January 2015 07:08:12PM 1 point [-]

If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.

If we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs whose friendliness we're uncertain of, then it seems like we should both try to design them right and try to create a society where, if no single AGI can dictate rules, the default rules for AGI to follow when dealing with other agents will be ok for us.
