All of aelephant's Comments + Replies

The spirit of Seth Roberts lives!

Here's a randomized trial from JAMA showing a more-than-1000% increase in urinary BPA after consuming canned soup: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3367259/

BPA used to be in a lot of plastics, but I think it has been phased out. Perhaps someone else can confirm or refute that.

Another source is transdermally through handling receipts. I've heard of at least one health conscious workplace giving their cashiers wooden tongs to handle the receipts. Best practice would probably be to email them instead. Saves paper too.

0taryneast
I can't access the actual article, though the abstract certainly indicates that BPA is harmful. Does the article include evidence showing that eating canned food is a significant exposure risk? What kinds of cans specifically? What kinds of foods? Are there other significant sources of BPA?

I would recommend against eating canned food to limit your exposure to Bisphenol A.

I would also not eat tuna every single day for such an extended period of time!

0taryneast
Supporting data?

Curious where you got the USMLE questions. Are you able to share them?

Creatine will make you retain water in the muscles, which will make them look bulkier than they otherwise would.

Wow, how did you master Mandarin AND French despite difficulties with akrasia & drive?

1[anonymous]
Ze speaks Mandarin and English natively. Ze was born in China and moved to USA at a young age. French was learned in high school, then more in college. French is also a relatively easy language to get the basics of if you speak English, and ze spent a year living in France which no doubt helped a great deal.
0Kaj_Sotala
Akrasia doesn't necessarily mean you're incapable of studying everything. It could just mean that you e.g. spend all your time studying languages when you should be working. Also, three languages isn't that much if you have the right background: I speak Finnish because I live in Finland, Swedish because I had Swedish-speaking relatives and went to a Swedish-speaking elementary school, and English because that was the language most of the most interesting entertainment was in. I don't remember really needing to actively study any of them: I just picked them up via childhood immersion, not unlike many other people from the same background. (Well, I did keep asking my parents about the English terms in video games and such early on, but that never felt like studying.)

Nick Winter used a similar scheme (albeit with a towel) & found that not only did he get better at pullups, his 1 mile time & bench press improved as well!

aelephant-10

Yes, I consider them outside the realm of morality. If a mentally disabled person committed murder, for example, he or she could not be held morally liable for their actions -- instead the parent or guardian has the moral & legal responsibility for making sure that he or she doesn't steal, kill, etc.

-2Mestroyer
So are you saying it should only be considered "wrong" to torture mentally disabled people because of agreements made between non-mentally-disabled people, and if non-mentally-disabled people made a different agreement, then it would be okay? Say the only beings in existence are you and a mentally disabled person. Are you bound by any morality in how you treat them?

Feeding grain to cattle is an awful practice that needs to stop; the sooner, the better.

Re: grazing cattle, have you seen Allan Savory's TED talk? http://www.ted.com/talks/allan_savory_how_to_green_the_world_s_deserts_and_reverse_climate_change.html

Actually, grazing cattle don't kill plants; they just trim off the ends.

0kalium
Depends on plant species (not all survive trimming well) and cattle density (trampling certainly kills plants). However, most meat and dairy are not sustained purely by grazing. That said, harvesting grain to feed to cattle doesn't have to kill plants either, unless we consider the embryo in a seed to have the same moral status as a mature plant. In practice, growing grain to feed to cattle to feed to humans will involve killing a lot more weeds than growing grain to feed straight to humans.

And yet those horrible vegetarians continue to murder & eat these sentient lifeforms!

Because each step in the food chain involves energy loss, the shorter the chain, the fewer plants need to be killed to support you. Thus being a vegetarian saves plant lives too.
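The energy-loss point can be made concrete with a back-of-the-envelope sketch. The ~10% trophic transfer efficiency and the 2000 kcal/day figure are commonly cited rough numbers assumed for illustration, not from the comment itself:

```python
# Rough sketch: plant calories consumed per day of human food energy,
# assuming a commonly cited ~10% energy transfer efficiency per trophic step.
TROPHIC_EFFICIENCY = 0.10
HUMAN_KCAL_PER_DAY = 2000

def plant_kcal_needed(steps_above_plants):
    """kcal of plants consumed per day of human food, for a given chain length."""
    return HUMAN_KCAL_PER_DAY / (TROPHIC_EFFICIENCY ** steps_above_plants)

direct = plant_kcal_needed(0)      # human eats plants directly
via_cattle = plant_kcal_needed(1)  # plants -> cattle -> human

print(direct)      # 2000.0
print(via_cattle)  # 20000.0 -- roughly 10x the plant matter per meal
```

Each extra link in the chain multiplies the plant input by another factor of ten under this assumption, which is the sense in which the shorter chain "saves plant lives."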

aelephant230

If it is just the caffeine you want, why not get some caffeine pills? Virtually no calories & lightyears better for your teeth.

0hyporational
Should also be better for your GI tract than coffee, at least. It's so easy to take pills that I think tolerance develops more easily than with other delivery methods. I've found that steps of about 25 mg per day are sufficient to reduce caffeine intake without many side effects for me. The pills can be cut into pieces.
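A tapering schedule like the one hyporational describes (stepping the daily dose down by ~25 mg) is simple to sketch. The 200 mg starting dose below is an illustrative assumption, not a recommendation:

```python
# Illustrative caffeine taper: decrease the daily dose by a fixed step until zero.
# Starting dose (200 mg) is assumed for the example only.
def taper_schedule(start_mg, step_mg=25):
    """Return the list of daily doses, stepping down by step_mg until reaching zero."""
    doses = []
    dose = start_mg
    while dose > 0:
        doses.append(dose)
        dose -= step_mg
    return doses

print(taper_schedule(200))  # [200, 175, 150, 125, 100, 75, 50, 25]
```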
0Gunnar_Zarncke
I used to drink Coke when I had headaches (migraines) because I had discovered that it helped. When I later learned that the effect is due to the caffeine, I first switched to ASA (aspirin) and then to pure caffeine pills (as caffeine is what has the main effect). Interestingly, the caffeine pills don't really make me jittery or unusually awake.

I think I'll do that actually.

EDIT: Done.

From Daniel's post, it seems like the categorical imperative defines whether some behavior could be considered morally required, not whether a particular behavior is immoral. Being a computer programmer couldn't be morally required of everyone, but that doesn't mean that it is immoral for some people to be computer programmers.

Can you expand on that a bit?

0Daniel_Burfoot
Sure. The principle says: if everyone were to stop having kids so they could donate more money to charity, the following would happen:
* there would be a lot less starvation and disease in the next couple of decades
* humanity would vanish in about seven decades
So you can't want the maxim "stop having kids so you can donate more to charity" to become a universal law; thus (according to Kant) you shouldn't follow the maxim yourself.
0Raemon
...no? Probably?

Doesn't have the same effect. There was a study where they gave one group Vitamin C & one group ate oranges. The group that got the Vitamin C had no change in antioxidant activity; only the orange-eating group saw the benefit. Probably there are some other factors involved that we just haven't figured out yet. Eventually maybe reductionism will solve this one, but it hasn't yet.

Here's an article about the study in case I remembered incorrectly: http://www.nature.com/news/2007/070416/full/news070416-15.html

0passive_fist
That's an interesting study, but they also mention that if you could obtain the other chemicals from the oranges (perhaps the flavanones and carotenoids) separately you could get the same effect as eating the fruit. Obviously, oranges are not just vitamin C. It's true that we don't yet know all the biochemical pathways of nutrition. My question isn't about current knowledge.

What vegetables should you eat?

Dark green ones seem to have more nutritional content in general, but yellow, orange, & red vegetables are typically good sources of Vitamin A & some other pigment-like compounds that might not be in large amounts in the green ones.

I had an Organic Chemistry teacher at the small community college in my home town who blew away my big university's Medicinal Chemistry prof. The big university prof was tenured & could treat his students as abusively as he desired without any fear of significant consequences.

Right. Like I said, I find it hard to come up with a good argument. I don't like arguments that extend things into the future, because everything has to get all probabilistic. Is it possible to prove that any particular child is going to grow into an adult? Nope.

0Watercressed
But if we're 99.9% confident that a child is going to die (say, they have a very terminal disease), is being cruel to the child 99.9% less bad?

Don't know. I imagine any answer I could produce would be a rationalization.

To be completely honest, I agree with you but find it hard to come up for a good argument for why that should be. One way I've thought about it in the past is that the parents or caretakers of a child are sort of like stewards of a property that will be inherited one day. If I'm going to inherit a mansion from my grandfather on my 18th birthday, my parents can't arbitrarily decide to burn it down when I'm 17 & 364 days old. Harming children (physically or emotionally) is damaging the person they will be when they are an adult in a similar way.

0Solitaire
What about a mentally disabled person, or other groups of humans who will never be capable of consciously entering into a 'moral agreement' with society? Should they also be considered 'outside the realm of morality'? What makes them different from an animal, other than anthropocentricism?
0Jiro
By this reasoning, if the child is 5 years old but the world is going to be hit by an asteroid tomorrow, unavoidably killing everyone, it would be okay to be cruel to the child. To save the original idea, I'd suggest modifying it to distinguish between having impaired ability to come to agreements and not having the ability to come to agreements. Children are generally in the former category, at least if they can speak and reason. This extends to more than just children; you shouldn't take advantage of someone who's stupid, but you can "take advantage" of the fact that a stick of broccoli doesn't understand what it means to be eaten and can't run away anyway.

To me morality is an agreement that people can come to with one another. Since animals can't come to agreements with one another, what happens between animals is amoral. It isn't immoral when a bird kills a worm or a cat kills a rat and it doesn't make me feel bad either. Humans could make agreements between themselves about how they want to treat other animals, but humans can't make agreements with other animals. For this reason, I consider all interactions with animals to be outside the realm of morality, although there are certain behaviors that disgust me & that are probably indicative of mental illness & a sign that someone is probably a danger to others (eg torturing kittens).

0[anonymous]
Thanks for the answer, I think I formulated my original question incorrectly: why do you care about human suffering?
2Salemicus
What about the way we treat others with whom we can't come to agreements? Is that a matter of morality? For example, consider young children. I suspect most people regard cruelty to a young child as a particular moral horror, precisely because the child cannot argue back or defend itself. Indeed, I would argue that our moral obligations are strongest to groups such as children.

Nested bags work well too. I have one of those huge waterproof messenger bags & it is like a bottomless pit if you don't organize it somehow.

This can also be looked at as a willpower issue too. I want to do X, but I didn't do it in the morning or the afternoon, now it is the evening & I've still got to cook myself dinner & then clean the dishes & I'm already exhausted & I was working hard all day & I really just want to relax...

Pharmacy stuff
* Brand & generic name pairs for prescription drugs.
* Classes & mechanisms of action for prescription drugs.
* 1st line therapies for various diseases.
* Etc.

Mandarin Chinese
* Mostly just doing vocab at the moment, but have used it for listening (MP3 clips), writing, & grammar.

Misc work stuff
* Names of new employees (they're Chinese names, so difficult to remember)
* Who is the contact person for what (eg if you want a new email account, you need to contact Mrs. Wu YiJun for approval)

I was thinking "rapid sequence intubation".

I've noticed that in published works, the 1st instance of a term is usually spelled out / clarified. So in the title, you could use "repetitive strain injury (RSI)" & then use RSI for every instance after that.

2NancyLebovitz
I've corrected it. Thanks.

I wonder if getting everyone to agree to use Beeminder could help with the cleanliness. When I lived in a group house, I found that the housemate I shared a bathroom with had a significantly lower dirtiness threshold than I did. I don't consider myself particularly disgusting & never even noticed the bathroom was getting dirty, but it drove him crazy. I didn't want to be a dick & never clean the bathroom, but I ended up never cleaning it because he would flip out & do it himself first. I probably would've agreed to using Beeminder or some other similar system to help motivate me, had I known about these kinds of things at the time.

"Getting everyone to agree to use Beeminder" strikes me as pretty tough to negotiate, though probably less tough to negotiate in a LW-cluster household. There are lots of potential failure modes for division of cleanliness labour, and commitment strategies address only a subset of those. The beauty of outsourcing it to a third party is that it bypasses them all.

I'd assume that 3^^^3 people would prefer to have a barely noticeable dust speck in their eye momentarily over seeing me get tortured for 50 years & thus I'd choose the dust specks. If I'm wrong, that is fine, any of those 3^^^3 people can take me to court & sue me for "damages", of which there were none. Maybe appropriate reimbursement would be something like 1/3^^^3 of a cent per person?
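For scale, 3^^^3 is Knuth's up-arrow notation, and its recursive definition is short enough to sketch; the function below is the standard definition, feasible only for tiny inputs:

```python
# Knuth's up-arrow notation: a ↑^n b, defined recursively.
# 3^^^3 (i.e. 3 ↑↑↑ 3) is a power tower of 3s with 3↑↑3 (~7.6 trillion) levels --
# far too large to ever compute, which is the point of the thought experiment.
def up_arrow(a, n, b):
    """Compute a ↑^n b (n up-arrows). Only tiny inputs are feasible."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 27 (3^3)
print(up_arrow(3, 2, 3))  # 7625597484987 (3^(3^3))
# up_arrow(3, 3, 3) would be 3^^^3 -- do not attempt to evaluate it.
```

So "1/3^^^3 of a cent per person" is a number smaller than anything physically meaningful, which is what makes the reimbursement suggestion tongue-in-cheek.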

I would rather observe you & see what you do to avoid becoming a wirehead. I'd put saying you want to avoid becoming a wirehead & saying you want to want to pay to save the squirrels in the same camp -- totally unprovable at this point in time. In the future maybe we can scan your brain & see which of your stated preferences you are likely to act on; that'd be extremely cool, especially if we could scan politicians during their campaigns.

I've actually built & abandoned several decks. When I was studying in university I was focusing more on the specific usage of certain words in context, so my cards were ridiculously complex. It was good for studying for the exams, but exhausting to keep up with. Now my cards are pure vocabulary. I don't go through lists & add all the words. I wait for the word to come up in daily life, then add it. In the past I studied a lot of words I never used, now I mostly study words I'm encountering regularly. I do think this depends on where you are though.... (read more)

When you're talking about the utility of squirrels, what exactly are you calculating? How much you personally value squirrels? How do you measure that? If it is just a thought experiment ("I would pay $1 per squirrel to prevent their deaths") how do you know that you aren't just lying to yourself & if it really came down to it, you wouldn't pay? Maybe we can only really calculate utility after the fact by looking at what people do rather than what they say.

0DSimon
I may not actually want to pay $1 per squirrel, but if I still want to want to, then that's as significant a part of my ethics as my desire to avoid being a wire-head, even though once I tried it I would almost certainly never want to stop.
0MugaSofer
How do you know those people aren't still "lying to themselves"? Humans are not known for being perfect, bias-free reasoners. Maybe we can only really calculate utility after the fact by looking at what perfect Bayesian agents do rather than mere mortals.

Congrats man. I also highly suggest you use Anki if you aren't already. It reminds me of the words & phrases I learned months ago & keeps them fresh in my memory. Not that it is my sole motivation, but you can really impress native speakers when you pull out some obscure saying someone mentioned in passing a long while back.

0Barry_Cotter
I am using Anki though sub-optimally. Until recently I was just using one of the shared decks, New Practical Chinese Reader. Now I'm catching up with myself inputting the phrases and vocabulary from the book I'm using, "Chinese for Foreigners". Making cards is a massive PITA but worth it for things you actually care about learning. How long did it take you to do a textbook's worth of cards when in university?

How about intentionally surrounding yourself with people who are excited about the thing you want to be excited about?

0ygert
While this is a good idea, it may not always be practical. Many other factors go into setting which people surround you.

It sucks to experience it personally, but maybe it serves an evolutionary purpose that we don't yet fully understand & eliminating it completely would be a mistake?

0Desrtopa
I think we already have more than an inkling of the usefulness of suffering over warning signs which are less burdensome to experience. It can be awfully tempting to override such warning signs when we can. Imagine a group of hunters who're chasing down a valuable game animal. All the hunters know that the first one to spear it will get a lot of extra respect in the group. One hunter who's an exceptional runner pulls to the head of the group, bearing down on the animal... and breaks a bone in his leg. In a world where he gets a signal of his body's state, but it's not as distressing as pain is, he's likely to try to power on and bring down the game animal. He might still be the first to get a spear in it, at the cost of serious long term disability, more costly to him than the status is valuable. The hunter's evolutionary prospects are better in a world where the difficulty in overriding the signal is commensurate with the potential costs of doing so. If attempting to override such signals were not so viscerally unpleasant, we'd probably only be able to make remotely effective tradeoffs on them using System 2 reasoning, and we're very often not in a state to do that when making decisions regarding damage to our own bodies.
5ThisSpaceAvailable
"It serves an evolutionary purpose" and "eliminating it completely would be a mistake" are two completely different claims. While there is correlation between evolutionary purposes and human purposes, the former has no value in and of itself.
1DanielLC
It serves an evolutionary purpose that's pretty obvious and eliminating it entirely would cause a lot of problems. We can still find a way to improve the status quo though. We didn't evolve to maximize net happiness, and we're going to have to do things we didn't evolve to do if we want to maximize it.

In the US certain employers are required to provide health insurance for employees who work 40 hours per week or more, but not for employees who work 20 hours per week, so that is at least one incentive that would encourage hiring part-time employees vs full-time employees.

0Davidmanheim
And we've noticed that many of the newly created jobs coming out of the recession are part time; the ones that were lost were full time. This is a reduction in employment-hours, even if it's not a reduction in number of employed people.
aelephant-40

None. You get thrown in jail or put to death for that kind of thing.

9Vladimir_Nesov
See The Least Convenient Possible World, Better Disagreement.
9Mestroyer
When you bring up things like the law, you're breaking thought experiments, and dodging the real interesting question someone is trying to ask. The obvious intent of the question is to weigh how much you care about one day old infants vs how much you care about mentally healthy adults. Whoever posed the experiment can clarify and add things like "Assume it's legal," or "Assume you won't get caught." But if you force them to, you are wasting their time. And especially over an online forum, there is no incentive to, because if they do, you might just respond with other similar dodges such as "I don't want to kill enough that it becomes a significant fraction of the species," or "By killing, I would damage my inhibitions against killing babies which I want to preserve," or "I don't want to create grieving parents," or "If people find out someone is killing babies, they will take costly countermeasures to protect their babies, which will cause harm across society." If you don't want to answer without these objections out of the way, bring up the obvious fix to the thought experiment in your first answer, like "Assuming it was legal / I wouldn't get caught, then I would kill N babies," or "I wouldn't kill any babies even if it was legal and I wouldn't get caught, because I value babies so much," and then explain the difference which is important to you between babies and chickens, because that's obviously what Locaha was driving at.

If a moral theory accepted and acted upon by all moral people led to an average decrease in suffering, I'd take that as a sign that it was doing something right. For example, if no one initiated violence against anyone else (except in self defense), I have a hard time imagining how that could create more net suffering though it certainly would create more suffering for the subset of the population who previously used violence to get what they wanted.

aelephant-30

To me it is not the suffering per se that bothers me about factory farming. I'm having trouble finding the right words, but I want to say it is the "un-naturalness" of it. Animals are not meant to live their whole lives in cages pumped full of antibiotics. I also believe it is harmful to humans, both to the humans who operate these factories (psychologically) & to the humans that consume the product (physically).

On the other hand, it is natural for animals to eat other animals, and properly raised animal products are arguably one of the best ... (read more)

3lavalamp
Meaning requires a mind to provide it. Animals are not "meant" to do anything...

If dogs & cats were raised specifically to be eaten & not involved socially in our lives as if they were members of the family, I don't think I'd care about them any more than I care about chickens or cows.

This article seems to assume that I oppose all suffering everywhere, which I'm not sure is true. Getting caught stealing causes suffering to the thief and I don't think there's anything wrong with that. I care about chickens & cows significantly less than I care about thieves because thieves are at least human.

-1[anonymous]
How does this make you care?
-1Jabberslythe
If you found that you cared much more about your present self than your future self, you might reflect on that and decide that because those two things are broadly similar you would want to change your mind about this case. Even if those selves are not counted as such by your sentiments right now. This article is trying to get you to undertake similar reflections about pets and humans vs. other animals.
-12DxE
2MTGandP
Why don't you care about non-humans? If other animals suffer in roughly the same way as humans, why should it matter at all what species they belong to? In this case I think that's justified because catching a thief leads to less suffering overall than failing to catch the thief.
4A1987dM
Indeed, few westerners appear to be that bothered that it is customary to eat dog meat in China.

I really don't know the probability of a person saying hello to a stranger who said hello to them. It depends on too many factors, like the look & vibe of the stranger, the history of the person being said hello to, etc.

Given a time constraint, I'd agree that I'd be more likely to predict that the girl would reply hello than to predict Deep Blue's next move, but if there were not a time constraint, I think Deep Blue's moves would be almost 100% predictable. The reason being that all that Deep Blue does is calculate, it doesn't consult its feelings befo... (read more)

7Rob Bensinger
I'm not asking for the probability. I'm asking for your probability -- the confidence you have that the event will occur. If you have very little confidence one way or the other, that doesn't mean you assign no probability to it; it means you assign ~50% probability to it. Everything in life depends on too many factors. If you couldn't make predictions or decisions under uncertainty, then you wouldn't even be able to cross the street. Fortunately, a lot of those factors cancel out or are extremely unlikely, which means that in many cases (including this one) we can make approximately reliable predictions using only a few pieces of information. Without a time constraint, the same may be true for the girl (especially if cryonics is feasible), since given enough time we'd be able to scan her brain and run thousands of simulations of what she'd do in this scenario. If you're averse to unlikely hypotheticals, then you should be averse to removing realistic constraints.

I'm not trying to argue that humans are completely unpredictable, but neither are AIs. If they were, there'd be no point in trying to design a friendly one.

About your point that humans are less able to predict AI behavior than human behavior, where are you getting those numbers from? I'm not saying that you're wrong, I'm just skeptical that someone has studied the frequency of girls saying hello to strangers. Deep Blue has probably been studied pretty thoroughly; it'd be interesting to read about how unpredictable Deep Blue's moves are.

4Rob Bensinger
Right. And I'm not trying to argue that we should despair of building a friendly AI, or of identifying friendliness. I'm just noting that the default is for AI behavior to be much harder than human behavior for humans to predict and understand. This is especially true for intelligences constructed through whole-brain emulation, evolutionary algorithms, or other relatively complex and autonomous processes. It should be possible for us to mitigate the risk, but actually doing so may be one of the most difficult tasks humans have ever attempted, and is certainly one of the most consequential. Let's make this easy. Do you think the probability of a person saying "hello" to a stranger who just said "hello" to him/her is less than 10%? Do you think you can predict Deep Blue's moves with greater than 10% confidence? Deep Blue's moves are, minimally, unpredictable enough to allow it to consistently outsmart the smartest and best-trained humans in the world in its domain. The comparison is almost unfair, because unpredictability is selected for in Deep Blue's natural response to chess positions, whereas predictability is strongly selected for in human social conduct. If we can't even come to an agreement on this incredibly simple base case -- if we can't even agree, for instance, that people greet each other with 'hi!' with higher frequency than Deep Blue executes a particular gambit -- then talking about much harder cases will be unproductive.

I'm trying to protect Rolf because he can't seem to interact with others without lashing out at them abusively.

I would gather we have much more certainty about Deep Blue's algorithms considering that we built them. You're getting into hypothetical territory assuming that we can obtain near perfect knowledge of the human brain & that the neural state is all we need to predict future human behavior.

4Rob Bensinger
And you'd gather wrong. Our confidence that the woman says "hello" (and a fortiori our confidence that she does not take a gun and blow the man's head off) exceeds our confidence that Deep Blue will make a particular chess move in response to most common plays by several orders of magnitude. We started off well into hypothetical territory, back when Stuart brought Clippy into his thought experiment. Within that territory, I'm trying to steer us away from the shoals of irrelevance by countering your hypothetical ('but what if [insert unlikely scenario here]? see, humans can't be predicted sometimes! therefore they are Unpredictable!') with another hypothetical. But all of this still leaves us within sight of the shoals. You're missing the point, which is not that humans are perfectly predictable by other humans to arbitrarily high precision and in arbitrarily contrived scenarios, but that our evolved intuitions are vastly less reliable when predicting AI conduct from an armchair than when predicting human conduct from an armchair. That, and our explicit scientific knowledge of cognitive algorithms is too limited to get us very far with any complex agent. The best we could do is build a second Deep Blue to simulate the behavior of the first Deep Blue.
aelephant-10

That would be in the "More Likely" bucket, or rather an "Extremely Likely" bucket. You said that the girl would say "hello" & that is in the "More Likely" bucket too, but far from a certainty. She could ignore him, turn the other way, poke him in the stomach, or do any of an almost infinite other things. Either way, you're resorting to insults & I've barely engaged with you, so I'm going to ignore you from here on out.

-2Rob Bensinger
If you had to guess, would you say you're probably ignoring Rolf to protect your epistemically null feelings, or to protect your epistemology? (In terms of the actual cognitive mechanism causally responsible for your avoidance, not primarily in terms of your explicit linguistic reason.)
1Rob Bensinger
This statement is true but not relevant, because it doesn't demonstrate a disanalogy between the woman and Deep Blue. In both cases we can only reason probabilistically with what we expect to have happen. This is true even if our knowledge of the software of Deep Blue or the neural state of the woman is so perfect that we can predict with near-certainty that it would take a physics-breaking miracle for anything other than X to occur. This doesn't suffice for 'certainty' because we don't have true certainty regarding physics or regarding the experiences that led to our understanding Deep Blue's algorithms or the woman's brain.
7RolfAndreassen
What if the chess position is mate in one move? Cases that are sufficiently special to ride the short bus do not make a general argument.

I think we need to clarify how God-like this hypothetical AI is. If the AI is not very God-like, then trying to turn humans into hamburgers could be very costly for it. If we made the AI, maybe we could make a competing AI to resist it or use some backdoor built into the AI's programming to pull the plug. At the very least, we could launch missiles at it.

If the AI is very God-like, then there are more resource rich sources than human beings it could easily obtain. It'd be sort of like humans gathering up all of the horses for transportation when we already have cars & planes.

aelephant-20

There's no scarcity of air. If the AI can turn air into hamburgers, I don't think the resources contained in my body would be the AI's preferred source of energy given that they will be more costly to extract (I will fight to keep them) & contain less energy overall than many other potential sources. If the AI can turn air into hamburgers, it could just leave the earth & convert the core of a huge star into hamburgers instead.

If we want hamburgers & the AI can make them much more efficiently than we can, why wouldn't we just willingly give our resources to the AI so that it can make hamburgers? Resisting the AI would be dangerous for us depending on the AI's military capabilities & the AI trying to overpower us could be dangerous for the AI as well.

The thing about nations is that they can externalize the costs & consolidate the benefits of invading a country -- the politicians & corporations that benefit from the invasion don't have to fight & die in the battles -- that's what poor young men & women are for; nor do they pay for the costs of the military supplies -- that's what taxes & debt are for.

0wedrifid
If your preferences are maximised by you being turned into a hamburger then by all means do so.
7Manfred
Because "resources" means things like clothing, air, water, electricity, and the minerals contained in your body.