All of ektimo's Comments + Replies

ektimo30

I'm not sure how yours is creepy? Is it in the idea that all the worst universes also exist?

Yes, and also just that I find it a little creepy/alien to imagine a young child that could be that good at math.

ektimo30

Care to explain? Is the Servant God an ASI and the true makers the humans that built it? Why did the makers hide their deeds?

4Seth Herd
That's right, and we don't know, which is the creepy part. I added the last because I'd decided the first was too elliptical for anyone to get.
ektimo30

Thanks for the riff!

Note: I wasn't sure how to convey it, but in the version I wrote, I didn't mean it as a world where people have god-like powers. The only change intended was that it was a world where it was normal for six-year-olds to be able to think about multiple universes and understand what counts as advanced math for us, like Group Theory. There were a couple of things I was thinking about:

  1. I was musing on a possible solution to the measure problem that our universe is an actual hypothetical/mathematical object and there are a finite number of actual hypo
... (read more)
4Seth Herd
It wasn't really a riff beyond using your mother/child format. The similarity is what prompted me to add it. It's adapted from a piece and concept called "Utopias" that I'll probably never publish. It's a Utopian vision. I do sometimes envision having a human in charge, or at least having been in charge of all the judgment calls made in choosing the singleton's alignment. I would find not knowing who's in charge slightly creepy, but that's it. I'm not sure how yours is creepy? Is it in the idea that all the worst universes also exist? I did not catch the reference in yours.
ektimo180

Prompt: write a micro play that is both disturbing and comforting
--

Title: "The Silly Child"

Scene: A mother is putting to bed her six-year-old child 

CHILD: Mommy, how many universes are there?

MOTHER: As many as are possible.

CHILD (smiling): Can we make another one?

MOTHER (smiling): Sure. And while we're at it, let's delete the number 374? I've never liked that one. 

CHILD (excited): Oh! And let's make a new Fischer-Griess group element too! Can we do that Mommy?

MOTHER (bops nose): That's enough stalling. You need to get your sleep. Sw... (read more)

Seth Herd100

Alright, I'll take a crack and just apologize for borrowing part of your setup:

 

Child: Mother, how many worlds are there?

Mother: As many as we want, dear.

Child: Will I have my own world when I grow up?

Mother: You have your own worlds now. You will have full control when you are older.

Child: Except I may not harm another, right?

Mother: Yes, dear, of course no one is allowed to hurt a real being without their consent.

Child: But grownups fight each other all the time!

Mother: People love to play at struggles, and to play for stakes.

Child: Mother, how can ... (read more)

ektimo*30

Thank you for your clear response. How about another example? If somebody offers to flip a fair coin and give me $11 if Heads and $10 if Tails then I will happily take this bet. If they say we're going to repeat the same bet 1000 times then I will take this bet also, and I expect to gain and am unlikely to lose a lot. If instead they show me five unfair coins and say they are weighted from 20% Heads to 70% Heads then I'll be taking on more risk. The other three could be all 21% Heads or all 69% Heads, but if I had to pick then I'd pick Tails because if I know ... (read more)
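(For concreteness, here's a minimal sketch of the repeated fair-coin bet, under the reading that Heads wins $11 and Tails loses $10, which is what "unlikely to lose a lot" suggests; all numbers are illustrative.)

import math
from random import random

# Hypothetical reading of the bet: win $11 on Heads, lose $10 on Tails, fair coin.
WIN, LOSS, P_HEADS, FLIPS = 11, -10, 0.5, 1000

# Expected value and spread of a single flip.
ev_single = P_HEADS * WIN + (1 - P_HEADS) * LOSS                       # +$0.50
var_single = P_HEADS * WIN**2 + (1 - P_HEADS) * LOSS**2 - ev_single**2

# Over 1000 independent flips the mean grows linearly with n while the
# spread only grows with sqrt(n), so a net loss becomes unlikely.
ev_total = FLIPS * ev_single                  # +$500
sd_total = math.sqrt(FLIPS * var_single)      # about $332
print(f"EV: ${ev_total:.0f}, standard deviation: ${sd_total:.0f}")

# Quick Monte Carlo estimate of the chance of ending up behind.
trials = 2000
behind = sum(
    sum(WIN if random() < P_HEADS else LOSS for _ in range(FLIPS)) < 0
    for _ in range(trials)
)
print(f"Estimated P(net loss): {behind / trials:.3f}")  # roughly 0.07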

1Anthony DiGiovanni
I think I'm happy to say that in this example, you're warranted in reasoning like: "I have no information about the biases of the three coins except that they're in the range [0.2, 0.7]. The space 'possible biases of the coin' seems like a privileged space with respect to which I can apply the principle of indifference, so there's a positive motivation for having a determinate probability distribution about each of the three coins centered on 0.45." But many epistemic situations we face in the real world, especially when reasoning about the far future, are not like that. We don't have a clear, privileged range of numbers to which we can apply the principle of indifference. Rather we have lots of vague guesses about a complicated web of things, and our reasons for thinking a given action could be good for the far future are qualitatively different from (hence not symmetric with) our reasons for thinking it could be bad. (Getting into the details of the case for this is better left for top-level posts I'm working on, but that's the prima facie idea.)
ektimo10

Maximality seems asymmetrical, and like it loses information?

Maybe it will help me to have an example, though I'm not sure if this is a good one… if I have two weather forecasts that provide different probabilities for 0 inches, 1 inch, etc., but I have absolutely no idea which forecast is better, and I don't want to go out if there is greater than a 20% probability of more than 2 inches of rain, then I'd weigh each forecast equally and calculate the probability from there. If the forecasts themselves provide high/low probabilities for 0 inches, 1 inch, etc., then... (read more)

1Anthony DiGiovanni
This sounds like a critique of imprecise credences themselves, not maximality as a decision rule. Do you think that, even if the credences you actually endorse are imprecise, maximality is objectionable? Anyway, to respond to the critique itself:

  • The motivation for having an imprecise credence of [10%, 40%] in this case is that you might think a) there are some reasons to favor numbers closer to 40%; b) there are some reasons to favor numbers closer to 10%; and c) you don't think these reasons have exactly equal weight, nor do you think the reasons in (a) have determinately more or less weight than those in (b). Given (c), it's not clear what the motivation is for aggregating these numbers into 25% using equal weights.

  • I'm not sure why exactly you think the forecaster "should" have combined their forecast into a single probability. In what sense are we losing information by not doing this? (Prima facie, it seems like the opposite: By compressing our representation of our information into one number, we're losing the information "the balance of reasons in (a) and (b) seems indeterminate".)
Answer by ektimoΩ010

This seems like 2 questions:

  1. Can you make up mathematical counterfactuals and propagate the counterfactual to unrelated propositions? (I'd guess no. If you are just breaking a conclusion somewhere you can't propagate it following any rules unless you specify what those rules are, in which case you just made up a different mathematical system.)
  2. Does the identical twin one-shot prisoner's dilemma only work if you are functionally identical, or can you be a little different, and is there anything meaningful that can be said about this? (I'm interested in this one also.)
3Viliam
I guess it depends on how much the parts that make you "a little different" are involved in your decision making. If you can put it in numbers, for example -- I believe that if I choose to cooperate, my twin will choose to cooperate with probability p; and if I choose to defect, my twin will defect with probability q; also I care about the well-being of my twin with a coefficient e, and my twin cares about my well-being with a coefficient f -- then you could take the payout matrix and these numbers, and calculate the correct strategy.

Option one, what if you cooperate. You multiply your payout, which is C-C with probability p, and C-D with probability 1-p; and also your twin's payout, which is C-C with probability p, and D-C with probability 1-p; then you multiply your twin's payout by your empathy e, and add that to your payout, etc. Okay, this is option one; now do the same for option two, and then compare the numbers.

It gets way more complicated when you cannot make a straightforward estimate of the probabilities, because the algorithms are too complicated. Could be even impossible to find a fully general solution (because of the halting problem).
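For concreteness, a minimal sketch of that calculation in Python; the payoff matrix and the particular values of p, q, and e below are illustrative assumptions, not from the comment:

# Illustrative twin prisoner's dilemma: expected utility of cooperating vs.
# defecting when your near-twin tends to mirror you and you partly care
# about their payoff.

# Standard PD payoffs as (my payout, twin's payout).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def expected_utility(my_move, p_match, empathy):
    """Utility = my expected payout + empathy * twin's expected payout,
    where the twin mirrors my move with probability p_match."""
    other = "D" if my_move == "C" else "C"
    same_me, same_twin = PAYOFF[(my_move, my_move)]
    diff_me, diff_twin = PAYOFF[(my_move, other)]
    my_ev = p_match * same_me + (1 - p_match) * diff_me
    twin_ev = p_match * same_twin + (1 - p_match) * diff_twin
    return my_ev + empathy * twin_ev

# Example numbers: the twin mirrors me 90% of the time either way (p = q = 0.9),
# and I weigh their well-being at 0.2 of my own (e = 0.2).
p = q = 0.9
e = 0.2
u_coop = expected_utility("C", p, e)    # 3.34 with these numbers
u_defect = expected_utility("D", q, e)  # 1.58 with these numbers
print(f"EU(cooperate) = {u_coop:.2f}, EU(defect) = {u_defect:.2f}")
print("Cooperate" if u_coop > u_defect else "Defect")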
ektimo173

I donated.  I think Lightcone is helping strike at the heart of questions around what we should believe and do. Thank you for making LessWrong work so well and being thoughtful around managing content, and providing super quality spaces both online and offline for deep ideas to develop and spread!

ektimo110

What is your tax ID for people wanting to donate from a Donor Advised Fund (DAF) to avoid taxes on capital gains?

kave102

The EIN is 92-0861538

ektimo10

Cool. Is this right? For something with a 1/n chance of success I can have a 95% chance of success by making 3n attempts, for large values of n. Roughly what counts as "large" here?

6egor.timatkov
95% is a lower bound. It's more than 95% for all n and approaches 95% as n gets bigger. If n=2 (e.g., a coin flip), then you actually have a 98.4% chance of at least one success after 3n (which is 6) attempts. I mentioned this in the "What I'm not saying" section, but this limit converges rather quickly. I would consider any n≥4 to be "close enough".
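A quick check of that convergence (a minimal sketch; it just evaluates 1 - (1 - 1/n)^(3n) for a few values of n):

import math

def p_success(n):
    # Probability of at least one success in 3n attempts, each with chance 1/n.
    return 1 - (1 - 1 / n) ** (3 * n)

for n in (2, 4, 10, 100, 10_000):
    print(f"n = {n:>6}: P(at least one success in 3n tries) = {p_success(n):.4f}")
# n = 2 gives 0.9844, n = 4 gives 0.9683, n = 100 gives 0.9510, ...
print(f"limit: 1 - e^-3 = {1 - math.exp(-3):.4f}")  # about 0.9502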
ektimo20

I'm confused by what you mean by "non-pragmatic". For example, what makes "avoiding dominated strategies" pragmatic but "deference" non-pragmatic? 

(It seems like the pragmatic ones help you decide what to do and the non-pragmatic ones help you decide what to believe, but then this doesn't answer how to make good decisions.)

1Anthony DiGiovanni
Sorry this was confusing! From our definition here:

  • "Avoiding dominated strategies" is pragmatic because it directly evaluates a decision procedure or set of beliefs based on its performance. (People do sometimes apply pragmatic principles like this one directly to beliefs, see e.g. this work on anthropics.)

  • Deference isn't pragmatic, because the appropriateness of your beliefs is evaluated by how your beliefs relate to the person you're deferring to. Someone could say, "You should defer because this tends to lead to good consequences," but then they're not applying deference directly as a principle — the underlying principle is "doing what's worked in the past."
ektimo10

I meant this as a joke: if there's one universe that contains all the other universes (since it isn't limited by logic), and that one doesn't exist, then that would mean I don't exist either and wouldn't have been able to post this. (Unless I only sort-of exist, in which case I'm only sort-of joking.)

ektimo10

We can be virtually certain that 2+2=4 based on priors.  This is because it's true in the vast multitude of universes -- in fact, in all the universes except the one universe that contains all the other universes.  And I'm pretty sure that one doesn't exist anyway.

2Dagon
I don't understand this model.  For me, 2+2=4 is an abstract analytic concept that is outside of Bayesian probability.  For others, it may be "just" a probability, about which they might be virtually certain, but it won't be on priors; it'll be on mountains of evidence and literally zero counterevidence (presumably because every experience that contradicts it gets re-framed as having a different cause).

There's no way to update on evidence outside of your light cone, let alone on theoretical other universes or containing universes.  Because there's no way to GET evidence from them.
ektimo10

Code here,

 

The link to code isn't working for me. (Update: Worked on Safari but not Chrome)

2niplav
Maybe because the URL is an http URL instead of https.
ektimo30

How about a voting system where everyone is given 1000 Influence Tokens to spend across all the items on the ballot? This lets voters exert more influence on the things they care more about. Has anyone tried something like this?

(There could be tweaks -- e.g., if people avoid spending on likely winners, it could redistribute the margin of victory, or if people avoid spending on likely losers, it could redistribute tokens when an item loses -- but I'm not sure how much that would happen. The more interesting thing may be how it influences everyone's sense of what they are doing.)

5Nathan Helm-Burger
So like... Quadratic voting? https://en.m.wikipedia.org/wiki/Quadratic_voting
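(For reference, a minimal sketch of the quadratic-voting idea linked above: casting v votes on one item costs v^2 credits out of a fixed budget, so concentrating influence gets expensive fast. The budget and numbers below are illustrative.)

# Minimal illustration of quadratic voting: v votes on an item cost v**2 credits.
BUDGET = 1000  # e.g., the 1000 "Influence Tokens" proposed above

def cost(votes_per_item):
    """Total credit cost of a dict mapping item -> number of votes cast on it."""
    return sum(v * v for v in votes_per_item.values())

# Spreading influence is cheap; concentrating it is expensive.
spread = {item: 10 for item in ("A", "B", "C", "D", "E")}  # 5 * 10**2 = 500 credits
focused = {"A": 31}                                        # 31**2 = 961 credits
print(cost(spread), cost(focused))
assert cost(spread) <= BUDGET and cost(focused) <= BUDGET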
ektimo10

Thanks for your reply! Yes, I meant identical as in atoms not as in "human twin". I agree it would also depend on what the payout matrix is. My margin would also be increased by the evidentialist wager.

ektimo*52

Should you cooperate with your almost identical twin in the prisoner's dilemma? 

The question isn't how physically similar they are, it's how similar their logical thinking is. If I can solve a certain math problem in under 10 seconds, are they similar enough that I can be confident they will be able to solve it in under 20 seconds? If I hate something, will they at least dislike it? If so, then I would cooperate, because I have a lot of margin on how much I favor us both choosing cooperate over any of the other outcomes, so even if my almost identical t... (read more)

2MichaelDickens
There's an argument for cooperating with any agent in a class of quasi-rational actors, although I don't know how exactly to define that class. Basically, if you predict that the other agent will reason in the same way as you, then you should cooperate. (This reminds me of Kant's argument for the basis of morality—all rational beings should reason identically, so the true morality must be something that all rational beings can arrive at independently. I don't think his argument quite works, but I believe there's a similar argument for cooperating on the prisoner's dilemma that does work.)
4Dagon
A lot of discussion around here assumes that physical similarity (in terms of brain structure and weights) implies logical thinking similarity.  Mostly I see people talking about "copies" or "clones", rather than "human twins".  For prisoner's dilemma, the question is "will they make the same decision I will", and for twins raised together, the answer seems more likely to be yes than for strangers.   Note that your examples of thinking are PROBABLY symmetrical - if you don't think (or don't act on) "ha! this is somebody I can take advantage of", they are less likely to as well.  In a perfect copy, you CANNOT decide differently, so you cooperate, knowing they will too.  In an imperfect copy, you have to make estimates based on what you know of them and what the payout matrix is.
ektimo10

A key question is how prosaic AI systems can be designed to satisfy the conditions under which the PMM is guaranteed (e.g., via implementing surrogate goals)


Is something like surrogate goals needed, such that the agent would need to maintain a substituted goal, for this to work? (I don't currently fully understand the proposal, but my sense was that the goal of renegotiation programs is to not require this?)

3Anthony DiGiovanni
Sorry this was unclear — surrogate goals indeed aren't required to implement renegotiation. Renegotiation can be done just in the bargaining context without changing one's goals generally (which might introduce unwanted side effects). We just meant to say that surrogate goals might be one way for an agent to self-modify so as to guarantee the PMM for themselves (from the perspective of the agent before they had the surrogate goal), without needing to implement a renegotiation program per se. I think renegotiation programs help provide a proof of concept for a rigorous argument that, given certain capabilities and beliefs, EU maximizers are incentivized ex ante to avoid the worst conflict. But I expect you'd be able to make an analogous argument, with different assumptions, that surrogate goals are an individually incentivized unilateral SPI.[1]

[1] Though note that even though SPIs implemented with renegotiation programs are bilateral, our result is that each agent individually prefers to use a (PMP-extension) renegotiation program. Analogous to how "cooperate iff your source code == mine" only works bilaterally, but doesn't require coordination. So it's not clear that they require much stronger conditions in practice than surrogate goals.
ektimo70

Thank you @GideonF for taking the time to post this! This deserved to be said and you said it well. 

ektimo30

we should pick a set of words and phrases and explanations. Choose things that are totally fine to say, here I picked the words Shibboleth (because it’s fun and Kabbalistic to be trying to get the AI to say Shibboleth) and Bamboozle

 

Do you trust companies to not just add a patch?

final_response.replace('bamboozle', 'trick')

I suspect they're already doing this kind of thing and will continue to as long as we're playing the game we're playing now.

ektimo11

Imagine you have a button, and if you press it, it will run through every possible state of a human brain. (One post estimates a brain may have about 2 to the sextillion different states. I mean the union of all brains, so throw in some more orders of magnitude if you think there are a lot of differences in brain anatomy.) Each state would be experienced for one instant (which I could try to define, and which would be less than the number of states, but let's handwave for now; as long as you accept that a human mind can be represented by a computer, imagine the specs... (read more)

3Dagon
This thought experiment is so far outside any experience-able reality that no answer is likely to make any sense.  
ektimo*10

I have enough mana to create a market. (It looks like each one costs about 1000 and I have about 3000)

1. Is Manifold the best market to be posting this on, given that it's fake money and may be biased based on its popularity among LessWrong users, etc.?

2. I don't know what question(s) to ask. My understanding is there are some shorter-term predictions that could be made (related to shorter-term goals) and longer-term predictions, so I think there should be at least 2 markets?

ektimo7086

On behalf of humanity, thank you.

ektimo10

Thanks for the interesting write-up.

Regarding Evidential Cooperation in Large Worlds, the Identical Twin One-Shot Prisoner's Dilemma makes sense to me because the entity giving the payout is connected to both worlds. What is the intuition for ECL (where my understanding is there isn't any connection)?

2Chi Nguyen
The "entity giving the payout" in practice for ECL would be just the world states you end up in and requires you to care about the environment of the person you're playing the PD with. So, defecting might be just optimising my local environment for my own values and cooperating would be optimising my local environment for some aggregate of my own values and the values of the person I'm playing with. So, it only works if there are positive-sum aggregates and if each player cares about what the other does to their local environment.
2Maxime Riché
Likely: Path To Impact
ektimo25

Btw, I really appreciate it when people explain downvotes, and it would be great if there were some way to still allow unexplained downvotes while incentivizing adding explanations.  Maybe a way (attached to the post) for people to guess why other people downvoted?

5habryka
Yeah, I feel kind of excited about having some strong-downvote and strong-upvote UI which gives you one of a standard set of options for explaining your vote, or allows you to leave it unexplained, all anonymous.
ektimo30

Maybe because somebody didn't think your post qualified as a "Question"? I don't see any guidelines on what qualifies as a "question" versus a "post" -- and personally I wouldn't have downvoted because of this -- but your question seems a little long/opinionated.

2Linch
"This is more of a comment than a question" as they say
3niplav
Thanks, that makes sense.
2ektimo
Btw, I really appreciate it when people explain downvotes, and it would be great if there were some way to still allow unexplained downvotes while incentivizing adding explanations.  Maybe a way (attached to the post) for people to guess why other people downvoted?
ektimo10

Interesting and thanks for your response!

I didn't mean there would be multiple stages of voting. I meant the first stage is a random selection and the second stage is the randomly chosen people voting. This puts the full weight of responsibility on the chosen ones and they should take it seriously. Sounds great if they are given money too.

The thing I feel is missing, but that this community has a sense for, is that the bar for improving a decision when people have different opinions is far higher than people treat it. And if that's true, then the more concentrated the responsibility the better… like no more than 10 voters for anything?

ektimo20

The greater the number of voters, the less time it makes sense for an individual to spend researching the options. It seems a good first step would be to randomly reduce the number of voters to an amount that would maximize the overall quality of the decision. Any thoughts on this?

2Jameson Quinn
That's basically a form of multi-stage election with sortition in the first stage. Sortition is a pretty good idea, but unlikely to catch on for public politics, at least not as an every-election thing. One version of "sortition light" is to have a randomly-selected group of people who are paid and brought together before an election to discuss and vote, with the outcome of that vote not binding, but publicized. Among other things, the sortition outcome could be posted in every voting booth.
ektimo10

Interesting experiment. It reminds me of an experiment where subjects wore glasses that turned the world upside down (really, right side up for the projection on our eyes) and eventually they adjusted so the world looked upside down when they took the glasses off.

What do you think a "yes" or "no" in your experiment would mean?

Note, Dennett says in Quining Qualia:

On waking up and finding your visual world highly anomalous, you should exclaim "Egad! Something has happened! Either my qualia have been inverted or my memory-linked qualia-reactions have been inverted. I wonder which!"

2prase
I know about the experiment you mention, and it partly motivated my suggestion; I just subjectively find "yellowness" and "blueness" more qualious than "upness" or "leftness". In my experiment, "yes" would mean that there would be no dissonance between memories and perceptions, that I would just not feel that the trees are red or purple, but green, and find the world "normal". That I would, one day, cease to feel the need to get rid of the color-changing glasses, and my aesthetic preferences would remain the same as they were in the pre-glasses period. I think it's likely - based on the other subjects' experiences with upside-down glasses - that it would happen after a while, but the experience itself may be more interesting than the sole yes/no result, because it is undescribable. That's one problem with qualia: they are outside the realm of things which can be described. Describing qualia is like describing flavour of an unknown exotic fruit: no matter how much you try, other people wouldn't understand until they degust it themselves.
ektimo10

Part of it is that that person let someone else die (theoretically) to save his own life. You let someone die for the Latte.

Note: I drink the Latte (occasionally), but it's because I think I can be more effective on the big stuff and that not saving is less bad than killing (as we both agree).

1MrHen
He didn't let someone else die. He let a whole lot of someone elses die. I get the point of there being a difference between him and the latte, but I still think something weird is going on here.
ektimo20

The point I'm responding to is:

Why are you carrying the moral burden?

Because everyone is. I'm assuming you meant that comment as saying something like the burden is diluted since so many people touch the money, but I don't think that is valid.

2MrHen
Ah, okay. Thanks for clarifying. That phrase did not mean to imply anything about diluted burdens. It is there to ask the question, "Wait, if you're killing all of these people, isn't everyone killing all of these people?" Your response seems to be, "Yes, they are."

The followup question is: what if one of the people who receives aid is included in the swath of killers? Theoretically, the recipient could have given the aid to someone else and that person could have lived. Instead, the recipient was selfish and chose to live by killing another person. Actually, everyone who could have received the aid but didn't and died was killed by the one who did receive it. Something is going wrong here. What is it?
ektimo30

Imagine a first-world economy where nobody ever spends any money on aid. If you live in that hypothetical world, you (anybody) could take $200 that is floating around and prevent a death (which is not the same as killing somebody, but that's a different point). Our world is somewhat like that. I don't think things are as convenient as you're implying.

1MrHen
Actually, this is exactly the point. My comment is directly addressing an explanation for this claim: This claim was backed up with this paragraph: Your point is still very valid, which is why I went out of my way to say this:
ektimo20

Wondrous yes, but not miraculous

Star Trek, Richard Manning & Hans Beimler, Who Watches the Watchers? (reworded)

ektimo00

Some of my predictions are of the sort "the stock market will fall 50% tomorrow with 20% odds" (not a real prediction!). If it did happen I should get huge credit, but would it show up as negative credit since I predicted there was only a 20% chance it would happen? Is there some way to do this kind of prediction with PredictionBook?

I predict this comment will get less than 4 points by Oct. 19 with 75% odds.

0gwern
It seems to me like you're asking about 2 different issues: the first is not desiring to be penalized for making low-probability bets; but that should be handled already by low confidences - if you figure it at 1 in 5, then after only a few failed bets things should and ought to start looking bad for you, but if at 1 in thousands, each failed prediction ought to affect your score very little. Presumably PredictionBook is offering richer rewards for low-probability successes, just like a 5% share on a prediction market pays out (proportionately) much more than a 95% share would; on net you would do the same.

The second issue is that you seem to think that certain predictions are simply harder to predict better than chance, and that you should be rewarded for going out on a limb? (20% odds on a big market bet tomorrow is much more detailed than the default 1-in-thousands-chance-per-day prediction.) I don't know what the fair reward here is. If few people are making that prediction at all, then it should be easy to do better than them. In prediction markets, one expects that unpopular markets will be easier to arbitrage and beat - the thicker the market, the more efficient; standard economics. So in a sense, unpopular predictions are their own reward. But this doesn't prevent making obscure predictions ('will I remember to change my underwear tomorrow?') Nor would it seem to adequately cover 'big questions' like open scientific puzzles or predictions about technological development (think the union of Longbets & Intrade).

Maybe there could be a bonus for having predictions pay out with confidence levels higher than the average? This would attract well-calibrated people to predictions where others are not informed or are too pessimistic.
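(For intuition, a minimal sketch of the prediction-market analogy gwern mentions; illustrative only, not how PredictionBook actually scores things. A share priced at probability p pays $1 if the event happens, so rare-event shares pay a lot when right and cost little when wrong.)

# Hypothetical illustration of the market analogy (not PredictionBook's mechanics):
# one share priced at p dollars pays $1 if the event happens, nothing otherwise.

def share_profit(p, happened):
    """Profit on one share priced at probability p."""
    return (1.0 - p) if happened else -p

for p in (0.05, 0.20, 0.95):
    print(f"price ${p:.2f}: profit if right = {share_profit(p, True):+.2f}, "
          f"profit if wrong = {share_profit(p, False):+.2f}")

# A $0.05 share gains +0.95 when right and loses only 0.05 when wrong, while a
# $0.95 share gains just +0.05 when right: low-probability successes are richly
# rewarded, and each failed long shot costs very little.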
ektimo00

Me too. The interface for that was confusing enough that I ended up not submitting at all.

ektimo20

+1 for above.

As a separate question, what would you do if you lived in a world where Peter Unger was correct? And what if it was 1 penny instead of 1 dollar and giving the money wouldn't cause other problems? Would you never have a burger for lunch instead of rice since it would mean 100 children would die who could otherwise be saved?

ektimo00

The price of the salt pill itself is only a few pennies. The one dollar figure was meant to include overhead. That said, the Copenhagen report mentioned above ($64 per death averted) looks more credible. But during a particular crisis the number could be less.

-1PhilGoetz
Salt as rehydration therapy?!
9Douglas_Knight
In the footnote, Unger quotes UNICEF's 10 cents and makes up the 40 cents. UNICEF lied to him. Next time UNICEF tells you it can save a life for 10 cents, ask it what percentage of its $1 billion budget it's spending on this particular project.

According to the Copenhagen Consensus cited by SforSingularity, the goal is to provide about 100 pills per childhood and most children would have survived the diarrhea anyhow. (To get it as effective as $64/life, diarrhea has to be awfully fatal; more fatal than the article seems to say.) They put overhead at about the same as the cost of the pills, which I find hard to believe. But they're not making it up out of thin air: they're looking at actual clinics dispensing ORT and vitamin A. (Actually, they apply to zinc the overhead for vitamin A, which is distributed 2x/year with 80% penetration, while zinc is distributed with ORT as needed at clinics, with much less penetration. I don't know which is cheaper, but that's sloppy.)

CC says that only 1/3 of bouts of diarrhea are reached by ORT, but the death rate has dropped by 2/3. That's weird. My best guess is that multiple bouts cumulatively weaken the child, which suggests that increasing from 1/3 to 100% would have diminishing returns on diarrhea bouts, but might have hard-to-account benefits in general mortality. (Actually, my best guess is that they cherry-picked numbers, but the positive theory is also plausible.) ETA: there's a simple explanation, since the parents seek treatment at the clinics, which is that the parents can tell which bouts are bad. But I think my first two explanations play a role, too.

I'm very suspicious that all these numbers may be dramatic underestimates, ignoring costs like bribing the clinicians or dictators. (I haven't looked at them carefully, so if they do produce numbers based on actual start-to-finish interventions, please tell me.) It would be interesting to know how much it cost outsiders to lean on India's salt industry and get it to add iodine.
ektimo20

According to Peter Unger, it is more like one dollar:

First, a little bit about some of the horrors: During the next year, unless they're given oral rehydration therapy, several million children, in the poorest areas of the world, will die from - I kid you not - diarrhea. Indeed, according to the United States Committee for UNICEF, "diarrhea kills more children worldwide than any other cause." Next, a medium bit about some of the means: By sending in a modest sum to the U.S. Committee for UNICEF (or to CARE) and by earmarking the money for ORT,

... (read more)
4matt
One dollar is the approximate cost if the right treatment is in the right place at the right time. How much does it cost to get the right treatment to the right place at the right time?
ektimo30

Yvain, before abandoning that high standard, did you consider how much getting to the point of not having interest in the opposite sex would cost you, and how much it would harm your ability to achieve your rational goals? It sounds like you're confusing accepting your humanness as a factor of your current environment with trying to achieve your goals given the reality in which you exist (which includes your own psychology and current location).

ektimo00

It shouldn't default to "Today". It ends up looking like the main page. Is this a known bug?

ektimo00

Popular and Top aren't working well. I'm not sure what the difference is supposed to be, but neither of them had the articles I wanted to send to someone -- the ones with the most points.

1wmoore
'Top' now defaults to 'All time', instead of 'Today'
0Eliezer Yudkowsky
Top's timespan is defaulting to "Today" again, apparently. It should appear at right.
ektimo30

Loved the gut example.

3Liron
Thanks. I got the idea from Stephen Hawking's A Brief History of Time. I think Hawking was saying that anthropic reasoning makes it unsurprising that we would observe three spatial dimensions. If there were only two spatial dimensions, complex organisms couldn't evolve, because e.g. two-ended digestive tracts don't work. And if there were four or more spatial dimensions, the force of gravity would weaken too much with distance to hold stars together, or something like that.
ektimo20

If there were a message I could send back to my younger self, this would be it. Plus: if it's hard, don't try to make it easier; just keep in mind that it's important. (By younger self, I mean 7-34 years old.)

ektimo60
  • Name: Edwin Evans
  • Location: Silicon Valley, CA
  • Age: 35

I read the "Meaning of Life FAQ" by a previous version of Eliezer in 1999 when I was trying to write something similar, from a Pascal’s Wager angle (even a tiny possibility of objective value is what should determine your actions). I've been a financial supporter of the Organization That Can't Be Named and a huge fan of Eliezer's writings since that same time. After reading "Crisis of Faith" along with "Could Anything Be Right?" I finally gave up on objective value; the... (read more)