All of Yaakov T's Comments + Replies

Hi I am working on Rob Miles' Stampy project (https://aisafety.info/), which is creating a centralized resource for answering questions about AI safety and alignment. Would we be able to incorporate your list of frequently asked questions and answers into our system (perhaps with some modification)? I think they are really nice answers to some of the basic questions and would be useful for people curious about the topic to see.

@drocta @Cookiecarver We started writing up an answer to this question for Stampy. If you have any suggestions to make it better I would really appreciate it. Are there important factors we are leaving out? Something that sounds off? We would be happy for any feedback you have either here or on the document itself https://docs.google.com/document/d/1tbubYvI0CJ1M8ude-tEouI4mzEI5NOVrGvFlMboRUaw/edit#

2drocta
In the line that ends with "even if God would not allow complete extinction.", my impulse is to include " (or other forms of permanent doom)" before the period, but I suspect that this is due to my tendency to include excessive details/notes/etc. and probably best not to actually include in that sentence. (Like, for example, if there were no more adult humans, only billions of babies grown in artificial wombs (in a way staggered in time) and then kept in a state of chemically induced euphoria until the age of 1, and then killed, that technically wouldn't be human extinction, but, that scenario would still count as doom.) Regarding the part about "it is secular scientific-materialists who are doing the research which is a threat to my values": I think it is good that it discusses this! (and I hadn't thought about including it) But, I'm personally somewhat skeptical that CEV really works as a solution to this problem? Or at least, in the simpler ways of it being described. Like, I imagine there being a lot of path-dependence in how a culture's values would "progress" over time, and I see little reason why a sequence of changes of the form "opinion/values changing in response to an argument that seems to make sense" would be that unlikely to produce values that the initial values would deem horrifying? (or, which would seem horrifying to those in an alternate possible future that just happened to take a different branch in how their values evolved) [EDIT: at this point, I start going off on a tangent which is a fair bit less relevant to the question of improving Stampy's response, so, you might want to skip reading it, idk] My preferred solution is closer to, "we avoid applying large amounts of optimization pressure to most topics, instead applying it only to topics where there is near-unanimous agreement on what kinds of outcomes are better (such as, "humanity doesn't get wiped out by a big space rock", "it is better for people to not have terrible diseases")."
3Cookiecarver
Overall I agree with this. I give most of my money to global health organizations, but I do give some of my money to AGI safety too, because I do think it makes sense with a variety of worldviews. I gave some of my thoughts on the subject in this comment on the Effective Altruism Forum. To summarize: if there's a continuation of consciousness after death, then AGI killing lots of people is not as bad as it would otherwise be, and there might be some unknown aspects about the relationship between consciousness and the physical universe that might have an effect on the odds.
Yaakov T

But in that kind of situation, wouldn't those people also pick A over B for the same reason?

3Steven Byrnes
If the 5 lovely plates were literally identical in the two sets, I think (for many people) it might serve as a sort of "hint" that they should consider the clever course of action, the one that involves splitting up the B set (i.e. doing one thing with the 10 cracked & chipped plates, and doing a different thing with the 5 other B plates). That same clever splitting idea might also pop into some people's heads for the B-versus-C comparison, but I think it would be less obvious / salient, so fewer people would think of that, leaving at least a subset of people who would choose both B-over-A if that were the choice, and C-over-B if that were the choice.

I really liked this post since it took something I did intuitively and haphazardly and gave it a handle by providing the terms to start practicing it intentionally. This had at least two benefits:

First it allowed me to use this technique in a much wider set of circumstances, and to improve the voices that I already have. Identifying the phenomenon allowed it to move from a knack which showed up by luck, to a skill.

Second, it allowed me to communicate the experience more easily to others, and open the possibility for them to use it as well. Unlike many less... (read more)

Cool. So in principle we could just as well use the rationals from the standpoint of scientific inference. But we use the reals because it makes the math easier. Thank you.

5tailcalled
I should probably also mention that if you actually used rationals, the way you would do it (when running into the tough integrals) would be by just phrasing everything in terms of bounded approximations, which is basically just unrolling a construction of the real numbers. So you might as well just use real numbers.
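A minimal sketch of what "bounded approximations" looks like in practice (my own toy illustration, not from the thread): brackets sqrt(2), a value no rational equals, between two exact rationals using Python's `fractions` module. The answer only ever exists as an ever-tightening pair of bounds, which is essentially a hand-unrolled construction of the real number.

```python
from fractions import Fraction

def sqrt2_bounds(iterations=30):
    """Bracket sqrt(2) between two rationals by bisection.

    Every intermediate value is an exact rational. The invariant
    lo**2 < 2 < hi**2 holds at every step, and the gap hi - lo
    halves each iteration, but no rational ever equals sqrt(2).
    """
    lo, hi = Fraction(1), Fraction(2)  # 1**2 < 2 < 2**2
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_bounds()
# The pair (lo, hi) plays the role the real number sqrt(2) would play.
```

Working this way for every such quantity is exactly the bookkeeping that defining the reals does once and for all.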

Thank you.

I am a little confused. I was working with a definition of continuity mentioned here https://mathworld.wolfram.com/RationalNumber.html : "It is always possible to find another rational number between any two members of the set of rationals. Therefore, rather counterintuitively, the rational numbers are a continuous set, but at the same time countable." 

I understand that Rationals aren't complete, and my question is why this is important for scientific inference. In other words, are we using the reals only because it makes the math easier, or... (read more)

2JBlack
The word used for the property referred to in the Wolfram article really should be dense, not continuous. The set of rationals is dense, but incomplete and totally disconnected. The main property lacking is exactly what I stated earlier: for some perfectly reasonable questions, rationals only allow you to work with approximations that you can prove are always wrong. That's mathematically very undesirable. It's much better to have a theory in which you can prove that there exists a correct result, and then if you only care about rational approximations you can just find a nearby rational and accept the error.

You might find the book Is Water H2O? by Hasok Chang, 2012 useful. It was mentioned by Adam Shimi in this post https://www.lesswrong.com/posts/wi3upQibefMcFs5to/levels-of-pluralism

It also reminds me of Richard Feynman not wanting a position at the Institute for Advanced Study.

"I don't believe I can really do without teaching. The reason is, I have to have something so that when I don't have any ideas and I'm not getting anywhere I can say to myself, "At least I'm living; at least I'm doing something; I am making some contribution" -- it's just psychological.

When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous... (read more)

Do you suspect that black-box knowledge will be transferable between different models, or that the findings will be idiosyncratic to each system? 

3Buck
I suspect that some knowledge transfers. For example, I suspect that increasingly large LMs learn features of language roughly in order of their importance for predicting English, and so I'd expect that LMs that get similar language modeling losses usually know roughly the same features of English. (You could just run two LMs on the same text and see their logprobs on the correct next token for every token, and then make a scatter plot; presumably there will be a bunch of correlation, but you might notice patterns in the things that one LM did much better than the other.) And the methodology for playing with LMs probably transfers. But I generally have no idea here, and it seems really useful to know more about this.
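The comparison Buck describes can be sketched with toy models in place of real LMs (this is my own illustrative stand-in, not his code; a real version would use two trained language models' per-token logprobs on shared text):

```python
import math
from collections import Counter

def unigram_logprobs(train_text, eval_text):
    """Per-character logprobs from an add-one-smoothed unigram model.

    A toy stand-in for a language model: a real LM would give a
    logprob for the correct next token at every position, but the
    comparison methodology is the same.
    """
    counts = Counter(train_text)
    vocab = set(train_text) | set(eval_text)
    total = sum(counts.values()) + len(vocab)  # add-one smoothing
    return [math.log((counts[c] + 1) / total) for c in eval_text]

# Two "models" trained on different text, scored on the same text.
eval_text = "the cat sat on the mat"
lp_a = unigram_logprobs("the quick brown fox jumps over the lazy dog", eval_text)
lp_b = unigram_logprobs("she sells sea shells by the sea shore", eval_text)

# Scatter-plot stand-in: Pearson correlation of the two models'
# per-token logprobs. Points far off the diagonal are tokens one
# model handles much better than the other.
n = len(lp_a)
ma, mb = sum(lp_a) / n, sum(lp_b) / n
cov = sum((a - ma) * (b - mb) for a, b in zip(lp_a, lp_b))
corr = cov / math.sqrt(
    sum((a - ma) ** 2 for a in lp_a) * sum((b - mb) ** 2 for b in lp_b)
)
```

With real LMs you would plot `lp_a` against `lp_b` directly and look for the patterned outliers Buck mentions.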

According to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html) it is part of 'Aphorisms for Leo Baeck' (which I think is printed in 'Ideas and Opinions' but I don't have access to the book right now to check).

1Qwake
Thank you for finding the source (I read it in a book and was too lazy to fact-check it).
1somervta
Thanks! I didn't find it with my minute of googling; good to know it's legit.
1arundelo
Yep, it shows up in Google Books's copy of Ideas and Opinions.

Probably not, but why are you certain?

1fezziwig
Because certainty is higher status than uncertainty.
3linkhyrule5
More importantly, whether or not it happened is irrelevant to its use as a rationality quote...

That doesn't strike me as how psychology works, since in the real world people often repeatedly make the same mistakes. It also seems that even if your proposal would work, it doesn't address the original issue, since you are assuming that the person has a clear idea of his goals and only needs time to pursue them, whereas I think the bigger issue which aging encourages is reorienting one's values.

I appreciate your taking the time to address my question, but it seems to me that this conversation isn't really making progress so I will probably not respond to future comments on this thread. Thank you

I would have to look around to see if there is non-anecdotal evidence, but anecdotally ~40 is when I have heard people start mentioning it.

I don't think your proposal would work, since I don't think the time factor is the biggest issue. How often do people make big plans for summer vacation and not actually do them? They probably wouldn't say "I'll put it off for thirty years", but rather repeatedly say "I'll put it off till tomorrow".

0[anonymous]
And then they get a reminder that they only have a year left before they go back to work. And then they get a reminder that they only have six months left. Then three months. At that point, the time crunch is palpable. They have a concrete deadline, not a nebulous one. And if they miss it? Well, they've learned for next time. That's an option unavailable to a dead person.

Yes, and that was the meaning of my initial comment. And that is a concern in today's world, where we do have limited resources, so that not everyone would be able to make use of such a technology. The country that has it (or the subset of people that have it within one country) will be motivated to defend the resources necessary to use it. This isn't an argument against such research in a world without any scarcity, but that isn't our world.

I am still not sure whether it is likely to be more beneficial or not for heavily emotional and biased humans like us.

Thank you for all of your clarifications, I think I now understand how you are viewing morality.

Maybe, but on the other hand there is inequity aversion: http://en.wikipedia.org/wiki/Inequity_aversion

Also there is the possibility of fighting over the resources to use that technology (either within society or without). Do you disagree with the general idea that without greater rationality extreme longevity will not necessarily be beneficial or do you only disagree with the example?

0RowanE
That sounds more like something that would motivate the side that's not already long-lived. They'd already have plenty of motivation. I'm saying the country that has access to the tech but wants to restrict it isn't going to have the will to fight. Well, "not necessarily be beneficial" strictly means "is not certain to be beneficial", but connotationally means "is likely enough to prove not-beneficial that we shouldn't do it", so I ADBOC - it's conceivable that it could go wrong, but I think it's likely enough to have a beneficial enough outcome that we should do it anyway.

Why don't you view the consequentialist imperative to always seek maximum utility as a deontological rule? If it isn't deontological where does it come from?

0Alicorn
To me, it looks like consequentialists care exclusively about prudence, which I also care about, and not at all about morality, which I also care about. It looks to me like the thing consequentialists call morality just is prudence and comes from the same places prudence comes from - wanting things, appreciating the nature of cause and effect, etc.
2Said Achmiz
The imperative to maximize utility is utilitarian, not necessarily consequentialist. I know I keep harping on this point, but it's an important distinction. Edit: And even more specifically, it's total utilitarian.

"You keep using the words "we" and "our", but "we" don't have lifespans; individual humans do." Of course, but "we" is common shorthand for decisions which are made at the level of society, even though that is a collection of individual decisions (e.g. should we build a bridge, or should we legalize marijuana). Do you think that using standard English expressions is problematic? (I agree that both the question of benefit for the self and benefit for others is important and think the issue of cognitive biase... (read more)

Thank you, but that post doesn't seem to answer my question, since it doesn't take up how death interplays with our cognitive biases. I agree that if we were perfectly rational beings immortality would be great; however, I don't see how that implies that, considering our current state, the choice to live forever (or a really long time) would be in our best interest.

Similarly, I don't see how that argument indicates that we should develop longevity technologies before we solve the problem of human irrationality and evil. For example, would having a technology to live 150 years cause more benefit, or would it cause wars over who gets to use the technology?

0Said Achmiz
You keep using the words "we" and "our", but "we" don't have lifespans; individual humans do. So the relevant questions, it seems to me, are: is removing the current cap on lifespan in the interest of any given individual? And: is removing the current cap on lifespan, for all individuals who wish it removed, in the interests of other individuals in their (family, country, society, culture, world)? Those are different questions. Likewise, the choice to make immortality available to anyone who wants it, and the choice to actually continue living, are two different choices. (Actually, the latter is an infinite sequence[1] of choices.) No one is necessarily claiming that we should. Like I say in my top-level comment, this is a perfectly valid question, one which we would do well to consider in the process of solving the engineering challenge that is human lifespan. [1] Maybe. Someone with a better-exercised grasp of calculus correct me if I'm wrong — if I'm potentially making the choice continuously at all times, can it still be represented as an infinite sequence?
0RowanE
I don't think anyone's willing to fight a war just to prevent another country's life expectancy from increasing.

We are all arrogant to some degree or another; knowledge of our mortality helps keep it in check. What would the world look like with an unrestrained god complex?

Taking 10 years off after 30 doesn't seem to solve the psychological issue. In today's world, as we get older we start noticing the weakness of our bodies, which pushes us to act, since "if not now, when?"

Unless we solve the various cognitive biases we suffer from, extreme longevity seems like a mixed blessing at best, and it seems to me that it would cause more problem... (read more)

0[anonymous]
Do we have any evidence regarding this? I know there are parables serving to emphasize humility due to mortality, but I have no information on their effectiveness. It seems like it needs some immediacy to be effective, which means it only takes place when you start feeling old -- I'm guessing this will be forties to sixties for most Westerners. A well-funded, extended retirement is a perfect opportunity to do all the things you haven't had time to do while working. The threat of having to work for another few decades should be a reasonable proxy for the fear of death. Specifically, people don't tell themselves they'll put things off for thirty years until the next retirement phase; they tell themselves they'll do it eventually. Thirty years is subjectively a very long time, and people won't be inclined to happily delay for that long. are not included in anything I said here. My suggestion would require large societal changes and provides no mechanism to enact them, but it accounts for normal people, not rational agents.

Thank you, I think I understand this now.

To make sure I understand you correctly, are these correct conclusions from what you have said? a. It is permitted (i.e. ethical) to lie to yourself (though probably not prudent) b. It is permitted (i.e. ethical) to act in a way which will force you to tell a lie tomorrow c. It is forbidden (i.e. unethical) to lie now to avoid lying tomorrow (no matter how many times or how significant the lie in the future) d. The differences between the systems will only express themselves in unusual corner cases, but the underly... (read more)

0Alicorn
A, B, and C all look correct as stated, presuming situations really did meet the weird criteria for B and C. I think differences between consequentialism and deontology come up sometimes in regular situations, but less often when humans are running them, since human architecture will drag us all towards a fuzzy intuitionist middle. I don't think I understand the last paragraph. Can you rephrase?

To those who think that death should be a choice: what about the benefits of knowing that we are mortal, which death by choice doesn't allow for? E.g., as a counterforce to arrogance, and as a force to act now and, as we age, to start reevaluating our priorities. In other words, the benefits while we live of knowing that we are mortal may outweigh the benefit of immortality. I suspect these concerns have been dealt with on this site, so if they have, feel free to link me to an appropriate post instead of writing a new response.

1[anonymous]
Does that even work? I'm thinking that an arrogant person will generally shrug off the mortality thing and go on with being arrogant, barring some near-death experience. Or at least "this decade" rather than "some day". But death seems like a steep cost for this benefit. Is there another way to get it? Like, if we've got immortal people anyway, we're going to want to have a retirement equivalent, but it won't be a matter of working forty years and taking the rest of your life off. What if we had a system whereby people took ten years off work after every thirty or so, with a guaranteed salary during that time that's more than sufficient for living? Then you would have a specific timeframe in which you are expected to relax, take long vacations, knock off a life goal or two, that sort of thing. That requires reworking social security / state pensions and probably requires a lot more wealth in general to enact. But we don't currently have a cure for death, so there's time to work out how to deal with a lack of death and enact those policies.
2Said Achmiz
Some commentary on the matter is here: How to Seem (and Be) Deep.

Yes, I read that post. (Thank you for putting in all this time clarifying your view.)

I don't think you understood my question, since "The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie." is not viewing 'not lying' as a terminal value but rather as an instrumental value. A terminal value would mean that lying is bad not because of what it will lead to (as you explain in that post). But if that is the case, must I act in a situation so as not to be forced to lie? For example, let's say... (read more)

3Alicorn
That's a very long paragraph, I'm going to do my best but some things may have been lost in the wall of text. I understand the difference between terminal and instrumental values, but your conclusion doesn't follow from this distinction. You can have multiple terminal values. If you terminally value both not-lying and also (to take a silly example) chocolate cake, you will lie to get a large amount of chocolate cake (where the value of "large" is defined somewhere in your utility function). Even if your only terminal value is not-lying, you might find yourself in an odd corner case where you can lie once and thereby avoid lying many times elsewhere. Or if you also value other people not lying, you could lie once to prevent many other people from lying. AAAAAAAAAAAH It is prudent to be prudent in achieving your deontological obligations. Putting "should" in that sentence flirts with equivocation. I think it's possible to act completely morally acceptably according to my system while having whopping defects of character that would make any virtue ethicist blush. It might be unlikely, but it's not impossible.

Why isn't saying "I don't know" a reasonable approach to the issue when one's knowledge is vague enough to be useless (and could only be made useful in a bizarre thought experiment)? Just because one could theoretically bet on something doesn't mean one is in a position to bet. (For example: I don't know how to cure a disease, so I will go to the doctor; I don't know what that person's name is (even though I know it isn't "Xpchtl Vaaaaaarax"), so I should ask someone; I don't know how life began; I don't know how many apples are on the tree outside (even though I know it isn't 100 million).)

Let's take truth-telling as an example. What is the difference between saying that there is an obligation to tell the truth, or honesty being a virtue, or telling the truth being a terminal value which we must maximize in a consequentialist-type equation? Won't the different frameworks be mutually supportive, since obligation will create a terminal value, virtue ethics will show how to incorporate that into your personality, and consequentialism will say that we must be prudent in attaining it? Similarly, prudence is a virtue which we must be consequentialis... (read more)

1Alicorn
The first thing says you must not lie. The second thing says you must not lie because it signifies or causes defects in your character. The third thing says you must not lie unless there is a compensatory amount of something else encouraging you to lie. The systems really don't fuse this prettily unless you badly misunderstand at least two of them, I'm afraid. (They can cooperate at different levels and human agents can switch around between implementing each of them, but on a theoretical level I don't think this works.) Absolutely not. Did you read Deontology for Consequentialists? I still don't know what you mean by "emerge from the self", but if I understand the class of thing you're pointing out with the suicide example, I don't think I have any of those.

"No. There's morality, and then there's all the many things that are not morality."

Is this only a linguistic argument about what to call morality? With, e.g., virtue ethics claiming that all areas of life are part of morality, since ethics is about human excellence, and your claim that ethics only has to do with obligations and rights? Is there a reason you prefer to limit the domain of morality? Is there a concept you think gets lost when all of life is included in ethics (in virtue ethics or utilitarianism)?

Also, could you clarify the idea of ... (read more)

2Alicorn
You could re-name everything, but if you renamed my deontological rules "fleeb", I would go on considering fleeb to be ontologically distinct in important ways from things that are not fleeb. I'm pretty sure it's not just linguistic. Because there's already a perfectly good vocabulary for the ontologically distinct non-fleeb things that people are motivated to act towards - "prudence", "axiology". Unassailable priority. People start looking at very large numbers and nodding to themselves and deciding that these very large numbers mean that if they take a thought experiment as a given they have to commit atrocities. Yes; I have a secondary rule which for lack of better terminology I call "the principle of needless destruction". It states that you shouldn't go around wrecking stuff for no reason or insufficient reason, with the exact thresholds as yet undefined. "Humanity" is the wrong word; I apply my ethics across the board to all persons regardless of species. I'm not sure I understand the question even if I substitute "personhood".

Is your wife still teaching your kids religion? How do you work out conflicts with your wife over religious issues (I assume she insists on a kosher kitchen, wants the kids to learn Jewish values, etc.)?

Which of the 3 would you recommend? Does someone know why MIRI recommends Chang and Keisler if it is somewhat outdated?

0Gvaerg
Marker is the closest to the state of the art. Hodges is a bit verbose and for beginners. Poizat is a little idiosyncratic (just look at the Introduction!). I am also interested in the basis of MIRI's recommendation. Perhaps they are not too connected to actual mathematicians studying it, as model theory is pretty much a fringe topic.

For ordinary investors, won't there still be an issue of buying these funds at the right time, so as not to buy when the market is unusually high?

4memoridem
You can mitigate the problem by making the investment gradually.
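A toy sketch of why gradual investment helps (my own illustration with made-up prices, not financial advice): spreading a fixed total across several purchase dates means you buy more shares when the price is low, so your average cost per share is the harmonic mean of the prices seen, which is never more than their arithmetic mean.

```python
# Hypothetical monthly prices of a fund; the figures are invented
# purely to illustrate dollar-cost averaging.
prices = [100, 80, 125, 90, 110]
budget_per_month = 1000  # fixed amount invested each month

shares = sum(budget_per_month / p for p in prices)
total_spent = budget_per_month * len(prices)

avg_cost = total_spent / shares        # harmonic mean of the prices
arith_mean = sum(prices) / len(prices) # 101.0 for these numbers

# avg_cost comes out below arith_mean: the fixed budget automatically
# buys more shares in cheap months and fewer in expensive ones.
```

This doesn't beat buying everything at the lowest price, but it removes the need to guess when that is.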

Thank you, I basically use this method now and am glad to have it corroborated by an expert.

What method of backing up data do you recommend for a computer with windows? How often do you recommend doing it?

1Said Achmiz
I don't use Windows nearly as much, but one idea (depending on use case, as zedzed said) is cloud storage. Dropbox is free up to 2 GB. Paid services exist. Synchronization is regular and automatic; some services keep some file history, as well.
5zedzed
It depends on your use case. My "life work" consists exclusively of things I've typed. These types of files tend to be small, and lend themselves to being written in Google Documents. If I use Emacs, then the files are tiny and I back them up to Google Drive in about 2 seconds. This costs me all of $0 and is very easy. But maybe your life work also includes a bunch of pictures documenting your experiences. These, and other large files, will quickly exceed your 15 gigs of free storage. Then you're probably looking at an external hard drive or cloud storage. The better fit will depend on things like your internet connection, which USB standard your computer has, your tech level, how much stuff you need backed up, whether you travel a lot, whether you'll lose or damage the external hard drive, etc. And then just use Yvain's method to find the best one. Of course, there's more elaborate solutions for power users, but by the time you're high enough level for them, you're a power user and don't need to ask.

A rubric is a tool for assessment. It identifies criteria for evaluating work by identifying the categories of achievement and the measurements of levels of achievement in each category. This seems like a basic summary with examples: http://learnweb.harvard.edu/alps/thinking/docs/rubricar.htm

0Emile
Thanks, that's useful, I didn't know there was a word for that!
Yaakov T

Train your tongue to say "I don't know", lest you be brought to falsehood -Babylonian Talmud

Are you familiar with Doug Lemov's "Teach Like a Champion"? If so, how does it compare with "How Learning Works"?

3whales
I'm just now skimming it. It looks orthogonal to HLW, which talks about models of learning and general strategies. Lemov seems to focus more on the mechanics of elementary- and middle-school classroom management. He apparently found a number of exceptionally effective teachers, observed them closely, and extracted common activities and techniques. I'm not in a position to evaluate that sort of thing, but tanagrabeast's take sounds reasonable.
8tanagrabeast
I skimmed through Teach Like a Champion when it was first released, largely on the strength of the New York Times article about it. My take on it closely echoes this fair and critical Amazon review. In summary, Champion can show new teachers a lot of low-hanging fruit -- valuable techniques veterans like myself already use but remember figuring out the hard way. In particular, Champion shines a light on hard-to-explain non-verbal concepts that good teachers don't always realize they've mastered and wouldn't think to tell newbies about. I expect that a new teacher will get more immediate mileage out of Champion than from How Learning Works. Veteran teachers, though, are more likely to be unimpressed and notice some real blind spots in Champion. For example, the linked review's discussion of SSR (sustained silent reading) vs. "popcorn" reading is, in my own experience, spot on. I will make a note to revisit this comparison when I have read HLW.

My point is that we can't help but think of ourselves as having free will, whatever the ontological reality of free will actually is.

It seems impossible to choose whether to think of ourselves as having free will, unless we have already implicitly assumed that we have free will. More generally the entire pursuit of acting more rational is built on the implicit premise that we have the ability to choose how to act and what to believe.

0ChristianKl
That just means that you lack the ability to think in non-free-will mental frameworks. I don't think a Calvinist who thinks that God is in control of everything has a problem with the word choose and thinking of himself as making choices. Not completely. It's built on the idea that we have an influence on what we choose and what we believe. If you take a belief like "There's no ego depletion.", it's not straightforward to acquire that belief. All the studies that tell me that having that belief means not having ego depletion are not enough to help me acquire that belief while I'm suffering from ego depletion in the present.
0TheOtherDave
For my own part, I didn't find it too difficult to reconceptualize my understanding of what a "choice" was when dealing with the knowledge that I will predictably fail to choose certain options which nevertheless feel like choices I'm free to make. The experience of choosing to do something is part of the experience of doing certain things, just like the experience of feeling like I could have chosen something different is. These feelings have no particular relationship to the rest of the world, any more than my feeling scared of something necessarily means that thing is frightening in any objective sense.

Nice post. However, it might be better to characterize the first two classes as beliefs which are true because of the belief, rather than as false beliefs (which is important so as not to unconsciously weaken our attachment to truth). For example, in your case of believing that water will help you feel better, the reason you believe it is that it is actually true by virtue of the belief; similarly, when the would-be rock star enjoys making music for its own sake, the belief that making music is fun is now true.

typo: There seems to be an extra 'Y' in column 4 of the first image (it should be CYYY instead of CYYYY)

6Rob Bensinger
Thanks, jazmt. The Y is no more.

Thanks, I understood the mathematical point but was wondering if there is any practical significance since it seems in the real world that we cannot make such an assumption, and that in the real world we should trust the results of the two researchers differently (since the one researcher likely published no matter what, whereas the second probably only published the experiments which came out favorably (even if he didn't publish false information)). What is the practical import of this idea? In the real world with all of people's biases shouldn't we dist... (read more)

Does the publication of the result tell you the same thing, since the fact that it was published is a result of the plans?

5yrudoy
I think in this case, we are assuming total and honest reporting of results (including publication); otherwise, we would be back to the story of filtered evidence. Therefore, the publication is not a result of the plans - it was going to happen in either case.

By 'their decision' do you mean the decision to cooperate or defect? If so you would predict people would not offer to donate if there was no choice involved (e.g. all participants in the survey automatically receive one entry)?

It does not seem like this is what people are describing e.g. http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a3xl http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a2zz and http://lesswrong.com/lw/j4y/2013_less_wrong_censussurvey/a36h

Yaakov T

I noticed a bunch of people saying that they will donate the money if they win. I find that a surprisingly irrational sentiment for LessWrong. Unless I am missing something, it seems people are ignoring the principle of the fungibility of money. It seems like the more rational thing to do would be to commit to donating $60 whether or not you win. (If your current wealth level is a factor in your decision, such that you will only donate with the higher wealth level with the prize, then this can be modified to donating whether or not you win if you receive ... (read more)

4Jiro
People intend to donate the money when they win because they don't want the prospect of gaining money to influence their decision. Donating it is just an alternative to burning it. (It does also follow that those people who donate it for this reason must find the utility of such a donation to be very small.)
Yaakov T

I took the survey.

Thank you for putting this together. Some of the questions were unclear to me. For example: does living with family mean my parents, or my spouse and children? (I guessed the former, but was unsure.) For the politics question, there should be an option for not identifying with any label (or, if that will lead to everyone not wanting to be labeled, an option for disinterest in politics could be an alternative). Should an atheist who practices a religion (e.g. Buddhism) skip the question on religion? P(aliens), this question leaves out the time ... (read more)

The original is superior in a number of ways (to any translation I have seen, but I suspect that it is superior to all translations, since much is of necessity lost in translation generally). But is there a specific aspect you are wondering about, so that I could address your question more particularly?

Thanks, but I prefer reading in the original Hebrew to reading in translation.

4somervta
Ah, excellent. I've always wanted to ask someone who reads Hebrew - is the writing in the Bible of lesser or greater quality in the original (compared to the English - I know translations vary, but is there a distinct difference, or is the Hebrew within the range?)

It seems like Proverbs has a lot of important content for gaining rationality; perhaps it should be added to our reading lists.

1AlexanderD
I highly recommend Robert Alter's translation in "The Wisdom Books," if you're interested in reading it.
gwern

The wisdom books of the Bible are pretty unusual compared to the rest of the Bible, because they're an intrusion of some of the best surviving wisdom literature. As such, they're my favorite parts of the Bible, and I've found them well worth reading (in small doses, a little bit at a time, so I'm not overwhelmed).
