by [anonymous]
1 min read · 16th Apr 2012 · 123 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 


As far as I can tell, some of the most recent conversations with the most uncivil remarks are conversations about whether AI risk is a serious problem and, if so, what should be done about it. The thread on Luke's discussion with Pei Wang seems to be the most recent example. This also appears to be more common in threads that discuss mainstream attitudes about AI risk and where they disagree with common LW opinion. Given that, I'm becoming worried that AI risk estimates may be becoming a tribalized belief category. Should we worry that AI risk is becoming or has become a mindkiller?

8Wei Dai12y
It sure seems like something is going on. Another example is Dmytry being pretty civil before he started debating LWers on AI risk.
2Dorikka12y
I want to upvote this again.
-2gRR12y
Isn't it something to celebrate? If the idea of AI risks is to be regarded seriously, it can't not become political.
4JoshuaZ12y
The politicization of existential risk is not something to be happy about. Existential risk has higher stakes than anything else (arguably the highest), so we need to be more careful about failures of rationality there, not pleased that mind-killing has infected this topic as well.
0gRR12y
'Politicalization' seems to be an unavoidable stage. And it's much better than total unconcern.
2Dorikka12y
False dichotomy?
-2gRR12y
Not quite dichotomy. I'm thinking in terms of an evolution of a painful topic in the noosphere. Some kind of 'five stages of grief' - denial, anger, bargaining, etc :)

Fun with Umeshisms:

  • If you've never accidentally built a UFAI, you're spending too much time debugging.
  • If you've never intentionally built a UFAI, you're thinking too much about moral philosophy.
  • If you've never joined a cult, you're not being credulous enough.
  • If you've never been wrongly accused of being a cult leader, you're spending too much money on PR.
  • If you've never unintentionally created a real personality cult, you haven't worked enough on your charisma.
  • If a speculative idea never caused you to have a nightmare, you're not taking ideas seriously enough.
  • If you've never been banned from some community, you're being too nice.
1Grognor12y
* If you can't think of anyone you've treated badly, you're not having enough social interaction. (I can't think of any good ones myself.)
-6wedrifid12y
0Multiheaded12y
I can't agree enough. In fact, I might begin using this as a subtle filter for how impressed I should be with my close acquaintances' abstract thinking.
0Will_Newsome12y
Heh, good list. For people who can't build AIs but have to deal with the AIs built by other nincompoops: * If you've never confused God with a demon, you're being too cautious with your paracletics. * If you've never decided to just worship the Antichrist, you're thinking too much about moral philosophy.
0thomblake12y
I'm still not quite sure I get the form of these things. Does this work: If you've never shot yourself in the head, you're avoiding Russian Roulette too much.
1Wei Dai12y
I think the idea is that the second part is something you'd normally be expected to do (debug your FAI, think about moral philosophy, etc.), but might do too much of. So your "avoiding Russian Roulette too much" doesn't quite work. Here's the prototypical "fun Umeshism": if you've never missed a flight, you're spending too much time in airports.

Reading this argument...

For most of human history, physicians were incapable of effectively treating serious diseases. Indeed, their efforts frequently resulted in their unfortunate patients dying and suffering at far higher rates than they would otherwise have endured. Physicians only gained the ability to have any worthwhile impact on the course of major illnesses in the 1940s, largely due to technological improvements secondary to WWI and WWII, which included the development of new drugs (sulfonamides, antibiotics, the first anti-cancer drugs, the first effective anti-hypertensive drugs, better vaccines, etc.).

Note that physicians have had almost zero input in developing all of the drugs and technology which now allow them to be somewhat effective in practising medicine.

Since a significant number of people who get into medical school have always been money- and power-hungry (but lesser and timid) CONmen, they took full advantage of the situation to market themselves as mini-gods who required tons of money to exert their magic on their patients. Make no mistake... few people who enter that profession care about anything beyond enriching themselves and bossing around sick or dying people.

When

…
2Viliam_Bur12y
Openly questioning the motives of people who have power over you is kinda dangerous. Even if the doctors had a higher probability of killing you than healing you, you wouldn't increase your chances of survival by making them angry at you. Instead you should treat them respectfully... and avoid them whenever possible.
1JoshuaZ12y
We don't really have that strong a taboo. Look at how the alt-med groups function quite successfully. Incidentally, the article in question doesn't address some things that doctors were successful at before the rise of modern pharmaceuticals. Many types of surgery helped save lives: amputations for gangrene, removal of inflamed appendices, and Caesarean sections are all examples that substantially predate the 1940s.

Q: How many LessWrongians does it take to change a lightbulb?

A1: In some Everett branches the lightbulb is still undamaged. If you kill yourself in all remaining branches, the problem with the lightbulb is solved. (While you are at it, why not also buy a lottery ticket, so you don't have to worry about broken lightbulbs anymore?)

A2: Changing a lightbulb would bring us closer to Singularity, and until we solve the problem of Friendly AI, this would be a dangerous thing to do.

A3: One LessWrongian writes an article about why it is rational to change the lightbulb, fifty LessWrongians upvote the article, forty LessWrongians downvote it. The discussion soon has over 200 comments, most of them about when it is correct to upvote or downvote something and what we could do to avoid karma assassinations. Then a new chapter of HP:MoR is published, and the whole lightbulb topic is quickly forgotten.

A4: Eliezer already wrote an article about lightbulbs in 2007. What, you mean to really change a lightbulb? Please stop saying that; it sounds a bit cultish to me.

Also see here.

9gRR12y

A question for rationalist parents (and anyone else who has ideas): are there good child-accessible rational arguments for why one should do the right thing?

Me: Please do X.
Child: No.
Me: You know it's the right thing to do.
Child: Yes.
Me: Well?
Child: I don't want to.
Me: ???

7sixes_and_sevens12y
I am not a parent, and probably shouldn't be. While I think this is a sound argument, it may very well be a terrible thing to tell a child. When writing this, it occurred to me that I have no idea how many of the concepts in it would carry over to an eight-year-old's level of understanding. Would a 21st-century first-world child even have a sense of their life being made harder?
0gRR12y
Thanks, it's an interesting thought. Yes, I think a child may understand the difference between popularity and real respect, if examples in real life or fiction could be found.
0siodine12y
How old are they? I'm not at all certain about this, but I think it's not until around age 3-4 that children develop theory of mind and so you won't have much luck explaining right and wrong in terms of other people's feelings.
0gRR12y
Eight.
0Incorrect12y
You could explain to them that it doesn't make much sense to call it the right thing to do if they don't want/value it.
4tut12y
I don't think that many small children talk about the right thing to do in this sense. More likely, what the kid means by 'the right thing to do' is 'the right thing according to a value system which is not really my own, but which I won't openly dispute because my parents and other people I value/fear seem to want me to agree with it'. Except they could not possibly articulate that because they don't really have the concepts of 'value system' or 'right thing to do'. Their concept of 'the right thing to do' is a model of another person's values created without the meta level of understanding what a value system is or that there can be more than one value system.
0thomblake12y
I hope you find a good answer for this one. In my experience, either children understand what "it's the right thing to do" means and so will do what's right, or parents say "You have to do what I say!", or parents lie to their children / tell them religious stories.
2wedrifid12y
Usually all three.
0gRR12y
Well, it appears the particular child does understand what it means, in the sense of the morality computation being internalized, so actions can reliably be classified as "right" or "wrong". But unfortunately, "so will do what's right" does not follow. Obviously, I don't want to simply demand obedience or lie. Emotional pressure works - me being disappointed, etc - but it doesn't feel right to me, and so I'm not good at it.
-2thomblake12y
Sorry, I was making Socrates's mistake there, I think.

Old discussion that I'd like to see revived if for no other reason than I think the subject matter is fantastic: Taking Occam Seriously.

I wouldn't have seen it if I hadn't tried to go through all of LW's archive, so I hope someone sees it for the first time by virtue of reading the open threads.

[Meta] I hope it's okay that I posted the new open thread. Don't know what the procedure is, if any. I wanted to post something, but saw the last open thread was out of date. Please moderate/correct as appropriate.[/Meta]

you broke the code

Edit: Not really, anyone can make the open threads. But I've been doing it for a little while and I think it's a little strange that someone else did it when I'm only two hours late. C'est la vie.

3[anonymous]12y
Eeep! I throw myself upon the mercy of the Prophets, including but not limited to Yudkowsky, Jaynes, and OpenThreadGuy. Please don't excommunicate me. Or worse, karma-assassinate me. Edit: To be clear, I didn't mean it as a judgment against you. It wasn't "Oh, OpenThreadGuy is late. Let me do his job for him." It was more of a "Oh, the last open thread just expired and I wanted to post something. Let me make the new one, I don't think anyone would mind." If that's a little weird, then I happily accept the label ;)

How do you pronounce Eliezer's name? I've heard it pronounced a number of ways. Originally, I thought it was pronounced El-eye-zer. Then I watched a video where I think it was pronounced El-ee-ay-zer. And today I watched another where Robin Hanson pronounced it as El-ee-eye-zer. So which is it? I doubt he really cares that much, but I'd like to know I'm not pronouncing it wrong when I tell people about him.

2arundelo12y
"el-ee-EHZ-er". (In IPA: /ɛliˈɛzər/.) (I hear it as a secondary stress on the first syllable and a primary stress on the third, but it may be the other way around. I have never seen him describe his name's pronunciation in print.) Edit: I'm surprised that Robin Hanson doesn't pronounce it like Eliezer does. Do you remember what video that was?
2RobertLumley12y
It's this video. It's an hour and a half, and I'm trying to find where I heard him pronounce it. If I find it I'll recomment (so you get a notification). It's possible that I just misheard him the first time, and the next times were just confirmation bias. Edit: It's at 19:41.
0arundelo12y
Sounds like /aɪ/ rather than /iː/ to me too, though it goes by pretty fast.
0RobertLumley12y
See edit.
4smk12y

Are most people here transhumanists? If you are, do you have some specific transhumanist wishes? What about transhumanist possibilities that you want to avoid?

5wedrifid12y
Probably. If I don't want to die and would upgrade myself beyond my current physical limitations that makes me transhumanist, right? Volcano lair with catgirls. (I'll use my spare time from there to work out where I want to go next. Right now I mostly want the 'not dying' part with the ongoing potential for improvement.)

(this is not too political, I hope: just general talk about social attitudes)

I think I don't understand much of social-conservative sentiment - not the policy suggestions, but the general thrust of it.
For example, people who exhibit it often use the term "permissive" as something of a pejorative for several of today's societies. I don't get it: "permissive" towards what - stuff like drug use? But they don't typically use any qualifiers; they just seemingly say that not erring on the side of banning any slightly controversial thing is automa…

5hesperidia12y
Social conservatism has a very healthy respect for the concept of a slippery slope, which in and of itself is just fine from an epistemic point of view. The idea that social issues themselves are one unified slippery slope, though, is crucial to US-like social conservatism. The idea of social issues being one unified slippery slope may or may not be true. (Unlikely. p<0.1, I think.) It is definitely informed by contemporary religious organizations, though.
2Multiheaded12y
I understand and mostly agree; e.g. in the last infanticide thread, I went so far as to suggest that I'd bite the other bullet and consider banning abortion when technology blurs the line between pregnancy and birth even more. Yeah, that would bring real disutility to people, and banning post-birth infanticide in itself is already bringing some, but I hate the idea of putting up with infanticide enough that I'd rather have it this way. [1] Here I'm definitely with mainstream conservatives, as their objection to it is stronger and more principled than that of mainstream liberalism.

On the other hand, I think that e.g. any possible slippery-slope threats that could result from recognizing homosexual civil unions as completely equal to "traditional" ones [2] are vastly outweighed by the huge social good it'd create, both as direct utility to homosexual couples and as an improvement in the moral climate and the destruction of obsolete in-group boundaries within larger society. It is in such cases that social conservative positions appear to go from defensible to utter crap in my eyes. But, as I already said, I'm inconsistent and biased by nature, and I'm OK with that.

[1] (save for excruciating and incurable afflictions - in which cases, as rumour has it, medical authorities already seem reluctant to investigate, accepting that euthanasia is the least evil solution)

[2] Religious and other private organizations should be free to restrict their "Traditional/Christian/Straight/whatever-they-want Marriage" contracts and rites to any group they want, IMO; it's just that we need a clear distinction between that and a government-recognized legal union. I mean, such privately sanctified marriage shouldn't have any legal significance; people would still need a civil union in addition to it if they want to enjoy the legal benefits. Also, ending the meaningless connection between such a civil union and sex, child-raising, etc. would bring interesting opportunities, such as allowing an
3Viliam_Bur12y
When your opinion depends on long-term consequences of X or Y, different models of the world can lead to very different models of the future. One possible chain of thought: by recognizing homosexual civil unions as equal to traditional ones, at the cost of a small change in definition, people will become equal. Another possible chain of thought: any successful redefinition of marriage will lead to more and more redefinitions, until the whole topic becomes completely arbitrary, meaning nothing more than "two (or more) people at a given moment decide to call themselves partners, but can change their minds at any moment". As a consequence there will be fewer stable marriages. As a consequence, children will on average grow up in worse conditions, and that will have a massive negative impact on the following generations. (Your solution of separating religious marriage vs. civil union would only limit this impact to a part of the population; but still, a negative impact on their children, causing e.g. a higher crime rate in the next generation, would influence everyone's quality of life.)
2Multiheaded12y
I've read that particular argument a million times, and haven't been impressed. Even Konkvistador, of all people, has objected to that. (I don't wish to imply that he's somehow hidebound or inimical to progress; I just find him very reasonably cautious in all matters - not to sound like a sycophant.) I remember him trying to convince Alicorn that formal matrimonial arrangements are more or less pointless because people can have stable and loving relationships outside of a formal marriage. Indeed, I'm totally unconvinced that most people are so irresponsible that they need formal shackles to provide a healthy and stable environment for their children, and are incapable of having a stable relationship of their own volition and without oversight. And if they are that irresponsible, they probably shouldn't have children yet.
4TheOtherDave12y
It seems worth saying out loud that VB was not making the argument you aren't impressed by; he was referring to it, as are you. Not that you said he was making it, but it's a volatile enough subject that it's easy for people to infer conflicts.

With respect to actual content: it's often useful to make commitments to behave in certain ways. People do this with respect to fitness goals, employment, cleaning their houses, finishing personal projects... all kinds of things. Sometimes it's even useful for me to formalize those commitments and agree to suffer penalties if I violate them. This not only signals my commitment to others in ways that are costly to fake, but it creates different incentive structures for myself. For example, if I want to do twenty pushups three times a week but I don't seem able to motivate myself to actually do them, I might agree with a friend that once a week they will ask me if I've done twenty pushups three times that week, and if I haven't I will give them $20. That might give me more motivation to do those pushups. (Or it might not... it depends on me, and how much I value $20, and how much I negatively value lying to my friend, and all kinds of other stuff.)

Marriage seems like precisely this sort of formal precommitment to me, and seems potentially valuable on that basis. I disagree that being the sort of person whose behavior is changed by such a formal precommitment means I'm too irresponsible to have children. Indeed, knowing what techniques serve to motivate me, and being willing to use those techniques to achieve my desired goals, seems pretty responsible to me. I disagree still more with the connotative implications of words like "shackles," "incapable of having a stable relationship," or "their own volition."
0Multiheaded12y
But I"m only making these negative connotations because it feels to me that my equality and the legality of my hypothetical union with a man are at stake here. Consequently I'm not inlcined to think highly of anyone who gets in the way ;)
2TheOtherDave12y
Sure, I understand your reasons for it. Speaking as a married man who is currently livid over the fact that filling out my taxes in the U.S. requires telling the federal government I'm single because it refuses to acknowledge that I'm married, I have reasons of my own. We are of course free to think whatever we wish of anyone we wish, but what conclusions we're justified in coming to is a whole different matter.
3pedanterrific12y
You're missing a [1].
2Multiheaded12y
Oops. Thanks.
0JoshuaZ12y
I suspect given the context the intention was for footnote 1 to go right after "I'd rather have it this way."
3[anonymous]12y

An interesting review from Aurini's channel of the book Worthless by Aaron Clarey. Very relevant to those deciding what kind of a degree they would like to get.

This was recently posted in the Server Sky thread: The Political Economy of Very Large Space Projects. The title kind of says it all. Basically, whenever anyone tries to put forward a Very Large Space Project they tend to gloss over the political costs and realities, hence they don't actually get done. This seems like a pretty clear cut case of Far Mode bias to me. Rationalists trained to recognize and account for this may have a better chance of getting things done.

3[anonymous]12y

I was recently reading through LW discussions about OKCupid. Those discussions (as well as some other factors) prompted me to make a profile. If anyone cares to critique, please do so. I have my own opinions on what I've done well and what I need to improve on, but I'll keep them to myself for the time being. I don't want to anchor your reactions.

Chris_81

Making a few minor edits, but I consider this first draft just about done. If you'd like me to review your profile, or if by serendipity you are interested in me and live close by, then do let me know.

Do you want us to discuss your profile in this thread, or message you privately?

0[anonymous]12y
Hmm, good question... Message me privately.
6Larks12y
I would totally go on two dates with you for a pony.
5wedrifid12y
That's legal where you live? Do you at least need a license?
1[anonymous]12y
Ah, but my question is: Given my profile, would you go on a date with me, even without the promise of a pony?

I'd like to congratulate LW on the fact that five of the seven most recent posts are at negative karma. Good work, people! Keep up the selection pressure!

Can someone concisely explain why this is true:

  • for expected utility, the difference between 50% and 51% is the same as the difference between 80% and 81%.
  • for credence, the difference between 50% and 51% is much smaller than the difference between 80% and 81%.
2vi21maobk9vp12y
It is measuring different things. You have evidence E and possible explanations A and B. A implies a payoff of 1 soon, and B implies a payoff of 0.

For the utility calculation, your estimated p(A) gets multiplied by the value of A, your estimated p(B) by the value of B, and you care about the difference between the two evaluations. Why the difference? Well, you should be very happy that you are a human with cognitive capabilities and not a vivisected frog. Or you could be very unhappy that you are not guaranteed to be immortal. Adding either of these large constants to the evaluations of all outcomes doesn't change the difference between two possible situations here and now. So you calculate p(A)v(A) + p(B)v(B), and if you move a bit of probability Δp from A to B, the change is Δp·(v(A) − v(B)) - linear in Δp, no matter where in the probability range you started.

If you talk about credence, it is another story. You can relatively easily calculate p(E|A) and p(E|B). It is hard to find out the priors p(A) and p(B). The Bayesian approach hints that maybe you should just collect a lot of independent pieces of evidence, so that their combined voice drowns out the prior difference. But how does evidence combine? Posterior odds are prior odds times the likelihood ratio: p(A|E)/p(B|E) = (p(A)/p(B)) · (p(E|A)/p(E|B)). Since evidence acts multiplicatively on the odds, the credence change produced by a given piece of evidence depends on the absolute values of the probabilities, not just on the size of the step.
0thomblake12y
Thanks! I wonder if it would be worth having a Less Wrong Q&A site, or if that purpose is better served by existing Q&A sites.
-2thomblake12y
Anyone?
0tut12y
What is the context of these claims? I don't really see why either of them would be necessary.
0thomblake12y
They are necessary. For expected utility, .51v − .50v = .01v = .81v − .80v. When evaluating the strength of evidence, 50% is 0 dB, 51% is 0.17 dB (diff 0.17 dB), 80% is 6.02 dB, and 81% is 6.30 dB (diff 0.28 dB). Please offer a correction if I've made a mistake there.
0tut12y
Ah, ok. For EU you care about the actual difference in probability, whereas for strength of evidence you care about the amount of information you would have to receive in order to change your credence that much, and this is compressed near the ends of the probability range. Edit: What was your question again? At first I thought that you did not understand what you were talking about, and thought that your answer re the context would be a link or quote where somebody else brought it up. But what you gave me is almost a set of proofs, and the explanation is clear from your use of dB in the evidence/information case.
0thomblake12y
I was aware of the difference (what you note in your first paragraph), but could not articulate a good explanation of why there is a difference. I found this comment by vi21maobk9vp to be a good explanation, though I feel it could be more clear and concise. I'm not actually sure I could have given you the answer in the grandparent in so few words before reading vi21maobk9vp's comment.
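A minimal sketch (in Python, not from the original thread) of the arithmetic behind the two bullet points and thomblake's decibel figures: expected utility is linear in probability, while strength of evidence is linear in log-odds, which are compressed near 50% and stretched near the extremes.

```python
import math

def decibels(p):
    """Evidence for a hypothesis at probability p, as log-odds in decibels."""
    return 10 * math.log10(p / (1 - p))

# Expected utility: differences are linear in probability,
# so going 50% -> 51% is worth exactly as much as 80% -> 81%.
print(round(0.51 - 0.50, 2), round(0.81 - 0.80, 2))  # 0.01 0.01

# Credence: differences are linear in log-odds, so the same
# 1% step requires more evidence the further you are from 50%.
print(round(decibels(0.51) - decibels(0.50), 2))  # 0.17 dB
print(round(decibels(0.81) - decibels(0.80), 2))  # 0.28 dB
```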

Concerning the application of rationality in one's own life: In the mini-camp thread, Brandon Reinhart gives a very detailed summary of how he improved using the methods taught there (here). I'm sure the material that is taught in the camps can be found somewhere on the site.

However, the masses of material are hard to comb through, and my google-fu wasn't sufficient to identify the relevant ones. Can anyone point me to sequences that teach that kind of stuff?

Especially in light of the recent thread which seemed to conclude that Alcor is superior to CI, I've been thinking about the discrepancy between Alcor membership fees and the cost of life insurance. Membership fees are a fixed rate independent of age/probability of death, while life insurance varies. This means that the (cost : likelihood of death) ratio is far higher for younger prospective cryonauts, and this triggers my sense of economic unfairness/inefficiency.

For instance, with data from Alcor, assuming neurosuspension and extra as I live in the UK:

Mem…
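The comment is cut off above, but the economics can be sketched with made-up numbers (the figures below are purely hypothetical and are not Alcor's or any insurer's actual rates): a flat membership fee is the same at every age, while term life premiums roughly track annual mortality, so the cost paid per unit of death-risk covered is much higher for young members.

```python
# Purely hypothetical figures for illustration -- not actual Alcor rates.
FLAT_MEMBERSHIP_FEE = 500  # $/year, identical at every age

# age: (annual probability of death, hypothetical term-life premium in $/year)
profiles = {25: (0.001, 200), 65: (0.020, 4000)}

for age, (p_death, premium) in profiles.items():
    total = FLAT_MEMBERSHIP_FEE + premium
    # Dollars paid per percentage point of annual death risk covered:
    print(f"age {age}: ${total}/yr total, "
          f"${total / (p_death * 100):,.0f} per 1% of risk")
```

Under these made-up numbers the 25-year-old pays roughly three times as much per unit of death-risk as the 65-year-old, which is exactly the unfairness a flat fee creates.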

2[anonymous]12y

Has anyone read:

Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N. and Malone, T. W. (2010), “Evidence for a Collective Intelligence Factor in the Performance of Human Groups,” Science, 330, 686–688.

It seems to be relevant to various LW tropes, but to actually read it myself I'd have to talk to somebody (the librarian who runs my university's journal repository) and paying that kind of price would be massively depressing if the above paper turned out to be as crappy as the paper I just read that cited it.

It's already massively depressing that even b…

2wedrifid12y
Here you go. Possibly Melbourne Uni is a cooler large institution than yours! ;)
0[anonymous]12y
Yeah, it's an interesting paper after all. Much better than this one.

Is there any empirical evidence that humans have bounded utility functions? Any evidence against?

2steven046112y
What do you mean by "have"?
0MileyCyrus12y
If you have a utility function, I can predict your preferences from it. If I want to know what you'd pick when given the choice between an apple or a banana, I don't have to ask you. I can just consult your utility function and see which ranks higher.
6steven046112y
If you're looking for a utility function that reproduces human behavior in all contexts, I'd say the empirical evidence is overwhelming that humans don't have utility functions, bounded or otherwise. If you're looking for a utility function that merely reproduces human behavior approximately, how good does the approximation have to be in what circumstances? Does the utility function that someone "has" reproduce his behavior when the behavior is mediated by conscious philosophizing, or when it's mediated by his gut? If conscious philosophizing, how much and what kind of conscious philosophizing? If gut, what framing of a given decision problem causes someone's gut to make those decisions that are reproduced by the utility function that he "has"? (For example, if someone is more keen on paying some price to save 20 out of 100 lives than on paying the same price to allow 80 out of 100 people to die, does the utility function that he "has" say do it, or not?)
0djcb12y
Practically, I agree, but I would maintain that people do have utility functions - it's just that they are so complex that they are impossible to write down as a neat mathematical expression, just as it's practically impossible to construct a function to accurately predict next week's weather. But if we define the utility function as the relation between the (many!) input variables and the behavior ultimately chosen, it certainly exists. Just like that elusive function for predicting the weather.
0MileyCyrus12y
How is rationality possible if people don't have a utility function?
6TimS12y
Might I suggest tabooing "utility function"? The phrase is used in this community at various levels of abstraction, and useful responses to your question depend on which level of abstraction you intend.
5steven046112y
No, a utility function is a perfectly well-defined mathematical object. The question is what it means for a human to "have" such an object; so "have" is the word we should be tabooing. Does Abraham Lincoln have a six-dimensional vector space? Does Spain have an abelian group? The question "do humans have a bounded utility function" is kind of like that.
-2wedrifid12y
"Does Abraham Lincoln have a personality?" would be somewhat analogous. (By somewhat I mean 'more'.)
2gwern12y
I'd personally phrase the answer more as 'yes, but it's actually 5-dimensional' :)
-1wedrifid12y
A utility function can reproduce any combination of behavior.
6steven046112y
Sure, so everyone's utility function puts a value 1 (ETA: or a value x > 0 that varies unboundedly as a function of some feature of the universe) on all possible universe-histories where they take the actions they actually take, and 0 on all other possible universe-histories. I don't think that's an answer.
-2wedrifid12y
That's one way to make the reducio, yes.
0Richard_Kennaway12y
Only a reductio ad absurdum. The Texas Sharpshooter Utility Function is completely useless, and is not a utility function. It is useless, because it makes no predictions. It does not constrain your expectations in any way. It is not a utility function in the sense of the Utility Theorem, because utility functions in that sense are defined not over outcomes but over lotteries of outcomes. Extending a TSUF to lotteries by linear interpolation does not work, since lotteries can themselves be conducted in the real world, and all real occurrences of such lotteries are given a value of 0 or 1 by the TSUF. Utility theory therefore does not apply to the TSUF. Calling it a utility function does not make it one. It is like defining a universal theory of physics by assigning "true" to whatever happens and "false" to whatever does not.
-4wedrifid12y
It wasn't an example I gave. I care almost nothing about Straw Texas Sharpshooters.
0Richard_Kennaway12y
What were you referring to when you claimed: and what attitude were you expressing to steven0461's negative judgement about the TSUF when you said this:
-2wedrifid12y
Acknowledgement that yes, technically, one of the infinite number of utility functions which can result in a given behavior over time for a given input happens to be the one he mentions.
0steven046112y
Did you mean to claim that some other utility function also results in the given behavior, but isn't useless like the TSUF is? Someone seems to have voted down the entire subthread; I wonder why.
0wedrifid12y
An infinite number of of utility functions, some of them simpler and more useful than others.
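A minimal sketch (in Python, with hypothetical options and choices) of the "Texas Sharpshooter" construction being debated above: a utility function that assigns 1 to whatever was actually chosen and 0 to everything else reproduces any observed behavior by definition, and therefore constrains no predictions at all.

```python
# Each record: (option_a, option_b, what_the_agent_actually_picked).
observed_choices = [
    ("apple", "banana", "apple"),
    ("banana", "cherry", "cherry"),
]

def tsuf(option, record):
    """Texas Sharpshooter utility: 1 for the action actually taken, else 0."""
    _, _, actually_chosen = record
    return 1 if option == actually_chosen else 0

for record in observed_choices:
    a, b, chosen = record
    predicted = max((a, b), key=lambda o: tsuf(o, record))
    assert predicted == chosen  # "explains" every choice...
# ...but it was built from the answers, so it can never predict a new one.
```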

TeXmacs workshop videos (the WYSIWYG software for creating documents and interfacing with math software -- Word / LaTeX replacement).

I remember seeing discussion about sample bias in studies of depression, specifically about the self-selection effects for people who respond to advertisements. Does anyone know what thread this was in?

1vi21maobk9vp12y
http://lesswrong.com/lw/96o/link_antidepressants_bad_drugs_or_bad_patients/ Found by keywords: "depression self-selection lesswrong.com " on carrot2.org
0beoShaffer12y
Thanks.
0[anonymous]12y

Floating the idea of a New Jersey LW meetup. (Particularly for people in Somerset and the surrounding counties.) Is there any interest?

0maia12y

Meta-LW question: What does the comment sorting system actually do? I assumed it was Reddit's "best" system, but then noticed some highly-upvoted, seemingly non-criticized comments below worse-seeming ones. Am I just crazy?

3TheOtherDave12y
I vaguely remember noticing something similar a while back and concluding, after some poking around, that sorting by score sorts by number-of-upvotes rather than number-of-upvotes-minus-number-of-downvotes. I no longer remember how I concluded this or whether I was justified in doing so.
-2John_Maxwell12y
Maybe recency is a factor in your current sorting scheme? Could be based on recent voting patterns, even.
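For reference, the Reddit "best" system maia mentions ranks comments by the lower bound of the Wilson score confidence interval on the upvote fraction, rather than by net score. A minimal sketch, assuming that formula rather than anything in LW's actual codebase:

```python
import math

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the 95% Wilson score interval on the upvote fraction."""
    n = ups + downs
    if n == 0:
        return 0.0
    phat = ups / n
    return ((phat + z * z / (2 * n)
             - z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
            / (1 + z * z / n))

# A comment at +10/-0 outranks one at +50/-25 under "best",
# even though the latter has a much higher net score.
print(wilson_lower_bound(10, 0))   # ~0.72
print(wilson_lower_bound(50, 25))  # ~0.55
```

Under this scheme a heavily-upvoted but also heavily-downvoted comment can legitimately sit below a smaller, uncontested one, which may account for the observation that started this subthread.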
0[anonymous]12y

What is the future of human languages?

-3vi21maobk9vp12y
Nobody knows. So we can look around for known pieces, but we should know that we are only guessing. I am not a linguist, so it is only easier for me to go wild. I presuppose that this future includes flesh humans using some of the existing (flesh) communication means - I doubt we can correctly predict any interesting language consequences of a complete change of computational medium.

Let's start with "changes as they always occurred". If we look at long-term available history, there are examples of cyclical change in grammar structure. The simplest example Zaliznyak gives in his lecture about this is that "will" got shortened to "'ll" and is slowly becoming a prefix signifying future tense. Also, historically, pronunciation caused orthography changes - different in partially isolated populations, of course. Borrowing, mangling, and direct invention of words likewise occurred separately in such populations.

What can we look at now? It appears that geographical closeness can sometimes be a weaker factor than membership in some subculture or online community, from the point of view of language change. Of course, earlier there were some language changes specific to a single profession or social stratum; nowadays one person can participate in more language environments, which should reduce isolation but make neologisms spread faster.

Acceleration of technological change, and the social changes it brings, increases the number of new notions - and words - that have to be created as an expansion of language, not as a size-preserving change. On the other hand, looking up a new word you encounter is easier, and at the same time some old notions become obsolete. This should increase the speed of vocabulary update; it may bring some unforeseen changes, but for me these are unknown unknowns.

The idea that written text is used for real-time communication is relatively new and already leads to sloppy orthography. So oral speech is probably losing its role

I've been analyzing the reasons for my dislike of the MWI lately. I initially thought that it was because it was untestable and so no better than any other interpretation. But this wasn't a good enough explanation for an emotional response ("dislike" is an emotional response). So, after some digging, I have realized that what I dislike is not the anti-Popperianism of it, but the process of futile arguing itself, where convincing the other side, or getting convinced, or building a better model based on the two original positions is not an option. And Copenhagen vs MWI is one of those debates. Now, if only I could figure out why I am still commenting about it...

3[anonymous]12y
Wouldn't it be better to dislike all the interpretations equally?
-3shminux12y
I dislike all interpretations that pretend to be more than interpretations equally. Wikipedia has dozens of those.
2Viliam_Bur12y
Because they both predict exactly the same set of experimental results? And neither feels like a strict subset of the other? Or is there something more? What is the problem with seeing MWI as a strict subset of Copenhagen? I am not an expert, but it seems to me that MWI explains the results of quantum experiments, and Copenhagen does the same with the additional, experimentally untestable hypothesis that our amplitude blob is different from the others, because the others magically disappear at some unspecified moment. (You know, just like the Earth is different, because it is down, while the other planets are up there. Imagining people walking upside down makes my head spin.)

Cognitive dissonance. One part of your brain tells you to follow Occam's razor, the other part tells you to follow the opinions of a respected majority. No offense meant; following the educated majority is very strong Bayesian evidence.
-1shminux12y
That's a common misunderstanding of the situation... The math is exactly the same in both cases; all that differs is the handwaving around it. There are no additional meaningful assumptions in the orthodox interpretation compared to the MWI. Both have the "and then a miracle occurs" step (one calls it "collapse", the other "world split"). I don't think this is it, given my understanding of Occam's razor. Could be some other type of cognitive dissonance, though.
6pragmatist12y
Your conception of the MWI seems to be the DeWitt-Graham version, which is pretty outdated. According to this version, there is a global duplication of the entire universe every time there is a measurement of a quantum event with two possible outcomes. I agree that this interpretation is no less handwavey than Copenhagen. But most contemporary proponents of the MWI (including Eliezer) don't believe there is any mysterious world-splitting process that is extraneous to the dynamics. The theory should really be called Everettianism (or maybe, as Everett originally labeled it, the relative state formulation) rather than many-worlds. The idea is just that nothing more than the Schrodinger dynamics is required in order to account for our experience of a macroscopically classical world with determinate measurement outcomes. The appearance of collapse (or world-splitting) is fully accounted for by decoherence. This version of the MWI does not involve the periodic disruption of the dynamics postulated by Copenhagen. Nor is it empirically equivalent to Copenhagen. So I urge you to reconsider your dislike of the MWI! Join us on the dark side...
-2Alejandro112y
...and the light side, and the grey side.
-2shminux12y
See my other reply.
4vi21maobk9vp12y
Isn't the world split explainable as an approximation of the unitary math that occurs when your state gets properly entangled with the state of the observed system? Whereas in the Copenhagen interpretation, collapse is a new operation.
-8shminux12y