
Comment author: Clarity 11 February 2016 11:16:32AM *  0 points [-]
Comment author: MrMind 11 February 2016 08:19:15AM 0 points [-]

That does not depend on the details of how the bet is arranged.

I would contest that this is the case only insofar as you have to bet on one side; if you gain / lose stakes from both positions, possibly the "rooting for one outcome over another, it makes the denouement of the bet about the relative status of the people in question" effect would be diminished?

Comment author: Good_Burning_Plastic 11 February 2016 08:15:14AM 0 points [-]

And not tobacco?

Comment author: fubarobfusco 11 February 2016 08:11:01AM 0 points [-]

A posted sign saying "Don't spit on the floor" does not license spitting on the counter, the shelves, the ceiling, etc.; and does not restrain the bouncer from turfing you if you spit in someone else's drink.

Comment author: bogus 11 February 2016 07:45:58AM 0 points [-]

'Vulnerability' is a highly ambiguous term, though. You can definitely show an 'emotional' side (good!vulnerability) without slipping into unattractive 'beta/doormat' mode (bad!vulnerability).

Comment author: Old_Gold 11 February 2016 07:45:11AM 0 points [-]

Yes, all the cool kids are doing it.

Comment author: bogus 11 February 2016 07:33:30AM 0 points [-]

That was the specific personal advice he gave me at the end of spending 10 days at a retreat in nature together.

Makes sense then. He got to know you quite well, and realized that a 'direct' style would work best for you.

The idea of learning a bunch of techniques to change women into liking you instead of working to change yourself doesn't seem to be successful.

That's not really what's happening, though. The techniques are there to change the image you're presenting and ensure that it reflects you at your best and most attractive. That's why 'the inner game' (changing yourself) and 'the outer game' (changing your social image/approach) are largely seen as complementary and mutually reinforcing.

Comment author: Douglas_Knight 11 February 2016 07:20:59AM *  0 points [-]

remained below pre-Prohibition levels until the 1940s

In other words, the effect of 10 years of prohibition lasted for 10 years past the repeal. That sounds to me like a really small effect.

Let's put this in context. Here is a table from Rorabaugh and the graph:
The Second Great Awakening had a much larger and more lasting effect

Comment author: Clarity 11 February 2016 06:56:58AM *  0 points [-]

The Correlates of War dataset. Now discussions about military interventions can go beyond our interventions, single-case examples, and personal reference experiences.

Would anyone be willing to look at humanitarian interventions from an effective altruism angle? Since the Open Philanthropy Project doesn't even have a shallow investigation of the topic, donors, researchers and advocates might be missing quite an important cause.

Comment author: goose000 11 February 2016 06:42:26AM *  0 points [-]

C.S. Lewis addressed the issue of faith in Mere Christianity as follows:

In one sense Faith means simply Belief—accepting or regarding as true the doctrines of Christianity. That is fairly simple. But what does puzzle people—at least it used to puzzle me—is the fact that Christians regard faith in this sense as a virtue, I used to ask how on earth it can be a virtue—what is there moral or immoral about believing or not believing a set of statements? Obviously, I used to say, a sane man accepts or rejects any statement, not because he wants or does not want to, but because the evidence seems to him good or bad. Well, I think I still take that view. But what I did not see then— and a good many people do not see still—was this. I was assuming that if the human mind once accepts a thing as true it will automatically go on regarding it as true, until some real reason for reconsidering it turns up. In fact, I was assuming that the human mind is completely ruled by reason. But that is not so. For example, my reason is perfectly convinced by good evidence that anaesthetics do not smother me and that properly trained surgeons do not start operating until I am unconscious. But that does not alter the fact that when they have me down on the table and clap their horrible mask over my face, a mere childish panic begins inside me. In other words, I lose my faith in anaesthetics. It is not reason that is taking away my faith: on the contrary, my faith is based on reason. It is my imagination and emotions. The battle is between faith and reason on one side and emotion and imagination on the other. When you think of it you will see lots of instances of this. A man knows, on perfectly good evidence, that a pretty girl of his acquaintance is a liar and cannot keep a secret and ought not to be trusted; but when he finds himself with her his mind loses its faith in that bit of knowledge and he starts thinking, “Perhaps she’ll be different this time,” and once more makes a fool of himself and tells her something he ought not to have told her. His senses and emotions have destroyed his faith in what he really knows to be true. Or take a boy learning to swim. His reason knows perfectly well that an unsupported human body will not necessarily sink in water: he has seen dozens of people float and swim. But the whole question is whether he will be able to go on believing this when the instructor takes away his hand and leaves him unsupported in the water—or whether he will suddenly cease to believe it and get in a fright and go down. Now just the same thing happens about Christianity. I am not asking anyone to accept Christianity if his best reasoning tells him that the weight of the evidence is against it. That is not the point at which Faith comes in. Faith, in the sense in which I am here using the word, is the art of holding on to things your reason has once accepted, in spite of your changing moods.

Although many religious people use the word differently, this is how I use Faith, and I propose it as an acceptable definition to facilitate this discussion: a determination to hold on to what you have already established a high confidence level in, despite signals you may have received from less rational sources (i.e. emotions).

Comment author: Jayson_Virissimo 11 February 2016 06:02:37AM *  0 points [-]

It may help to point out which conception of faith you have in mind. For example:

  • faith as a feeling of existential confidence
  • faith as knowledge of specific truths, revealed by God
  • faith as belief that God exists
  • faith as belief in (trust in) God
  • faith as practical commitment beyond the evidence to one's belief that God exists
  • faith as practical commitment without belief
  • faith as hoping—or acting in the hope that—the God who saves exists
  • etc...

Comment author: Clarity 11 February 2016 05:42:54AM 0 points [-]

The misery that is now upon us is but the passing of greed, the bitterness of men who fear the way of human progress.

Charlie Chaplin in The Great Dictator

Comment author: Clarity 11 February 2016 05:39:43AM 0 points [-]

This reminds me of the viral video of a Senate estimates hearing where one senator's mansplaining accusation backfires badly. Go gender equality! Fight both patriarchy and matriarchy!

Comment author: Douglas_Knight 11 February 2016 05:22:13AM 1 point [-]

I think that it is worth mentioning that those are also the numbers extracted from Betfair, which has much higher volume, though it is not available to Americans.

Is that bet actually available from small-volume PredictIt? The bid-ask spread looks small, but are there hidden transaction costs? Why do the three "sell yes" numbers add up to more than $1?

Comment author: _rpd 11 February 2016 04:28:36AM 0 points [-]

"Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.

Taking this a little further, the final prediction can be a weighted combination of the individual predictions, with the weights corresponding to historical or expected accuracy.

However, different individuals will likely specialize to be more accurate with regard to different cognitive tasks (in fact, you may wish to set up the reward economy to encourage such specialization), so that the set of weights will vary by cognitive task, or more generally become a weighting function if you can define some sort of sensible topology for the cognitive task space.
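
To illustrate the weighting idea, here's a toy sketch (the em names and accuracy numbers are made up for illustration, not anything specified above):

    # Weighted combination of em predictions; weights come from a per-task accuracy table.
    def combine_predictions(predictions, accuracy):
        # predictions: {em_name: probability estimate for some binary question}
        # accuracy: {em_name: weight reflecting historical accuracy on this task type}
        total = sum(accuracy[em] for em in predictions)
        return sum(accuracy[em] * p for em, p in predictions.items()) / total

    # Each cognitive task type can carry its own weight table.
    accuracy_by_task = {
        "economics": {"DPR.2": 0.9, "DPR.2.1": 0.6},
        "biology":   {"DPR.2": 0.5, "DPR.2.1": 0.8},
    }
    preds = {"DPR.2": 0.7, "DPR.2.1": 0.4}
    print(combine_predictions(preds, accuracy_by_task["economics"]))  # weighted toward DPR.2
    print(combine_predictions(preds, accuracy_by_task["biology"]))    # weighted toward DPR.2.1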

Comment author: Val 11 February 2016 04:28:02AM 0 points [-]

A valid concern. Do you have a better formulation in mind, which is still not too specific?

Comment author: Dagon 11 February 2016 03:42:11AM 0 points [-]

A lot depends on what the "real assets" are. They have no property right in future revenue if customers choose to buy somewhere or something else. They may have contractual rights to a monopoly, which could be purchased (but which is pretty suspect to start with). They do likely have property rights in plant (heh) and equipment, which will be a natural barrier to competitive entry.

Comment author: Clarity 11 February 2016 03:32:22AM *  -1 points [-]
Comment author: Douglas_Knight 11 February 2016 02:37:18AM 1 point [-]

Australia has a track record of doing a better job of enforcing bans than other countries. It's an island.

Comment author: Douglas_Knight 11 February 2016 02:31:19AM *  0 points [-]

The existing tobacco companies are real assets that require compensation if they are nationalized, while the right to create new companies does not require compensation if it is destroyed.

You might also ask why people don't create new American tobacco companies to acquire the advertising rights that existing companies gave up in Master Settlement.

Comment author: Elo 11 February 2016 02:29:36AM 2 points [-]

I haven't reviewed the Slack posts recently; I wanted to briefly say that there are 250 people who have joined now. They are not all active, but it's still going strong.

https://wiki.lesswrong.com/wiki/Less_Wrong_Slack

Comment author: Crux 11 February 2016 02:20:44AM *  1 point [-]

I appreciate the level-headed emotional de-escalation.

And with that, onto the content:

Yes and yes.

Understood. The next thing I'm wondering, then, is whether you've read this article. I'm asking because that's the full and original explanation of the non-central fallacy, the fallacy that Jiro claimed was exemplified by saying that Roosh "wants rape to be legal".

Whatever your answer to that question, I would like to make a request. Can you re-state Jiro's original argument in your own words? I don't mean simply repeating the propositional logic inherent in the single statement that you're objecting to; I mean explaining in full detail what Jiro meant to convey.

Actually, you can. Jiro made a propositional statement and it can be evaluated independently without rehashing the entire thread history.

Oh wait, I guess you may think that my request is irrelevant.

I believe we have a fundamental disagreement on the nature of language and epistemology, and I'm not optimistic that we will be able to resolve this dispute within this subthread. I will, however, put your username in my notes and contact you if I put together a sequence on logic which bears on this discussion.

But I might as well give it a brief attempt.

Few things are more common in Less Wrong culture than taking things far too literally. Most people on this website come from a background of social oddity and nerd interests. The source of the average Rationalist's superpowers is also the source of his weakness: undue attention to the finely delimited moving parts of single isolated statements. Such an orientation of mind allows deep analysis, innovative thinking, and so forth. But the danger is that natural language is too primitive a tool to expect to be able to scrutinize single statements in isolation; arguments must be evaluated as a whole unless we're in the realm of mathematical logic.

Perhaps it would be easier to explain if I merely claim that your original post was irrelevant and off topic. Whatever the case with the single statement that you're analyzing, neither I nor Jiro make any claim which rests upon that foundation. Sure, you can find that statement in Jiro's post. You can discover that sequence of Latin characters lying within the square. But did Jiro think to himself or herself that there exists an equivalence between those two concepts? Absolutely not.

I'm a little bit lost about how to elucidate this clearly. How about you take up this challenge, which I mentioned earlier in this comment: Explain in your own words what Jiro meant, complete with demonstrating an understanding of the nature of the non-central fallacy. You're going to have to take my word for it, but I believe that completing this exercise will reveal to you why I believe it's so important that you take the context into account rather than simply pinpointing that one statement and laying out your disagreement.

Comment author: g_pepper 11 February 2016 01:39:12AM 0 points [-]

Thanks for the lengthy response. I better understand the cause of the disagreement. And, I reread my response to the OP with your comments in mind, and you are 100% correct; I did sound more irritated and dismissive than I had any reason to (when I used the word “confused”). That was not my intention; I apologize for any offense caused.

In addition, I would like to respond to and/or comment on some of your other comments. You asked:

Have you read the subthread carefully, going all the way back to Clarity's question? Have you read Roosh's article?

Yes and yes. It was an interesting thread. However the point I was making was not about what Roosh may or may not have meant in his article, nor was it about Clarity’s question, nor about gjm’s comments to Clarity’s question. All of those are interesting topics, and I have opinions on them, but I did not express them. Why not? Because the discussion volume on all of those topics has been large enough that my opinion on each of the main controversial points of the thread has been stated by someone else (in some cases, by multiple people); my stating opinions that have already been stated would add little value to the conversation. However, Jiro’s post did contain a statement that had not been addressed elsewhere and that I thought should be addressed, so I addressed it.

You also said:

You can't simply single out a specific statement and attempt to grapple with its internal logic.

Actually, you can. Jiro made a propositional statement and it can be evaluated independently without rehashing the entire thread history.

Again, Jiro's response is highly contextual and only makes sense when you consider the big picture.

Agreed – Jiro's entire response was multifaceted, nuanced and complex, and were I disagreeing with his/her entire comment, the context of the thread would be relevant. The one statement I was commenting on, however, was self-standing and could be evaluated as such:

If you don't want people to be convicted of rape based on evidence obtained by torture, you also "want rape to be legal"

And, no, the quotes in the original do not significantly change the meaning of the sentence; certainly they do not render my objections (stated here) invalid.

So, why did I think that this one statement was important enough to respond to? Two reasons:

  1. The statement is factually incorrect – it expresses a false equivalence, as explained here

  2. The belief is not only factually incorrect, it is actually harmful; if widely held, it would have a pernicious effect on the justice system. If it was widely believed that placing reasonable limits on what the state can do to win a conviction for some offense is the same as making that offense legal, you could expect to see increased demands (and eventually capitulation to those demands) to actually allow torture to obtain convictions, or to reduce the standard of proof from “guilty beyond a reasonable doubt” to “guilty by the preponderance of evidence”, or even “guilty by the majority of the evidence”, etc. This is especially true for crimes that tend to evoke strong emotional responses in the public. This is not a theoretical objection – there are currently voices arguing for torture to be used in cases involving terrorism, for example.

If I sound condescending, it's because it's tiresome to argue with someone who is taking a single point as literally as possible while neglecting to look into the context of the discussion.

Understood, but as stated, my objection was to a single point; various responses to the bulk of the thread’s controversial points have been discussed at length elsewhere. Therefore, it would have been pointless for me to address the entirety of the thread.

While you didn't seem offended, you nevertheless began your reply with an emotionally charged claim that Jiro seemed "confused".

Yes, valid point. I apologize for that.

I admit that I felt a bit of annoyance right from the beginning. The emotional charge you can feel channeled through my words is a product of status-posturing emotions related to defending Jiro.

Understood, and your desire to defend a fellow LWer is noble. My feeling, however, based on Jiro’s history of high-quality, well-argued comments, is that Jiro is in no need of verbal defense. Jiro has a higher karma score than either you or I do, and has (I suspect) a history at LW longer than mine (not sure about yours). None of that of course changes the fact that my initial comment was unduly abrasive, however.

Comment author: Lumifer 11 February 2016 01:38:17AM -1 points [-]

How much would a user have to know about LW to think to do that?

I've seen it happen, and more than once, too. I think all you have to know is that you need a particular quantity of internet points and that people can give them to you for free. You just ask.

You're failing at other minds.

I will concede that my expectations of certain LW users might have been too high :-P

Comment author: buybuydandavis 11 February 2016 01:24:05AM 2 points [-]

That? I don't know what you're referring to.

I'll assume the quote I included.

It's part of popular Christian apologetics here in the US.

For example, if you watch the Hitchens vs. Theist debates for Hitchens' book tour of 'God is not Great', one of the standard theistic moves was "you start from faith in your unjustified foundations, and we start from ours". Douglas Wilson was a good example of that.

Comment author: Manfred 11 February 2016 01:12:19AM 0 points [-]

1: Imagine a utility function as a function that takes as input a description of the world in some standard format, and outputs a "goodness rating" between 0 and 100. The AI can then take actions that it predicts will make the world have a higher goodness rating.

Lots of utility functions are possible. Suppose there's one possible future where I get cake, and one where I get pie. I have a very strong opinion on these futures' goodness, and I will take actions that I predict will make the world more likely to turn out pie. But this is not a priori necessary - we could define a utility function that swaps the goodness ratings of cake and pie, and an AI using that utility function would take actions that it predicts will lead to worlds with higher goodness rating, i.e. cake. There is no objective standard that it could use to realize that pie is better - it is merely a computer program that makes predictions and then picks the action that it predicts maximizes some function.
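
To make the cake/pie point concrete, here's a toy sketch (purely illustrative; the worlds and function names are invented, not any real AI design). Two utility functions that rate the same worlds differently drive identical action-selection code to different choices, and neither can "check its work" and find a mistake:

    # Two utility functions over toy worlds; only the ratings differ.
    def utility_pie_lover(world):
        return {"pie": 100, "cake": 10}.get(world, 50)

    def utility_cake_lover(world):
        return {"pie": 10, "cake": 100}.get(world, 50)

    def choose_action(actions, predict_outcome, utility):
        # Pick the action whose predicted resulting world scores highest.
        return max(actions, key=lambda a: utility(predict_outcome(a)))

    predict = lambda action: {"bake_pie": "pie", "bake_cake": "cake"}[action]
    actions = ["bake_pie", "bake_cake"]
    print(choose_action(actions, predict, utility_pie_lover))   # bake_pie
    print(choose_action(actions, predict, utility_cake_lover))  # bake_cake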

Utopias are like cake and pie. If I give the pie utopia a higher goodness rating, and the AI gives the cake utopia a higher goodness rating, it's not "wrong" in the sense of being able to check its work and find a mistake. The AI can prefer the cake utopia even while operating perfectly.

This is what happens in the case of Failed Utopia 4-2. The AI has some preferences about the world. And those preferences are very close but not quite human preferences. And so the main character ends up in the cake utopia. Even if the AI does a lot more research and checks its reasoning carefully, it is not a priori necessary that it should realize the error of its ways and make the world a pie utopia instead. It's wrong(2), but not wrong(1).

Similar problems show up when you try to make any sort of AI that just "does what humans want." Eventually, somewhere, you have to turn this vague verbal statement into a precise specification (like the code of the AI), which is used to compute something like a goodness rating. And it turns out that when you actually try to do this, it's pretty tricky to make the AI's goodness ratings similar to a human's goodness ratings. Basically every easy way has some critical flaw, and the ways that seem promising are not very easy.

So sure, we want to make an AI that just does what humans want (sort of). But this is like "make an AI that recognizes pictures of cats" - an admirable goal, but a nontrivial one. And one that might have bad consequences even if only slightly wrong.

Comment author: Clarity 11 February 2016 12:23:49AM -2 points [-]

Are there handy references (like an app) that can output the following, given a particular growing region, rather than having me look it all up individually? Crop, Soil, Harvest season, Sunlight, Pest control, Sowing season, Water and Spacing?

Comment author: Jiro 10 February 2016 11:38:48PM 0 points [-]

"Non-Western" is too broad a term here. People who object to immigration on cultural grounds have something more specific in mind than "non-Western".

Comment author: DataPacRat 10 February 2016 11:30:32PM 0 points [-]

Seeking socio-econo-political organizing methods

How many useful ways are there for an uploaded mind, an em, to organize copies of itself to maximize the accuracy of their final predictions?

The few that I've been able to think of:

  • "Strict hierarchy". DPR.2.1 can advise DPR.2, but DPR.2's decision overrides DPR.2.1's.
  • "One em, one vote". DPR.2 gets a vote, and so does DPR.2.
  • "One subjective year, one vote". DPR.2.1 is running twice as fast as DPR.2, and so DPR.2.1 gets twice as many votes.
  • "Prediction market". The DPRs implement some sort of internal currency (which, thanks to blockchains, is fairly easy), and make bets, receiving rewards for accurate predictions.
  • "Human swarm". Based on https://www.singularityweblog.com/unanimous-ai-louis-rosenberg-on-human-swarming/ .

How many reasonably plausible methods am I missing?

Comment author: Elo 10 February 2016 11:13:31PM *  0 points [-]

look over a draft of the article

can do.

Comment author: Clarity 10 February 2016 11:08:19PM 0 points [-]

They can be

Comment author: Elo 10 February 2016 10:54:07PM 0 points [-]

You're failing at other minds.

Comment author: Val 10 February 2016 10:44:12PM 0 points [-]

I agree, but my point was not a comprehensive segmentation of the whole population, but the existence of the groups I mentioned. Also, the border between B and C might be blurry.

Comment author: Val 10 February 2016 10:39:54PM *  0 points [-]

Last time, I felt there was a lack of questions about multiculturalism. It often correlates with political affiliation, but I don't know how it holds in this community.

Two examples come to my mind:

  1. What is your opinion of the regulation of people from a country with a non-Western culture immigrating into countries with a Western culture? (I specified it because it seems both that the overwhelming majority of the community lives in a Western country, and that the topic of multiculturalism is most hotly debated in such countries.) There could be around 5 or 6 answers between (and including) the extremes of "let absolutely everyone in without any selection criteria or any upper limit at all" and "close all borders, don't let anyone in", with several in-between answers like "allow them in, but only in amounts which wouldn't completely change the ethnic and cultural majorities of the regions", or "allow them in, but based on some selection criteria regarding education, social class, etc."

  2. In the case of immigrants from a non-Western culture into a country with a Western culture, how would you like the people to culturally adjust?

Possible answers, going through the whole scale between the two extremes:

  • "The newcomers should completely embrace the culture of their new home"
  • "The newcomers should mostly adopt to the culture of their new homes, while the natives should make at least a few small steps to accommodate them"
  • "They both should move equal distances in the direction of each other"
  • "The natives should mostly adopt to the culture of the newcomers, while the newcomers should make at least a few small steps to become acquainted with the cultural history of their new home "
  • "The natives should completely embrace the culture of the newcomers, so that they can freely live as they used to live in their old home, without being forced to abandon any of their values"
  • (and to add an option outside of the scale of who should change:) "Neither should change anything in their culture, they should live in parallel societies among each other, each keeping their own cultural values and not interfering with the other"

Comment author: Nornagest 10 February 2016 10:39:32PM *  0 points [-]

I'm not saying we should do away with rules. I'm saying that there needs to be leeway to handle cases outside of the (specific) rules, with more teeth behind it than "don't do it again".

Rules are helpful. A ruleset outlines what you're concerned with, and a good one nudges users toward behaving in prosocial ways. But the thing to remember is that rules, in a blog or forum context, are there to keep honest people honest. They'll never be able to deal with serious malice on their own, not without spending far more effort on writing and adjudicating them than you'll ever be able to spend, and in the worst cases they can even be used against you.

Comment author: gwern 10 February 2016 10:27:27PM 4 points [-]

The mind is what the brain does. Obesity can be a choice in the same way that going through with cryonics can be a choice. As a materialist, I see little difference; both are the outcome of many physical processes, some of which run through the brain, which typically can themselves be traced causally further back. (For example, I have little doubt that were it possible to run such a study, we would find a high heritability to cryonics as well as the already-established obesity/BMI heritabilities, because cryonics seems to relate very heavily to various cognitive attitudes which are closely connected to other cognitive traits which are heritable, such as intuitive religious cognition which tends towards dualism or essentialism and against the reductionism & materialism which leads to patternism.)

Comment author: Crux 10 February 2016 10:17:06PM 2 points [-]

I would absolutely be very interested. I think Vipassana meditation can be used as a very powerful rationality technique, and I'm always interested to read rationalists explain their experiences with it.

Comment author: paper-machine 10 February 2016 10:16:26PM 0 points [-]

For the purposes of this argument, it's sufficient that merely some fraction of people can choose to stop becoming obese, which does indeed appear to be the case.

Comment author: paper-machine 10 February 2016 10:00:28PM 2 points [-]

I've pre-ordered it, so expect a review in short order.

Comment author: Brillyant 10 February 2016 09:54:58PM -1 points [-]

People...cannot stop becoming obese

This is a choice?

Comment author: polymathwannabe 10 February 2016 09:48:08PM 0 points [-]

What I was hoping for with it was to show an example of faith that secular people can relate to.

While I believe "faith" as a concept is insufficiently defined, I suspect its definition would have to be expanded too much for it to occupy some of the space of secular epistemology.

Comment author: OrphanWilde 10 February 2016 09:47:20PM 2 points [-]

My argument is symmetry, but the form that argument would take would be... extremely weak, once translated into words.

Roughly, however... you risk defining new norms, by treating downvotes as uniquely bad as compared to upvotes. We already have an issue where neutral karma is regarded by many as close-to-failure. It would accentuate that problem, and make upvotes worth less.

Comment author: OrphanWilde 10 February 2016 09:37:33PM 1 point [-]

I make rare exceptions. About the only time I do it is when I notice my opponent is doing it. (Not because I care that they're doing it to me or about karma, but because I regard it as a moral imperative to defect against defectors, and if they care about karma enough to try it against me, I'm going to retaliate on the grounds that it will probably hurt them as much as they hoped it would hurt me.)

Comment author: OrphanWilde 10 February 2016 09:33:17PM 1 point [-]

I'm aware there are ways of causing trouble that do not involve violating any rules.

I can do it without even violating the "Don't be a dick" rule, personally. I once caused a blog to explode by being politely insistent the blog author was wrong, and being perfectly logical and consistently helpful about it. I think observers were left dumbfounded by the whole thing. I still occasionally find references to the aftereffects of the event on relevant corners of the internet. I was asked to leave, is the short of it. And then the problem got infinitely worse - because nobody could say what exactly I had done.

A substantial percentage of the blog's readers left and never came back. The blog author's significant other came in at some point in the mess, and I suspect their relationship ended as a result. I would guess the author in question probably had a nervous breakdown; it wouldn't be the first, if so.

You're right in that rules don't help, at all, against certain classes of people. The solution is not to do away with rules, however, but to remember they're not a complete solution.

Comment author: Lumifer 10 February 2016 09:30:49PM 1 point [-]

I would add group C: people who do not make a personal choice, but rather just become whatever the social circle around them (which can be defined more narrowly or more widely) expects them to be. They just say whatevs... and take the default offering.

Comment author: spriteless 10 February 2016 09:29:49PM 0 points [-]

Don't push this all on the girls! Any boy could dress up as a girl convincingly enough to fool the magic and lift the branch himself. The only reason they did not was that they would take a status hit similar to the one the girls would for giving away their magic for free.

(More practical advice from an unwillingly celibate lesbian who is as disgusted with the idea of getting touched by dudes as you: learn to masturbate, and/or seek ways to relieve or avoid other types of stress that exacerbate the problem.)

Comment author: gwern 10 February 2016 09:25:05PM 9 points [-]

Probably not. If you look at the comments on posts about the Prize, you can see how clearly people have already set up their fallback arguments once the soldier of 'possible bad vitrification when scaled up to human brain size' has been knocked down. For example, on HN: https://news.ycombinator.com/item?id=11070528

  • 'you may have preserved all the ultrastructure but despite the mechanism of crosslinking, I'm going to argue that all the real important information has been lost'
  • 'we already knew that glutaraldehyde does a good job of fixating, this isn't news, it's just a con job looking for some free money'
  • 'it irreversibly kills cells by fixing them in place so this is irrelevant'
  • 'regardless of how good the scans look, this is just a con job'
  • 'what's the big deal, we already know frogs can do this, but what does it have to do with humans; anyway, it's a quack science which we know will never work'

Even if a human brain is stored, successfully scanned, and emulated, the continued existence - nay, majority - of body-identity theorists ensures that there will always be many people who have a bulletproof argument against: 'yeah, maybe there's a perfect copy, but it'll never really be you, it's only a copy waking up'.

More broadly, we can see that there is probably never going to be any 'Sputnik moment' for cryonics, because the adoption curve of paid-up members or cryopreservations is almost eerily linear over the past 50 years and entirely independent of the evidence. Refutation of 'exploding lysosomes' didn't produce any uptick. Long-term viability of ALCOR has not produced any uptick. Discoveries always pointing towards memory being a durable feature of neuronal connections rather than, as so often postulated, an evanescent dynamic property of electrical patterns, have never produced an uptick. Continued pushbacks of 'death' have not produced upticks. No improvement in scanning technology has produced an uptick. Moore's law proceeding for decades has produced no uptick. Revival of rabbit kidney, demonstration of long-term memory continuity in revived C. elegans, improvements in plastination and vitrification - all have not or are not producing any uptick. Adoption is not about evidence.

Even more broadly, if you could convince anyone, how many do you expect to take action? To make such long-term plans on abstract bases for the sake of the future? We live in a world where people cannot save for retirement and cannot stop becoming obese and diabetic despite knowing full well the highly negative consequences, and where people who have survived near-fatal heart attacks are generally unable to take their medicines and exercise consistently as their doctors keep begging them. And for what? Life sucks, but at least then you get to die.

Comment author: Gleb_Tsipursky 10 February 2016 09:19:30PM 0 points [-]

How much would a user have to know about LW to think to do that? Heck, even I didn't think of suggesting to Caleb to do that, as that notion didn't occur to me. You're failing at other minds.

Comment author: RevPitkin 10 February 2016 09:17:42PM 1 point [-]

I agree with this assessment. I often think of it using the language of fundamentalism. Fundamentalism is at its core the belief that I arrived at the only/best/real answer and that anyone who didn't is either dumb or bad. It leads to disrespect of other groups and an unwillingness to see any sort of common ground. In my opinion both theist and atheist groups can produce that sort of fundamentalist, though religion produces many more. Let's hope more people will join group A.

Comment author: Gleb_Tsipursky 10 February 2016 09:15:15PM 0 points [-]

No worries, delays happen!

Regarding secular experiences relating to religion, you might want to check out the discussion here about the article written in response to yours. Might pick up some good ideas there for relevant points to make.

Comment author: Val 10 February 2016 09:11:30PM *  0 points [-]

When the topic of religion and rationality comes up, I think the classification atheist / theist might be a very flawed one in this topic. I propose a different classification:

Let's consider group A to be people who are curious about whether there is much more to our world than what we can perceive with our organs and our instruments. They ask themselves whether there might be some higher meaning in this world, whether we are really just looking at shadows cast onto the wall in a cave, thinking that that's our entire universe, while there might be something much more out there, something we can't even imagine. And these people search for ways to experience this feeling, they seek to understand the concept most people call "God". Some of them find it, and become theists. Some people don't find it, or assign a different concept to it, or find other goals which they perceive to be more fulfilling, and become atheists/nontheists. But both of these know what they were searching for and don't condemn those who reached a different conclusion.

Let's consider group B to be people who wish to feel that they are better than other people, or at least that there are plenty of people who are worse than them. They want to belong to a group, to a community, where they are respected because they have similar opinions as others in the group. Another major motivation for joining that group is that they can now feel themselves to be superior to people outside of this group. To this group belong those atheists whose main motivation for being an atheist is that they can feel superior to people they consider stupid, and also those religious people whose main motivation for being religious is that they can feel superior to people they consider immoral.

I think the difference between groups A and B is much bigger than the difference between atheists in A and theists in A, or between atheists in B and theists in B.

I admit that I'm basing these observations on my personal experiences, but as I'm eager to explore different communities with very varying value systems, I had the opportunity of meeting many people from all 4 groups of the above classification.

Comment author: Lumifer 10 February 2016 09:10:23PM 0 points [-]

It's easy to get a LW account - takes one minute - but it's not easy to get karma sufficient to post.

It is trivially easy. You put up a comment saying "I wish to make a post about this-and-that, but lack karma. I would appreciate gifts of karma so that I could post" and lo and behold! in a few hours at most you have sufficient karma to post.

Comment author: Crux 10 February 2016 09:09:35PM *  0 points [-]

Let me summarize in my own words some of the points in your post:

Many members of the PUA community:

  • take it too far and believe that newbies should immediately dive head-first into doing uncomfortable and anxiety-producing approaches in often-hostile environments. (Which causes these newbies to wall off their real selves and hide behind manufactured personalities.)

  • are paranoid about girls cheating on them and think a single slip into beta-provider mode may seal a crushing and depressing fate. (Which prevents them from opening up and showing vulnerability, which is required for escalating into a love relationship.)

  • believe that showing weakness in a relationship is always and everywhere a poor tactic. (Which causes the same problem as the last bullet.)

  • are depressed even if they have had a lot of success attracting women, as evidenced by two of the key individuals, Tyler and Mystery, encountering this issue. (Which shows that PUA working for seduction doesn't necessarily mean it works for a good life.)

  • lose a sufficient amount of skill after a short enough time out of the game to suggest that they failed to create deeply-rooted changes in themselves. (Which stands as more evidence that PUA teaches people how to put on an act rather than how to truly improve themselves.)

Am I on the right track?

Although I agree with you on all of these claims, I don't agree with you on what I perceive to be the overall argument you're constructing, which is that reading a large selection of material from the PUA community is unlikely to be a good way for a man to better himself in the realm of achieving genuine connections with women he desires either sexually or romantically.

Before I continue: Have you read HughRistik's writing here on Less Wrong?

Comment author: Lumifer 10 February 2016 09:05:57PM *  2 points [-]

I am curious now about the interaction between downvoting a comment and replying to it.

I have a personal policy of either replying to a comment or downvoting it, not both. The rationale is that downvoting is a message and if I'm bothering to reply, I can provide a better message and the vote is not needed. I am not terribly interested in karma, especially karma of other people. Occasionally I make exceptions to this policy, though.

Comment author: Vaniver 10 February 2016 09:05:25PM 0 points [-]

Blocking downvoting responses I could be convinced of, but blocking upvoting responses seems like a much harder sell.

Comment author: Nornagest 10 February 2016 09:05:24PM *  0 points [-]

Standing just on this side of a line you've drawn is only a problem if you have a mod staff that's way too cautious or too legalistic, which -- judging from the Eugine debacle -- may indeed be a problem that LW has. For most sites, though, that's about the least challenging problem you'll face short of a clear violation.

The cases you need to watch out for are the ones that're clearly abusive but have nothing to do with any of the rules you worked out beforehand. And there are always going to be a lot of those; the more rules you have and the stricter they are, the more of them you'll see (there's the incentives thing again).

Comment author: RevPitkin 10 February 2016 09:03:10PM 0 points [-]

You know, I have actually not read that in Christian apologetics. I believe it's there, but in the context of this article it came out of discussion with Gleb.

Comment author: RevPitkin 10 February 2016 09:02:01PM 5 points [-]

Finally got a chance to start an account. Sorry for the delay. I've enjoyed reading the comments, and there are some very good points raised. I realize now that trust in sensory experience was not the strongest argument. What I was hoping for with it was to show an example of faith that secular people can relate to. It does not seem like it landed, so I may have to keep thinking about what those might be, realizing that there is not going to be anything directly analogous to religious faith. I wonder if something like "faith in the scientific method to help understand the world" might better illustrate the point I was going for?

Comment author: OrphanWilde 10 February 2016 09:01:59PM 1 point [-]

I think it's sufficient to just prevent voting on children of your own posts/comments. The community should provide what voting feedback is necessary, and any voting you engage in on responses to your material probably isn't going to be high-quality rational voting anyways.

Comment author: OrphanWilde 10 February 2016 08:59:59PM 2 points [-]

Not to insult your work as a tyrant, but you were managing the wrong problem if you were spending your time trying to write ever-more specific rules. Rough rules are good; "Don't be a dick" is perhaps too rough.

You don't try to eliminate fuzzy edges; legal edge cases are fractal in nature, and you'll never finish drawing lines. You draw approximately where the lines are, without worrying about getting it exactly right, and just (metaphorically) shoot the people who jump up and down next to the line going "Not crossing, not crossing!". (Rule #1: There shall be no rule lawyering.) They're not worth your time. For the people random-walking back and forth, exercise the same judgment as you would for "Don't be a dick", and enforce it just as visibly.

(It's the visible enforcement there that matters.)

The rough lines aren't there so rule lawyers know exactly what point they can push things to; they're there so the administrators can punish clear infractions without being accused of politicizing, because if the administrators need to step in, odds are there are sides forming if not already formed, and a politicized punishment will only solidify those lines and fragment the community. (Eugine Nier is a great example of this.)

Comment author: Gleb_Tsipursky 10 February 2016 08:54:17PM 0 points [-]

In saying "doesn't have enough karma," I was pointing to the obstacle to him posting. It's easy to get a LW account - takes one minute - but it's not easy to get karma sufficient to post. Anyway, I don't think this thread is helpful to continue anymore.

Comment author: Gleb_Tsipursky 10 February 2016 08:53:38PM 1 point [-]

In saying "doesn't have enough karma," I was pointing to the obstacle to him posting. It's easy to get a LW account - takes one minute - but it's not easy to get karma sufficient to post.

I think you might have missed his comments, his LW name is RevPitkin.

Comment author: RevPitkin 10 February 2016 08:53:27PM 3 points [-]

The article is aimed at both. Yes, it is probably more aimed at believers because, as a minister, that is the audience most receptive to me. For believers I hope to show that rationality is not always antithetical to religious practice. For secular people I hope to show that there are things in common between the religious and the secular. We don't have to always be at odds. You're right, and others who have pointed it out are right, that we all start with sensory experience. It would be interesting to discuss where sensory experience begins to lead religious people to faith.

Comment author: Gleb_Tsipursky 10 February 2016 08:50:56PM 0 points [-]

Caleb, by aspiring rationalist I mean one who is engaged in the rationality community, it's a LW jargon term :-)

Comment author: Val 10 February 2016 08:48:57PM 1 point [-]

One of the objectives of LW is to fight against biases. I've seen several comments on this site in previous topics which paint an almost strawman-like image of religious people (especially Christians - probably as they are the largest group in the countries where most LW readers live) and express a strong disbelief in the possibility of faith and rationality co-existing. Therefore an article or a discussion about the compatibility of religion and rationalism might have its merits here.

Comment author: RevPitkin 10 February 2016 08:46:50PM 2 points [-]

I think Augustine would be an interesting candidate. John Wesley from my own denomination. Many of the early church theologians. We live with a fairly well developed system of theology and Christian belief. However, the early church had to define and articulate the faith. For this they used the methods of logical inquiry available to them, based on the idea that theology had to be understandable and had to be internally consistent. So many of them used tools of logic and reason to examine the Christian faith. Were many of them rationalists in the modern sense? No, but were they in their time and place? Yes.

Comment author: OrphanWilde 10 February 2016 08:42:29PM 2 points [-]

I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any system will be available on Github.

All security ultimately relies on some kind of obscurity, this is true. But the first pass should deal with -dumb- evil. Smart evil is its own set of problems.

I remember that case, and I would put that in the "downvoting five terrible politics comments" category, since it wasn't disagreement on that topic spilling over to other topics.

You would. Somebody else would put it somewhere else. You don't have a common definition. Literally no matter what moderation decision is made in a high-enough profile case like that - somebody is going to be left unsatisfied that it was politics that decided the case instead of rules.

Comment author: RevPitkin 10 February 2016 08:40:41PM 3 points [-]

I don't think I'm the only one. I just think I'm the only one to get mixed up in the rationality community, thanks to Gleb and Columbus Rationality. Most mainline Protestant ministers are well educated, and many are deeply engaged with the practice of critical thinking.

Comment author: Crux 10 February 2016 08:40:24PM *  1 point [-]

As I have already stated, I am objecting to exactly one statement

This is your problem right here. You can't simply single out a specific statement and attempt to grapple with its internal logic. Again, Jiro's response is highly contextual and only makes sense when you consider the big picture. Have you read the subthread carefully, going all the way back to Clarity's question? Have you read Roosh's article? If you haven't done these things, then you're being irresponsible in your attempt to interpret Jiro.

Let's look again at the statement you're objecting to:

If you don't want people to be convicted of rape based on evidence obtained by torture, you also want rape to be legal

Oh wait, you misquoted Jiro. Let's take a look at what Jiro actually said:

If you don't want people to be convicted of rape based on evidence obtained by torture, you also "want rape to be legal"

See the quotation marks?

Jiro's whole response was an attempt to explain that we shouldn't use the phrase "want rape to be legal" to describe either Roosh's position (that rape should be legal on private property) or the analogy (that rape convictions based on evidence obtained by torture should be thrown out) because it makes it sound like Roosh or the hypothetical person in the analogy endorses rape.

If I sound condescending, it's because it's tiresome to argue with someone who is taking a single point as literally as possible while neglecting to look into the context of the discussion.

Taking a step back:

Jiro expressed uneasiness about submitting his or her post, probably because he or she knows how likely explicit discussions on these topics are to provoke angry or offended replies. While you didn't seem offended, you nevertheless began your reply with an emotionally charged claim that Jiro seemed "confused". I'm sure you're aware that such phrasing provokes the same kind of emotions that you're experiencing with my patronizing responses.

I believe that it's very important for people to speak openly on these kinds of subjects, so when Jiro made what I interpreted as a solid point and then showed uneasiness about being part of the conversation, I found this somewhat alarming. I wrote a reply, and then soon afterwards I discovered your response, which began in a condescending way and then continued into what I considered (and still consider) a misinterpretation which demonstrates lack of care and thoroughness and stands as a frivolous disincentive for Jiro to jump into similar discussions in the future.

I admit that I felt a bit of annoyance right from the beginning. The emotional charge you can feel channeled through my words is a product of status-posturing emotions related to defending Jiro.

Comment author: Vaniver 10 February 2016 08:40:19PM 0 points [-]

I agree with you that grudge-making should be discouraged by the system.

Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments

Hmm. I think downvoting a response to one's material is typically a poor idea, but I don't yet think that case is typical enough to prevent it outright.

I am curious now about the interaction between downvoting a comment and replying to it. If Alice posts something and Bob responds to it, a bad situation from the grudge-making point of view is Alice both downvoting Bob's comment and responding to it. If it was bad enough to downvote, the theory goes, that means it is too bad to respond to.

So one could force Alice to choose between downvoting and replying to the children of posts she makes, in the hopes of replacing a chain of -1 snipes with either a single -1 or a chain of discussion at 0.

Comment author: bogus 10 February 2016 08:29:58PM 0 points [-]

Doesn't severe depression have a DALY weight? Of course one could also be miserable for all sorts of reasons without actually being depressed in a medical sense, and DALYs wouldn't account for this. But that's just one of the many ways in which DALYs optimize for practicality, compared to QALYs.

Comment author: Vaniver 10 February 2016 08:29:12PM 1 point [-]

Begging your pardon, but I know the behavior you're referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That's a recipe for disaster.

My impression is that the primary benefit of a concrete definition is easy communication; if my concrete definition aligns with your concrete definition, then we can both be sure that we know, the other person knows, and both of those pieces of information are mutually known. So the worry here is if a third person comes in and we need to explain the 'no vote manipulation' rule to them.

I am not as impressed with algorithmic detection systems because of the ease of evading them with algorithms, especially if the mechanics of any system will be available on Github.

Would we say that's against the rules, or no?

I remember that case, and I would put that in the "downvoting five terrible politics comments" category, since it wasn't disagreement on that topic spilling over to other topics.

Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.

My current plan is to introduce karma weights, where we can easily adjust how much an account's votes matter, and zero out the votes of any account that engages in vote manipulation. If someone makes good comments but votes irresponsibly, there's no need to penalize their comments or their overall account standing when we can just remove the power they're not wielding well. (This also makes it fairly easy to fix any moderator mistakes, since disenfranchised accounts will still have their votes recorded, just not counted.)
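
As a minimal sketch of how such karma weights might work (hypothetical names, not LW's actual codebase): every vote is stored, each account has a multiplier, and zeroing the multiplier disenfranchises the account without deleting its vote history.

    class KarmaSystem:
        """Toy model: votes are always recorded; per-account weights decide how much they count."""
        def __init__(self):
            self.vote_weight = {}   # account -> multiplier (default 1.0)
            self.votes = []         # (voter, comment_id, direction) tuples

        def cast_vote(self, voter, comment_id, direction):
            self.votes.append((voter, comment_id, direction))

        def set_weight(self, account, weight):
            # e.g. set_weight("manipulator", 0.0) after a mod decision;
            # restoring the weight later re-counts the stored votes.
            self.vote_weight[account] = weight

        def comment_score(self, comment_id):
            return sum(self.vote_weight.get(voter, 1.0) * direction
                       for voter, cid, direction in self.votes
                       if cid == comment_id)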

Comment author: Nornagest 10 February 2016 08:24:04PM *  3 points [-]

Speaking as someone that's done some Petty Internet Tyrant work in his time, rules-lawyering is a far worse problem than you're giving it credit for. Even a large, experienced mod staff -- which we don't have -- rarely has the time and leeway to define much of the attack surface, much less write rules to cover it; real-life legal systems only manage the same feat with the help of centuries of precedent and millions of man-hours of work, even in relatively small and well-defined domains.

The best first step is to think hard about what you're incentivizing and make sure your users want what you want them to. If that doesn't get you where you're going, explicit rules and technical fixes can save you some time in common cases, but when it comes to gray areas the only practical approach is to cover everything with some variously subdivided version of "don't be a dick" and then visibly enforce it. I have literally never seen anything else work.

Comment author: OrphanWilde 10 February 2016 08:15:36PM 1 point [-]

Taking another tack - human beings are prone to failure. Maybe the system should accommodate some degree of failure, as well, instead of punishing it.

I think one obvious thing would be caps on the maximum percent of upvotes/downvotes a given user is allowed to be responsible for, vis a vis another user, particularly over a given timeframe. Ideally, just prevent users from upvoting/downvoting further on that user's posts or their comments past the cap. This would help deal with the major failure mode of people hating one another.

Another might be, as suggested somewhere else, preventing users from downvoting responses to their own posts/comments (and maybe prevent them from upvoting responses to those responses). That should cut off a major source of grudges. (It's absurdly obvious when people do this, and they do this knowing it is obvious. It's a way of saying to somebody "I'm hurting you, and I want you to know that it's me doing it.")

A third would be - hide or disable user-level karma scores entirely. Just do away with them. It'd be painful to do away with that badge of honor for longstanding users, but maybe the emphasis should be on the quality of the content rather than the quality (or at least the duration) of the author anyways.

Sockpuppets aren't the only failure mode. A system which encourages grudge-making is its own failure.

Comment author: OrphanWilde 10 February 2016 08:02:16PM 2 points [-]

Begging your pardon, but I know the behavior you're referring to; what concerns me with the increased ability to detect this behavior is the lack of a concrete definition for what the behavior is. That's a recipe for disaster.

A concrete definition does enable "rule-lawyering", but then we can have a fuzzy area at the boundary of the rules, which is an acceptable place for fuzziness, and narrow enough that human judgment at its worst won't deviate too far from fair. For example (using a rule that doesn't currently exist), we could make a rule against downvoting more than ten of another user's comments in an hour, and then create a trigger that goes off at 8 or 9 (at which point maybe the user gets flagged, and sufficient flags trigger a moderator to take a look), to catch those who rule-lawyer, and another that goes off at 10 and immediately punishes the infractor (maybe with a 100 karma penalty), while still letting people know what behavior is acceptable and unacceptable.
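
A rough sketch of what that trigger could look like (the one-hour window, the 8/10 thresholds, and the 100-point penalty are the illustrative numbers from the paragraph above, not an actual LW rule):

    import time
    from collections import deque

    FLAG_AT = 8      # soft trigger: flag for moderator attention
    LIMIT = 10       # hard rule: no more than ten downvotes of one user per hour
    WINDOW = 3600    # seconds

    class DownvoteMonitor:
        def __init__(self):
            self.history = {}  # (voter, target) -> deque of downvote timestamps

        def record_downvote(self, voter, target, now=None):
            now = time.time() if now is None else now
            q = self.history.setdefault((voter, target), deque())
            q.append(now)
            while q and now - q[0] > WINDOW:
                q.popleft()
            if len(q) >= LIMIT:
                return "penalize"  # e.g. apply the 100-karma penalty immediately
            if len(q) >= FLAG_AT:
                return "flag"      # enough flags -> a moderator takes a look
            return "ok"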

To give a specific real-world case, I had a user who said they were downvoting every comment I wrote in a particular post, and encouraged other users to do the same, on the basis that they didn't like what I had done there, and didn't want to see anything like it ever again. (I do not want something to be done about that, to be clear, I'm using it as an example.) Would we say that's against the rules, or no? To be clear, nobody went through my history or otherwise downvoted anything that wasn't in that post - but this is the kind of situation you need explicit rules for.

Rules should also have explicit punishments. I think karma penalties are probably fair in most cases, and more extreme measures only as necessary.

Comment author: Lumifer 10 February 2016 07:51:44PM 0 points [-]

This would prevent karma gain from "interest" on old comments

If you want to reward having a long history of comments, you could prohibit only downvoting of old comments.

it wouldn't prevent ongoing retributive downvoting

I doubt you could algorithmically distinguish between downvoting a horticulture post because of disagreements about horticulture and downvoting a horticulture post because of disagreements about some other topic.

But I suspect voting rate limiters should keep the problem in check.

Comment author: tadrinth 10 February 2016 07:39:46PM 0 points [-]

At this point, I won't be confident that I've been successfully preserved until ultra-high-resolution electron micrographs of my brain are in Amazon's S3 storage, replicated across multiple regions. Any storage that doesn't have redundancy doesn't count as safe.

Comment author: Nornagest 10 February 2016 07:39:26PM *  0 points [-]

The cheapest technical fix would probably be to prohibit voting on a comment after some time has passed, like some subreddits do. This would prevent karma gain from "interest" on old comments, but that probably wouldn't be too big a deal. More importantly, though, it wouldn't prevent ongoing retributive downvoting, which Eugine did (sometimes? I was never targeted) engage in -- only big one-time karma moves.

If we're looking for first steps, though, this is a place to start.

Comment author: Brillyant 10 February 2016 07:36:11PM *  1 point [-]

I'm not sure this distinction, while significant, would ensure "millions" of people wouldn't sign up.

Presumably, preserving a human brain "successfully", according to some reasonable definition of the term, would be a big deal and cause a lot of interest in cryonics. It would certainly seem like significant progress towards the sort of life-extension that LW's been clamoring about.

Exactly how many new contracts they would get seems hard to predict, but I don't consider a number larger than 1,000,000 to be unreasonable.

Comment author: Vaniver 10 February 2016 07:21:08PM *  1 point [-]

What are we calling retributive downvoting, incidentally?

The targeted harassment of one user by another user to punish disagreement; letting disagreements on one topic spill over into disagreements on all topics.

That is, if someone has five terrible comments on politics and five mediocre comments on horticulture, downvoting all five politics comments could be acceptable but downvoting all ten is troubling, especially if it's done all at once. (In general, don't hate-read.)

Another way to think about this is that we want to preserve large swings in karma as signals of community approval or disapproval, rather than individuals using long histories to magnify approval or disapproval. It's also problematic to vote up everything someone else has written because you really like one of their recent comments, and serial vote detection algorithms also target that behavior.
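
As a rough illustration of the kind of heuristic a serial vote detector might use (the thresholds are invented for the example; this is not LW's actual detection code):

    def looks_like_serial_voting(votes, voter, target, now,
                                 min_votes=10, min_fraction=0.9, window_secs=3600):
        # votes: iterable of (voter, target, direction, timestamp); direction is +1 or -1.
        recent = [d for v, t, d, ts in votes
                  if v == voter and t == target and now - ts <= window_secs]
        if len(recent) < min_votes:
            return False
        downs = sum(1 for d in recent if d < 0)
        ups = len(recent) - downs
        # A burst of nearly all downvotes (or all upvotes) aimed at one user is
        # the pattern being targeted, regardless of direction.
        return max(downs, ups) / len(recent) >= min_fraction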

We typically see this as sockpuppets instead of serial upvoters, because when someone wants to abuse the karma system they want someone else's total / last thirty days to be low, and they want a particular comment's karma to be high, and having a second account upvote everything they've ever done isn't as useful for the latter.
