Open Thread, Aug 29. - Sept 5. 2016
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "
Comments (119)
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted. "
http://arxiv.org/abs/1602.04938
Man burns down house remotely over the internet, for insurance, no accident.
Edit: it was only posited; but investigators rigged up the supposed instrument of doom, a network printer, with a piece of string.
http://www.stuff.co.nz/national/crime/83868063/Northland-man-denies-burning-down-house-but-insurer-refuses-to-pay-out
Anyone know where I can find melatonin tablets <300 mcg? Splitting 300 mcg into 75 mcg quarters still gives me morning sleepiness, thinking smaller dose will reduce remaining melatonin upon wake time. Thanks.
The Netherlands. I think that they will ship anywhere in the EU, even places where it requires a prescription. I don't know about the US. But I'm skeptical that dose is your problem.
You could use liquid melatonin instead, and dilute it to a usably small, accurately measurable dose.
Or he could use whatever pills to make his own liquid melatonin, and titrate from there.
I bought a bunch of stoppered bottles with calibrated eye droppers for the purpose. Six bottles for 7 bucks at Amazon, though that listing is no longer available.
Thanks. I haven't used liquid products much before. Anything you've noticed that's significantly different in terms of onset time, effect duration, etc?
I haven't used them since I haven't tried to go that low dose yet. I assume they would be absorbed faster but otherwise similar.
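If anyone wants to sanity-check the dilution arithmetic, it's just tablet strength divided by total drops. A throwaway sketch (the tablet size, water volume, and drops-per-ml figure below are example numbers for illustration, not recommendations):

```python
def dose_per_drop(tablet_mcg, water_ml, drops_per_ml=20):
    """Micrograms of melatonin per drop after dissolving one tablet.

    drops_per_ml ~ 20 is only approximate for an uncalibrated dropper;
    a calibrated eye dropper makes the volume, and hence the dose, exact.
    """
    return tablet_mcg / (water_ml * drops_per_ml)

# One 300 mcg tablet dissolved in 20 ml of water:
# 300 / (20 * 20) = 0.75 mcg per drop,
# so 10 drops deliver 7.5 mcg -- a tenth of a 75 mcg quarter-tablet.
print(dose_per_drop(300, 20))  # 0.75
```

The point of going via liquid is that drops give you a much finer dial than quartering tablets ever can.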
My girlfriend and I disagreed about focusing on poorer vs. richer countries in terms of doing good. She made an argument along the lines of:
What do you make of it?
I think the regulatory targeting of government-enabled shakedowns toward the wealthier middle class is much more of an issue. The wealthy middle class can afford to put up with more than the poor can.
Though they try, it's hard to market-segment your shakedowns, so the poor are often just priced out of the market.
Market segmentation by price/quality/status works just fine where there is a free market. If you've got the money to buy it, goods will come.
If that's your real reason, perhaps the best way to help poor Australians is to import stuff from Africa so that they get that supply of suitable goods. Or better yet invite some Kenyans to teach them how to make things themselves.
Nope. Won't work.
The cheap goods can't be available to the poor, because then they'd be available to the not-poor, and the government-enabled rent-seeking would no longer work.
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure-centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
A lot of singularitarian thought tries to hold human desire to be exogenous and untouchable, which seems to be a rather odd blind spot to have... we rightly discard the notion that death is desirable because it is natural, but not the notion that desire is sacred and hence should always be fulfilled, fighting against any and all limits?
Hi, I'm curious what rationalists (you) think of this video if you have time:
Why Rationality Is WRONG! - A Critique Of Rationalism https://www.youtube.com/watch?v=iaV6S45AD1w 1 h 22 min 47 s
Personally, I don't know much about all of the different obstacles in figuring out the truth so I can't do this myself. I simply bought it because it made sense to me, but if you can somehow go meta on the already meta, I would appreciate it.
Watching a long, low-quality video isn't a good use of time. Can you summarize which arguments he made that you think made sense? Even better, which arguments made sense and aren't strawmen?
I agree with the other commenters about this.
So I thought "maybe it gets more interesting later on" and skipped to 50:00. At which point he isn't bothering to make any arguments, merely preening over how he understands the world so much more deeply than rationalists, who will come and bother him with their "arguments" and "contradictions"; he can just see that they "haven't got any awareness", that trying to engage with them would be like trying to teach calculus to a dog, and that the mechanisms used to brainwash suicide bombers and fundamentalists are "the exact same mechanism that very intelligent scientists use to prove their theories of space and time and whatever else". OK, then.
Since I obviously wasn't enlightened enough for minute 50 of this thing, I went back to 40:00. He says it's important to connect with your emotions and not deny they're there (OK), and then he says that "rational people just assume that, well, we don't need any of that emotional stuff". OK, then. (And rational people like scientists get emotional when they argue with highly irrational people because they're attached to their rational models of the world and don't want to hear anything contrary to those models because of cognitive dissonance; they close their eyes and ears to the arational because they demonize it as irrational.)
OK, clearly still too advanced for me. Back to 30:00. Apparently, if your "awareness" is low then you think thinking is great (OK...), you think thinking is all there is (huh?), you think thinking is a powerful tool for understanding reality (OK...), but as you gain in "awareness" you realise that thinking is a system of symbols, and "this gulf between the map and the territory just grows wider and wider and wider, until you see that the map is just a complete fiction, a complete illusion", and once you realise this you see "the gross limitations of thinking". Einstein's theory of gravity isn't revealing anything deep about the world, it's just a set of sounds and symbols on paper. "That's what it literally is, except your awareness is too low to actually see that". And then he pulls an interesting move where he complains about people with "low" "awareness" getting "sucked into the content" of a theory because they don't see the "larger context". You might think he's now going to explain what the larger context is and how it should affect our understanding of relativity. Ha, ha. What a silly idea. Only someone with low awareness would expect that. What he actually does is to tell us how when rationalists criticize him they're doing it "on the level of thoughts" while he is "on the level of awareness, which is a much higher level". Bleh.
Oh, wait, he has something resembling an actual point somewhere around 35:00. Rationalists give too much credit to logic, he says, because logic "has no teeth", because it depends on its premises and the premises are doing the real work, and if your premises are dodgy then so are your conclusions, and "most of them are very very wrong". Cool, he's going to tell us what wrong premises we have. ... Oh, no, silly me, he isn't. He just says they're very wrong but gives no specifics.
So far as I can see, he alternates between three main things.
If there's anything actually useful there, I missed it. And now I've listened to enough of this without any sign that he has anything useful to teach me, and I'm going to go and do something else. My apologies for not sitting through all 82 minutes of it.
But that's what you're mostly doing in your post. I will bring this up below.
I don't think everyone shares that view; at least I don't. I don't know if I am contradicting myself, though. Suppose someone was similar to me but differed in opinion. The contradiction would then lie in whether I told you the world is your mirror.
That's what he said. Of course it's kind of harsh, but it's his way of going about these things, I think. I don't know why, or what's most effective, but I myself am unaffected or affected positively. That might be just because I agree.
By becoming aware of the emotions that you are suppressing, not the "feeling emotions" rationally because the reason of emotion is rational.
There is awareness of thoughts, not only thoughts, and the awareness is not a thought. That is a definitional game about what a thought is; consider it as being different from awareness.
Yes, you don't have a thought of a thought, you have awareness of thought. Otherwise, you're trapped in thinking and don't know that there is something else.
See how he never mentions the larger context of an understanding of relativity itself? Rather, the context of which sounds and symbols make up our "reality".
You missed the point; nothing was said about affecting the understanding of relativity. You fell into the exact paradigm the video described.
The larger context of the symbols and sounds on the paper. Not the theory itself according to physicists. That's the matrix.
He gave the specifics right after that, rationality itself. Asking about the premises which make rationality possible.
It seems like you disagree on numerous points without being aware of it. Like that Einstein's equation is simply symbols and sounds (as is pretty much everything else you attribute meaning to).
Let's say the rational mind cannot understand something, why continue to use the rational mind? Is there something else? Maybe awareness? There might be something worth pursuing there.
Now I know I am not responding to my quote of your text. Rationality is wrong because of rationality itself. It cannot be right without the right context. The context of which rationality exists. Where thinking exists. Which is "outside" the subjective experience according to you. That's the whole point. It's right under your nose if you'd bother to meditate and separate awareness from thoughts.
Well. You're capable of becoming aware as well. It's not a radical difference. :)
Suppressing emotions has nothing to do with rationality as understood by this community. We aren't straw Vulcans. A speech about why straw Vulcanism is bad provides no good critique of what we consider rationalism to be.
For the record, I agree with what gjm said; he wrote it much better than I could.
I feel we have a deep communicational barrier here. You probably didn't read "Rationality A-Z" (the canonical LW text). On the other hand, I have no idea what you mean by "matrix" and "context" and "awareness" and other stuff, and you don't bother to explain. (By "no idea" I actually mean I could imagine hundred different things under each of these labels, and I don't know which one of them is close to the one you mean. That makes the communication difficult.)
From my point of view, it seems like you are "in love" with some words; you associate strong positive emotions with certain nebulous concepts. These are all typical mistakes people make while reasoning; even very highly intelligent people! A part of the mission of this website is to help people overcome making these mistakes.
Maybe I am wrong about you here, but you don't provide enough information for me to judge otherwise. You posted a video of a smug person accusing everyone else, especially "scientists" and "rationalists" of being stupid and having lesser awareness. That's all there is, as far as I see. Color me unimpressed. There are some things that... uhm, are you familiar with the "motte and bailey" concept? Essentially: there are some statements which taken literally are true but trivial, but they can be interpreted more generally, which makes them interesting but false. I suspect this is one of the traps you fell into.
So, here we are... each side convinced that the other side is missing something important, relatively simple, but kinda tricky. Saying "dude, you are just confused!" is obviously not going to help, when the other side is thinking the same thing. Any other idea? From my side, I recommend reading "Rationality A-Z"; there is a free download.
I have not read that.
Virtual reality, as in the movie The Matrix.
This is a bit harder to explain, imagine everything said is out of context from the subjective experience. Context can only be found within the subjective experience.
Awareness is the separation of thoughts from awareness. You can be aware of thoughts, that's awareness, and aware of thoughts which you think is you.
It would be better if I could reason for my point without making a mistake, but unfortunately, that's very hard to do. It's also up to the rationalist to consider opening up to the possibility everything they think is true, is wrong. By this I mean, being able to reason properly will spread more truth, meanwhile it might be futile depending how close-minded rationalists can be. But that's on my current data.
The only way to know you have lesser awareness is by having higher awareness. Then, it repeats itself.
I don't understand, you don't have to be afraid of criticising properly.
This is nothing trivial, this is the truth, and if you are serious about it can see for yourself.
How many pages is it, how do you use the information, and what should you remember?
About a thousand, depends on formatting.
Yeah, that's a lot, and many people complain about it. On the other hand, it provides great insights which can also be found in other books, but reading all those other books would be even more pages. Also, people who read online debates regularly probably read that amount of text every few weeks; they are just not aware of it, because "following 15 Facebook links every day, each on average two pages of text" doesn't feel like "reading 1000 pages of random text every month", even if in reality that's what it amounts to.
I believe reading the book is time well spent (I wish I had a time machine to send the book back to myself as a teenager; it would probably be my favorite one), but that of course is a personal opinion.
Gosh, if only someone associated with LW rationalism had ever thought of that.
Seriously, what you've done here is to come to a group of people whose foundational ideas include "the map is not the territory", "human brains are fallible and you need to pay attention to how your thoughts work", and "you should never be literally 100% sure of anything" and say "Hey, losers! Rationality is overrated because you confuse the map with the territory, you aren't aware of your own thoughts and don't distinguish them from reality, and you're 100% confident you're right and therefore can't change your minds!".
There seems to be quite some denial on LW then regarding the topic. I don't understand why, if what you are saying is true.
That's a straw man argument, as far as I remember, I never said that. Personally, it seems to me as "the map is not the territory" is one of the maps which some, I am not saying you or anyone else, might think is the territory. This is only speculation.
So you do agree with the video, who else?
If, for example, you were the person who was attached to the map being the territory, or not aware of it, then the argument was not a straw man.
Of course, you don't have to agree with a certain method of delivery, like the straw man.
Consider distinguishing between "the map is the territory" and "the map is an accurate representation of the territory".
Regardless of how accurate or inaccurate a map is, it is still a map. But some maps are more accurate than others. That's fine. Those are human projections.
I argue that the territory is arational, which means any representation in relation to the territory is all the same.
The second sentence contradicts the first.
I don't think so. What I see is people pointing out that the video is attacking straw men. (Extra-specially strawy, as regards LW in particular; but very strawy even if applied more broadly to people who explicitly aim to be rational.)
Some of it is things the video said, and you've said you agree with it. I don't think there's anything in my (admittedly not especially generous) paraphrase that doesn't closely match things said in the video.
Nope. I agree with some of what the video says. You know the old joke about the book review? "This book was both original and good. Unfortunately the parts that were original were not good, and the parts that were good were not original." In the same way, the video seems to me to combine (1) stating things that I think would be obvious to almost everyone here, (2) making less-obvious claims without any sort of justification, which in many cases I think are entirely false, and (3) gloating about how the maker is so much more advanced than those poor deluded rationalists.
You couldn't respond to my statement that "the map is not the territory" is one of the maps which you use regularly, and thus you fall into the category the straw man is targeted at. In my opinion, and what I think.
I do agree with it, I think everything is arational and within the arational there is irrationality and rationality.
Which is probably not the target audience. Do you believe there are those who know nothing of rationality, yet think math and language are the territory, and try to be Spock? Although I understand now why you can't agree with all the arguments/fallacies in the video, only a few.
Which less obvious claims without justification and why are they false? That's what I am looking for to learn.
Ok, how does this apply to any of the arguments made?
No, I didn't, which is not the same thing. But yeah, it's hard to respond to because it's not clear what you're saying. Any given thing anyone says can be called a "map", which tells us nothing about the particular thing or the particular person who says it. So if there's a specific criticism you're making, would you care to make it clearer?
Quite likely not. But it's the audience here, to which you brought the video and asked "what do you think?".
I already listed some in an earlier comment. You did reply to that comment but not in a way that gave me much reason to hope for constructive discussion.
I hope you will forgive me for saying that I don't get the impression that you are here to learn at all.
I'm sorry, but I don't understand the question. The things I was describing aren't arguments; my comment applies not to the arguments (of which there are actually rather few in the video) but to the maker's repeated comments about how people who consider themselves rational are so far beneath his level of "awareness".
I do not think further discussion is likely to be very fruitful.
I tried listening to the video at 1.5× speed. Even so, the density of ideas is horribly low. It's something like:
That was the first 16 minutes, then I became too bored to continue.
My opinion?
Well, of course if you define a "rationalist" as a strawman, you can easily prove the strawman is foolish. You don't need more than one hour to convince me about that. No one in this community is trying to derive whether the sun is shining from the first principles.
I am not sure whether "the universe is rational" is supposed to mean (a) that the universe has a relatively short description which could be understood by a mind, or (b) that the universe itself is a mind, specifically a rational one. It seems like the meaning was switched in the middle of the argument, a sleight of hand.
In summary, my impression is of muddled thinking, and of feeling superior to the imaginary opponents. Actually, maybe the opponents are not imaginary -- there are many fools of various kinds out there -- it just has nothing to do with the kind of "rationality" that we use here, such as described e.g. by Stanovich.
Regarding the "universe is rational" strawman: I think the mistake the video is trying to point out is the mistake of thinking that a description of the universe is the universe, when it is only a description; same with anything. It is language, and that is the limitation.
So for those that believe the universe is for example physics, instead of our projection, that's the flaw I think. It's simple, ask a person if gravity is real, after they respond "yes" ask them, is this not a human projection (your projection) upon the universe? What is the real universe?
What I wonder is what lesswrongers think of this strawman if it wasn't one, an actual argument towards someone (rationalist in this context) who made the statement gravity is real and not a projection of mind: "G R A V I T Y and everything else which is occurring to me in consciousness"
I'm not sure what you mean by this, because "the universe is a mind" seems more of an argument than stating that the opponent believes "the universe is rational" (the strawman), like "What you think is the universe is your mind's projection of labels and symbols, yet you're not aware of it"
Well. I think usually what we see in others is just a projection of our own mind. "The world is your mirror"
But is there someone who can refute the argument made in the video, if you had the argument which the strawman was?
Otherwise it seems to me "Only fools would make the argument of which the strawman was targeted towards".
I wonder if any rationalist has ever heard of "the map is not the territory". /s
Ask a person whether a tree is real. Isn't that also just a human projection upon the nature?
We could spend days trying to pinpoint what exactly we mean by "tree" etc. I am just saying that this is not specific to science or "rationalists", so why use it as an argument against them? There are useful things that could be said about the topic, but the "drive-by shooting" done in the video helps no one.
The LW-style answer would be something like: Yes, I obviously perceive the idea of gravity in my mind (because that's the only organ I have for perceiving ideas), but it is reasonable to assume that there is something "out there" that causes those perceptions in systematic ways. (I might be living in Matrix, but then "gravity" would refer to the specific law of the Matrix.)
That would probably require having the argument in a shorter written form, with footnotes explaining what the author actually meant by saying this or that.
Otherwise: inferential distances, illusion of transparency, and all the ways words can go wrong. :(
Most things are, or I can't really know what is not a human projection, but as long as we're aware of it, it's fine.
Well, there are probably "rationalists" aware of this or "scientists" as explained early on in the video. The argument is for those who aren't aware of the "map is not the territory".
Whether or not it helps someone or doesn't, that's hard to know, the like:dislike ratio and comments could be scraped. How this is relevant I don't understand. You don't know with a high %, neither do I.
People who take offense probably dislike and click away, or don't watch the whole video; those who argue against it have already failed?
Now you, however, are still perceiving the idea of your mind, organ, and so forth. That's just other layers deep which you aren't aware of. Which makes it seem you don't fully understand the argument: which is somewhere, something, subjective experience. Whatever is occurring when you're meditating, for example.
But the LW-style answer seems like an agreement: is this true?
The context is found outside the matrix, so anything and everything is out of context.
I want to clarify that writing about these things is equally untrue as the empirical investigation, so we're both wrong by being in the matrix.
"I'm mapping the trajectory of this planet, yet I understand this is simply a human projection" Of course you can remove the "yet I understand this is simply a human projection" when it's ever-present.
No. If someone says that gravity is real, they usually mean that what the word points to is real. Maps reference objects in the territory. A person well educated in physics will tell you, when you ask them for the specifics of the gravitational effect, that it's due to spacetime curvature and not because a force is pulling on substance in the way Newtonian metaphysics assumes. If you ask them whether gravity exists, they will still say "yes".
It's quite typical for laypeople to misuse language and overload terms. According to https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists it's a typical issue for laypeople who think they have made discoveries in physics.
The sleight of hand of going from rational₁ to rational₂, as described by Viliam, is also typical for that kind of thinking. It's interacting with language in a way that's fundamentally flawed.
Objects is still a map, so is territory, so is this entire sentence. That's why it's a matrix. (virtual reality)
Which is one of the mistakes made by said scientists, especially if you ask them multiple times on this same point to point out that there might be a flaw, because they won't question it otherwise.
"The cat sitting on the mat" is a map. The cat sitting on the mat is territory.
Insisting that your opponents have an extra pair of quotes around everything, while they insist they don't, is not much of an argument.
The argument is that everything is a map, including anything written here, in quotes or not. It's the written language and so forth, however many layers deep the maps go.
By excluding all maps in direct experience you uncover the territory. Which is you. Which is arational. But only by direct experience.
The second sentence contradicts the first. Either there is a territory to be uncovered, or it is not the case that everything is a map.
Our world works in different ways than the movie matrix.
It's no mistake. It's just interpreting words to have a certain meaning and it's quite valuable to see them as having that meaning for practical purposes.
It's an analogy.
But that doesn't make it more likely to be true, especially if we are certain it is a human projection.
We usually believe that despite the fact that the content of our minds is only mental, we aren't Boltzman brains or live in a simulation but that there's a physical world out there with whom we interact. Do you disagree with the existence of such a physical world?
To judge how likely it is that something is true, you first have to understand what's meant by the claim. Currently you seem to deal with language in a way where you don't get what's meant. It's like tax protesters in the US making claims about what laws are supposed to mean, only to get imprisoned by courts when their interpretation of the meaning of the language differs from that of other people. It's the same mechanism.
Tried listening.
doubt the rest is worth it.
I think it is worth it. You don't have to process the information and adopt it as your own. Simply give an argument against it if you're willing to teach others.
Of course I don't know if it's worth it for you, it certainly is worth it for me to ask.
It's much easier to do that if the author is willing to actually provide an argument in written form instead of only being able to make his case in a YouTube video. Avoidance of the written word in favor of video to make deep arguments is generally a good signal of unclear thinking.
Oh god. This is really bad.
Someone should tell him about the straw vulcan.
We (LWers) shouldn't be so tied to the word "rationality". If you feel personally affected by the idea that someone says this part of your identity is wrong, then maybe it's time to be more fox and less hedgehog.
https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox
I think he's aware of the stereotype, but obviously, from my perspective, people are getting triggered left and right that rationality might somehow be wrong.
Of course not wrong in the sense that rationality in the matrix might still be considered "superior" over all other Ways in the matrix. But it is still the matrix and we're happy to play that game because it's fun :)
So, you keep using that word, "rationality," even though we've mentioned that LW uses it to mean something else. I don't know what you or the creator of the video mean by it, but I'm confident it's not the same. Perhaps instead of claiming that "people are getting triggered," you should ask yourself if you've succeeded in getting your most basic point across, or if we might be confused about which subject matter you want to address. Consider throwing out the video's words and finding new ones.
In addition, a good way to establish that your subject matter is real and not imaginary is to show people. When talking about stupidity this can be rude, but sometimes it seems unavoidable. I suppose in principle you could describe a time when you made the mistake in question.
Rationality, the word, might as well be "tree"; it doesn't mean anything. It's simply a limitation of the mind not to see the obvious truth right there, or let's say nowhere.
I did not succeed in getting my most basic point across, neither do I know how to right now.
With the limitations of language, our current technology? You can only figure out the truth for yourself, it is empirical, it can't be otherwise.
Which mistake? In what larger context?
This is a personal mantra of mine.
Could you offer a synthesis of the argument?
You know, many of the arguments against rationality around today aren't even worth listening to, and wasting 82 minutes of my life listening to the nth post-rationality-which-has-completely-misunderstood-rationality rant would exceed my generosity budget.
Basically, that everything in the universe is simply as it is, and we humans have put a virtual reality layer over it without being aware of it. Subjective experience makes you aware of this. Meditation, enlightenment, etc.
It's fine to calculate things, send people to Mars, and so forth, but we shouldn't attach ourselves to it as if the universe is that way. Scientists should be aware of this, even though they can still do it and enjoy it. Going meta.
Like if you throw a ball, the ball moves through the air in a certain way. You decide to calculate everything about the ball and figure out various laws "of the universe". In that moment you created a human projection of the ball and the universe, not what the universe really is. Your projection.
That gravity does not exist; it is only a concept we project upon the universe. Of course, it is useful for technology, science and so on. It is language, symbols.
That was about 18 minutes into the video with my own twist to it.
This is all fine and dandy as long as the virtual reality that we create has no causal power over the underlying reality. We all know that reductionism is a way to simplify reality so that it may be tractable for our brains; after all, "there's no plane, there are only quarks".
Does it say if this virtual reality has any precedence over the underlying causal reality?
"Researchers discover machines can learn by simply observing, without being told what to look for"
Giving "rewards" for discovering rules, Turing Learning.
http://sciencebulletin.org/archives/4761.html
http://link.springer.com/article/10.1007%2Fs11721-016-0126-1
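The Springer paper describes "Turing Learning": candidate models of a hidden system coevolve with classifiers whose only reward is telling real behaviour from imitated behaviour, so no hand-crafted similarity metric is needed. A rough illustration of the idea (this is my own toy setup, not the authors' code: the "agent" is just a noisy constant, models are guesses at that constant, and classifiers are interval tests):

```python
import random

random.seed(0)

TRUE_MU = 2.0   # hidden parameter of the agent we want to infer
NOISE = 0.5

def agent_sample():
    # Behaviour of the real system: noisy readings around TRUE_MU.
    return random.gauss(TRUE_MU, NOISE)

def model_sample(mu):
    # Behaviour of a candidate model with inferred parameter mu.
    return random.gauss(mu, NOISE)

def says_real(clf, x):
    # A classifier is a (center, width) band: "real" if x falls inside it.
    center, width = clf
    return abs(x - center) <= width

def select_and_mutate(pop, fitness, mutate):
    # Keep the fitter half, refill with mutated copies of the elite.
    elite = sorted(pop, key=fitness, reverse=True)[:len(pop) // 2]
    return elite + [mutate(e) for e in elite]

def turing_learning(generations=200, pop=20, trials=20):
    models = [random.uniform(-5.0, 5.0) for _ in range(pop)]
    classifiers = [(random.uniform(-5.0, 5.0), random.uniform(0.1, 3.0))
                   for _ in range(pop)]
    for _ in range(generations):
        def clf_fitness(clf):
            # Key idea: classifiers are rewarded *only* for correctly judging
            # whether a sample came from the agent or from a model.
            score = 0
            for _ in range(trials):
                score += says_real(clf, agent_sample())
                score += not says_real(clf, model_sample(random.choice(models)))
            return score
        def model_fitness(mu):
            # Models are rewarded for being mistaken for the real agent.
            return sum(says_real(random.choice(classifiers), model_sample(mu))
                       for _ in range(trials))
        models = select_and_mutate(models, model_fitness,
                                   lambda mu: mu + random.gauss(0, 0.2))
        classifiers = select_and_mutate(
            classifiers, clf_fitness,
            lambda c: (c[0] + random.gauss(0, 0.2),
                       abs(c[1] + random.gauss(0, 0.2))))
    return models

models_out = turing_learning()
# After coevolution the surviving models should cluster near TRUE_MU,
# even though nothing ever compared a model's parameter to it directly.
```

The population sizes, mutation scale, and interval-shaped classifiers here are arbitrary choices for the sketch; the paper's actual experiments infer swarm-robot controllers with much richer model and classifier representations.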
And China and Russia have the best coders for algorithms
https://arc.applause.com/2016/08/30/best-software-developers-in-the-world/
Can anyone get this page to open? It's a Stanford report on AI, all 2,800 pages...
https://ai100.stanford.edu/2016-report
Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.
Frequently, when discussing the Great Filter or averting nuclear war, someone will bring up the notion that it would be a good thing. Humanity has such a bad track record with environmental responsibility, or with human rights abuses toward less advanced civilizations, that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen some countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.
Obviously these aren't exactly careful, step by step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of us feels like we're on opposite sides of a debate or anything though, so usually pointing out a few simple facts is enough to get a concession that there are exceptions to the rule "humanity sucks". However, obviously refuting all thousand things, one by one, isn't a sound strategy. There really is a lot of bad stuff that humanity has done, and will continue to do I'm sure.
Usually, I try to point at broad improving trends in infant mortality, war, extreme poverty, etc. I'll argue that the media biases our fears by magnifying all the problems that remain. I paint a rosy future: people fought debtors' prisons in the past, debate universal healthcare today, and in the future will argue fiercely over whether money and work are needed at all in their post-scarcity Star Trek economy. Political rights for minorities yesterday, social justice today, arguments over any minor inconveniences tomorrow. Starvation yesterday, healthy food for all today, gourmet delicacies free next to drinking fountains tomorrow. I figure they're more likely to accept a future where we never stop arguing, but do so over progressively more petty things, and never realize we're in a utopia.
However, I think I might have better luck trying to counter-counter-signal. "Yeah, humanity is pretty messed up, but why do you want to put us out of our misery? Shouldn't we be made to suffer through climate change and everything else we've brought on ourselves, instead of getting off easy? Imagine another thousand years of inane cubicle work and a dozen more Trump presidencies. Maybe we'll learn our lesson." [Obviously, I'm joking here.]
I think this might have the advantage of aligning their cynicism with their more charitable impulses, at least the way my conversations tend to go. And there's no impulse to counter-counter-counter-signal, because I've gone up a meta-level and made the counter-signaling game explicit, which releases all the fun available from being contrarian, and moves the conversation toward new sources of amusement. I'll bet we could then proceed to have interesting discussions on how to solve the world's problems. If whoever I'm musing with comes up with a few ideas of their own, maybe they'll even take ownership of the ideas, and start to actually care about saving the world in their own way. I can dream, I suppose.
There are both good and bad aspects of the human race, and our future could easily contain a lot which is bad. However, this is a reason to support improvements, as well as a reason to support our own destruction.
So it's a half full/half empty situation.
You can also point out the contradiction that they don't seem to be in a hurry to take the obvious first step by killing themselves, which suggests that they see at least one human life as a net positive. Then talk about everyone else they don't want to kill or prevent from being born.
Be aware, though, that this isn't truth-seeking. It's debate for the fun of it.
I think there's also a near/far thing going on. I can't find it now, but somewhere in the rationalist diaspora someone discussed a study showing that people will donate more to help a smaller number of injured birds. That's one reason why charity ads focus on one person's or family's story, rather than faceless statistics.
Combining this with what you pointed out, maybe a fun place to take the discussion would be to suggest that we start with a specific one of our friends. "Exactly. Let's start with Bob. Alice next, then you. I'll volunteer to go last. After all, I wouldn't want you guys to have to suffer through the loss of all your friends, one by one. No need to thank me; it is its own reward."
EDIT: I was thinking of scope insensitivity, but couldn't remember the name. It's not just a LW concept, but also an empirically studied bias with a Wikipedia page and everything.
However, I mis-remembered it above. It's true that I could cherry-pick numbers and say that donations went down with scope in one case, but I'm guessing that's probably not statistically significant. People are probably willing to donate a little more, not less, to have an impact a hundred times as large. Perhaps there are effects from misleading vividness at a small scale, as I implied. However, on a large scale the slope is likely positive, even if only barely.
Academic Torrents site, for large scale database transfers
http://academictorrents.com/
Thanks, I linked this comment from the piracy thread.
LWers who liked this may also like: http://sci-hub.bz/
About: https://en.wikipedia.org/wiki/Sci-Hub
Basically, if you search for something and they don't have it, there's a huge network of scientists with access to pay-walled journals, and one of them will add a PDF. They've grown larger than any of the journal subscription companies, and have the world's largest collection of scientific papers.
Software to measure preferences?
I have a set of questions, in which a person faces a choice which changes the odds of two moderately-positive but mutually-exclusive outcomes. E.g., with Choice #1, there is a 10% chance of X and a 20% chance of Y, while with Choice #2, there is a 15% chance of X and a 10% chance of Y. I want to find out whether there are any recognizable patterns in which options the agent chooses. Is there any freely available software which can be used to help figure this out?
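Not aware of a dedicated package, but if one hypothetically models the agent as maximizing a weighted sum w*P(X) + (1-w)*P(Y), then each observed choice becomes a linear inequality on w, and the range of consistent weights falls out of a few lines of stdlib Python (the function name and the modeling assumption are mine, for illustration):

```python
def consistent_weights(observations):
    """Each observation: ((pX_chosen, pY_chosen), (pX_rejected, pY_rejected)).
    Assuming the agent maximizes w*P(X) + (1-w)*P(Y), intersect the
    intervals of w in [0, 1] consistent with every observed choice.
    Returns (lo, hi), or None if no single weight explains the data."""
    lo, hi = 0.0, 1.0
    for (cx, cy), (rx, ry) in observations:
        # Choice implies w*cx + (1-w)*cy >= w*rx + (1-w)*ry,
        # which rearranges to w*a >= b with:
        a = (cx - rx) - (cy - ry)
        b = ry - cy
        if a > 0:
            lo = max(lo, b / a)
        elif a < 0:
            hi = min(hi, b / a)
        elif b > 0:
            return None  # the chosen option is strictly worse for every w
    return (lo, hi) if lo <= hi else None

# The example above: Choice #1 (10% X, 20% Y) picked over Choice #2 (15% X, 10% Y)
print(consistent_weights([((0.10, 0.20), (0.15, 0.10))]))
```

With just that one observation, any weight on X up to about 2/3 is consistent, i.e. the pick mainly reveals a comparatively strong preference for Y; more observed choices would narrow the interval.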
Nevermind. After some time in the shower thinking, I've worked out that I can determine the bulk of what I want to determine with no more than two questions; and that there's a confounder effect which would most likely prevent me from finding out the rest of what I wanted to know anyway.
Now I'm curious to know what it was all about...
I've been trying to hammer out something like a blog post, but can't seem to get past the 'over-wordy technical' draft to the 'explain why I should actually care' draft; and am also having a touch of trouble emphasizing the important point. That said, here's one ugly draft of explanation for your amusement:
Two Questions:
The point of this exercise is to learn more about what you value, when you have to face a certain choice with no escape hatches. So, for the purposes of these questions, assume that there is no significant measurable evidence of the supernatural, of the afterlife, of alien intelligence, or of parallel worlds; that if the universe is a Matrix-like simulation, it's just being left to run without any interference. We're also going to assume that you've done as much research as possible with your available resources before you have to make these choices, and that you've done all the thinking and calculating that you can to produce the best possible estimates.
Question 1:
You are faced with a choice between two actions, which will have a significant effect on your life and the life of everyone else. If you choose Action A, there is a 10% chance that you will survive into the long-term future, what's sometimes called Deep Time (by which I mean far enough into the future that you can't predict even the vaguest outline of things, which may or may not include a fundamental discovery of physics that opens one of the escape hatches, and which, given the nature of the laws of statistics as we know them, may involve you making copies of yourself so that a random meteor strike to one of you won't kill all of you, among other strange and wondrous possibilities), but everyone else will die; a 20% chance that you will permanently and irrevocably die, but some number of other people will survive into Deep Time; and a 70% chance that both you and everyone else die. It may not seem optimistic, but choosing Action B has its own ups and downs: it improves your own chance of survival into Deep Time to 15%, but drops the chance that you die and someone else survives to 10%, and raises the chance that everyone dies to 75%. (If you have trouble choosing, then assume that if you choose neither A nor B, the default Action C is a 100% chance that everyone dies.)
Question 2:
Much like Question 1, you are faced with a choice between your personal survival and the survival of other sapient people, only this time the odds are somewhat different. If you choose Action D, there is a 15% chance of your personal survival (while everyone else dies), and a 15% chance of other people surviving (while you die), and a 70% chance of everyone dying. Meanwhile, if you choose Action E, there is a 10% chance of your personal survival, and a 25% chance of other people surviving instead of you, and a 65% chance of everyone dying. (And if you need a spur, the default of Action F is a 100% chance that both you and everyone else die.)
Questionable:
When considering these questions, you most likely used one of three rules of thumb to figure out your answer. If you chose actions B and D, then you are choosing consistently with someone whose core value is their personal survival (a 15% chance of surviving beats a 10% chance in both questions). If you chose A and E, then you are making the same choices as someone whose goal is the welfare of others, regardless of personal gain or loss (20% beats 10%, and 25% beats 15%). And note that someone who simply wishes to ensure the survival of at least some sapience, regardless of whether that is themselves or someone else, would also choose A and E with these particular numbers (a 30% versus 25% total chance that someone survives, and 35% versus 30%), so these two questions alone cannot tell that rule apart from pure altruism.
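Which action each rule of thumb prefers follows mechanically from the stated probabilities. A minimal Python sketch to check them (the dictionary layout and rule labels are my own, for illustration):

```python
# (P(you survive), P(others survive), P(everyone dies)) for each action,
# taken from the two questions above.
actions = {
    "A": (0.10, 0.20, 0.70),
    "B": (0.15, 0.10, 0.75),
    "D": (0.15, 0.15, 0.70),
    "E": (0.10, 0.25, 0.65),
}

# Each rule of thumb scores an action by the probability it cares about.
rules = {
    "personal survival": lambda p: p[0],
    "others' survival":  lambda p: p[1],
    "any sapience":      lambda p: p[0] + p[1],
}

def prefers(rule, pair):
    """Return the action in `pair` that the given rule scores higher."""
    return max(pair, key=lambda a: rules[rule](actions[a]))

for rule in rules:
    print(rule, prefers(rule, ("A", "B")), prefers(rule, ("D", "E")))
# prints:
# personal survival B D
# others' survival A E
# any sapience A E
```

With these numbers, the personal-survival rule is maximized by B and D, while the altruistic rule and the any-sapience rule both point to A and E.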
No Question:
I am not going to ask you to publicize your answers; in fact, quite the opposite. There's a confounding factor involved here, in that we humans have evolved as a cooperative species, in which various pressures have developed to punish people who make choices that don't benefit the group, the mildest of which is public social disapproval. A more subtle effect is our ability to believe false things about what we really value. Which means that whatever choice you would make if actually faced with such a decision, if that choice isn't the one that matches the publicly-proclaimed values of your culture or subculture, then there is little information to be gained from whatever you claim your answers to be.
Any Questions?
While the three value-systems described above are the simplest, and amongst the most likely for people's choices to imitate, real-world human values are complex. For example, a number of people who picked the 'altruistic' choices may be willing to accept a small decrease in the odds of other people surviving, say from 10% to 9.999%, if it increases the odds of their personal survival from 5% to 85%. That is, they value other lives more than their own - but they do value their own lives /some/.

And the troubles mentioned above for the simple two questions mean that it will be infeasible to measure such complicated value-systems with any accuracy. Not to mention more complicated questions, even just ones which include the option of both yourself and other people possibly being able to survive.

But there are many clever people out there, who are very good at coming up with ways of extracting useful data that nobody expected could be collected at all, often through careful and subtle means; and so, at some point, it may become feasible to figure out how many people value which lives, and by how much more than they value other lives. At which point, if your past public pronouncements of your values don't match your actual values, then your credibility on such matters may take a hit at precisely the moment when such credibility massively increases in value. But knowing, ahead of time, what your values actually are, and how much you value X more than Y, could be of inestimable value.
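That mixed value-system can be sketched quantitatively. If one hypothetically scores an outcome as U = P(others survive) + w * P(you survive), then each observed choice implies a linear constraint on the trade-off weight w. A stdlib-only sketch, using exact fractions since the numbers involved are tiny (the function name and the utility form are my own, for illustration):

```python
from fractions import Fraction as F

def weight_constraint(chosen, rejected):
    """(P(self survives), P(others survive)) for the chosen and the
    rejected option. Assuming utility U = P(others) + w * P(self),
    the choice implies d_others + w * d_self >= 0; return the
    resulting constraint on w as ('>=' or '<=', bound)."""
    d_self = chosen[0] - rejected[0]
    d_others = chosen[1] - rejected[1]
    if d_self == 0:
        return None  # the choice carries no information about w
    bound = -F(d_others) / F(d_self)
    return (">=", bound) if d_self > 0 else ("<=", bound)

# The example above: accepting others' odds dropping from 10% to 9.999%
# in exchange for one's own odds rising from 5% to 85%.
chosen = (F(85, 100), F(9999, 100000))
rejected = (F(5, 100), F(10, 100))
print(weight_constraint(chosen, rejected))  # ('>=', Fraction(1, 80000))
```

Here accepting the trade only requires w >= 1/80000: any weight on one's own survival above one part in eighty thousand of the weight on others' rationalizes it. Collecting constraints from many such choices would bracket w from both sides.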