in regard to: http://lesswrong.com/r/discussion/lw/nv8/do_you_want_to_be_like_kuro5hin_because_this_is/
While we are working on a solution, you can go to your preferences and change the option "Don't show me articles with a score less than:" to blank.
Here are my recent posts:
There are more, but you can also go to my (new) website - http://www.bearlamp.com.au and see them all.
An upvote post for those who have been hit by collateral damage in the Elo / Nier war, as I was. I promise to upvote everyone who writes something non-controversial under this comment.
If the problem is the sockpuppet army, then while the coders create a systematic solution, the community can help by showing that an army of men and women of goodwill is stronger than any puppet master.
We won the war against Eugene... for a brief instant.
I'm keeping score, tracking the downvotes and upvotes on the comment where I requested help against Eugene Nier's downvote campaign.
At one point 14 people had upvoted and 20 puppets had downvoted. Now we are at 21 people upvoting and 30 puppets downvoting. This means we at least forced Eugene to increase his puppet count to fight back. I count this as a point for LW :)
#makelwniceagain
I don't think that opposing strategic voting by strategic voting is an improvement. (Noise + more noise != signal.) I also don't see how forcing Eugine to increase the number of sockpuppets is a good thing, especially if the difference is between 20 and 30.
Thanks for trying! I just think this is a wrong direction.
Here's the problem with talking x-risk with cynics who believe humanity is a net negative, and also a couple possible solutions.
Frequently, when discussing the great filter, or averting nuclear war, someone will bring up the notion that it would be a good thing. Humanity has such a bad track record with environmental responsibility or human rights abuses toward less advanced civilizations, that the planet, and by extension the universe, would be better off without us. Or so the argument goes. I've even seen some countersignaling severe enough to argue, somewhat seriously, in favor of building more nukes and weapons, out of a vague but general hatred for our collective insanity, politics, pettiness, etc.
Obviously these aren't exactly careful, step by step arguments, where if I refute some point they'll reverse their decision and decide we should spread humanity to the stars. It's a very general, diffuse dissatisfaction, and if I were to refute any one part, the response would be "ok sure, but what about [lists a thousand other things that are wrong with the world]". It's like fighting fog, because it's not their true objection, at least not quite. It's not like either of u...
I think there's also a near/far thing going on. I can't find it now, but somewhere in the rationalist diaspora someone discussed a study showing that people will donate more to help a smaller number of injured birds. That's one reason why charity ads focus on one person's or family's story, rather than faceless statistics.
Combining this with what you pointed out, maybe a fun place to take the discussion would be to suggest that we start with a specific one of our friends. "Exactly. Let's start with Bob. Alice next, then you. I'll volunteer to go last. After all, I wouldn't want you guys to have to suffer through the loss of all your friends, one by one. No need to thank me, it is its own reward."
EDIT: I was thinking of scope insensitivity, but couldn't remember the name. It's not just a LW concept, but also an empirically studied bias with a Wikipedia page and everything.
However, I mis-remembered it above. It's true that I could cherry pick numbers and say that donations went down with scope in one case, but I'm guessing that's probably not statistically significant. People are probably willing to donate a little more, not less, to have an impact a hundred times as large. P...
Man burns down house remotely over the internet, for the insurance; no accident.
Edit: this was only posited, but investigators rigged up the supposed instrument of doom, a network printer, with a piece of string.
Anyone know where I can find melatonin tablets <300 mcg? Splitting a 300 mcg tablet into 75 mcg quarters still gives me morning sleepiness; I'm thinking a smaller dose will leave less melatonin in my system by the time I wake. Thanks.
Software to measure preferences?
I have a set of questions, in which a person faces a choice, which changes the odds of two moderately-positive but mutually-exclusive outcomes. Eg, with Choice #1, there is a 10% chance of X and a 20% chance of Y, while with Choice #2, there is a 15% chance of X and a 10% chance of Y. I want to find out if there are any recognizable patterns in which options the agent will choose. Is there any software already freely available which can be used to help figure this out?
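Short of a dedicated survey package, a small script can already test for one recognizable pattern: whether the choices are consistent with a fixed linear trade-off between the two outcomes. This is only a sketch; the trial format, the toy data, and the linear-utility model are my own assumptions, not any existing tool's API:

```python
# A sketch for detecting one simple pattern in forced-choice data:
# does the agent act as if outcome X is worth some fixed multiple of Y?
# Each (hypothetical) trial is (option_1, option_2, chosen), where each
# option is a (P(X), P(Y)) pair and chosen is 0 or 1.

def consistency(trials, w):
    """Fraction of choices explained by the utility U = w*P(X) + P(Y)."""
    hits = 0
    for a, b, chosen in trials:
        u_a = w * a[0] + a[1]
        u_b = w * b[0] + b[1]
        predicted = 0 if u_a >= u_b else 1
        hits += (predicted == chosen)
    return hits / len(trials)

def best_weight(trials):
    """Grid-search the weight on X that explains the most choices."""
    grid = [i / 10 for i in range(1, 51)]  # w from 0.1 to 5.0
    return max(grid, key=lambda w: consistency(trials, w))

# Toy data constructed so the choices imply X is worth roughly 2x Y.
trials = [
    ((0.10, 0.20), (0.15, 0.05), 0),
    ((0.05, 0.40), (0.20, 0.05), 0),
    ((0.10, 0.10), (0.05, 0.25), 1),
    ((0.20, 0.05), (0.05, 0.30), 0),
]
# best_weight(trials) lands in the band these choices pin down (~1.7-2.3).
```

A real analysis would want more trials, noise handling (e.g. a logistic choice model rather than a hard cutoff), and comparison against alternative patterns, but it shows the data format described in the question is already enough to fit and compare simple models.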
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted."
I think it would be interesting to weigh the benefits of human desire modification in all its forms (ranging from strategies like delayed gratification to brain pleasure-centre stimulation, covered very well in this fun theory sequence article) against the costs of continuous improvement.
Some of these costs:
Hi, I'm curious what rationalists (you) think of this video if you have time:
Why Rationality Is WRONG! - A Critique Of Rationalism https://www.youtube.com/watch?v=iaV6S45AD1w 1 h 22 min 47 s
Personally, I don't know much about all of the different obstacles in figuring out the truth so I can't do this myself. I simply bought it because it made sense to me, but if you can somehow go meta on the already meta, I would appreciate it.
I tried listening to the video at 1.5× speed. Even so, the density of ideas is horribly low. It's something like:
Science is successful, but that makes scientists overconfident. By 'rationalists' I mean people who believe they already understand everything.
Those fools don't understand that "what they understand" is just a tiny fraction of the universe. Also, they don't realize that the universe is not rational; for example the animals are not rational. Existence itself has nothing to do with rationality or logic. Rationalists believe that the universe is rational, but that's just their projection. Rationality is an emergent property. Existence doesn't need logic, but logic needs existence, therefore existence is primary.
You can't use logic to prove whether the sun is shining or not; you have to look out of the window. You can invent an explanation for empirical facts, but there are hundreds of other equally valid explanations.
That was the first 16 minutes, then I became too bored to continue.
My opinion?
Well, of course if you define a "rationalist" as a strawman, you can easily prove the strawman is foolish. You don't need more than one hour to convince me...
I agree with the other commenters about this.
So I thought "maybe it gets more interesting later on" and skipped to 50:00. At which point he isn't bothering to make any arguments, merely preening over how he understands the world so much more deeply than rationalists, who will come and bother him with their "arguments" and "contradictions" and he can just see that they "haven't got any awareness" and trying to engage with them would be like trying to teach calculus to a dog, and that the mechanism used to brainwash suicide bombers and fundam...
Oh god. This is really bad.
Someone should tell him about the straw vulcan.
We (LWers) are too tied to the word "Rationality"; that should happen less. If you feel personally affected by the idea that someone says this part of your identity is wrong, then maybe it's time to be more fox and less hedgehog.
"Researchers discover machines can learn by simply observing, without being told what to look for"
Giving "rewards" for discovering rules, Turing Learning.
http://sciencebulletin.org/archives/4761.html
http://link.springer.com/article/10.1007%2Fs11721-016-0126-1
And China and Russia have the best coders for algorithms
https://arc.applause.com/2016/08/30/best-software-developers-in-the-world/
Can anyone get this page to open? It's a Stanford report on AI, all 2,800 pages...
My girlfriend and I disagreed about focusing on poor vs. richer countries in terms of doing good. She made an argument along the lines of:
'In poorer countries, consumer goods are targeted at that class of poor people, so making a difference to inequality in places like Australia matters more: the poor there are deprived of a supply of goods, because the consumer culture is targeted at the wealthier middle class.'
What do you make of it?
I've been trying to hammer out something like a blog post, but can't seem to get past the 'over-wordy technical' draft to the 'explain why I should actually care' draft; and am also having a touch of trouble emphasizing the important point. That said, here's one ugly draft of explanation for your amusement:
Two Questions:
The point of this exercise is to learn more about what you value, when you have to face a certain choice with no escape hatches. So, for the purposes of these questions, assume that there is no significant measurable evidence of the supernatural, of the afterlife, of alien intelligence, or of parallel worlds; that if the universe is a Matrix-like simulation, it's just being left to run without any interference. We're also going to assume that you've done as much research as possible with your available resources before you have to make these choices, and that you've done all the thinking and calculating that you can to produce the best possible estimates.
Question 1:
You are faced with a choice between two actions, which will have a significant effect on your life and the lives of everyone else. If you choose Action A, there is a 10% chance that you will survive into the long-term future, what's sometimes called Deep Time (by which I mean far enough into the future that you can't predict even the vaguest outline of things, and which may or may not include a fundamental discovery of physics that opens one of the escape hatches; and, given the laws of statistics as we know them, may involve you making copies of yourself so that a random meteor strike to one of you won't kill all of you, among other strange and wondrous possibilities), but that everyone else will die; a 20% chance that you will permanently and irrevocably die, but some number of other people will survive into Deep Time; and a 70% chance that both you and everyone else die. It may not seem optimistic, but choosing Action B has its own ups and downs: it improves your own chances of survival into Deep Time to 15%, but the chances that you will die and someone else will survive change to 10%, and there's a 75% chance that everyone dies. (If you have trouble choosing, then assume that if you choose neither A nor B, the default Action C is a 100% chance that everyone dies.)
Question 2:
Much like Question 1, you are faced with a choice between your personal survival and the survival of other sapient people, only this time the odds are somewhat different. If you choose Action D, there is a 15% chance of your personal survival (while everyone else dies), and a 15% chance of other people surviving (while you die), and a 70% chance of everyone dying. Meanwhile, if you choose Action E, there is a 10% chance of your personal survival, and a 25% chance of other people surviving instead of you, and a 65% chance of everyone dying. (And if you need a spur, the default of Action F is a 100% chance that both you and everyone else die.)
Questionable:
When considering these questions, you most likely used one of three rules of thumb to figure out your answer. If you chose actions B and D, then you are choosing consistently with someone whose core value is their personal survival: those are the actions that maximize your own odds (15% each). If you chose A and E, then you are making the same choices as someone whose goal is the welfare of others, regardless of personal gain or loss (20% and 25% odds for others, respectively). And someone who simply wishes to ensure the survival of at least some sapience, regardless of whether that is themselves or someone else, would also pick A and E, since those actions give the highest total chance that anyone at all survives (30% and 35%).
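The arithmetic behind these labels can be checked mechanically. A minimal sketch, using only the percentages stated in the two questions (the rule names and data layout are mine):

```python
# Survival odds from Questions 1 and 2, per action:
# (P(only you survive), P(only others survive), P(everyone dies)).
actions = {
    "A": (0.10, 0.20, 0.70),
    "B": (0.15, 0.10, 0.75),
    "D": (0.15, 0.15, 0.70),
    "E": (0.10, 0.25, 0.65),
}

def prefers(rule, options):
    """The action, among the given options, that a decision rule ranks highest."""
    return max(options, key=lambda a: rule(actions[a]))

p_self = lambda p: p[0]        # core value: personal survival
p_others = lambda p: p[1]      # core value: welfare of others
p_any = lambda p: p[0] + p[1]  # core value: some sapience survives

for name, rule in [("self", p_self), ("others", p_others), ("any", p_any)]:
    print(name, prefers(rule, "AB"), prefers(rule, "DE"))
```

With these particular numbers, the "welfare of others" rule and the "some sapience survives" rule happen to select the same pair of actions, which is why the two questions alone can't tell those value systems apart.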
No Question:
I am not going to ask you to publicize your answers; in fact, quite the opposite. There's a confounding factor involved here, in that we humans have evolved as a cooperative species, in which various pressures have developed to punish people who make choices that don't benefit the group, the least of which is public social disapproval. A more subtle effect is our ability to believe false things about what we really value. Which means that whatever choice you would make if actually faced with such a decision, if that choice isn't the one that matches the publicly-proclaimed values of your culture or subculture, then there is little information to be gained from whatever you claim your answers to be.
Any Questions?
While the three value-systems described above are the simplest, and amongst the most likely for people's choices to imitate, real-world human values are complex. For example, a number of people who picked the 'altruistic' choices may be willing to accept a small decrease in the odds of other people surviving, say from 10% to 9.999%, if it increases the odds of their personal survival from 5% to 85%. That is, they value other lives more than their own - but they do value their own lives /some/. And the troubles mentioned above for the simple two questions mean that it will be infeasible to measure such complicated value-systems with any accuracy. Not to mention more complicated questions, even just ones which include the option of both yourself and other people possibly being able to survive. But there are many clever people out there, who are very good at coming up with ways of extracting useful data that nobody expected could be collected at all, often through careful and subtle means; and so, at some point, it may become feasible to figure out how many people value which lives, and by how much more than they value other lives. At which point, if your past public pronouncements of your values don't match your actual values, then your credibility on such matters may take a hit at precisely the moment when such credibility massively increases in value. But knowing, ahead of time, what your values actually are, and how much you value X more than Y, could be of inestimable value.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "