This thread is for discussing anything that doesn't seem to deserve its own post.
If the resulting discussion becomes impractical to continue here, it means the topic is a promising candidate for its own thread.
Some time ago, I had a simple insight that seems crucial and really important, and has been on my mind a lot. Yet at the same time, I'm unable to really share it, because on the surface it seems so obvious as to not be worth stating, and very few people would probably get much out of me just stating it. I presume that this is an instance of the Burrito Phenomenon:
...While working on an article for the Monad.Reader, I’ve had the opportunity to think about how people learn and gain intuition for abstraction, and the implications for pedagogy. The heart of the matter is that people begin with the concrete, and move to the abstract. Humans are very good at pattern recognition, so this is a natural progression. By examining concrete objects in detail, one begins to notice similarities and patterns, until one comes to understand on a more abstract, intuitive level. This is why it’s such good pedagogical practice to demonstrate examples of concepts you are trying to teach. It’s particularly important to note that this process doesn’t change even when one is presented with the abstraction up front! For example, when presented with a mathematical definition for the first time, most people (me i
I propose a thread in which people practice saying they were wrong and possibly also saying they were surprised.
For the past year or two I've felt like there are literally no avenues open to me towards social, romantic, or professional advancement, up from my current position of zero. On reflection, it seems highly unlikely that this is actually true, so it follows that I'm rather egregiously missing something. Are there any rationalist techniques designed to make one better at noticing opportunities (ones that come along and ones that have always been there) in general?
I was about to explain why nobody has an answer to the question you asked, when it turned out you already figured it out :) As for what you should actually do, here's my suggestion:
Explain your actual situation and ask for advice.
For each piece of advice given, notice that you have immediately come up with at least one reason why you can't follow it.
Your natural reaction will be to post those reasons, thereby getting into an argument with the advice givers. You will win this argument, thereby establishing that there is indeed nothing you can do.
This is the important bit: don't do step 3! Instead, work on defeating or bypassing those reasons. If you can't do this by yourself, go ahead and post the reasons, but always in a frame of "I know this reason can be defeated or bypassed, help me figure out how." That aligns you with the advice givers instead of against them.
You are allowed to reject some of the given advice, as long as you don't reject all of it.
Unfortunately, most advice-givers in my experience tend to mistake #4 for #3. I point out that they've made an incorrect assumption when formulating their advice, and I immediately get yelled at for making excuses.
If this conversation is representative, 'making excuses' might not be entirely accurate, though I can see why people would pattern-match to it as the nearest cached thing of relevance. To be more accurate, it's as if you're asking "what car is best for driving to the moon" and then rejecting any replies that talk about rockets, since they aren't answers to the question you actually asked. The advice about building rockets might even be entirely useless to you, if you're in a situation where you can't go on a rocket for whatever reason, and people need to introduce you to the idea of space elevators or something. But staying focused on cars isn't going to get you what you want, and people are likely to get frustrated with that pretty quickly.
I'm saying that not all situations present the same amount of opportunities, and your situation makes a difference whether or not you think it does.
Okay, and that's not something I dispute. If I did somehow manage to correct my cognitive flaw, one of the possibilities is that I'd discover that I really don't have any options. But I can't know that until the flaw is solved.
I do not think there is a fully-general piece of advice for you, but you clearly believe there is.
Of course I believe there is a fully general algorithm for identifying avenues of advancement towards a terminal goal. But phrasing it like that just made me realize that anyone who actually knew it would have already built an AGI.
Well, crap.
Grocery stores should have a lane where they charge more, such as 5% more per item. It would be like a toll lane for people in a hurry.
On the Freakonomics blog, Steven Pinker had this to say:
There are many statistical predictors of violence that we choose not to use in our decision-making for moral and political reasons, because the ideal of fairness trumps the ideal of cost-effectiveness. A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher. Thankfully, this rational policy would be seen as a moral abomination.
I've seen a common theme on LW that is more or less "if the consequences are awful, the reasoning probably wasn't rational". Where do you think Pinker's analysis went wrong, if it did go wrong?
One possibility is that the utility function to be optimized in Pinker's example amounts to "convict the guilty and acquit the innocent", whereas we probably want to give weight to another consideration as well, such as "promote the kind of society I'd wish to live in".
A rational decision-maker using Bayes’ theorem would say, for example, that one should convict a black defendant with less evidence than one needs with a white defendant, because these days the base rates for violence among blacks is higher.
One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.
Where do you think Pinker's analysis went wrong, if it did go wrong?
He began a sentence by characterizing what a member of a group "would say".
One would compare black defendants with guilty black defendants and white defendants with guilty white defendants. It's far from obvious that (guilty black defendants/black defendants) > (guilty white defendants/white defendants). Differing arrest rates, plea bargaining etc. would be factors.
60% of convicts who have been exonerated through DNA testing are black, whereas blacks make up 40% of inmates convicted of violent crimes. Obviously this is affected by the fact that "crimes where DNA evidence is available" does not equal "violent crimes". But the proportion of inmates incarcerated for rape/sexual assault who are black is even smaller: ~33%. There are other confounding factors, like which convicts received DNA testing for their crime. But it looks like a reasonable case can be made that the criminal justice system's false positive rate is higher for blacks than whites. Of course, the false negative rate could be higher too. If cross-racial eyewitness identification is to blame for wrongful convictions, then uncertain cross-racial eyewitnesses might cause wrongful acquittals.
If you instituted a policy to require less evidence to convict black defendants, you would convict more black defendants, which would make the measured "base rates for violence among blacks" go up, which would mean that you could need even less evidence to convict, which...
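This feedback loop can be sketched with a toy simulation. The 0.9 baseline threshold and the response function below are invented purely for illustration; the point is only the direction of the dynamics:

```python
# Toy model of the feedback loop: the evidence threshold drops as the
# measured conviction rate rises, and convictions rise as the bar drops.
rates = [0.01]                            # initial measured "base rate"
for _ in range(10):
    threshold = 0.9 - rates[-1]           # less evidence required as the rate rises
    rates.append(0.5 * (1 - threshold))   # more convictions as the bar drops

# The measured rate ratchets upward at every step.
assert all(a < b for a, b in zip(rates, rates[1:]))
```

With these particular made-up coefficients the rate converges to a fixed point rather than diverging, but every step of the loop pushes the measured rate further up.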
Pinker didn't address evidence screening off other evidence. Race would be rendered zero evidence in many cases, in particular in criminal cases for which there is approximately enough evidence to convict. I'm not exactly sure how often; I don't know how much e.g. poverty, crime, and race coincide.
It is perhaps counterintuitive to think that Bayesian evidence can apparently be ignored, but of course it isn't really being ignored, just carefully not double counted.
I do not consider it laudable that, when someone makes a rational suggestion, it is seen as a moral abomination. If it's a bad idea, there are rational ways to declare it a bad idea, and "moral abomination" is lazy. If it is a good idea, then "moral abomination" goes from laziness to villainy.
If his argument is "this causes a self-fulfilling prophecy, because we will convict blacks and not convict Asians because blacks are convicted more and Asians convicted less, suggesting that we will over-bias ourselves," then he's right that this policy is problematic. If his argument is "we can't admit that blacks are more likely to commit crimes because that would make us terrible people," then I don't want any part of it. Since he labeled it a moral abomination, that suggests the latter rather than the former.
Whatever works, I don't have any specific policies in mind (I'm far from being an expert in law enforcement).
But to take a specific example, I don't think information about higher crime rates for blacks is enough to tell whether we need "increased police presence in the Ghetto" - for all I know, police presence could already be 10 times the national average there.
There is a tendency I dislike in political punditry/activism/whatever (not that I'm accusing you of it, you just gave me a pretext to get on my soap box) to say "we need more X" or "we need less X" (regulation, police, taxes, immigrants, education, whatever) without any reference to evidence about what the ideal level of X would be, and about whether we are above or below that level - sometimes the same claims are made in countries with wildly different levels of X.
There shouldn't be any such distinction. The audience (I assume you mean the courtroom audience) should reason the exact same way the jury does.
The prosecution is required to make an explicit presentation of the evidence for guilt, so that the mere fact that charges were brought is screened off. As a consequence, failure to present a convincing explicit case is strong evidence of innocence; prosecutors have no incentive to hide evidence of guilt! Hence any juror or audience member who reasons "the prosecution's case as presented is weak, but the defendant has a high likelihood of guilt just because they suspect him" is almost certainly committing a Bayesian error. (Indeed, this is how information cascades are formed.)
See Argument Screens Off Authority: once in the courtroom, prosecutors have to present their arguments, which renders their "authority" worthless.
I wrote a long post defending my point, and about halfway through, I realized it was mostly wrong. I think the screening-off point is probably a better description of what's wrong with Pinker's analysis. Specifically, the higher rate of crime among blacks should be screened off from consideration by the fact that this particular black defendant was charged.
To elaborate on my earlier point, the presumption of innocence also serves to remind the juror that the propensity of the population to commit crimes is screened off by the fact that this particular person went through the arrest and prosecution screening processes in order to arrive in the position of defendant. In other words, a rationalist should not use less evidence to convict black defendants than required for white defendants because this is double counting the crime rate of blacks.
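The double-counting point can be made concrete with a toy Bayes calculation. All the numbers below are invented, including the charging probabilities:

```python
def p_guilty_given_charged(base_rate, p_charge_if_guilty=0.7,
                           p_charge_if_innocent=0.001):
    # Bayes' theorem: the group's base rate enters exactly once, as the
    # prior. Multiplying the posterior by the base-rate ratio again would
    # be the double counting described above.
    num = p_charge_if_guilty * base_rate
    den = num + p_charge_if_innocent * (1 - base_rate)
    return num / den

# Hypothetical base rates of 2% and 1% for two groups:
high = p_guilty_given_charged(0.02)
low = p_guilty_given_charged(0.01)
```

The posterior for the higher-base-rate group already comes out higher; demanding less evidence on top of that is the error.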
People are bothered by some words and phrases.
Recently, I learned that the original meaning of "tl;dr" has stuck in people's minds such that they don't consider it a polite response. That's good to know.
Some things that bother me are:
I'm not going to pretend that referring to women as "girls" inherently bothers me, but it bothers other people, so it by extension bothers me and I wouldn't want it excluded from this discussion.
Some say not to say "complexity" or "emergence".
Why does the argument "I've used math to justify my views, so it must have some validity" tend to override "Garbage In - Garbage Out"? It can be this thread:
I estimate that a currently working and growing superintelligence has a probability in the range of 1/million to 1/1000. I am at least 50% confident that it is so.
or it can be the subprime mortgage default risk.
What is the name for this cognitive bias of trusting the conclusions more (or sometimes less) when math is involved?
Sounds like a special case of "judging an argument by its appearance" (maybe somebody can make that snappier). It's fairly similar to "it's in Latin, therefore it must be profound", "it's 500 pages, therefore it must be carefully thought-out" and "it's in Helvetica, therefore it's from a trustworthy source".
Note that this is entirely separate from judging by the arguer's appearance.
I propose a thread in which people refine their questions for the speakers at the Singularity Summit.
I'm having trouble finding a piece which I am fairly confident was either written on LW or linked to from here. It dealt with a stone which had the power render all the actions of the person who held it morally right. So a guy goes on a quest to get the stone, crossing the ocean and defeating the fearful guardian, and finds it and returns home. At some point he kills another guy, and gets sentenced by a judge, and it is pointed out that the stone protects him from committing morally wrong actions, not from the human institution of law. Then the guy notices...
I propose a discussion thread in which people can submit requests for pdfs of scholarly articles. I have found promising things for debiasing racism but I've been figuring out the contents of important articles indirectly - through their abstract and descriptions of them in free articles.
In his talk on Optimism (roughly minute 30 to roughly minute 35), David Deutsch said that the idea that the world may be inexplicable from a human perspective is wrong and is only an invitation to superstitious thinking. He even mentions an argument by Richard Dawkins stating that evolution would have no reason to produce a brain capable of comprehending everything in our universe. It reminds me of something I heard about the inability to teach algebra or whatever to dogs. He writes this argument off for reasons evolution didn't prepare me for, so I was...
Is there a term for the following fallacy (related to the false dilemma)?
I propose a permanent jobs thread, to lower the barrier to posting relevant job information and reduce clutter in the discussion section.
I'd love to know what the community here thinks of some critiques of Nick Bostrom's conception of existential risks, and his more general typology of risks. I'm new to the community, so a bit unsure whether I should completely dive in with a new article, or approach the subject some other way. Thoughts?
Can someone explain to me the point of sequence reruns?
I honestly don't get it. Sequences are well organised and easily findable; what benefit is there from duplicating the content? It seems to me like it just spreads the relevant discussion into multiple places, adds noise to google results, and bloats the site.
Many people find blogs easier to read than books. Reacting to prompts with bite-size chunks of information requires far less executive control and motivation than working through a mass of text unprompted.
Where can I find arguments that intelligence self-amplification is not likely to quickly yield rapidly diminishing returns? I know Chalmers has a brief discussion of it in his singularity analysis article, but I'd like to see some lengthier expositions.
Last night, I had an idea for a post to write for LW. My idea was something along the following:
For many good reasons, LessWrong-ers have gone to great lengths to explain how to use Bayes' theorem. While understanding Bayes' theorem is essential to rationality, has anyone written an explanation (targeted toward traditional rationalists) about why Bayes' theorem is so essential in the first place? If such a post hasn't been written, maybe I could write that post...
Here are some of the main points I'm thinking of addressing
I'm working to improve my knowledge of epistemology. Can anyone recommend a good reference/text book on the subject? I'm especially looking to better understand LW's approach to epistemology (or an analogous approach) in a rigorous, scholarly way.
Until recently, I was a traditional rationalist. Epistemologically speaking, I was a foundationalist with a belief in a priori knowledge. Through recent study of Bayes, Quine, etc., these beliefs have been strongly challenged. I have been left with a feeling of cognitive dissonance.
I'd really appreciate if my thi...
I propose a thread in which ideas commonly discussed on LW can be discussed with a different dynamic - that of the relatively respectable minority position being granted a slight preponderance in number and size of comments.
This might include feminism in which one is offended by "manipulation", deontology, arguments for charities other than X-risk ones, and the like.
Nothing would be censored or off limits, those used to being in the majority would merely have to wait to comment if most of the comments already supported "their" "side" (both words used loosely).
I've been told that people use the word "morals" to mean different things. Please answer this poll or add comments to help me understand better.
When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?
[pollid:1165]
If asked to guess a number that a human chose between zero and what they say is "infinity", how would one go about assigning probabilities so that a) higher numbers get lower probabilities on average than lower numbers, and b) low-complexity numbers get higher probabilities than high-complexity ones?
For example, 3^^^3 is more likely than 3^^^3 - 42.
Is a) necessary so that the probabilities sum to 1? Generally, what other things than a) and b) would be needed when guessing most humans' "random" number?
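One crude way to get both properties is a description-length prior over a finite candidate set. True Kolmogorov complexity is uncomputable, so the sketch below uses decimal digit count as a stand-in; under a real shortest-program measure, 3^^^3 would indeed beat 3^^^3 - 42:

```python
def crude_prior(candidates):
    # Weight each number by 2^(-description length). Digit count is a
    # crude but computable stand-in for true description length.
    weights = {n: 2.0 ** -len(str(n)) for n in candidates}
    total = sum(weights.values())
    # Normalizing answers the "sum to 1" part of question a).
    return {n: w / total for n, w in weights.items()}

prior = crude_prior([7, 42, 1000, 123456789])
```

Since digit count grows with magnitude, property a) falls out of property b) for this proxy, though it treats all numbers of the same length as equally likely, which real humans certainly don't.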
Assuming infinite matter were available, is there a limit to the possible consciousnesses that could be made out of it?
http://becominggaia.wordpress.com/2011/03/15/why-do-you-hate-the-siailesswrong/#entry I'll reserve my opinion about this clown, but honestly I do not get how he gets invited to AGI conferences, having neither work nor even serious educational credentials.
I'll reserve my opinion about this clown
Downvoted. Unless "clown" is his actual profession, you didn't reserve your opinion.
Wow, I loved the essay. I hadn't realized I was part of such a united, powerful organisation and that I was so impressively intelligent, rhetorically powerful and ruthlessly self interested. I seriously felt flattered.
Not to call attention to the elephant in the room, but what exactly are Eliezer Yudkowsky's work and educational credentials re: AGI? I see a lot of philosophy relevant to AI as a discipline, but nothing that suggests any kind of hands-on-experience...
...They...build a high wall around themselves rather than building roads to their neighbors. I can understand self-protection and short-sighted conservatism but extremes aren’t healthy for anyone...repetitively screaming their fear rather than listening to rational advice. Worse, they’re kicking rocks down on us.
If it weren’t for their fear-mongering...AND their arguing for unwise, dangerous actions (because they can’t see the even larger dangers that they are causing), I would ignore them like harmless individuals...rather than [like] junkies who need to do anti-societal/immoral things to support their habits...fear-mongering and manipulating others...
...very good at rhetorical rationalization and who are selfishly, unwilling to honestly interact and cooperate with others. Their fearful, conservative selfishness extends far beyond their “necessary” enslavement of the non-human and dangerous...raising strawmen, reducing to sound bites and other misdirections. They dismiss anyone and anything they don’t like with pejoratives like clueless and confused. Rather than honest engagement they attempt to shut down anyone who doesn’t see the world as they do. And they are very active in tryin
...I think that that was one of those occasional comments you make which are sarcastic, and which no-one gets, and which always get downvoted.
But I could be wrong. Please clarify if you were kidding or not, for this slow uncertain person.
Maybe he submits papers and the conference program committee finds them relevant and interesting enough?
After all, Yudkowsky has no credentials to speak of, either - what is SIAI? A weird charity?
I read his paper. Well, the points he raises against the FAI concept and for rational cooperation are quite convincing-looking. So are the pro-FAI points. It is hard to tell which are more convincing, with both sides being relatively vague.
Based on the abstract, it's not worth my time to read it.
Abstract. Insanity is doing the same thing over and over and expecting a different result. “Friendly AI” (FAI) meets these criteria on four separate counts by expecting a good result after: 1) it not only puts all of humanity’s eggs into one basket but relies upon a totally new and untested basket, 2) it allows fear to dictate our lives, 3) it divides the universe into us vs. them, and finally 4) it rejects the value of diversity. In addition, FAI goal initialization relies on being able to correctly calculate a “Coherent Extrapolated Volition of Humanity” (CEV) via some as-yet-undiscovered algorithm. Rational Universal Benevolence (RUB) is based upon established game theory and evolutionary ethics and is simple, safe, stable, self-correcting, and sensitive to current human thinking, intuitions, and feelings. Which strategy would you prefer to rest the fate of humanity upon?
Points 2), 3), and 4) are simply inane.
Upvoted, agreed, and addendum: Similarly inane is the cliche "insanity is doing the same thing over and over and expecting a different result."
Interestingly, back in 2007, when I was naive and stupid, I thought Mark Waser one of the most competent participants on the agi and sl4 mailing lists. There must be something appealing to an unprepared mind in the way he talks. I can't recreate that impression now, so it's not clear what it is, but it was probably mostly a general contrarian attitude without too many spelling errors.
http://www.questionablecontent.net/view.php?comic=2070
Let's get FAI right. It would be the ultimate insult if we were ever turned into paperclips by something naming itself Gary.
I face a trivial inconvenience. If I want to send someone a message through LW, I am held up by trying to think of a Subject line.
What is a good convention or social norm that would enable people to not have to think about what to put there and how that affects the message?
If there is a safe and effective way to induce short term amnesia, wouldn't that be useful for police lineups?
People are good at picking the person who most resembles who they saw, but not at determining if someone was who they saw. Amnesia would allow people to pick among different lineups without remembering who they chose in the first lineup and whether or not that is someone in a later lineup.
People would be given a drug or machine interfering with their memory and pick someone out of a lineup of a suspect and similar looking people. Then, the person t...
Negative utility: how does it differ from positive utility, and what is the relationship between the two?
Useful analogies might include the relationship of positive numbers to negative ones, the relationship of hot to cold, or others.
Mathematically, all that matters is the ratio of the differences in the utilities of the possible alternatives, so it's not really important whether utilities are positive or negative. Informally, negative utility generally means something less desirable than the status quo.
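This invariance is easy to check: expected-utility choices are unchanged by any positive affine transformation of the utilities. The options and numbers below are made up for illustration:

```python
def expected(probs, utils):
    # Expected utility of one option.
    return sum(p * u for p, u in zip(probs, utils))

# Two made-up gambles: A is a coin flip, B is a sure thing.
options = {"A": ([0.5, 0.5], [10, -4]),
           "B": ([1.0, 0.0], [2, 0])}

def choose(transform=lambda u: u):
    # Pick the option with the highest expected (transformed) utility.
    return max(options,
               key=lambda k: expected(options[k][0],
                                      [transform(u) for u in options[k][1]]))

# Shifting and scaling the utilities (3u - 100) leaves the choice intact,
# even though every transformed utility is now negative.
assert choose() == choose(lambda u: 3 * u - 100) == "A"
```

This is why the sign of a utility carries no intrinsic meaning: subtracting a constant can make every utility "negative" without changing a single decision.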
Has the comment deletion behavior changed back?
No.
Does LW have any system in place for detecting and dealing with abuses of the karma system? It looks like someone went through around two pages of my comments and downvoted all of them between yesterday and today; not that this particular incident is a big deal, I'm only down 16 points, but I'd be concerned if it continues, and I know this sort of thing has happened before.
ArsTechnica article "Rise of the Machines". A bit confused, but interesting instance of the meme. http://arstechnica.com/tech-policy/news/2011/10/rise-of-the-machines-why-we-still-read-hg-wells.ars
I estimate that a currently working and growing superintelligence has a probability in the range of 1/million to 1/1000. I am at least 50% confident that it is so.
Not a big probability, but given the immense importance of such an object, it is already a significant event to consider. The very near-term birth of a superintelligence is something to think about. It wouldn't be just another Sputnik, launched by people you thought were unable to make it, but who sure were. We know that well; it wouldn't be just a minor blow to pride, as Sputnik ...
Well, the probability is computed by an algorithm that is itself imperfect. "I'm 50% confident that the probability is 1/1000" means something like "My computation gives a probability of 1/1000, but I'm only 50% confident that I did it right". For example, given a complex maths problem about the probability of getting some card pattern from a deck with twisted rules of drawing and shuffling, you can do the maths, end up with a probability of 1/5 that you'll get the pattern, but not be confident you didn't make a mistake in applying the laws of probability, so you'll only give 50% confidence to that answer.
And there is also the difference between belief and belief in belief. I can say something like "I believe the probability to be 1/1000, but I'm only 50% confident that this is my real belief, and not just a belief in belief".
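One way to cash out the first reading is as a mixture: with probability equal to your confidence in the computation, use its answer; otherwise fall back on some broader prior. The 1% fallback value below is purely an assumption, and choosing it well is the hard part:

```python
def effective_probability(computed_p, confidence, fallback_p):
    # Mixture over "my computation is right" vs. "I made a mistake".
    return confidence * computed_p + (1 - confidence) * fallback_p

# "I'm 50% confident the probability is 1/1000", with an assumed 1% fallback:
p = effective_probability(1 / 1000, 0.5, 0.01)  # about 0.0055
```

Note the effective probability is dominated by the fallback term: when you distrust your own computation, what you'd believe absent the computation does most of the work.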
My friend suggested a point about Pascal's Mugging. There is a non-zero chance that if you tried to Pascal mug someone, they're a genie who would be pissed off by your presumptuousness, and punish you severely enough that you should have originally decided not to try it. I know the argument isn't watertight, but it is entertaining.
The whole point of Pascal's Mugging is that an arbitrarily small probability of something happening is automatically swamped if there's infinite utility or disutility if it does, according to all usual ways of calculating. Making the arbitrarily small probability smaller doesn't fix this.
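The arithmetic can be illustrated with exact fractions. The payoff below, 10^1000, is a tiny stand-in for 3^^^3 (which wouldn't even fit in memory), and all the numbers are invented:

```python
from fractions import Fraction

U = Fraction(10) ** 1000      # stand-in for the mugger's astronomical payoff
p = Fraction(1, 10 ** 100)    # an already absurdly small probability
cost = 5                      # utility of handing over the $5

# The expected value still swamps the cost...
assert p * U > cost
# ...and shrinking the probability by another factor of 10^100 doesn't help.
assert (p / 10 ** 100) * U > cost
```

As long as the claimed payoff grows faster than your skepticism, any finite discount on the probability is overwhelmed, which is the mugging.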
If I'm understanding the original question properly, the issue is along the lines of the following situation: EphemeralNight finds emself sitting at home, thinking 'I wish there was something fun I could do tonight. But I don't know of anything. So how might I find something? I have no idea.' It's not that e's running into akrasia on the path to doing X, it's that e doesn't have an X in the first place and doesn't know how to find one.
Useful answers will probably be along the lines of either 'try meetup.com/okcupid/your local LW meetup/etc', or 'here's how you find out about things like meetup.com/okcupid/LW meetups/etc'.
That's what I thought, too, but the comment seemed to be asking for a general rather than a specific solution.