ITT we talk about whatever.
Sharing my Christmas (totally non-supernatural) miracle:
My theist girlfriend on Christmas Eve: "For the first time ever I went to mass and thought it was ridiculous. I was just like, this is nuts. The priest was like 'oh, we have to convert the rest of the world, the baby Jesus spoke to us as an infant without speaking, etc.' I almost laughed."
I'm looking for a particular fallacy or bias that I can't find on any list.
Specifically, this is when people say "one more can't hurt;" like a person throwing an extra piece of garbage on an already littered sidewalk, a gambler who has lost nearly everything deciding to bet away the rest, a person in bad health continuing the behavior that caused the problem, etc. I can think of dozens of examples, but I can't find a name. I would expect it to be called the "Lost Cause Fallacy" or the "Fallacy of Futility" or something, but neither seems to be recognized anywhere. Does this have a standard name that I don't know, or is it so obvious that no one ever bothered to name it?
What are the implications to FAI theory of Robin's claim that most of what we do is really status-seeking? If an FAI were to try to extract or extrapolate our values, would it mostly end up with "status" as the answer and see our detailed interests, such as charity or curiosity about decision theory, as mere instrumental values?
(reposted from last month's open thread)
An interesting site I recently stumbled upon:
They have huge lists of biases, techniques, explanations, and other stuff, with short summaries and longer articles.
Here's the results from typing in "bias" into their search bar.
A quick search for "changingminds" in LW's search bar shows that no one has mentioned this site before on LW.
Is this site of any use to anyone here?
Does anyone here think they're particularly good at introspection or modeling themselves, or have a method for training up these skills? It seems like it would be really useful to understand more about the true causes of my behavior, so I can figure out what conditions lead to me being good and what conditions lead to me behaving poorly, and then deliberately set up good conditions. But whenever I try to analyze my behavior, I just hit a brick wall---it all just feels like I chose to do what I did out of my magical free will. Which doesn't explain anything...
Just thought I'd mention this: as a child, I detested praise. (I'm guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it's affected my overall development.
Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
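For reference, a minimal sketch (my own illustration) of the standard construction, where each prefix is followed by its bitwise complement:

```python
def thue_morse(iterations=4):
    seq = [0]
    for _ in range(iterations):
        seq += [1 - b for b in seq]   # append the complement of everything so far
    return seq

print("".join(map(str, thue_morse())))  # 0110100110010110
```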
I wrote a short story with something of a transhumanism theme. People can read it here. Actionable feedback welcome; it's still subject to revision.
Note: The protagonist's name is "Key". Key, and one other character, receive Spivak pronouns, which can make either Key's name or eir pronouns look like some kind of typo or formatting error if you don't know it's coming. If this annoys enough people, I may change Key's name or switch to a different genderless pronoun system. I'm curious if anyone finds that they think of Key and the other Spivak...
Tags now sort chronologically oldest-to-newest by default, making them much more useful for reading posts in order.
If, say, I have a basic question, is it appropriate to post it to an open thread, to a top-level post, or what? E.g., say I'm working through Pearl's Causality and am having trouble deriving something... or say I've stared at the Wikipedia pages for ages and STILL don't get the difference between Minimum Description Length and Minimum Message Length... is LW an appropriate place to go "please help me understand this", and if so, should I request it in a top-level post or in an open thread or...
More generally: LW is about developing human rationalit...
David Chalmers surveys the kinds of crazy believed by modern philosophers, as well as their own predictions of the results of the survey.
56% of target faculty responding favor (i.e. accept or lean toward) physicalism, while 27% favor nonphysicalism (for respondents as a whole, the figure is 54:29). A priori knowledge is favored by 71% to 18%, an analytic-synthetic distinction by 65% to 27%, and Millianism over Fregeanism by 34% to 29%. On zombies, "conceivable but not metaphysically possible" leads "metaphysically possible" and "inconceivable" at 35%, 23%, and 16% respectively.
This blog comment describes what seems to me the obvious default scenario for an unFriendly AI takeoff. I'd be interested to see more discussion of it.
I intend to participate in the StarCraft AI Competition. I figured there are lots of AI buffs here that could toss some pieces of wisdom at me. Shower me with links you deem relevant and recommend books to read.
Generally, what approaches should I explore and what dead ends should I avoid? Essentially, tell me how to discard large portions of potential-StarCraft-AI thingspace quickly.
Specifically, the two hardest problems that I see are:
I have some advice.
Pay attention to your edit/compile/test cycle time. Efforts to get this shorter pay off both in more iterations and in your personal motivation (interacting with a more-responsive system is more rewarding). Definitely try to get it under a minute.
A good dataset is incredibly valuable. When starting to attack a problem - both the whole thing, and subproblems that will arise - build a dataset first. This would be necessary if you are doing any machine learning, but it is still incredibly helpful even if you personally are doing the learning.
Succeed "instantaneously" - and don't break it. Make getting to "victory" - a complete entry - your first priority and aim to be done with it in a day or a week. Often, there's temptation to do a lot of "foundational" work before getting something complete working, or a "big refactoring" that will break lots of things for a while. Do something (continuous integration or nightly build-and-test) to make sure that you're not breaking it.
I'm going to repeat my request (for the last time) that the most recent Open Thread have a link in the bar up top, between 'Top' and 'Comments', so that people can reach it a tad easier. (Possible downside: people could amble onto the site and more easily post time-wasting nonsense.)
I am posting this in the open thread because I assume that somewhere in the depths of posts and comments there is an answer to the question:
If someone thought we lived in an internally consistent simulation that is undetectable and inescapable, is it even worth discussing? Wouldn't the practical implications of such a simulation imply the same things as the material world/reality/whatever you call it?
Would it matter if we dropped "undetectable" from the proposed simulation? At what point would it begin to matter?
In two recent comments [1][2], it has been suggested that to combine ostensibly Bayesian probability assessments, it is appropriate to take the mean on the log-odds scale. But Bayes' Theorem already tells us how we should combine information. Given two probability assessments, we treat one as the prior, sort out the redundant information in the second, and update based on the likelihood of the non-redundant information. This is practically infeasible, so we have to do something else, but whatever else it is we choose to do, we need to justify it as an appr...
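For concreteness, a minimal sketch (my own illustration, not taken from the linked comments) of what taking the mean on the log-odds scale amounts to:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def combine(p1, p2):
    # arithmetic mean on the log-odds scale, mapped back to a probability
    return inv_logit((logit(p1) + logit(p2)) / 2)

print(combine(0.9, 0.5))  # 0.75; not in general what a full Bayesian update would give
```

Whether that average approximates a proper Bayesian combination depends on how much the two assessors' evidence overlaps, which is exactly the point at issue.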
Hmm, this "mentat wiki" seems to have some reasonably practical intelligence (and maybe rationality) techniques.
It has been a while since I have been around, so please ignore this if it has been brought up before.
I would appreciate it if offsite links were a different color. The main reason is the way I skim online articles: links generally mark the more important text, and if I see a link for [interesting topic] it helps to know at a glance whether there will be a good read with a LessWrong discussion at the end, as opposed to a link to Amazon where I get to see the cover of a book.
Ivan Sutherland (inventor of Sketchpad - the first computer-aided drawing program) wrote about how "courage" feels, internally, when doing research or technological projects.
"[...] When I get bogged down in a project, the failure of my courage to go on never feels to me like a failure of courage, but always feels like something entirely dif- ferent. One such feeling is that my research isn't going anywhere anyhow, it isn't that important. Another feeling involves the urgency of something else. I have come to recognize these feelings as “who cares” and “the urgent drives out the important.” [...]"
I'm looking for a certain quote I think I may have read on either this blog or Overcoming Bias before the split. It goes something like this: "You can't really be sure evolution is true until you've listened to a creationist for five minutes."
Ah, never mind, I found it.
I'd like a pithier way of phrasing it, though, than the original quote.
http://scicom.ucsc.edu/SciNotes/0901/pages/geeks/geeks.html
" They told them that half the test generally showed gender differences (though they didn't mention which gender it favored), and the other half didn't.
Women and men did equally well on the supposedly gender-neutral half. But on the sexist section, women flopped. They scored significantly lower than on the portion they thought was gender-blind."
Big Edit: Jack formulated my ideas better, so see his comment.
This was the original:
The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenari...
I like the color red. When people around me wear red, it makes me happy - when they wear any other color, it makes me sad. I crunch some numbers and tell myself, "People wear red about 15% of the time, but they wear blue 40% of the time." I campaign for increasing the amount that people wear red, but my campaign fails miserably.
"It'd be great if I could like blue instead of red," I tell myself. So I start trying to get myself to like blue - I choose blue over red whenever possible, surround myself in blue, start trying to put blue in places where I experience other happinesses so I associate blue with those things, etc.
What just happened? Did a belief or a preference change?
By coincidence, two blog posts went up today that should be of interest to people here.
Gene Callahan argues that Bayesianism lacks the ability to smoothly update beliefs as new evidence arrives, forcing the Bayesian to irrationally reset priors.
Tyler Cowen offers a reason why the hacked CRU emails should raise our confidence in AGW. An excellent exercise in framing an issue in Bayesian terms. Also discusses metaethical issues related to bending rules.
(Needless to say, I don't agree with either of these arguments, but they're great for application of yo...
I think this is close to the question that has been lurking in my mind for some time: Why optimize our strategies to achieve what we happen to want, instead of just modifying what we want?
Suppose, for my next question, that it was trivial to modify what we want. Is there some objective meta-goal we really do need to pay attention to?
Robin Hanson podcast due 2009-12-23:
Repost from Bruce Schneier's CRYPTO-GRAM:
The Psychology of Being Scammed
This is a very interesting paper: "Understanding scam victims: seven principles for systems security," by Frank Stajano and Paul Wilson. Paul Wilson produces and stars in the British television show The Real Hustle, which does hidden-camera demonstrations of con games. (There's no DVD of the show available, but there are bits of it on YouTube.) Frank Stajano is at the Computer Laboratory of the University of Cambridge.
The paper describes a dozen different con scenarios -- en...
With Chanukah right around the corner, it occurs to me that "Light One Candle" becomes a transhumanist/existential-risk-reduction song with just a few line edits.
Light one candle for all humankind's children
With thanks that their light didn't die
Light one candle for the pain they endured
When the end of their world was surmised
Light one candle for the terrible sacrifice
Justice and freedom demand
But light one candle for the wisdom to know
When the singleton's time is at hand
Don't let the light go out! (&c.)
Is there a proof anywhere that Occam's razor is correct? More specifically, that Occam priors are the correct priors. Going from the conjunction rule to P(A) >= P(B & C) when A and B&C are equally favored by the evidence seems simple enough (where A, B, and C are atomic propositions), but I don't (immediately) see how to get from there to an actual number that you can plug into Bayes' rule. Is this just something that is buried in a textbook on information theory?
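Not a proof, but for concreteness, one common formalization (Solomonoff/MDL-style; this sketch and its numbers are purely illustrative) turns "simpler" into a number by giving each hypothesis prior weight proportional to 2 raised to the negative of its description length in bits:

```python
def simplicity_prior(description_lengths):
    """Prior weight 2**(-bits), normalized over the listed hypotheses."""
    weights = {h: 2.0 ** -bits for h, bits in description_lengths.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Illustrative only: if stating "B & C" takes 4 more bits than stating "A",
# it gets 2**4 = 16x less prior weight.
print(simplicity_prior({"A": 10, "B&C": 14}))
```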
On that note, assuming someone had a strong background in statistics (PhD level) and ...
I'm interested in values. Rationality is usually defined as something like an agent trying to maximize its own utility function. But humans, as far as I can tell, don't really have anything like "values" besides "stay alive, get immediately satisfying sensory input".
This, afaict, results in lip service to the "greater good": people just select some nice values that they signal they want to promote, when in reality they haven't done the math by which these selected "values" derive from "stay alive"...
Is the status obsession that Robin Hanson finds all around him partially due to the fact that we live in a part of a world where our immediate needs are easily met? So we have a lot of time and resources to devote to signaling compared to times past.
Are people interested in discussing bounded-memory rationality? I see a fair number of people talking about Solomonoff-type systems, but not much about what a finite system should do.
"People are crazy, the world is mad. "
This is in response to this comment.
Given that we're sentient products of evolution, shouldn't we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a state space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components.
Observing the world for 32-odd years, it appears to...
Would it be worthwhile for us to create societal simulation software to look into how preferences can change given technological change and social interactions? (knew more, grew up together) One goal would be to clarify terms like spread, muddle, distance, and convergence. Another (funner) goal would be to watch imaginary alternate histories and futures (given guesses about potential technologies).
Goals would not include building any detailed model of human preferences or intelligence.
I think we would find some general patterns that might also apply to more complex simulations.
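Purely as an illustration of scale (every agent count, interaction rate, and shock size below is made up), even a toy version of such a simulation would already let you track a "spread" statistic over time:

```python
import random

N, STEPS, DIMS = 50, 200, 3
agents = [[random.random() for _ in range(DIMS)] for _ in range(N)]

def spread(pop):
    # average squared distance from the population mean, summed over dimensions
    means = [sum(a[d] for a in pop) / len(pop) for d in range(DIMS)]
    return sum((a[d] - means[d]) ** 2 for a in pop for d in range(DIMS)) / len(pop)

for t in range(STEPS):
    i, j = random.sample(range(N), 2)                # "grew up together": a pairwise interaction
    for d in range(DIMS):
        mid = (agents[i][d] + agents[j][d]) / 2
        agents[i][d] += 0.1 * (mid - agents[i][d])   # both parties drift toward their midpoint
        agents[j][d] += 0.1 * (mid - agents[j][d])
    if t % 50 == 0:                                  # "knew more": an occasional technology shock
        for a in agents:
            a[random.randrange(DIMS)] += random.uniform(-0.2, 0.2)
    if t % 40 == 0:
        print(t, round(spread(agents), 4))           # watch convergence vs. muddle over time
```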
I've read Newcomb's problem (Omega, two boxes, etc.), but I was wondering whether, put briefly, "Newcomb's problem is when someone reliably wins as a result of acting on wrong beliefs." Is Peter walking on water a special case of Newcomb? Is the story from The Count of Monte Cristo, about Napoleon attempting suicide with too much poison and therefore surviving, a special case of Newcomb?
A poem, not a post:
Intelligence is not computation.
As you know.
Yet the converse bears … contemplation, reputation. Only then refutation.
We are irritated by our fellows that observe that A mostly implies B, and B mostly implies C, but they will not, will not concede that A implies C, to any extent.
We consider this; an error in logic, an error in logic.
Even though! we know: intelligence is not computation.
Intelligence is finding the solution in the space of the impossible. I don’t mean luck At all. I mean: while mathematical proofs are formal, absolute, wi...
Does anyone know how many neurons various species of birds have? I'd like to put it into perspective with the Whole Brain Emulation road map, but my googlefu has failed me.
I could test this hypothesis, but I would rather not have to create a fake account or lose my posting karma on this one.
I strongly suspect that lesswrong.com has an ideological bias in favor of "morality." There is nothing wrong with this, but perhaps the community should be honest with itself and change the professed objectives of this site. As it says on the "about" page, "Less Wrong is devoted to refining the art of human rationality."
There has been no proof that rationality requires morality. Yet I suspect that posts comin...
I love the new gloss on "What do you want to be when you grow up?"
Don't. Spivak is easy to remember because it's just they/them/their with the "th"s lopped off. Nonstandard pronouns are difficult enough already without trying to get people to remember sie and hir.
Totally agreed. Spivak pronouns are the only ones I've seen that took almost no effort to get used to, for exactly the reason you mention.