If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Hey, not sure where else to put this:
I just logged back on after a brief absence from the site (a few days) to find I seem to have been genuinely karmassassinated. As far as I can tell, every comment I ever made has been downvoted, which was apparently enough to put me from 1200+ karma to -80 (although oddly, the "last 30 days" thingy claims I only got -148 karma; maybe it's maxed out?)
I understand it's possible to check who downvoted comments? It was probably a generically-named sockpuppet, I guess, but still.
EDIT: great, I apparently need to wait 8 minutes because I already commented. Did not know low karma did that.
I just logged back on after a brief absence from the site (a few days) to find I seem to have been genuinely karmassassinated. As far as I can tell, every comment I ever made has been downvoted, which was apparently enough to put me from 1200+ karma to -80
Wow! That's the first serious karmassassination I have become aware of. People do pissweak karma assassinations every now and again but I haven't seen a serious 1,300+ kill. I'm frankly surprised anyone bothered.
(although oddly, the "last 30 days" thingy claims I only got -148 karma; maybe it's maxed out?)
The last 30 days measure is only for votes on comments that were written in the last 30 days, not votes received in the last 30 days. The initial implementation included all votes received, but that had the effect that certain users could sometimes stay on the "Top Contributors, 30 Days" list even if they didn't log in.
I understand it's possible to check who downvoted comments? It was probably a generically-named sockpuppet, I guess, but still.
Possible, sure. I'm not sure how convenient it is. It may require direct database access.
...EDIT: great, I apparently need to wait 8 minutes because I already commented. Did not know low karma did that.
The "last 30 days" score gives you a score for posts/comments that you submitted in the last 30 days, not the downvotes/upvotes you received in that duration. So if people downvoted comments of yours from further back, it wouldn't be counted.
Um, can any administrator types chime in on this? Apparently it must have been an established user, so it would be nice to find out who it is so they can at least get mocked for taking the Karma system so seriously (very passé).
Also, y'know, all those features that prevent passing trolls from annoying people too much are also applying to me, so if it's possible to reverse this that would be nice too.
EDIT: bump bump.
I understand it's possible to check who downvoted comments? It was probably a generically-named sockpuppet, I guess, but still.
I understand that there is a limit on downvoting -- it doesn't cost karma, but it's only possible up to the limit of one's karma, or something like that. If so, a sockpuppet wouldn't have enough karma. It must have been an account with a real presence here, either bingeing on OCD or using an automated script.
Ah, excellent point. Now I'm even more curious about who the culprit might be.
... in other news, I apparently need "1164 more points" to downvote a comment.
... in other news, I apparently need "1164 more points" to downvote a comment.
Oooh, now you can calculate exactly how many downvotes you have given out.
It's amusing that someone would care enough about LessWrong karma to view it as worth this sort of effort. As character assassination and revenge goes, that's pretty weak sauce. I suspect they got more disutility from wasting their time downvoting than you did by actually losing the karma.
In a recent conflict with someone (who seemed to be mad at me for no reason I could agree with), I tried two strategies consecutively: reasonable discussion & mediation techniques, and rage fits (I basically faked being really mad and upset at them to see what would happen; I'm sorry, I know, I was being a manipulative bastard). My faith in humanity took a hit (even though it shouldn't have) after seeing that this particular person was basically immune to logos but very readily responsive to pathos.
So just a little reminder that may or may not be redundant around here: don't do this. Don't give more of a chance to the person who screams and acts crazy than to the person who tries to work things out with you the calm, mature way. It's exactly the wrong way to respond if you want to incentivize rational behavior on the part of the other party. The message is basically "I won't listen to any attempt at reasonable discussion, but try going hysterical on me, that one has good odds of success", thereby earning yourself more hissy fits in the future. And especially don't do this as parents, to your kids.
(I don't know why I'm saying this here; it may go without saying for a smart bunch of people like you. Perhaps I'm temporarily under the impression that it is not obvious to everybody how astonishingly stupid it is to be more convinced by pathos than by logos, just because it wasn't obvious to my IQ<95 acquaintance.)
I think that's the kind of thing that most people know in principle, but which is very hard to actually stick to when you have a raging, hissy person in front of you, so it's good to be reminded of it every now and then.
Thought I would repeat something I recently posted buried deep in a digressionary comment thread of an old post:
http://pss.sagepub.com/content/14/6/623.short
http://dl.dropboxusercontent.com/u/67168735/heritability%20of%20iq.pdf
"Socioeconomic status modifies heritability of IQ in young children".
To make a long story short, this analysis of a large cohort of children assigned each family they were following a 'socioeconomic status indicator' from 0 to 100, based on a large number of factors. They found that the heritability of IQ was a VERY strong positive function of socioeconomic status. At the bottom of the scale, they think less than 5% of IQ variation is attributable to genetics. At the top of the scale, over 80%.
Obvious interpretation: low socioeconomic status masks genetic predisposition. Alternative restatement: high status environments allow previously cryptic variation to show itself. Low status populations are too genetically diverse for there to be a common factor that doesn't vary between any of them.
This is, of course, exactly the kind of result that you would expect to get given the way that heritability is defined. When you make environment more uniform in terms of quality, you drive up the heritability, and vice versa.
The strength of the effect is still interesting, though.
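To see why this falls out of the definition, here is a toy simulation (a rough sketch with made-up numbers, not anything fitted to the study): heritability is just the share of phenotypic variance attributable to genes, so making environments more uniform mechanically pushes it up.

```python
import random

def heritability(env_sd, gene_sd=1.0, n=100_000, seed=0):
    """Fraction of phenotypic variance attributable to genes, assuming a toy
    additive model: phenotype = genetic value + independent environmental noise."""
    rng = random.Random(seed)
    genes = [rng.gauss(0, gene_sd) for _ in range(n)]
    env = [rng.gauss(0, env_sd) for _ in range(n)]
    pheno = [g + e for g, e in zip(genes, env)]

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(genes) / var(pheno)

# Uniform, high-quality environments (small environmental variance) -> high heritability.
print(round(heritability(env_sd=0.3), 2))  # roughly 0.9
# Highly variable environments (large environmental variance) -> low heritability.
print(round(heritability(env_sd=3.0), 2))  # roughly 0.1
```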
A surprising find in the light urban fantasy novel (Blood Engines): One of the sorcerers in the story starts referring to Nick Bostrom. It describes his philosophical pedigree at Oxford and gives a rather detailed explanation of the Simulation Hypothesis, Bostrom's trilemma and various implications thereof. Decidedly not what I was expecting given the genre.
Hi all,
Would this be the best place to introduce oneself?
I'm 24, male, and resident in Bristol, England. I'm currently studying (read: procrastinating) for a master's degree in computer science, and my undergrad degree was in English lit and mathematics.
I've been lurking LW on and off for some twelve or eighteen months, but I held off from registering for a while because I feel like my interests overlap only somewhat with those of the Lesswrong community. For example, I'm not especially interested in AI friendliness or existential risk; on the other hand, I am interested in effective altruism, ways to work more effectively and procrastinate less, and general bias-awareness, all of which seem to be staple topics here.
Other things I care and think a lot about include music and books, gender and sexuality. (Incidentally, I thought the Lesswrong Women series that ran recently was in general very good.) Politically speaking, I'm more left-leaning than libertarian, which seems like it might be the default political stance on LW (but which, incidentally, barely seems to exist out here in the old world). In brief, I think I'm more of what the North Americans would call a 'liberal arts' type by temperament, but with kind of a rational/scientific bent, too. In any case, I hope to make a decent contribution here.
Would this be the best place to introduce oneself?
That would actually be the welcome thread, but this is a close second best.
my interests overlap only somewhat with those of the Lesswrong community. I'm not especially interested in AI friendliness or existential risk
This is not uncommon in the LW community. Artificial intelligence, personal rationality, and effective altruism are the three big topics here, but many people are interested in only one or two of them.
"Socialist" was tabooed on the census, as were the other political orientations. The text of the option was:
Socialist, for example Scandinavian countries: socially permissive, high taxes, major redistribution of wealth
I picked "Socialist" on this basis. There was a separate option for Soviet-style communism, which 0.7% of respondents picked.
Yeah, that makes sense. The socialist utopia, Scandinavian-style social democracy, and Soviet-style communism all belong to a greater "socialism" superset, just like Friendly AI and the paperclip maximizer both belong to an "artificial intelligence" superset.
And that is also a reason why someone saying "we are ready to build an artificial intelligence tomorrow", without providing any more details, would make some people here scared. Not because all AIs are wrong; not because we don't want any kind of AI here; not because we know that their AI would be unfriendly. But simply because the fact that they didn't specify the details is evidence that they didn't think about the details, and thus they are likely to build an unfriendly AI without actually wanting to. The prior probability of an unfriendly AI is greater than the prior probability of a friendly AI, so if you just blindly hit a point within the "artificial intelligence" space, it is likely to go wrong.
In a similar way, I am concerned that people who want utopia-socialism don't pay much attention to the details (my evidence is that they don't find the details worth mentioning), and are ...
I am new here, and I am not sure what to do.
I think most LWers would advise you to read the Sequences, but I reckon you could get 80% of the value from doing so by reading two of the following four books (which would be much less time consuming):
Harry Potter and the Methods of Rationality by Eliezer Yudkowsky
Gödel, Escher, Bach: An Eternal Golden Braid by Douglas Hofstadter
Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher
Thinking, Fast and Slow by Daniel Kahneman
HPMOR is like the sequences but with way less explanation and way more narrative. If you want to save time, then reading HPMOR instead of the sequences is the opposite of what you should do.
The books are good but there are multiple sequences which have maybe 5% overlap tops with any of those books. Not saying that they aren't good recommendations - I am saying that your claim is invalid.
So yeah, read the sequences. If you need motivation to read the sequences read HPMOR.
Here's an interview Bill Gates gave on healthcare. It's not directly rationality related, but it's very good. I'd recommend reading all the way to the end. It's especially good at the end.
I'm planning on making a flashcard deck (for anki) with basic emergency scenarios (e.g. What should you do if you're in a car that's fallen into deep water? What should you do if you encounter a bear in the woods?). Does anybody know of some good sources about this kind of thing? I'm especially interested in data comparing frequency/mortality rates of different situations, so I can pick the most important topics to make cards about, and quantify how likely adding these cards to someone's deck is to save their life.
Of course, I'll share the deck if/when it's completed.
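If it helps, here is a back-of-the-envelope way to do that prioritization; every number below is a made-up placeholder, not a real statistic. The expected lives saved per card is roughly the lifetime probability of encountering the scenario, times the fatality rate when it is encountered without training, times the chance that knowing the right response actually changes the outcome.

```python
# Toy prioritization sketch -- all numbers are placeholders, not real statistics.
scenarios = {
    # name: (lifetime P(encounter), P(death | encounter, untrained), P(training flips outcome))
    "car into deep water": (0.002, 0.5, 0.3),
    "bear encounter":      (0.01, 0.001, 0.5),
}

def expected_lives_saved(p_encounter, p_death_untrained, p_training_helps):
    return p_encounter * p_death_untrained * p_training_helps

for name, params in sorted(scenarios.items(),
                           key=lambda kv: -expected_lives_saved(*kv[1])):
    print(f"{name}: {expected_lives_saved(*params):.1e} expected lives saved per user")
```

Cards could then be written in descending order of that product, until the marginal card stops being worth its review time.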
I have a couple of questions I'd like to ask you all. It's research for something I'd like to do with my local meetup group, and I'd appreciate your help.
1) Can you name one or more activities or experiences you find to be depleting, in the sense of ego depletion (i.e. something that fatigues you along some psychological axis the more you do it)? If they exist, I would prefer examples that are personally salient to you.
2) Can you name one or more unreasonable demands you feel you are under, psychologically speaking? I'm thinking of cases which you might phrase "I'd like to (not) do X, but my brain just won't let me". As an example, if I get into a protracted internet argument, I feel like my brain compels me to rehash the argument over and over again. This feels like an unreasonable demand from my brain, and I would like to not be subject to it. Again, personally salient examples are especially welcome.
Thank you.
1) Official paperwork, especially if it's the sort where a trivial error can be a major hassle, like tax forms. I have some fairly serious Ugh Fields around the latter.
2) I obsess over previous mistakes, even ones that were made months or years ago. I even obsess over hypothetical mistakes, where I could have screwed something up if I'd been a trifle less lucky or if my synapses had fired a little differently. The mental monologue in the first case runs something like goddamnit, I'm supposed to be better than that. In the second case it's more of an abject horror at what Could Have Happened.
Right now I think I do not want to have children. I'm 24. But I'm worried that if I plan not to have children, I'll change my mind later. How will I know when my preferences are stable enough to plan around? At what age should I expect to not change my mind about wanting children?
I distinctly remember telling a friend when I was 24 that I didn't think I would have children and didn't particularly want any. She laughed at me and told me I couldn't fight biology. I sneeringly informed her that I was master of my own existence and wouldn't be pushed around by evolution.
By age 27 I had a child, which, by that time, I wanted very much.
I would say the stability of your preferences depends very much on your reasons for not wanting children. I find that abstract reasons such as "the world is overpopulated and I don't want to contribute to that" or "my professional life would suffer for a few years" are very easily shrugged off, when it comes down to it. By far the most important factor is how your spouse and/or romantic partner feels about the issue, assuming you have or want one of those. So if you're really serious, be sure to choose a mate who also doesn't want kids, and be sure that they are just as firm in their convictions as you are.
My wife and I have decided we're going to homeschool our son, almost five, for various reasons. What age do you think it would be appropriate to start rationality training, and how would you go about it? Are there any particularly kid-friendly resources on rationality that anyone can recommend? (The sequences are good for beginners, but they're well above the level of a five year old).
I have attempted this with my daughter. "Is Father Christmas real?" "Yes!" "How do you know?" "Because [best friend] saw him!" "How do you know [best friend] is right?" "'Cos she is!" At this point I had exhausted a 5yo's philosophical introspection.
In your position I'd be curious about her response to "Has [best friend] ever said things that turned out not to be true?", but I'd also be worried about poisoning her relationship to her best friend in the process of asking.
[best friend] is also magical and has power over the weather, or at least making it sunny on a rainy day. Daughter's mother and I have both attempted to gently stimulate skepticism on this point. (Said best friend has a somewhat troubled family life and I suspect is claiming to be magical to feel power over her life, so we're happy to be gentle over this one.)
First, I got a more instrumental response from a 7yo on whether the tooth fairy is real: "As long as I find my dollar under the pillow, she is!"
Second, you were using adult language with a small child. Asking what her friend saw in more detail, and discussing that instead, could have been more illuminating. Or not.
Has anyone tried e-cigarettes as a method to quit smoking or at least ameliorate the effects of smoking?
I smoke about a pack or two a week (3 a day minimum, sometimes binging once a week) and would like to reduce that in order to increase my chances of living longer. Anyone have experience they can share?
After buying an e-cig, I never bought another pack of cigarettes. It has been roughly six weeks, I think. My consumption was slightly higher than yours.
Congratulations, by the way. You have successfully added years to your life, ceased to constantly stink, saved a lot of money, and retained the mental edge and social benefits of smoking nicotine. Instrumental rationality at its finest.
I'm kinda starting to panic. (Warning: Wall-o-text follows.)
I don't like giving out my age, but I was born in mid March 1988. That makes all of these much scarier:
Suggestion: the open thread for each month should be pinned to the top of Discussion for the duration of that month. Otherwise, the longer the month goes on, the less likely a particular post in the open thread is to be read.
Norbert Wiener, a mathematician from MIT, postulated unfriendly AI in 1949.
The possibility of learning may be built in by allowing the taping to be re-established in a new way by the performance of the machine and the external impulses coming into it, rather than having it determined by a closed and rigid setup, to be imposed on the apparatus from the beginning.
...
Moreover, if we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us.
Is CFAR's Paypal donation page not working for anyone else, or is it just me? Both monthly and one-time donations fail to process the transaction.
Error messages after logging in to Paypal:
The link you have used to enter the PayPal system is invalid. Please review the link and try again.
or
This payment cannot be completed and your account has not been charged. Please contact your merchant for more information. Return to merchant and try a different payment method We are not able to process your payment using your PayPal account at this time. Please return to the merchant's website and try using a different payment method (if available).
Imagine sufficiently strange aliens were peeking into our low-dimensional slice of totality. They'd see matter/energy states which change, matter/energy states which stay the same, and change at different rates. They wouldn't prima facie find "bipeds walking around" any more special-consideration-worthy than "bubbles in a pond"; it wouldn't trigger any "sentience alarms" (maybe their intuition rests on a nano scale).
Consider they were searching for something interesting, maybe approaching whatever life-analogues they defined. ...
Set Theory and Uncaused Causes
I'm relocating part of a thread that was originally on "Welcome to Less Wrong" but has wandered way off topic. It also seems that a remote ancestor comment was heavily downvoted, discouraging further contributions in the original place. So I'm moving into the Open thread.
...(Huh. One of the ancestors to this comment - several levels up - has been downvoted enough to require a karma penalty. I wonder if there should be some statute of limitations on that; whether, say, ten levels of positive-karma posts can protect ag
Does anyone know how to view or expand the snippet of a google search result? One can lengthen their query to include what they know comes after or before an ellipsis, but even if one remembers, eventually the snippet stops expanding.
I'm trying to access a page of a site that has been deleted from the site's servers (apparently), but when the right query is entered into google, text from the absent page is displayed in a snippet. The whole text appears to be somewhere - either hidden on the site or in google's database - as modifying the query changes th...
I would like to help to create LessWrong communities in the Russian speaking countries, can anyone provide me with site visitor statistics from Russia and CIS?
I'm not confident this is the right outlet (and if so, I apologize), but does anyone have tips on good data sources, for example for poultry statistics? I'm trying to get hold of data on the amount of eggs produced in each individual country, on a year-by-year, country-by-country basis. I'd appreciate any tips! Where do you go to find your data? (I chose to make this an open question.)
Sorry, I'll ask a really dumb question here because it's the middle of the night and my brain doesn't work. What's the "official" Bayesian response to this joke (see part 2)? To summarize, when a Bayesian talks about a coin with unknown bias, that involves a prior over possible biases, i.e. a subjective probability distribution over objective probability distributions. But Bayesians are supposed to think that objective probabilities don't exist ("meaningless speculations about the propensities of different coins"). So how does that make sense?
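One way to dissolve the joke (a sketch, not claiming this is the only "official" answer): by de Finetti's theorem, a prior over biases is just a convenient way of encoding a single subjective predictive distribution over exchangeable flip sequences. You can compute every prediction you will ever need without committing to the bias being a physically real propensity; it integrates out. A minimal example with a uniform Beta(1,1) prior:

```python
def predictive_prob_heads(heads, tails, alpha=1.0, beta=1.0):
    """P(next flip = heads | observed data) under a Beta(alpha, beta) prior
    over the nominal 'bias'. The bias is never treated as an objective chance;
    it is integrated out to give a plain subjective probability."""
    return (alpha + heads) / (alpha + beta + heads + tails)

# Before any data, the "unknown bias" coin is predictively identical to a fair coin
# for the next single flip:
print(predictive_prob_heads(0, 0))   # 0.5

# Unlike a known-fair coin, though, its predictions update as evidence comes in:
print(predictive_prob_heads(8, 2))   # 0.75 after seeing 8 heads and 2 tails
```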
Category theory gets a few hits on LW, but doesn't seem to be recognized very widely. At first glance it seems to be relevant for Bayes nets, cognitive architectures and several other topics. A recent textbook that seems very promising:
Category theory for scientists by David I. Spivak: http://arxiv.org/abs/1302.6946
Abstract: There are many books designed to introduce category theory to either a mathematical audience or a computer science audience. In this book, our audience is the broader scientific community. We attempt to show that category theory can ...
The Candle Problem is an experiment which demonstrates how time pressure and rewards can diminish people's ability to solve creativity-requiring problems. People who weren't offered rewards for solving a clever problem solved the problem faster than those who were offered even significant rewards. Another finding was that when the problem was simplified and the creativity requirement removed, the participants who were offered rewards performed the task much faster.
There are multiple theories that try to explain the result of this study, and the many other...
Daniel Dennett's tools for thinking
Some of it strikes me as very likely to be correct-- learn from your mistakes, respect your opponent, choose opponents worthy of respect (my re-phrasing of his "don't waste your time on rubbish"). Some of it is ideas I'm going to check-- "surely" and rhetorical questions are what people do to shore up weak points in their arguments.
Social anti-induction: "People seem to like me, therefore their patience for me is not yet exhausted, but it eventually will be."
I'm looking for more on the should-universe you occasionally see referenced around lesswrong.
So far all I can see is some vague references from EY (eg http://lesswrong.com/lw/2nz/less_wrong_open_thread_september_2010/2k50 )
Anyone got anything?
Yesterday, I looked up information regarding what may be one of the issues with my right eye, based on one conversation I heard between my father and an eye doctor a few years ago. It kinda made me update my estimate of my own and the doctor's rationality downward a little, but I also realize I'm only working with surface-level information from like two Google searches and the Wikipedia article.
Uveitis is an inflammation of the uvea, generally the iris and surrounding tissues, which can lead to photophobia and vision loss. I looked up the Wikipedia article on Uveit...
I don't suppose any users here have experience with trans-cranial direct current stimulation. More specifically, the Focus V1?
I'm thinking of making a Discussion post about this, but I'm not sure if it has already been mentioned.
We're not atheists - we're rationalists.
I think it's worth distinguishing ourselves from the "atheist" label. On the internet, and in society (what I've seen of it, which is limited), the label includes a certain kind of "militant atheist" who loves to pick fights with the religious and crusade against religion whenever possible. The arguments are, obviously, the same ones being used over and over again, and even people who would ide...
What do people mean by this sort of probability estimate, this one from Angelina Jolie's NYTimes article? "My doctors estimated that I had an 87 percent risk of breast cancer and a 50 percent risk of ovarian cancer, although the risk is different in the case of each woman" (Italics added.)
Do they mean:
I have a few pieces of knowledge that I think could be somehow synthesized to form a really powerful idea with a lot of implications.
In Yvain's excellent "A Thrive/Survive Theory of the Political Spectrum" (read it if you haven't already!) he makes a really compelling argument that "rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment."
It seems to me that a similar analogy can be made with happiness, where happiness is t
Are any of you in a position where you review research papers for a scientific journal? Has it ever happened to you that you had the impression the authors were lying? How did you handle that?
Has anyone used Beansight? It seems like it could be a replacement for things like predictionbook with a few improvements.
Does anyone have a solution to Post Narcissism?
...I'm a post Narcissist. I don't know how frequent that is, and only one person at FHI told me they were an email Narcissus, but maybe there are others, and more importantly, maybe there is a solution. It may not be clear to you what post Narcissism is, if so, throw your arms up in the air, and scream "Victory!" because you just escaped a terrible thing...
Post Narcissism: An absolutely intense eagerness to read your own posts and comments after you wrote them, accompanied by a feeling of flow while
A point of metaphysics:
It is impossible for anyone to force anyone to do anything: you can adjust their incentives, but it's always open to someone to just refuse. And this is metaphysical because it's true no matter how the world is constituted: Imperius curses wouldn't make this possible either.
But points of metaphysics are mostly (if not always) misunderstandings of some kind. What am I misunderstanding?
After reading some of the comments in the discussion on souls, I got to thinking about near death experiences, in the context of dream thought patterns (based entirely on my observations about how I think in dreams). This led to me imagining what an NDE might be like, which somehow ended in hypothetical dying me managing to overcome the absolute horror of realizing what was going on long enough to think "Maybe there's a way I can think that will help keep the information in my brain intact a little longer...". (Obviously, there'd be some serious...
I'm teaching some classes for a test prep company in a town 2 hours away. They're paying me fairly for my expenses and travel time, but it still feels like kind of a waste-- it's like 20 hours a week! Of course most productive things cannot safely be done while driving, but listening is a notable exception.
Can anyone recommend some good educational podcasts, or other free downloadable audio that will make me better in some way? I'm working on learning Spanish, so that seems like a good place to start.
Why aren't teachers as respected as other professionals? It's too bad that the field is lower paid and less respected than other professional fields, because the quality of the teachers (probably) suffers in consequence. There's a vicious cycle: teachers aren't highly respected --> parents and others don't respect their experience --> no one wants to go into teaching and teachers aren't motivated to excel --> teachers aren't highly respected.
It's almost surprising that I had so many excellent teachers through the years. The personal connection b...
Really? The BBC thinks they're the second highest status profession, just after professor (and before CEO).
They're significantly better paid than you would expect given the qualifications required to be a teacher (none).
It's probably something like Linus's "I love humanity ... It's people I can't stand".
I wonder if there isn't the opposite effect for some group, like CEOs, where people may have somewhat negative feelings about the abstract concept, but show a great deal of respect in person.
Warning: awkward self-disclosure, grief and death
I normally hate this kind of post, but honestly the lesswrong community are the only people I trust to give me useful advice. A relative of mine is dying; what is the best way for me to deal with this?
Can you force your computer to do anything? Can the computer refuse to do what you want? Of course the computer can crash. Does that count as refusing to obey your commands?
If you don't think the computer has the free will to refuse your commands, why do you think you do? Because your brain runs on neurons and the computer runs on silicon?
There are many ways to influence other people that don't have something to do with adjusting incentives. Just look into the psychology literature.
The ability to refuse requires the knowledge that someone is trying to influence you. One example: Andrew Berwick put a lot of effort into getting people to read his book. He studied the way ideas spread on the internet. Conspiracy theorists do a lot to spread certain ideas, and Andrew knew that conspiracy theorists like to talk about Freemasons.
Andrew then went to four freemasons meetings and put Freemason images on his facebook account. As a result all of the conspiracy theory people had their Freemason story when Andrew committed his terrorist act.
None of the conspiracy folks got the idea that those images were specifically crafted to play them, because the conspiracy folks don't think that someone would treat them that way.
They couldn't refuse in a meaningful sense because they were ignorant.
That's a good point; trickery does seem like a kind of force.