You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.
Comment author:lsparrish
15 April 2013 10:09:51PM
24 points
[-]
I know this comes up from time to time, but how soon until we split into more subreddits? Discussion is a bit of firehose lately, and has changed drastically from its earlier role as a place to clean up your post and get it ready for main. We get all kinds of meetup stuff, philosophical issues, and so forth which mostly lack relevance to me. Not knocking the topics (they are valuable to the people they serve) but it isn't helpful for me.
Mostly I am interested in scientific/technological stuff, especially if it is fairly speculative and in need of advocacy. Cryonics, satellite-based computing, cryptocurrency, open source software. Assessing probability and/or optimal development paths with statistics and clean epistemology is great, but I'm not super enthused about probability theory or philosophy for its own sake.
Simply having more threads in the techno-transhumanist category could increase the level of fun for me. But there also needs to be more of a space for long-term discussions. Initial reactions often aren't as useful as considered reactions a few days later. When they get bumped off the list in only a few days, that makes it harder to come back with considered responses, and it makes for fewer considered counter-responses. Ultimately the discussion is shallower as a result.
Also, the recent comments bar on the right is less immediately useful because you have to click to the Recent Comments page and scroll back to see anything more than a few hours in the past.
Comment author:Viliam_Bur
16 April 2013 07:14:34AM
7 points
[-]
I guess instead of complaining publicly, it would be better to send a private message to a person who can do something about it, preferably with a specific suggestion, and a link to a discussion which proves that many people want it.
Long-term threads separately seems to be a very popular idea... there were even some polls in the past to prove it.
Comment author:Kaj_Sotala
17 April 2013 10:57:18AM
10 points
[-]
MIRI's strategy for 2013 involves more strongly focusing on math research, which I think is probably the right move, even though it leaves them with less use for me. (Math isn't my weakest suit, but not my strongest, either.)
Comment author:Dahlen
16 April 2013 07:57:58AM
*
8 points
[-]
How much difference can nootropics make to one's studying performance / habits? The problems are with motivation (the impulse to learn useful stuff winning out over the impulse to waste your time) and concentration (not losing interest / closing the book as soon as the first equation appears -- or, to be more clear, as soon as I anticipate a difficult task laying ahead). There are no other factors (to my knowledge) that have a negative impact on my studying habits.
Or, to put it differently: if a defective motivational system is the only thing standing between me and success, can I turn into an uber-nerd that studies 10 h/day by popping the right pills?
EDIT: Never messed with my neurochemistry before. Not depressed, not hyperactive... not ruling out some ADD though. My sleep "schedule" is messed up beyond belief; in truth, I don't think I've even tried to sleep like a normal person since childhood. Externally imposed schedules always result in chronic sleep deprivation; I habitually push myself to stay awake till a later hour than I had gone to sleep at the previous night (/morning/afternoon) -- all of this meaning, I don't trust myself to further mess with my sleeping habits. Of what I've read so far, selegiline seems closest to the effects I'm looking for, but then again all I know about nootropics I've learned in the past 6 hours. I can't guarantee I can find most substances in my country.
Comment author:Izeinwinter
22 April 2013 06:45:34PM
6 points
[-]
... Bad or insufficient sleep can cause catastrophic levels of akrasia. Fix that, then if you still have trouble, consider other options. Results should be apparent in days, so it is not a very hard experiment to carry out - set alarms on your phone or something for when to go to bed, and make your bedroom actually dark (this causes deeper sleep) you should get more done overall because you will waste less of your waking hours.
I agree with ThrustVectoring that you'll probably get more mileage out of implementing something like a GTD system (or at least that doing this will be cheaper and seems like it would complement any additional mileage you get out of nootropics). There are lots of easy behavioral / motivational hacks you can use before you start messing with your neurochemistry, e.g. rewarding your inner pigeon.
I've had some success recently with Beeminding my Pomodoros. It forces me to maintain a minimal level of work per unit time (e.g. recently I was at the MIRI workshop, and even though ordinarily I would have been able to justify not doing anything else during that week I still spent 25 minutes every day working on problem sets for grad school classes) which I'm about to increase.
Comment author:Dahlen
22 April 2013 02:45:52PM
*
4 points
[-]
Tried. Failed. Everything that requires me, in my current state, to police myself, fails miserably. It's like my guardian demon keeps whispering in my ear, "hey... who's to stop me from breaking the same rules that I have set for myself?" -- cue yet another day wasted.
Eat candy every time I clear an item off my to-do list? Eat candy even when I don't!
Pomodoros? Y-yeah, let's stop this timer now, shall we -- I've just got this sudden imperious urge to play a certain videogame, 10 minutes into my Pomodoro session...
... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.
... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.
I could have easily written this exact same post two years ago. I used to be incredibly akratic. For example, at one point in high school I concluded that I was simply incapable of doing any schoolwork at home. I started a sort of anti-system where I would do all the homework and studying I could during my free period the day it was due, and simply not do the rest. This was my "solution" to procrastination.
Starting in January, however, I made a very conscious effort to combat akrasia in my life. I made slow, frustrating progress until about a week and a half ago where something "clicked" and now I spend probably 80% of my free time working on personal projects (and enjoying it). I know, I know, this could very easily be a temporary peak, but I have very high hopes for continuing to improve.
So, keep your head up, I guess.
I think on LessWrong, quick simple "tricks" like Pomodoro / feeding yourself candy / working in the same room as someone else / disabling Chrome are way, way, over emphasized. (The only trick I use is writing down my impulses e.g. "check reddit" before indulging in them.) What actually helped/helps me is introspection. Try to figure out what is it about working that's so unpleasant. Why does your brain resist it so much? Luke's algorithm for beating procrastination is something along the lines of what I'm talking about. I think a lot of people have a "use willpower in order to fight through the pain" mentality, but I think what you really want to do is eliminate the pain. If work is torture for you, then I don't really think you can ever be productive unless you change that fact.
From books that I've read and my own experience, it seems to me that one of the easiest traps to fall into (and one of the most fatal) is tying your productivity to your sense of self-worth, especially if you use use your self-worth to motivate yourself ("If I can complete this assignment, I'll be like who my dad wanted me to be!"), especially if you use your self-worth to negatively motivate yourself ("If I don't pass this test, I'll basically be a failure in life"), especially if you actively foster this attitude in order to push yourself, and especially if you suffer or have recently suffered from depression or low self-esteem.
I can say more, but I don't want to waste my time typing it all out if nobody's going to read it, so just reply to this post if you want me to share more of my experiences. (That goes for anyone reading this, not just the OP).
To be honest, it's really hard to say exactly what lead to my change in willpower/productivity. Now that I actually try to write down concrete things I do that I didn't do two months ago, it's hard, and my probability that my recent success is a fluke has gone up a little.
I feel like what happened is that after reading a few self-help books and thinking a lot about the problem, I ended up completely changing the way I think about working in a difficult-to-describe way. It's kind of like how when I first found LessWrong, read through all the sequences, and did some musings on my own, I completely changed the way I form beliefs. Now I say to myself stuff like "How would the world look differently if x were true?" and "Of all the people who believe x will happen to them, how many are correct?", even without consciously thinking about it. Perhaps more importantly, I also stopped thinking certain thoughts, like "all the evidence might point to x, but it's morally right to believe y, so I believe y", etc.
Similarly, now, I now have a bunch of mental habits related to getting myself to work harder and snap out of pessimistic mindstates, but since I wasn't handed them all in one nicely arranged body of information like I was with LessWrong, and had to instead draw from this source and that source and make my own inferences, I find it really hard to think in concrete terms about my new mental habits. Writing down these habits and making them explicit is one of my goals, and if I end up doing that, I'll probably post it somewhere here. But until then, what I can do is point you in the direction of what I read, and outline a few of what I think are the biggest things that helped me.
The material I read was
various LessWrong writings
PJ Eby's Thinking Things Done
Succeed: How We Can Reach Our Goals by Heidi Halvorson
Switch: How to Change When Change Is Hard by Chip and Dan Heath
Feeling Good: The New Mood Therapy by David D. Burns
The Procrastination Equation by Piers Steel
Getting Things Done by David Allen
Out of all of these, I most recommend Succeed and Switch. PJ Eby is a weird example because he is One Of Us, but he has no credentials, the book is actually unfinished, and he now admits on his website that writing it was one of the worst periods in his life and he was procrastinating every day. So it makes sense to be very skeptical. However, I actually really enjoyed Thinking Things Done and I think that it's probably the best book out of all of these to get you into the "mind hacking" mindset that I attributed my success to, even if its contents aren't literally true. So you can make your own decision on that. Feeling Good isn't a productivity book at all, but I found it really helpful in dealing with akrasia for reasons that I'll sort of explain later. I wouldn't bother to read the Procrastination Equation because there's a summary by lukeprog on this site that basically says everything the book says. And Getting Things Done just describes an organizational system that seems tailored for very busy white collar professionals, so if that doesn't describe you I don't think it's worth it.
Obviously if your akrasia extends to reading these books then this isn't very helpful, but perhaps you could make it your goal to read just one of them (I recommend Succeed) over a period of two months or so. I think this would go a long way.
And then here are the things that most helped me, and can actually be written down at this time. I have the impression that there isn't a singular "key to success" - instead, success requires a whole bunch of attributes to all be in place, and most people have many but not all. So the insights that you need might be very different than the ones I needed, but perhaps not.
1: Not tying my self-worth to my success
The thesis of PJ Eby's Thinking Things Done is that the main reason why people are unsuccessful is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if i do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal, and animals have a freezing response to imminent death, so if you are fearing failure you will freeze up.
In Succeed, Heidi Halverson portrays positive motivation and negative motivation as having pros and cons, but has her own dichotomy of unhealthy motivation and healthy motivation: "Be good" motivation, which is tied to identity and status and focuses on proving oneself and high levels of performance, and "get better" motivation, which is what it sounds like. According to her and several empirical studies, "get better" is better than "be good" in almost every way.
In Feeling Good, David Burns describes a tendency of behavior he calls "do-nothingism" where depressed people will lie in bed all day, then feel terrible for doing so, leading them to keep lying in bed, leading them to feel even worse, etc. etc.
It seems like a pretty intuitive for a depressed, lazy person to motivate themselves by saying "Okay, self, gotta stop being lazy. Do you want to be a worthless, lazy failure in life? No you don't. So get moving!" But it seems like synthesizing these three pieces of information informs us that this is basically the worst thing you can possibly do. I definitely fell into this trap, and climbing out of it was probably one of the biggest things that helped me.
2: Being realistic
I feel like something a lot of people tend to do is tell themselves "From this day now on, I'll be perfect!" and then try to spend six hours a day working on personal projects, along with doing 100 push ups and meditating. This is obviously stupid, but for some reason at least for me was a really hard trap to get out of.
For example, I've always been a person who is really easily inspired i.e. if I read a good book, I'll want to write a book, if I listen to a good rap album, I'll want to become a rapper. Due to this tendency, I've done a fair bit of exploration in visual art, music, and video game programming. When I initially attempted my akrasia intervention, I tried to get myself to work on all three of these areas and achieve meaningful results in all of them. I held onto the naive belief that this was possible for far too long, and eventually had a mini-crisis of faith where I decided that I would cut my losses and from then on exclusively work on video game programming. Since then, things have been going much better.
This also goes with the get better mindset from the last point. If you are the worst procrastinator you know, your initial goal should be to be a merely below average procrastinator, then to be an average procrastinator, and on and on until you cure akrasia.
3: Realistic optimism
All the studies show that optimists are more successful in almost every domain. So how is that compatible with my "being realistic" point? The key is that the best, most healthy kind of optimism is the belief that you can eventually succeed in your goals (and will if you are persistent), but that it will take a lot of effort and setbacks along the way to do so. This is usually a valid belief, and combines the motivation of optimism and the cautiousness of pessimism. (This is straight from Succeed, by the way.)
4: Elephant / Rider analogy
I'm not going to go into detail about this because this post is getting long as fuck, but if this idea is unfamiliar to you, search for it on Google and LessWrong, it's been written about extensively and is a very very useful (and liberating)metaphor for how your brain works.
5: Willpower is like a muscle
Willpower is like a muscle and if you give it regular workouts it gets stronger. People who quit smoking often also start exercising or stop drinking, depressed people who are given a pet to care for often become much happier because the responsibility encourages them to enact changes in their own life, etc.
This implies that once you start changing a little, it will be easier to change more and more. But you can also artificially jump start this process by exercising your willpower. Probably the best willpower exercises are physical exercise and meditation (and they both of course have numerous other benefits), but if you lack the energy/time/desire to do either of those, you could always do something very simple and gradually build. If you have a bad habit like biting your nails, that could be a good starting point.
So yeah, this post is long as fuck, didn't really mean to write that much. Hope it helped, though. Maybe I'll revise this and turn it into a discussion post.
Some people in a similar position recruit other people to police us when our ability to police ourselves is exhausted/inadequate. Of course, this requires some kind of policing mechanism... e.g., whereby the coach can unilaterally withhold rewards/invoke punishments/apply costs in case of noncompliance.
nicotine has been a significant help with motivation. I only vape eliquid with nicotine when I am studying. This seems to have resulted in a large reduction in ugh fields.
Comment author:ModusPonies
18 April 2013 07:52:25PM
3 points
[-]
Find at least one person who you can easily communicate with (i.e., small inferential distances) and whose opinion you trust. Have a long conversation about your hopes and dreams. I recommend doing this in person if at all possible.
See which time discounts and distance discounts you make for how much you care about others. Compare how much you care about others with how much you care about you. act accordingly.
To know what you care about in the first place, either assess happiness at random times and activities, or go through Connection Theory and Goal factoring.
It's been done to me, too, and as I recall, it didn't do all that much good. The major good effect that I can remember is indirect-- it was something to be able to talk about the inside of my head with someone who found it all interesting and a possibly useful tool for untangling problems-- this helped pull me away from my usual feeling that there's something wrong/defective/shameful about a lot of it.
Comment author:Armok_GoB
16 April 2013 08:06:21PM
0 points
[-]
Looki into my eyes. You want to give all your money to the MIRI. You want to give all your money to the MIRI. You want to give all your money to the MIRI.
Comment author:[deleted]
16 April 2013 02:19:15AM
7 points
[-]
I have a super dumb question.
So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".
So there must be some other reason for the rule, 'don't divide by zero.' What is it?
Comment author:Qiaochu_Yuan
16 April 2013 06:33:44AM
*
18 points
[-]
We don't divide by zero because it's boring.
You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say "you can't do that" or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things - that's called localization - which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It's not that we can't study it, but that we don't want to.
Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I've answered a version of this question there already, I think).
Comment author:Kindly
16 April 2013 03:23:54AM
*
9 points
[-]
The rule isn't that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.
There are also lots of things logicians can tell you that you're not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A's to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
Comment author:ciphergoth
22 April 2013 08:30:11PM
6 points
[-]
For the real numbers, the equation a * x = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there's nearly always exactly one solution, it's convenient to have a symbol for "the one solution to the equation a * x = b" and that symbol is b / a; b but you can't write that if a = 0 because then there isn't exactly one solution.
Comment author:latanius
16 April 2013 04:42:16AM
2 points
[-]
Didn't they do the same with set theory? You can derive a contradiction from the existence of "the set of sets that don't contain themselves"... therefore, build a system where you just can't do that.
(of course, coming from the axioms, it's more like "it wasn't ever allowed", like in Kindly's comment, but the "new and updated" axioms were invented specifically so that wouldn't happen.)
Comment author:jooyous
22 April 2013 05:59:10AM
*
5 points
[-]
I keep accidentally accumulating small trinkets as presents or souvenirs from well-meaning relatives! Can anyone suggest a compact unit of furniture for storing/displaying these objects? Preferably in a way that is scalable, minimizes dustiness and falling-off and has pretty good ease of packing/unpacking. Surely there's a lifehack for this!
Or maybe I would appreciate suggestions on how to deal with this social phenomenon in general! I find that I appreciate the individual objects when I receive them, but after that initial moment, they just turn into ... stuff.
Comment author:Desrtopa
20 April 2013 07:59:43PM
5 points
[-]
Today, I finally took a racial/sexual Implicit Association Test.
I had always more or less accepted that it was, if not perfect, at least a fairly meaningful indicator of some sort of bias in the testing population. Now, I'm rather less confident in that conclusion.
According to the test, in terms of positive associations, I rank black women above black men above white women above white men. I do not think this is accurate.
Obviously, this is an atypical result, but I believe that I received it due to confounding factors which prevented the test from being an accurate reflection of my associations are likely to affect a large proportion of the testing population.
First, the most significant factor in how successful I was in correctly associating words and faces was simply practice. I made more mistakes in the first phase than the second phase, and more in the second than the third, etc. I believe that my test could have showed significantly different results simply by re-ordering the phases.
Second, I suspect that I was trying harder in the phases where I was matching black faces than white faces. I don't want to corrupt the test, but I also don't want it to tell me I'm a racist; would I have been so enthusiastic about making the final phase my most accurate one of all, if it had been matching white male faces rather than black male faces?
Third, I felt that many of the questions on the survey that followed the matching phase were too loaded to properly answer on their own terms. They presented a series of options from "strongly agree" to "strongly disagree," where I felt that my real answer would most accurately be framed as ADBOC.
If anyone here has access to university resources and would like to collaborate on an experiment which would attempt to discern subjects' associations while correcting for these faults, please let me know.
Comment author:Unnamed
20 April 2013 09:49:21PM
5 points
[-]
Academic research tends to randomize everything that can be randomized, including the orders of the different IAT phases, so your first concern shouldn't be an issue in published research. (The keyword for this is "order effect.")
The IAT is one of several different measures of implicit attitudes which are used in research. When taking the IAT it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others, but that is not the case with many of the other tests (many use subliminal priming, e.g. flashing either a black man's face or a white man's face on the screen for 20ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce relatively similar results, which suggests that effort doesn't have that big of an effect (at least for most people). I suspect that this transparency is part of the reason why the IAT has caught on in popular culture - many people taking the test have the experience of it getting harder when they're doing a "mismatched" pairing; they don't need to rely solely on the website's report of their results.
The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).
Comment author:gwern
20 April 2013 05:19:26PM
2 points
[-]
I've skimmed the paper and read the summary publicity, and I don't really get how this could be construed as a general intelligence. At best, I think they may've encoded a simple objective definition of a convergent AI drive like 'keep your options open and acquire any kind of influence' but nothing in it seems to map onto utility functions or anything like that.
I would like to recommend Nick Winter's book, The Motivation Hacker. From an announcement posted recently to the Minicamp Graduates mailing list:
"The book takes Luke's post about the Motivation Equation and tries to answer the question, how far can you go? How much motivation can you create with these hacks? (Turns out, a lot.) Using the example of eighteen missions I pursued over three months, it goes over in more detail how to get yourself to want to do what you always wanted to want to do."
(Disclaimer: I hadn't heard of Nick Winter until a friend forwarded me the email containing that announcement, and I have no interest in promoting the book other than to help folks here attain their goals more effectively.)
A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I’ve heard some people that discovered my site much later than they otherwise would because the name of the site didn’t communicate what it was about well and sounded unprofessional.
Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I could mold the site as I grew.
Now after >2 years of blogging about basically the same thing, I think my blog will always be about utilitarianism (both practical and philosophical), lifestyle design (my quest to make myself more productive and frugal, mainly so I can be a better utilitarian), political commentary (from a utilitarian perspective), and psychology (of morality and community and that which basically underlies practical utilitarianism).
I probably would want to talk about religion/atheism from time to time, which used to be my biggest interest, but I can already tell it's moderately unpopular with my current readership (yawnnn... we really have to go over why the Bible has errors again?) and I'm already personally getting increasingly bored with it, so I can do away with discussing atheism if I needed to keep to a "topic"-focused blog.
Basically, at this point, I think I stand to gain more by making my blog and domain name more descriptive than I stand to lose by risking my interests shifting away from utilitarianism (or at least the public discussion thereof). But the big question... what should I name my blog?
Option #1: Keep with Greatplay.net: There will be costs with shifting to a new domain name. The monetary cost is mostly insignificant (<$20/yr for a new domain name), but it will take a moderate amount of time to move all the archives over and make sure all the new hyperlinks on the site work. Also, there will be confusion among the readership, and everyone who was linking to my site externally would now be linking to dead stuff. So, if I've misestimated the benefits of moving, I might want to stick with the current name and not incur the costs.
Option #2: Go to PeterHurford.com: I already use this site as an online résumé of sorts, so I wouldn't need to get the domain. This also seems the most descriptive of what the site would be about (a personal blog, about me) and fits in with what the cool kids are doing. However, some of my opinions are controversial relative to the mainstream and I don't know what I'll be doing in my future. Keeping my real name hidden from my website might be an asset (so I don't lose opportunities because of association with unpopular mainstream opinions), though it might also be a drawback (I think I have gotten some recognition and opportunity from those who share my unpopular mainstream opinions).
Option #3: A new name: If Option #1 and #2 don't work, I'd want to just rename the blog to something descriptive of a blog about utilitarianism. Some ideas I've come up with:
Comment author:Jonii
25 April 2013 08:08:42PM
2 points
[-]
I don't think you need to change the domain name. For marketability, you might wanna have the parts named so that stuff within your site becomes brand in itself, so greatplay.net becomes associated with "<brand name> utilitarianism", "<brand name> design" etc. Say, I read a blog by a chemist who has series of blog posts titled "stuff i won't work with: <stuff name>". I can't remember the domain name, but I know that whenever I want to read about nasty chemical, i google that phrase.
The Girl Scouts currently offer a badge in the "science of happiness." I don't have a daughter, but if you do, perhaps you should look into the "science of style" badge as well.
Comment author:CAE_Jones
18 April 2013 01:50:01PM
4 points
[-]
So far, I haven't found a good way to compare organizations for the blind other than reading their wikipedia pages.
And, well, blindness organizations are frankly a political issue. Finding unbiased information on them is horribly difficult. Add to this my relatively weak Google-fu, and I haven't found much.
Conclusions:
NFB is identity politics. They're also extremely assertive.
AFB focuses on technology, inherited Hellen Keller's everything, etc.
ACB... umm... exists. They did give me a scholarship, and made the case for accessible money (Good luck with that. :P), I guess.
I want to find the one with the most to offer, and take advantage of those opportunities.
The difficulty is figuring out which one is the most useful. NFB comes across as cultish and pushing their ideology on anyone who comes to them, and they seem to be ignoring medical professionals advising them against using sleep shades on people with residual sight in their training programs. Also, their specialized cane sounds like an identity symbol more than a utility maximizer; it has better reach, but is flimsy-yet-unfolding and gets in the way. I do like the implication that it optimizes arm usage, but otherwise it sounds annoying.
On the upside, they seem to be the loudest, and as we all know, America is the country where the loudest get large chunks of attention. I've read some of their legal recommendations, and they seem to be the work of someone who knows how to aim for a goal and shoot until they hit it. Also, they're intense about braille.
Meanwhile, I'm imagining AFB being a possible avenue for getting my hands on a blasted tactile display, and possibly other meaningful technology-related projects, without having to put up indoctrination shields. Eah, there doesn't seem to be as much to say on them, which tells me that they have much less to criticize, but at the same time, it makes me wonder if they're powerful enough for the vague notion of whatever nonspecific ideas spawned this investigation.
NFB's sleep shades and specialized cane are rational for their purpose: to force the trainee to strengthen blindness as an identifying quality. They have other excuses--sleep shades prepare people for the possibility of losing what sight they have, the specialized cane provides better reach and is easier on the arms--but in light of the responses to these, and their responses to those responses, it's pretty clear that the identity advertisement is their main purpose. And quite frankly, that's annoying; my vision is not an identifying quality I care much about, so much as it's an obstacle that's made its troubles much clearer to me as of late. None of the other organizations seem to be functionally equivalent to the NFB, minus that element. Their main rival, the ACB, doesn't seem to do much of anything other than have fancy meetings and occasionally talk to legal people.
Gah, I would just continue ignoring them all, as I always have, if I wasn't living in a freakin' box.
Comment author:CAE_Jones
22 April 2013 06:32:11PM
*
1 point
[-]
I still can't find much useful information on the AFB, but the NFB publicizes most of their major operations. The only successful one I've come across so far is the cancellation of the ABC sit com "Good and Evil" (it's worth noting that ABC denied that the NFB protests had anything to do with this). They don't seem to be having success at improving Kendel accessibility, which is more a political matter than a technological one (Amazon eventually cut communications with them). They're protesting Goodwill because 64/165 of their stores pay disabled employees less than minimum wage, in a manner that strikes me as poorly thought out (it seems to me that Goodwill has a much better image than the NFB, so this will most likely cost the NFB a lot of political capital).
This isn't really enough for me to determine whether they're powerful, or just loud, but so far it's making me update ever so slightly in favor of just loud.
It is worth noting that all of the above information came from publications written by NFB members, mostly hosted on NFB web sites. If my confidence in their abilities is hurt by writings seemingly designed to favor them, I can only imagine what something more objective would look like.
[edit]Originally typed Givewell instead of Goodwill! Fixed![/edit]
Comment author:gwern
17 April 2013 05:45:05PM
3 points
[-]
SDr actually gave me his research-edition Emotiv EPOC, but... I haven't actually gotten around to using it because I've been busy with things like Coursera and statistics. So, eventually! Hopefully.
Comment author:[deleted]
17 April 2013 07:12:15PM
4 points
[-]
In case the answer to Qiaochu_Yuan's question is something like “I'm trying to establish the moral status of tickling in my provisional moral system”, note that IIUC the sensation felt when eating spicy foods is also pain according to most definitions, but a moral system according to which eating spicy foods is bad can go #$%& itself for all that I'm concerned.
One question I like to ask in response to questions like this is "what do you plan on doing with this information?" I've generally found that thinking consequentially is a good way to focus questions.
Comment author:[deleted]
16 April 2013 01:45:58PM
*
4 points
[-]
or did you miss the part about him calling homosexuals human petri dishes?
The way you phrase that makes it sound way worse than the original, I think you misunderstood him. Here is the relevant part of the text you refer to:
One of my all-time favorites involved a New York City public health official talking about AIDS and homosexuality. He wasn’t saying anything generally verboten – he wasn’t pointing out that homosexual men are nature’s Petri dishes. He said this: the health department had previously estimated the number of homosexual men with AIDS in the Big Apple by doing a survey of the AIDS rate among gay men and then multiplying by someone else’s estimate of the prevalence of homosexuality. He announced that further work indicated that although their estimate of the frequency of AIDS among homosexual men in New York seemed correct, their new estimate of total cases was down by half. One of the (sharper) reporters asked ” So, does this mean that according to your new estimate, there are only half as many gay men in New York as you previously thought?” The hapless health official said “Yes, that would follow. “
He was giving this as an example of an offensive, perhaps even derogatory phrasing of something true (the higher STD rates of homosexuals etc.) that would be forbidden to say and that we would expect someone to get into trouble over. Then he contrasted it with the plain statement based on very hard to dispute reasonable inference that is apparently enough to get someone into trouble. Enough trouble to pressure them into publicly proclaiming something rather absurd.
After a week or so, he had to give a press conference. He said ” I said A. there are only half as many cases as we thought, B. We had the percentage of gay men infected right C. But I never said that there only half as many gay men in New York as previously thought. “
He had been forced to publicly renounce arithmetic.
In general he does not shy away from using controversial examples or poking fun of social norms, he nearly always speaks in similar tone, so this is not a "nasty" setting for homosexuals in particular if that is what you fear. He is quite the jerk when criticizing any position he thinks is wrong. But I'll be honest, it generally makes him a better writer. See this piece for example of his style.
Comment author:knb
16 April 2013 07:54:51PM
1 point
[-]
He is quite the jerk when criticizing any position he thinks is wrong.
It's amazing to me that you don't understand that this was exactly my point.
In general he does not shy away from using controversial examples or poking fun of social norms, he nearly always speaks in similar tone, so this is not a "nasty" setting for homosexuals in particular if that is what you fear.
In fact, If you actually read my comment I said his posts are often interesting but that he frequently comes across as a bitter and sour-tongued. From this context, you should have been able to understand that I'm familiar with his writing style.
Comment author:MixedNuts
17 April 2013 10:45:52AM
11 points
[-]
My current understanding of how hypnosis works is:
The overwhelming majority of our actions happen automatically, unconsciously, in response to triggers. Those can be external stimuli, or internal stimuli at the end of a trigger-response chain started by an external stimulus. Stimulus-response mapping are learnt through reinforcement. Examples: walking somewhere without thinking about your route (and sometimes arriving and noticing you intended to go someplace else), unthinkingly drinking from a cup in front of you. (Finding and exploiting those triggers is incredibly useful if you have executive function issues.)
This "free won't" isn't very reliable. In particular, there's very little you can do about imagery ("Don't think of a purple elephant"). Examples: advertising, priming effects, conformity.
Conscious processes can't multitask much, so by focusing attention elsewhere, stimuli cause responses more reliably and less consciously. See any study on cognitive load.
Hypnosis works by putting you in a frame of mind where cooperation is easy; that's mostly accomplished by your expectation to be hypnotised. For self-hypnosis you're pretty cooperative already ("I am doing that, therefore it works and it's good."), otherwise rapport with the hypnotist and yes sets (consenting to hypnosis, agreeing to listen/sit/look at something, truisms) help. Inducing trance seems to be mostly a matter of directing attention elsewhere while preserving this frame of mind. Old school hypnotists liked external foci like swinging pocket watches, candle flames and spirals; mindfulness inductions work similarly; Erickson was fond of pleasant imagery; I'm partial to thinking about the process of hypnosis itself.
Modern writers tend to use "trance" to mean a highly suggestible state, whereas older ones just mean a state where you act on autopilot. Flow is the latter kind of trance but not the former, as the thing you're concentrating on does prompt you to take some actions ("play these notes") but not in any form that resembles suggestion. I'm less certain about this than about the rest of my model, the link between trance and suggestibility might be deeper.
So the evolutionary explanation for hypnosis would look something like this:
It's easier to build a reflex agent than a utility maximiser, so evolution did that.
However, conscious decision-making does better, especially if you're going to be all technological and social, so evolution added one on top of the preexisting connectionist idiot.
It is easily disrupted, because evolution is a complete hack and only builds things that are robust as long as you don't do anything unusual.
Comment author:jimmy
17 April 2013 06:49:02AM
2 points
[-]
As far as I can tell, it's more of a spandrel than anything. As a general rule, anything you can do with "hypnosis", you can do without. Depending on what you're doing with it, it can be more of a feature or more of a bug that comes inherent to the architecture.
I could probably give a better answer if you explained exactly what you mean by "hypnosis", since no one can agree on a definition.
Comment author:CAE_Jones
30 April 2013 09:20:31PM
3 points
[-]
There's a phenomenon I'd like more research done on. Specifically, the ability to sense solid objects nonvisually without direct physical contact.
I suspect that there might be some association with the human echolocation phenomenon. I've found evidence that there is definitely an audio component; I entirely by accident simulated it in a wav file (It was a long time before I could listen to that all the way through, for the strong sense that something was reaching for my head; system2 had little say in the matter).
I've also done my own experiments involving covering my ears, and have still been able to sense things to some extent, if more weakly. I notice that if I walk around with headphones on, I have a much harder time getting a sense of my surroundings.
The size of the object, and its proximity to my head are related to how well I can sense it (large walls and trees are easier than bike racks or benches. My college had a lot of knee-high brick walls lining its paths, which was hell on my normal navigation methods).
My selfish motivation for researching this is that, if it can be perfectly simulated in audio, then game accessibility has a potential avenue to gain much strength. I would like to understand it even without that perk, though.
If there is, in fact, decent published research on this that I don't know about, I'd be grateful if someone could provide one or more links. Otherwise, I'd like an idea of who I might contact to try and initiate such research; at the moment, I'm considering recommending it to Lighthouse International.
Comment author:Tenoke
19 April 2013 01:17:22PM
3 points
[-]
I started following DavidM's meditation technique Is there anything that I should know? Any advice or reasons on why I should choose a different type of meditation?
Comment author:Tenoke
19 April 2013 02:16:34PM
1 point
[-]
FWIW adding tags to distracting thoughts and feelings seems like a useful thing (for me) even when not meditating and I haven't encountered this act of labeling in my past short research on meditation.
Does anyone have any real-world, object-level examples of degenerate cases?
I think degeneracy has some mileage in terms of explaining certain types of category error, (eg. "atheism is a religion"), but a lot of people just switch off when they start hearing a mathematical example. So far, the only example I've come up with is a platform pass at a train station, which is a degenerate case of a train ticket. It gets you on the platform and lets you travel a certain number of stops (zero) down the train line.
Comment author:MileyCyrus
16 April 2013 04:57:19AM
3 points
[-]
Cal Newport and Scott H. Young are collobarating to form a start deliberate practice course by email. Here's an excerpt from on Cal's emails to inquiring people:
The goal of the course is simple: to teach you how to apply the principles of deliberate practice to become a stand out in your job.
Why is this important? The Career Capital Theory I teach in my latest book and on Study Hacks maintains that the skills that make you remarkable are also your leverage for taking control of your working life, and transforming it into a source of passion.
The goal for Scott and I in offering a limited pilot run of the course at this point, is to get feedback from real people in real jobs. Adapting deliberate practice to knowledge work is difficult. We think experiments of this type are the only way to keep advancing our understanding.
The course lasts four weeks and is e-mail based. During each week you will receive three e-mails concluding with a concrete action step to help you solidify what you learned and start applying it to your life immediately.
Here is the curriculum: Week One: Mapping out How Success Actually Works in Your Field
Week Two: Hard Facts, Driving Your Career by Metrics
Week Three: Designing and Choosing Projects to Build Skills Faster
Week Four: Enabling Deep Work
Comment author:DaFranker
16 April 2013 02:32:00PM
*
8 points
[-]
Errh
On an uncharitable reading, this sounds like two wide-eyed broscientist prophets who found The One Right Way To Have A Successful Career (because by doing this their career got successful, of course), and are now preaching The Good Word by running an uncontrolled, unblinded experiment for which you pay 100$ just to be one of the lucky test subjects.
Note that this is from someone who's never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names. The facts that they've sold popular books with "get better" in the description and that they are socially-recognized as scientists are rather impressive, but doesn't substantially raise my priors of this working or not.
So if you've already tried some of their advice in enough quantity that your updated belief that any given advice from them will work is high enough and stable enough, this seems more than worth 100$.
Just the possible monetary benefits probably outweigh the upfront costs if it works, and even without that, depending on the kind of career you're in, the VoI and RoI here might be quite high, so depending on one's career situation this might need only a 30% to 50% probability of being useful for it to be worth the time and money.
Comment author:dspeyer
15 April 2013 05:32:45PM
11 points
[-]
The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.
Most common in politics, where people describe the unpleasantness of Somalia or North Korea when arguing for more or less government regulation as if it had some kind of relevance. Silliest is when people try to argue over which of the two is worse. Establishing the silliness of this is easy. Somalia beats assimilation by the borg, so government power is bad. North Korea beats the Infinite Layers of the Abyss, so government power is good. Surely no universal principle of government can be changed by which contrived example I pick.
And, with a little thought, it seems clear that there is some intermediate amount of goverment that supports the most eudaemonia. Figuring out what that amount is and which side of it any given goverment lies on are important and hard questions. But looking at the extremes doesn't tell us anything about them.
(Treating "government power" as a scalar can be another fallacy, but I'll leave that for another post.)
Comment author:Viliam_Bur
16 April 2013 07:18:21AM
2 points
[-]
it seems clear that there is some intermediate amount of goverment that supports the most eudaemonia
More nasty details: An amount of government which supports the most eudaemonia in the short term, may not be the best in the long term. For example, it could create a situation where the government can expand easily and has natural incentives to expand. Also, the specific amount of government may depend significantly on the technological level of society; inventions like internet or home-made pandemic viruses can change it.
Comment author:jaibot
15 April 2013 08:12:17PM
2 points
[-]
I think the "non-scalar" point is a much more important take-away.
Generalizing: "Many concepts which people describe in linear terms are not actually linear, especially when those concepts involve any degree of complexity."
Comment author:newguy
16 April 2013 04:24:25AM
5 points
[-]
Sex. I have a problem with it and would like to solve it. I get seriously anxious every time I'm about to have sex for the first time with a new partner. Further times are great and awesome. But the first time leaves me very anxious; which makes me delay it as much as I can. This is not optimal. I don't know how to fix it, if anyone can help I'd be greatly grateful
--
I notice I'm confused: I always tried to keep a healthy life: sleeping many hours, no alcohol, no smoke. I've just been living 5 days in a different country with some friends. We sleep 7 hours at most, they are smoking all the time, I've drank once. We hardly eat: My face looks better, I feel better, I just look healthier. Also feel like that. Possible confounds: I live mostly alone, now I'm also hanging out with at least 3 people, usually closer to 10. I'm going out and dancing at least 4 hours every night. I'm talking to new people every night. I don't know how I'd go about to test what caused this, but I'd like to know and keep that factor in my life. Any ideas?
Re: sex... is there anyone with whom you're already having great awesome sex who would be willing to help out with some desensitization? For example, adding role-playing "our first time" to your repertoire? If not, how would you feel about hiring sex workers for this purpose?
Re: lifestyle... list the novel factors (dancing 4 hrs/night, spending time with people rather than alone, sleeping <7 hrs/night, diet changes, etc. etc. etc.). When you're back home, identify the ones that are easy to introduce and experiment with introducing them, one at a time, for a week. If you don't see a benefit, move on to the next one. If none of them work, try them all at once. If that doesn't work, move on to the difficult-to-introduce ones and repeat the process.
Personally, I would guess that several hours of sustained exercise and a different diet are the primary factors, but that's just a guess.
Comment author:Manfred
17 April 2013 10:07:09AM
*
2 points
[-]
I will make the typical recommendation: cognitive behavioral therapy techniques. Try to notice your emotions and responses, and just sort them into helpful or not helpful. Studies also seem to show that this sort of thing works better when you're talking with a professional.
Michael Huemer explains why he isn't an Objectivist here and this blog is almost nothing but critiques of Rand's doctrines. Also, keep in mind that you are essentially asking for help engaging in motivated cognition. I'm not saying you shouldn't in this case, but don't forget that is what you are doing.
With that said, I enjoyed Atlas Shrugged. The idea that you shouldn't be ashamed for doing something awesome was (for me, at the time I read it) incredibly refreshing.
Comment author:mstevens
18 April 2013 11:14:17AM
2 points
[-]
Quoting from the linked blog:
"Assume that a stranger shouted at you "Broccoli!" Would you have any idea what he meant? You would not. If instead he shouted "I like broccoli" or "I hate broccoli" you would know immediately what he meant. But the word by itself, unless used as an answer to a question (e.g., "What vegetable would you like?"), conveys no meaning"
I don't think that's true? Surely the meaning is an attempt to bring that particular kind of cabbage to my attention, for as yet unexplained reasons.
Comment author:Desrtopa
20 April 2013 09:04:10PM
1 point
[-]
I don't think that's true? Surely the meaning is an attempt to bring that particular kind of cabbage to my attention, for as yet unexplained reasons.
That's a possible interpretation, but I wouldn't say "surely."
Some other possibilities.
The person picked the word apropos of nothing because they think it would be funny to mess with a stranger's head.
It's some kind of in-joke or code word, and they're doing it for the amusement of someone else who's present (or just themselves if they're the sort of person who makes jokes nobody else in the room is going to get.)
If I heard someone shout "Broccoli" at me without context, my first assumption would be that they'd actually said something else and I'd misunderstood.
My own deconversion was prompted by realizing that Rand sucked at psychology. Most of her ideas about how humans should think and behave fail repeatedly and embarrassingly as you try to apply it to your life and the lives of those around you. In this way, the disease gradually cures itself, and you eventually feel like a fool.
It might also help to find a more powerful thing to call yourself, such as Empiricist. Seize onto the impulse that it is not virtuous to adhere to any dogma for its own sake. If part of Objectivism makes sense, and seems to work, great. Otherwise, hold nothing holy.
Comment author:OrphanWilde
16 April 2013 01:15:03PM
3 points
[-]
Laughs I'm an Objectivist by my own accord, but I may be able to help if you find this undesirable.
The shortest - her derivations from her axioms have a lot of implicit and unmentioned axioms thrown in ad-hoc. One problematic case is her defense of property - she implicitly assumes no other mechanism of proper existence for humans is possible. (And her "proper existence" is really slippery.)
This isn't necessarily a rejection - as mentioned, I am an Objectivist - but it is something you need to be aware of and watch out for in her writings. If a conclusion doesn't seem to be quite right or doesn't square with your own conception of ethics, try to figure out what implicit axioms are being slipped in.
Reading Ayn Rand may be the best cure for Randianism, if Objectivism isn't a natural philosophy for you, which by your apparent distress it isn't. (Honestly, though, I'd stay the hell away from most of the critics, who do an absolutely horrible job of attacking the philosophy. They might be able to cure you of Randianism, but largely through misinformation and unsupported emotional appeals, which may just result in an even worse recurrence later.)
Both feature characters with super-human focus / capability (Rearden and Valentine Micheal Smith). And they have totally different effects on societies superficially similar to each other (and to our own).
There's more to say about Rand in particular, but we should probably move to the media thread for that specifically (Or decline to discuss for Politics is the Mindkiller reasons). Suffice it to say that uncertainty about how to treat the elite productive elements in society predates the 1950s and 1960s.
The (libertarian, but not Randian) philosopher Michael Huemer has an essay entitled "Why I'm not an objectivist." It's not perfect, but at least the discussion of Rand's claim that respect for the libertarian rights of others follows from total egoism is good.
What is the smartest group/cluster/sect/activity/clade/clan that is mostly composed of women? Related to the other thread on how to get more women into rationality besides HPMOR.
Ashkenazi dancing groups? Veterinarian College students? Linguistics students? Lilly Allen admirers?
No seriously, name guesses of really smart groups, identity labels etc... that you are nearly certain have more women than men.
Comment author:Dias
17 April 2013 09:35:11PM
4 points
[-]
Bryn Mawr has gone downhill a lot since the top female students got the chance to go to Harvard, Yale, etc. instead of here. Bryn Mawr does have a cognitive bias course (for undergraduates) but the quality of the students is not that high.
Of course, Bryn Mawr does excellently at the only-women part, and might do well overall once we take into account that constraint.
Comment author:knb
16 April 2013 08:45:10AM
*
10 points
[-]
Academic psychologists are mostly female. That would seem to be a pretty good target audience for LW. There are a few other academic areas that are mostly female now, but keep in mind that many academic fields are still mostly male even though most new undergraduates are female in the area.
There are lists online of academic specialty by average GRE scores. Averaging the verbal and quantitative scores, and then determining which majority-female discipline has the highest average would probably get you close to your answer.
Comment author:[deleted]
16 April 2013 07:26:07PM
*
1 point
[+]
(3
children)
Comment author:[deleted]
16 April 2013 07:26:07PM
*
1 point
[-]
but keep in mind that many academic fields are still mostly male even though most new undergraduates are female in the area
Well, keep in mind that 75% of LWers are under 31 anyway, so it's the sex ratio among the younger cohorts you mainly care about, not the sex ratio overall.
Comment author:knb
17 April 2013 01:25:05AM
2 points
[-]
But it isn't the undergrads you're looking for if you want the "smartest mostly female group." Undergrads are less bright on average than advanced degree holders due to various selection effects.
I think we are aiming for "females who can become rationalists" which means that expected smarts are more valuable then real smarts, in particular if the real ones were obtained through decades (implying the person will then be less flexible, since older).
I'm not entirely sure that targeted recruitment of feminists is a good idea. It seems to me like a good way to get LW hijacked into a feminist movement.
Comment author:bogus
16 April 2013 03:21:48PM
*
4 points
[-]
I agree, and would expand this to any politically motivated movement (including libertarians, moldbuggians etc.). After all, this is the main rationale for our norm of not discussing politics on LW itself.
Political movements in general care more about where you are and your usefulness as a soldier for their movement than how you got there. It's something that we are actively trying to avoid.
Comment author:Tenoke
27 April 2013 10:25:50AM
2 points
[-]
I encountered this cute summary of priming findings, thought you guys might like it, too:
You are walking into a room. There is a man sitting behind a table. You sit down across from him. The man sits higher than you, which makes you feel relatively powerless. But he gives you a mug of hot coffee. The warm mug makes you like the man a little more. You warm to him so to speak. He asks you about your relationship with your significant other. You lean on the table. It is wobbly, so you say that your relationship is very stable. You take a sip from the coffee. It is bitter. Now you think the man is a jerk for having asked you about your personal life. Then the man hands you the test. It is attached to a heavy clipboard, which makes you think the test is important. You’re probably not going to do well, because the cover sheet is red. But wait—what a relief!—on the first page is a picture of Einstein! Now you are going to ace the test. If only there wasn’t that lingering smell of the cleaning fluid that was used to sanitize the room. It makes you want to clean the crumbs, which must have been left by a previous test-taker, from the tabletop. You need to focus. Fortunately, there is a ray of sunlight coming through the window. It leaves a bright spot on the floor. At last you can concentrate on the test. The final question of the test asks you to form a sentence that includes the words gray, Florida, bingo, and pension. You leave the room, walking slowly…
Comment author:TimS
25 April 2013 01:12:08AM
2 points
[-]
Amanda Knox and evolutionary psychology - two of LessWrong's favorite topics, together in one news article / opinion piece.
The author explains the anti-Knox reaction as essentially a spandrel of an ev. psych reaction. Money quote:
In our evolutionary past, small groups of hunter-gatherers needed enforcers, individuals who took it upon themselves to punish slackers and transgressors to maintain group cohesion. We evolved this way. As a result, some people are born to be punishers. They are hard-wired for it.
I'm skeptical of the ev. psych because it seems to require a fairly strong form of group selection pressure. But I thought folks might find it interesting.
Comment author:komponisto
25 April 2013 01:54:37AM
*
4 points
[-]
The phenomenon of altruistic punishment itself is apparently not just a matter of speculation. Another quote from Preston's piece:
Experiments show that when some people punish others, the reward part of their brain lights up like a Christmas tree. It turns out we humans avidly engage in something anthropologists call “altruistic punishment.”
He links to this PNAS paper which uses a computer simulation to model the evolution of altruistic punishment. (I haven't looked at it in detail.)
Whatever the explanation for their behavior (and it really cries out for one), the anti-Knoxpeople are truly disturbing, and their existence has taught me some very unpleasant but important lessons about Homo sapiens.
(EDIT: One of them, incidentally, is a mathematician who has written a book about the misuse of mathematics in trials -- one of whose chapters argues, in a highly misleading and even disingenuous manner, that the acquittal of Knox and Sollecito represents such an instance.)
Comment author:TimS
25 April 2013 02:20:48AM
1 point
[-]
Skimming the PNAS paper, it appears that the conclusion is that evolved group co-operation is not mathematically stable without evolved altruistic punishment. I.e. populations with only evolved co-operation drift towards populations without any group focused evolved traits, but altruistic punishment seems to exclude enough defectors that evolved co-operation maintained frequency in the population.
Which makes sense, but I'm nowhere close to qualified to judge the quality of the paper or its implications for evolutionary theory.
Comment author:beoShaffer
23 April 2013 10:31:53PM
2 points
[-]
Buss's Evolutionary Psychology is good if you are specially looking for the evolutionary psychology element not so sure about general evolutionary biology books. Also we have a dedicated textbook thread.
I have noticed an inconsistency between the number of comments actually present on a post and the number declared at the beginning of its comments section, the former often being one less than the latter.
For example, of the seven discussion posts starting at "Pascal's wager" and working back, the "Pascal's wager" post at the moment has 10 comments and says there are 10, but the previous six all show a count one more than the actual number of visible comments. Two of them say there is 1 comment, yet there are no comments and the text "There doesn't seem to be anything here" appears. These are meetup announcements that I would not expect anyone to be posting banworthy comments to.
There is no sign of comments having being deleted or banned, and even if something of the sort is what has happened, I would expect the comment count displayed on a page to agree with the number of accessible comments.
On the Discussion page itself, the comment count displayed for each post agrees with the comment count displayed within the post.
Comment author:pragmatist
22 April 2013 11:01:06AM
3 points
[-]
A short while ago, spam comments in French were posted to a bunch of discussion threads. All of these were deleted. I'm guessing this discrepancy is a consequence of that.
I am aware that there have been several discussions over to what extent x-rationality translates to actual improved outcomes, at least outside of certain very hard problems like metaethics. It seems to me that one of the best ways to translate epistemic rationality directly into actual utility is through financial investment/speculation, and so this would be a good subject for discussion (I assume it probably has been discussed before, but I've read most of this website and cannot remember any in depth-thread about this, except for the mention of markets being at least partially anti-inductive).
Partially the reason for my writing this is that I have been reading about neuroeconomics and doing some academic research of my own (as in actually running experiments), and I am shocked by how near-universal irrational behavior displayed is (and therefore, exploitable by more rational agents). Even professional traders behavior is swayed by things like fluctuating testosterone levels. (Not that I know how to compensate for this!)
On a related note I've also been thinking about:
1) Applications for machine learning/narrow AI to finance.
2) Economic irrationality invalidating the libertarian free-market ideas, and possibly libertarianism in general, seeing as personal decisions can often be conceptualized economically. (I should point out that libertarianism used to appeal to me, and I find this line of reasoning mildly disturbing)
3) Gender relations, and the possibility that men are on average better at maths then women has been discussed here, and so discussion of the possibility that women are generally better at finance (see link above) could be beneficial, both in the context of pointing out opportunities to female rationalists, and to help dispel any appearance of misogyny that this community may have.
Again, I can't remember these being discussed here, and (1) seems very relevant to this community, although (2) is probably mind-killing and not very productive, unless any of us actually have the power to influence politics.
Apologies if this all has been already discussed in-depth somewhere.
Comment author:Metus
18 April 2013 02:06:19PM
2 points
[-]
Toying around with the Kelly criterion I get that the amount I should spend on insurance increases with my income though my intuition says that the higher your income is the less you should insure. Can someone less confused about the Kelly criterion provide some kind of calculation?
For anyone asking, I wondered if, given income and savings rate how much should be invested in bonds, stocks, etc. and how much should be put into insurance, e.g. health, fire, car, etc. from a purely monetary perspective.
The Kelly criterion returns a fraction of your bankroll; it follows that for any (positive-expected-value) bet whatsoever, it will advise you to increase your bet linearly in your income. Could this be the problem, or have you already taken that into account?
That aside, I'm slightly confused about how you can use the Kelly criterion in this case. Insurance must necessarily have negative expected value for the buyer, or the insurer makes no profit. So Kelly should be advising you not to buy any. How are you setting up the problem?
Comment author:Metus
20 April 2013 10:28:20AM
*
2 points
[-]
The Kelly criterion returns a fraction of your bankroll; it follows that for any (positive-expected-value) bet whatsoever, it will advise you to increase your bet linearly in your income. Could this be the problem, or have you already taken that into account?
Well that is exactly the point. It confuses me that the richer I am the more insurance I should buy, though the richer I am the more I am able to compensate the risk in not buying any insurance.
That aside, I'm slightly confused about how you can use the Kelly criterion in this case. Insurance must necessarily have negative expected value for the buyer, or the insurer makes no profit.
Yes and no. The insurer makes only a profit if the total cost of insurance is lower than the expected value of the case with no insurance. What you pay the insurer for is that the insurer takes on a risk you yourself are not able to survive (financially), that is catastrophically high costs of medical procedures, liabilities or similar. It is easily possible for the average Joe to foot the bill if he breaks a $5 mug but it would be catastrophic for him if he runs into an oil tank and has to foot the $10,000,000 bill to clean up the environment. (This example is not made up but actually happened around here.)
It is here where my intuition says that the richer you are, the less insurance you need. I could also argue that if it was the other way around, that you should insure more the richer you are, insurance couldn't exist, seeing as the insurer is the one who should buy insurance from the poor!
You can use the Kelly criterion in any case, either negative or positive expected value. In the case of negative value it just tells you to take the other side of the bet or to pay to avoid the bet. The latter is exactly what insurance is.
So Kelly should be advising you not to buy any. How are you setting up the problem?
I model insurance from the point of view of the buyer. In any given time frame, I can avoid the insurance case with probability q, saving the cost of insurance b. Or I could lose and have to pay a with the probability p = 1-q. This is the case of not buying insurance, though it is available. So if f = p/a - q/a is negative I should insure, if f is positive, I should take the risk. This follows my intuition insofar that catastrophic but improbable risk (very high a, very low p) should be insured but not probable and cheap liabilities (high p, low a).
The trick is now that f is actually the fraction of my bankroll I have to invest. So the richer I am the more I should insure absolutely but my intuition says I should by less insurance. I know I have ignored something fundamental in my model. Is it the cost of insurance? Is it some hidden assumption in the formulation of the Kelly criterion as applied to bets? Did I accidentally assume that someone knows something the other party doesn't? Did I ignore fixed costs? This eats me up.
Edit: Maybe the results have to be interpreted differently? Of course if I don't pay the insurance, Kelly still says to invest the money somehow, maybe in having a small amount always at hand as a form of personally organized insurance. Intuition again says that this pool should grow with my wealth, effectively increasing the amount of insurance I buy, though not from an insurer but in opportunity cost.
I know I have ignored something fundamental in my model.
The Kelly formula assumes that you can bet any amount you like, but there are only so many things worth insuring against. Once those are covered, there is no opportunity to spend more, even if you're still below what the formula says.
In addition, what is a catastrophic loss, hence worth insuring against, varies with wealth. If the risks that you actually face scale linearly with your wealth, then so should your expenditure on insurance. But if having ten times the wealth, your taste were only to live in twice as expensive a house, drive twice as expensive a car, etc. then this will not be the case. You will run out of insurance opportunities even faster than when you were poorer. At the Jobs or Gates level of wealth, there are essentially no insurable catastrophes. Anything big enough to wipe out your fortune would also wipe out the insurance company.
Comment author:aleksiL
20 April 2013 04:51:54PM
*
1 point
[-]
You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.
Every day you're betting that your house won't burn down today. You're very likely to win but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day and how much will you make when it does? That's what you need to apply the Kelly criterion to.
Comment author:lsparrish
16 April 2013 08:54:23PM
2 points
[-]
I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.
Comment author:lsparrish
17 April 2013 07:25:31PM
*
2 points
[-]
Ok... Well... If that's the case, and if you can tell me why you feel that way, I might have a response that would modify your preference. Then again, your reasoning might modify my own preference. Cryptic non-argument isn't particularly interesting, or helpful for coming to an Aumann Agreement.
1) I am not at all convinced that investing in bitcoins is positive expected value, 2) they seem high-variance and I'm wary about increasing the variance of my money too much, 3) I am not a domain expert in finance and would strongly prefer to learn more about finance in general before making investment decisions of any kind, and 4) your initial comment rubbed me the wrong way because it took as a standing assumption that bitcoins are obviously a sensible investment and didn't take into account the possibility that this isn't a universally shared opinion. (Your initial follow-up comment read to me like "okay, then you're obviously an idiot," and that also rubbed me the wrong way.)
If the bitcoin situation is so clear to you, I would appreciate a Discussion post making the case for bitcoin investment in more detail.
Comment author:Kaj_Sotala
18 April 2013 05:01:45AM
*
4 points
[-]
The standard advice is that normal people should never try to beat the market by picking any single investment, but rather put their money in index funds. The best publicly available information is already considered to be reflected in the current prices: if you recommend in buying a particular investment, that implies that you have knowledge that the best traders currently on the market do not have. As a friend commented:
The only rational reasons to hold a highly volatile, speculative investment are either if you have a huge risk preference (and with bitcoin we're talking about crack users) or if it's a really small share of your investments, of which the majority are really low-risk investments.
So if you think that people should be buying Bitcoins, it's up to you to explain why the standard wisdom on investment is wrong in this case.
(For what it's worth, personally I do own Bitcoins, but I view it as a form of geek gambling, not investment. It's fun watching your coins lose 60% in value and go up 40% from that, all within a matter of a few days.)
Bitcoins are more like investing in a startup. The plausible scenarios to bitcoins netting you a return commensurate with the risk involve it disrupting several 100 billion+ markets (paypal, western union). I think investing in startups that have plausible paths towards such disruptions are worthy of a small portion of your portfolio.
Comment author:[deleted]
16 April 2013 07:59:55PM
2 points
[-]
This has most likely been mentioned in various places, but is it possible to make new meetup posts (via the "Add new meetup" button) to only show up under "Nearest Meetups", and not be in Discussion? Also, renaming the link to "Upcoming Meetups" to match the title on that page, and listing more than two - perhaps a rolling schedule of the next 7 days.
Comment author:latanius
16 April 2013 01:37:21AM
*
2 points
[-]
Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.
... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)
Comment author:asparisi
15 April 2013 11:46:33PM
2 points
[-]
Is there anyone going to the April CFAR Workshop that could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that seperately.)
Comment author:kenzi
16 April 2013 01:36:39AM
10 points
[-]
Hey; we (CFAR) are actually going to be running a shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.
Edit: We reached our deadline on May 1st. Site is live.
Some of you may recall the previous announcement of the blog. I envisioned it as a site that discusses right wing ideas. Sanity but not value checking them. Steelmanning both the ideas themselves and the counterarguments. Most of the authors should be sympathetic to them, but a competent loyal opposition should be sought out. In sum a kind of inversion of the LessWrong demographics (see Alternative Politics Question). Outreach will not be a priority, mutual aid on an epistemically tricky path of knowledge seeking is.
The current core group working on making the site a reality consists of me, ErikM, Athrelon, KarmaKaiser and MichaelAnissimov and Abudhabi. As we approach launch time I've just sent out an email update to other contributors and those who haven't yet contributed but have contacted me. If you are interested in the hard to discuss subjects or the politics and want to join as a coauthor or approved commenter (we are seeking more) send me a PM with an email adress or comment here.
Comment author:bogus
26 April 2013 10:41:11AM
*
5 points
[-]
This is a great idea. We should create rationalist blogs for other political factions too, such as progressivism, feminism, anarchism, green politics and others. Such efforts could bring our programme of "raising the sanity waterline" to the public policy sphere -- and this might even lay some of the groundwork for eventually relaxing the "no politics at LW" rule.
I don't expect LessWrong itself to become a good venue to discuss politics. I do think LessWrong could keep its spot at the center of a "rationalist" blogosphere that may be slowly growing. Discussions between different value systems part of it might actually be worth following! And I do think nearly all political factions within such a blogosphere would find benefits in keeping their norms as sanity friendly as possible.
Comment author:MugaSofer
28 April 2013 09:13:44PM
1 point
[-]
I hold more liberal than conservative beliefs, but I'm increasingly reluctant to identify with any position on the left-right "spectrum". I definitely hold or could convincingly steelman lots of beliefs associated with "conservativism", especially if you include criticism of "liberal" positions. Would this be included in the sort of demographic you're seeking?
Comment author:shminux
26 April 2013 04:34:07PM
*
-5 points
[-]
Having read Yvain's excellent steelmanning and subsequent critique of conservatism on his blog, I wonder what else can be usefully said about the subject.
EDIT: changed wording a bit. Hopefully someone will reply, not just silently downvote.
Comment author:MugaSofer
28 April 2013 09:09:20PM
2 points
[-]
Yup, no way there could be anything more to say on the subject of a huge and varied group of ideologies.</sarcasm>
More seriously, what about, y'know, counterarguments? Steelmanning is all very well, but this would involve steelmanning by people who actually ascribe to conservative positions.
Comment author:Viliam_Bur
17 April 2013 07:00:03PM
3 points
[-]
Sometimes, success is the first step towards a specific kind of failure.
I heard that the most difficult moment for a company is the moment it starts making decent money. Until then, the partners shared a common dream and worked together against the rest of the world. Suddenly, the profit is getting close to one million, and each partner becomes aware that he made the most important contributions, while the others did less critical things which technically could be done by employees, so having to share the whole million with them equally is completely stupid. At this moment the company often falls apart.
When a group of people becomes very successful, fighting against other people within the group can bring higher profit than cooperating against the environment. It is like playing a variant of a Prisonner's Dilemma where the game ends at the first defection and the rewards for defection are growing each turn. It's only semi-iterated; if you cooperate, you can continue to cooperate in the next turn, but if you manage to defect successfully, there may be no revenge, because the other person will be out.
Will something like this happen to the rationalist community one day (assuming the Singularity will not happen soon)? At this moment, there are small islands of sanity in the vast oceans of irrationality. But what if some day LW-style rationality becomes popular? What are the risks of success analogical to a successful company falling apart?
I can imagine that many charismatic leaders will try to become known as the most rational individual on the planet. (If rationality becomes 1000× more popular than it is today, imagine the possible temptations: people sending you millions of dollars to support your mission, hundreds of willing attractive poly partners, millions of fans...) There will be honest competition, which is good, but there will also be backstabbing. Some groups will experiment with mixing 99% rationality and 1% applause lights (or maybe 90% rationality and 10% applause lights), where "applause lights" will be different for different groups; it could be religion, marxism, feminism, libertarianism, racism, whatever. Or perhaps just removing the controversial parts, starting with many-worlds interpretation. Groups which optimize for popularity could spread faster; the question is how quickly would they diverge from rationality.
Do you think an outcome like this is likely? Do you think it is good or bad? (Maybe it is better to have million people with 90% of rationality, than only a thousand with 99% of rationality.) When will it happen? How could we prevent it?
Comment author:OrphanWilde
17 April 2013 09:16:50PM
3 points
[-]
True. It's harder to fake rationality than it is to fake the things that matter today, however (say, piety). And given that the sanity waterline has increased enough that "rational" is one of the most desirable traits for somebody to have, fake signaling should be much harder to execute. (Somebody who views rationality as such a positive trait is likely to be trying to hone their own rationality skills, after all, and should be harder to fool than the same person without any such respect for rationality or desire to improve their own.)
Here I assume that the popularity of the word "rationality" will come before there are millions of x-rationalists to provide feedback against wannabe rationalists. It would be enough if some political movement decided to use this word as their applause light.
Comment author:Viliam_Bur
18 April 2013 12:07:59PM
5 points
[-]
The community here is heavily centered around Eliezer. I guess if someone started promoting some kind of fake rationality here, sooner or later they would get into conflict with Eliezer, and then most likely lose the support of the community.
For another wannabe rationalist guru it would be better to start their own website, not interact with people on LW, but start recruiting somewhere else, until they have greater user base than LW. At the moment their users notice LW, all they have to do is: 1) publish a few articles about cults and mindkilling, to prime their readers, and 2) publish a critique of LW with hyperlinks to all currently existing critical sources. The proper framing would be that LW is a fringe group which uses "rationality" as applause lights, but fails horribly (insert a lot of quotations and hyperlinks here), and discussing them is really low-status.
It would help if the new rationalist website had a more professional design, and emphasised its compatibility with mainstream science, e.g. by linking to high-status scientific institutions, and sometimes writing completely uncontroversial articles about what those institutions do. In other words, the new website should be optimized to get 100% approval of the RationalWiki community. (For someone trying to do this, becoming a trusted member of RationalWiki community could be a good starting point.)
I'm busy having pretty much every function of RW come my way, in a Ponder Stibbons-like manner, so if you can tell me where the money is in this I'll see what I can come up with. (So far I've started a blog with no ads. This may not be the way to fame and fortune.)
Comment author:gwern
27 April 2013 09:18:27PM
*
0 points
[-]
The money or lack thereof doesn't matter, since RW is obviously not an implementation of Villam's proposed strategy: it fails on the ugliness with its stock MediaWiki appearance, has too broad a remit, and like El Reg it shoots itself in the foot with its oh-so-hilarious-not! sense of humor (I dislike reading it even on pages completely unrelated to LW). It may be successful in its niche, but its niche is essentially the same niche as /r/atheism or Richard Dawkins - mockery of the enemy leavened with some facts and references.
If - purely hypothetically speaking here, of course - one wished to discredit LW by making the respective RW article as negative as possible, I would expect it to do real damage. But not be any sort of fatal takedown that set a mainstream tone or gave a general population its marching orders, along the lines of Shermer's 'cryonics is a scam because frozen strawberries' or Gould's Mismeasure of Man's 'IQ is racist, involved researchers like Merton faked the data because they are racist, and it caused the Holocaust too'.
Comment author:lukeprog
25 April 2013 02:13:19AM
*
2 points
[-]
In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:
There are some in the AI research community who believe that logic is (to put it crudely) the work of the devil, and that the effort devoted to such problems as logical knowledge representation and theorem proving over the years has been, at best, a waste of time. At least a brief justification for the use of logic therefore seems necessary.
First, by fixing on a structured, well-defined artificial language (as opposed to unstructured, ill-defined natural language), it is possible to investigate the question of what can be expressed in a rigorous, mathematical way (see, for example, Emerson and Halpern [50], where the expressive power of a number of temporal logics are compared formally). Another major advantage is that any ambiguity can be removed (see, e.g., proofs of the unique readability of propositional logic and first-order predicate logic [52, pp.39-43]).
Transparency is another advantage: "By expressing the properties of agents, and multiagent systems as logical axioms and theorems in a language with clear semantics, the focal points of (the theory) are explicit. The theory is transparent; properties, interrelationships, and inferences are open to examination. This contrasts with the use of computer code, which requires implementational and control aspects within which the issues to be tested can often become confused." [68, p.88]
Finally, by adopting a logic-based approach, one makes available all the results and techniques of what is arguably the oldest, richest, most fundamental, and best-established branch of mathematics.
Comment author:shminux
22 April 2013 07:25:11PM
*
2 points
[-]
I've always felt that Atlas Shrugged was mostly an annoying ad nauseum attack on the same strawman over and over, but given the recent critique of Google, Amazon and others working to minimize their tax payments, I may have underestimated human idiocy:
the Public Accounts Committee, whose general verdict was that while companies weren't doing anything legally wrong when they shifted profits around the world to lower their total tax bill, the practice was "immoral".
On the other hand, these are people wearing their MP hats, they probably sing a different tune as board members. Or maybe Britain is overdue for another Thatcher.
North Korea is threatening to start a nuclear war. The rest of the world seems to be dismissing this threat, claiming it's being done for domestic political reasons. It's true that North Korea has in the past made what have turned out to be false threats, and the North Korean leadership would almost certainly be made much worse off if they started an all out war.
But imagine that North Korea does launch a first strike nuclear attack, and later investigations reveal that the North Korean leadership truly believed that it was about to be attacked and so made the threats in an attempt to get the U.S. to take a less aggressive posture. Wouldn't future historians (perhaps suffering from hindsight bias) judge us to be idiots for ignoring clear and repeated threats from a nuclear-armed government that appeared crazy (map doesn't match territory) and obsessed with war.
Comment author:gwern
15 April 2013 10:33:43PM
2 points
[-]
Wouldn't future historians (perhaps suffering from hindsight bias) judge us to be idiots for ignoring clear and repeated threats from a nuclear-armed government that appeared crazy (map doesn't match territory) and obsessed with war.
Why do we care what they think, and can you name previous examples of this?
As someone who studies lots of history while often thinking, "how could they have been this stupid didn't they know what would happen?", I thought it useful to frame the question this way.
Hitler's professed intentions were not taken seriously by many.
Comment author:gwern
15 April 2013 11:14:29PM
16 points
[-]
Hitler's professed intentions were not taken seriously by many.
Taken seriously... when? Back when he was a crazy failed artist imprisoned after a beer hall putsch, sure; up to the mid-1930s people took him seriously but were more interested in accommodationism. After he took Austria, I imagine pretty much everyone started taking him seriously, with Chamberlain conceding Czechoslovakia but then deciding to go to war if Poland was invaded (hardly a decision to make if you didn't take the possibilities seriously). Which it then was. And after that...
If we were to analogize North Korea to Hitler's career, we're not at the conquest of France, or Poland, or Czechoslovakia; we're at maybe breaking treaties & remilitarizing the Rhineland in 1936 (Un claiming to abandon the cease-fire and closing down Kaesŏng).
One thing that hopefully the future historians will notice is that when North Korea attacks, it doesn't give warnings. There were no warnings or buildups of tension or propaganda crescendos before bombing & hijacking & kidnapping of Korean airliners, the DMZ ax murders, the commando assault on the Blue House, the sinking of the Cheonan, kidnapping Korean or Japanese citizens over the decades, bombing the SK president & cabinet in Burma, shelling Yeonpyeong, the attempted assassination of Park Sang-hak... you know, all the stuff North Korea has done before.
To the extent that history can be a guide, the propaganda war and threats ought to make us less worried about there being any attack. When NK beats the war drums, it want talks and concessions; when it is silent, then that is when it attacks. Hence, war drums are comforting and silence worrisome.
Certainly the consequences of us being wrong are bad, but that isn't necessarily enough to outweigh the presumably low prior probability that we're wrong. (I'm not taking a stance on how low this probability is because I don't know enough about the situation.) Presumably people also feel like there are game-theoretic reasons not to respond to such threats.
Comment author:mstevens
16 April 2013 01:36:08PM
2 points
[-]
Not that I've seen. It'd be cool though. I think maybe you can see traces in people like Peter Watts, but if you take HPMOR as the defining example, I can't think of anything.
Comment author:TimS
16 April 2013 01:53:57PM
1 point
[-]
I'm always found Stross (and to a lesser extent, Scalzi) to be fairly rationalist - in the sense that I don't see anyone holding the idiot ball all that frequently. People do stupid things, but they tend not to miss the obvious ways of implementing their preferences.
Comment author:gwern
24 April 2013 11:09:50PM
7 points
[-]
All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.
I'm not sure that's true. When I looked in the 2012 survey, I didn't see any striking gender disparity based on MoR: http://lesswrong.com/lw/fp5/2012_survey_results/8bms - something like 31% of the women found LW via MoR vs 21% of the men, but there are just not that many women in the survey...
That does not factor the main point " that would not otherwise have become rationalist"
There are loads of women out there on a certain road into rationalism. Those don't matter. By definition, they will become rationalists anyway.
There are large numbers who could, and we don't know how large, or how else they could, except HPMOR
Comment author:TimS
25 April 2013 01:16:21AM
2 points
[-]
Leaving aside gwern's rudeness, he is right - if MoR doesn't entice more women towards rationality than the average intervention, and your goal is to change the current gender imbalance among LW-rationalists, then MoR is not a good investment for your attention or time.
Comment author:diegocaleiro
25 April 2013 12:27:05AM
*
-1 points
[-]
It is not a claim, it is an assumption that the reader ought to take for granted, not verify. If I thought there were reliable large N data of a double blind on the subject, I'd simply have linked the stats. As I know there are not, I said something based on personal experience (as one should) and asked for advice on how to improve the world, if the world turns out to correlate with my experience of it.
Your response reminds me of Russell's joke about those who believe that "all murderers have been caught, since all muderers we know have been caught"...
The point is to find attractors, not to reject the stats.
In a few places — possibly here! — I've recently seen people refer to governments as being agents, in an economic or optimizing sense. But when I reflect on the idea that humans are only kinda-sorta agents, it seems obvious to me that organizations generally are not. (And governments are a sort of organization.)
People often refer to governments, political parties, charities, or corporations as having goals ... and even as having specific goals which are written down here in this constitution, party platform, or mission statement. They express dismay and outrage when these organizations act in ways that contradict or ignore those stated goals.
Does this really make sense?
It seems to me that just as the art or science of acting like you have goals is "instrumental rationality", it may be that the art or science of causing organizations to act like they have goals is called "management".
Comment author:Omid
07 July 2013 05:19:20PM
1 point
[-]
Who is the best pro-feminist blogger still active? In the past I enjoyed reading Ozy Frantz, Clarisse Thorn, Julia Wise and Yvain, but none of them post regularly anymore. Who's left?
Comment author:khafra
26 April 2013 07:01:43PM
1 point
[-]
Are you a guy that wants more social interaction? Do you wish you could get complimented on your appearance?
Grow a beard! For some reason, it seems to be socially acceptable to compliment guys on a full, >1", neatly trimmed beard. I've gotten compliments on mine from both men and women, although requests to touch it come mostly from the latter (but aren't always sexual--women with no sexual attraction to men also like it). Getting the compliments pretty much invariably improves my mood; so I highly recommend it if you have the follicular support.
Comment author:lukeprog
25 April 2013 08:31:17PM
1 point
[-]
I wrote something on Facebook recently that may interest people, so I'll cross-post it here.
Cem Sertoglu of Earlybird Venture Capital asked me: "will traders be able to look at their algorithms, and adjust them to prevent what happened yesterday from recurring?
My reply was:
I wouldn't be surprised if traders will be able to update their algorithms so that this particular problem doesn't re-occur, but traders have very little incentive to write their algorithms such that those algorithms would be significantly more robust in general. The approaches they use now are intrinsically not as transparent as (e.g.) logic-based approaches to software agent design, but they are more immediately profitable than logic-based approaches.
Wall Street has tried before to update its systems to be more robust, but their "band-aids" approach won't be sufficient. For example: in response to the flash crash of 2010, regulators installed a kind of "circuit breaker" that halts trading when there are extreme changes in a stock's price. Unfortunately, this did not prevent high-frequency trading programs from disrupting markets again on August 1st, 2012, in part because the circuit breaker wasn't also programmed to halt trading if there were extreme changes in the number of shares being traded (see: http://is.gd/vBqf53).
We can design multi-agent ecosystems using only logic-based agents that are (in some cases) subject to "formal verification" (mathematical proof of correct operation). See, for example, http://is.gd/XgRJYn. But these approaches haven't seen nearly as much development as the approaches currently in use on Wall Street, because they are not as immediately profitable.
Only regulators could have sufficient incentive to implement a more trustworthy ecosystem of high-frequency trading programs, but they succumbed to regulatory capture long ago, and therefore won't do anything so drastic.
I'm not too worried about the next 5 years, though. Mostly it will just be momentary scares, like the flash crash and the recent fake tweet disruption. I'm more worried about the far more powerful autonomous programs of the future, and those programs are the focus of our research at MIRI.
Comment author:CAE_Jones
25 April 2013 11:47:43AM
1 point
[-]
Considering making my livejournal into something resembling the rationality diaries (I'd keep the horrible rambling/stupid posts for honesty/archival purposes). I can't tell if this is a good idea or not; the probability that it'd end like everything else I do (quietly stewing where only I bother going) seems absurdly high. On the other hand, trying to draw this kind of attention to it and adding structure would probably help spawn success spirals. Perhaps I should try posting on a schedule (Sunday/Tuesday/Thursday seems good, since weekends typically suck and probably will motivate me to post, but holding off on that until Monday could keep me in a negative mindset that could delay rebounding). I suppose I'll have an answer (to the question that no one asked) by Sunday, then, unless someone convinces me one way or the other before then.
Comment author:Document
24 April 2013 11:07:35PM
*
1 point
[-]
I started browsing under Google Chrome for Android on a tablet recently. Since there's no tablet equivalent of mouse hovering, to see where a link points without opening it I have to press and hold on it. For off-site links in posts and comments, though, LW passes them through api.viglink.com, so I can't see the original URL through press-and-hold. Is there a way to turn that function off, or an Android-compatible browser plugin to reverse it?
Would you like to gain experience in non-profit operations by working for the Centre for Effective Altruism, a young and rapidly expanding charity based in Oxford? If so, we encourage you to apply to join our Graduate Volunteer Scheme as Finance and Fundraising Manager
Comment author:ahbwramc
23 April 2013 03:05:35PM
2 points
[-]
No, I didn't delete it. It went down to -3 karma, which apparently hides it on the discussion page. That's how I'm assuming it works anyway, given that it reappeared as soon as it went back up to -2. Incidentally, it now seems to be attracting random cold fusion "enthusiasts" from the greater internet, which was not my intention.
Comment author:TimS
23 April 2013 03:33:16PM
4 points
[-]
The hide / not hide can be set individually by clicking Preferences next to one's name. I think you are seeing the result for the default settings - I changed mine a while ago and don't remember what the default is.
Is there anyway to see authors classified by h-index? Google scholar seems not to have that functionality. And online lists only exist of some topics...
Lewis Dennett and Pinker for instance have nearly the same h-index.
Ed Witten's is much larger than Stephen Hawkings..... etc........
If you know where to find listings of top h-indexes, please let me know!
Comments (459)
I know this comes up from time to time, but how soon until we split into more subreddits? Discussion is a bit of firehose lately, and has changed drastically from its earlier role as a place to clean up your post and get it ready for main. We get all kinds of meetup stuff, philosophical issues, and so forth which mostly lack relevance to me. Not knocking the topics (they are valuable to the people they serve) but it isn't helpful for me.
Mostly I am interested in scientific/technological stuff, especially if it is fairly speculative and in need of advocacy. Cryonics, satellite-based computing, cryptocurrency, open source software. Assessing probability and/or optimal development paths with statistics and clean epistemology is great, but I'm not super enthused about probability theory or philosophy for its own sake.
Simply having more threads in the techno-transhumanist category could increase the level of fun for me. But there also needs to be more of a space for long-term discussions. Initial reactions often aren't as useful as considered reactions a few days later. When they get bumped off the list in only a few days, that makes it harder to come back with considered responses, and it makes for fewer considered counter-responses. Ultimately the discussion is shallower as a result.
Also, the recent comments bar on the right is less immediately useful because you have to click to the Recent Comments page and scroll back to see anything more than a few hours in the past.
I guess instead of complaining publicly, it would be better to send a private message to a person who can do something about it, preferably with a specific suggestion, and a link to a discussion which proves that many people want it.
Long-term threads separately seems to be a very popular idea... there were even some polls in the past to prove it.
MIRI's strategy for 2013 involves more strongly focusing on math research, which I think is probably the right move, even though it leaves them with less use for me. (Math isn't my weakest suit, but not my strongest, either.)
How much difference can nootropics make to one's studying performance / habits? The problems are with motivation (the impulse to learn useful stuff winning out over the impulse to waste your time) and concentration (not losing interest / closing the book as soon as the first equation appears -- or, to be more clear, as soon as I anticipate a difficult task laying ahead). There are no other factors (to my knowledge) that have a negative impact on my studying habits.
Or, to put it differently: if a defective motivational system is the only thing standing between me and success, can I turn into an uber-nerd that studies 10 h/day by popping the right pills?
EDIT: Never messed with my neurochemistry before. Not depressed, not hyperactive... not ruling out some ADD though. My sleep "schedule" is messed up beyond belief; in truth, I don't think I've even tried to sleep like a normal person since childhood. Externally imposed schedules always result in chronic sleep deprivation; I habitually push myself to stay awake till a later hour than I had gone to sleep at the previous night (/morning/afternoon) -- all of this meaning, I don't trust myself to further mess with my sleeping habits. Of what I've read so far, selegiline seems closest to the effects I'm looking for, but then again all I know about nootropics I've learned in the past 6 hours. I can't guarantee I can find most substances in my country.
... Bad or insufficient sleep can cause catastrophic levels of akrasia. Fix that, then if you still have trouble, consider other options. Results should be apparent in days, so it is not a very hard experiment to carry out - set alarms on your phone or something for when to go to bed, and make your bedroom actually dark (this causes deeper sleep) you should get more done overall because you will waste less of your waking hours.
I agree with ThrustVectoring that you'll probably get more mileage out of implementing something like a GTD system (or at least that doing this will be cheaper and seems like it would complement any additional mileage you get out of nootropics). There are lots of easy behavioral / motivational hacks you can use before you start messing with your neurochemistry, e.g. rewarding your inner pigeon.
I've had some success recently with Beeminding my Pomodoros. It forces me to maintain a minimal level of work per unit time (e.g. recently I was at the MIRI workshop, and even though ordinarily I would have been able to justify not doing anything else during that week I still spent 25 minutes every day working on problem sets for grad school classes) which I'm about to increase.
Tried. Failed. Everything that requires me, in my current state, to police myself, fails miserably. It's like my guardian demon keeps whispering in my ear, "hey... who's to stop me from breaking the same rules that I have set for myself?" -- cue yet another day wasted.
Eat candy every time I clear an item off my to-do list? Eat candy even when I don't!
Pomodoros? Y-yeah, let's stop this timer now, shall we -- I've just got this sudden imperious urge to play a certain videogame, 10 minutes into my Pomodoro session...
Schedule says "do 7 physics problems"? Strike that, write underneath "browse 4chan for 7 hours"!
... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.
I could have easily written this exact same post two years ago. I used to be incredibly akratic. For example, at one point in high school I concluded that I was simply incapable of doing any schoolwork at home. I started a sort of anti-system where I would do all the homework and studying I could during my free period the day it was due, and simply not do the rest. This was my "solution" to procrastination.
Starting in January, however, I made a very conscious effort to combat akrasia in my life. I made slow, frustrating progress until about a week and a half ago where something "clicked" and now I spend probably 80% of my free time working on personal projects (and enjoying it). I know, I know, this could very easily be a temporary peak, but I have very high hopes for continuing to improve.
So, keep your head up, I guess.
I think on LessWrong, quick simple "tricks" like Pomodoro / feeding yourself candy / working in the same room as someone else / disabling Chrome are way, way, over emphasized. (The only trick I use is writing down my impulses e.g. "check reddit" before indulging in them.) What actually helped/helps me is introspection. Try to figure out what is it about working that's so unpleasant. Why does your brain resist it so much? Luke's algorithm for beating procrastination is something along the lines of what I'm talking about. I think a lot of people have a "use willpower in order to fight through the pain" mentality, but I think what you really want to do is eliminate the pain. If work is torture for you, then I don't really think you can ever be productive unless you change that fact.
From books that I've read and my own experience, it seems to me that one of the easiest traps to fall into (and one of the most fatal) is tying your productivity to your sense of self-worth, especially if you use use your self-worth to motivate yourself ("If I can complete this assignment, I'll be like who my dad wanted me to be!"), especially if you use your self-worth to negatively motivate yourself ("If I don't pass this test, I'll basically be a failure in life"), especially if you actively foster this attitude in order to push yourself, and especially if you suffer or have recently suffered from depression or low self-esteem.
I can say more, but I don't want to waste my time typing it all out if nobody's going to read it, so just reply to this post if you want me to share more of my experiences. (That goes for anyone reading this, not just the OP).
Please do go on; I'd be very much interested in what you have to say.
Okay.
To be honest, it's really hard to say exactly what lead to my change in willpower/productivity. Now that I actually try to write down concrete things I do that I didn't do two months ago, it's hard, and my probability that my recent success is a fluke has gone up a little.
I feel like what happened is that after reading a few self-help books and thinking a lot about the problem, I ended up completely changing the way I think about working in a difficult-to-describe way. It's kind of like how when I first found LessWrong, read through all the sequences, and did some musings on my own, I completely changed the way I form beliefs. Now I say to myself stuff like "How would the world look differently if x were true?" and "Of all the people who believe x will happen to them, how many are correct?", even without consciously thinking about it. Perhaps more importantly, I also stopped thinking certain thoughts, like "all the evidence might point to x, but it's morally right to believe y, so I believe y", etc.
Similarly, now, I now have a bunch of mental habits related to getting myself to work harder and snap out of pessimistic mindstates, but since I wasn't handed them all in one nicely arranged body of information like I was with LessWrong, and had to instead draw from this source and that source and make my own inferences, I find it really hard to think in concrete terms about my new mental habits. Writing down these habits and making them explicit is one of my goals, and if I end up doing that, I'll probably post it somewhere here. But until then, what I can do is point you in the direction of what I read, and outline a few of what I think are the biggest things that helped me.
The material I read was
Out of all of these, I most recommend Succeed and Switch. PJ Eby is a weird example because he is One Of Us, but he has no credentials, the book is actually unfinished, and he now admits on his website that writing it was one of the worst periods in his life and he was procrastinating every day. So it makes sense to be very skeptical. However, I actually really enjoyed Thinking Things Done and I think that it's probably the best book out of all of these to get you into the "mind hacking" mindset that I attributed my success to, even if its contents aren't literally true. So you can make your own decision on that. Feeling Good isn't a productivity book at all, but I found it really helpful in dealing with akrasia for reasons that I'll sort of explain later. I wouldn't bother to read the Procrastination Equation because there's a summary by lukeprog on this site that basically says everything the book says. And Getting Things Done just describes an organizational system that seems tailored for very busy white collar professionals, so if that doesn't describe you I don't think it's worth it.
Obviously if your akrasia extends to reading these books then this isn't very helpful, but perhaps you could make it your goal to read just one of them (I recommend Succeed) over a period of two months or so. I think this would go a long way.
And then here are the things that most helped me, and can actually be written down at this time. I have the impression that there isn't a singular "key to success" - instead, success requires a whole bunch of attributes to all be in place, and most people have many but not all. So the insights that you need might be very different than the ones I needed, but perhaps not.
1: Not tying my self-worth to my success
The thesis of PJ Eby's Thinking Things Done is that the main reason why people are unsuccessful is that they use negative motivation ("if I don't do x, some negative y will happen") as opposed to positive motivation ("if i do x, some positive y will happen"). He has the following evo-psych explanation for this: in the ancestral environment, personal failure meant that you could possibly be kicked out of your tribe, which would be fatal, and animals have a freezing response to imminent death, so if you are fearing failure you will freeze up.
In Succeed, Heidi Halverson portrays positive motivation and negative motivation as having pros and cons, but has her own dichotomy of unhealthy motivation and healthy motivation: "Be good" motivation, which is tied to identity and status and focuses on proving oneself and high levels of performance, and "get better" motivation, which is what it sounds like. According to her and several empirical studies, "get better" is better than "be good" in almost every way.
In Feeling Good, David Burns describes a tendency of behavior he calls "do-nothingism" where depressed people will lie in bed all day, then feel terrible for doing so, leading them to keep lying in bed, leading them to feel even worse, etc. etc.
It seems like a pretty intuitive for a depressed, lazy person to motivate themselves by saying "Okay, self, gotta stop being lazy. Do you want to be a worthless, lazy failure in life? No you don't. So get moving!" But it seems like synthesizing these three pieces of information informs us that this is basically the worst thing you can possibly do. I definitely fell into this trap, and climbing out of it was probably one of the biggest things that helped me.
2: Being realistic
I feel like something a lot of people tend to do is tell themselves "From this day now on, I'll be perfect!" and then try to spend six hours a day working on personal projects, along with doing 100 push ups and meditating. This is obviously stupid, but for some reason at least for me was a really hard trap to get out of.
For example, I've always been a person who is really easily inspired i.e. if I read a good book, I'll want to write a book, if I listen to a good rap album, I'll want to become a rapper. Due to this tendency, I've done a fair bit of exploration in visual art, music, and video game programming. When I initially attempted my akrasia intervention, I tried to get myself to work on all three of these areas and achieve meaningful results in all of them. I held onto the naive belief that this was possible for far too long, and eventually had a mini-crisis of faith where I decided that I would cut my losses and from then on exclusively work on video game programming. Since then, things have been going much better.
This also goes with the get better mindset from the last point. If you are the worst procrastinator you know, your initial goal should be to be a merely below average procrastinator, then to be an average procrastinator, and on and on until you cure akrasia.
3: Realistic optimism
All the studies show that optimists are more successful in almost every domain. So how is that compatible with my "being realistic" point? The key is that the best, most healthy kind of optimism is the belief that you can eventually succeed in your goals (and will if you are persistent), but that it will take a lot of effort and setbacks along the way to do so. This is usually a valid belief, and combines the motivation of optimism and the cautiousness of pessimism. (This is straight from Succeed, by the way.)
4: Elephant / Rider analogy
I'm not going to go into detail about this because this post is getting long as fuck, but if this idea is unfamiliar to you, search for it on Google and LessWrong, it's been written about extensively and is a very very useful (and liberating)metaphor for how your brain works.
5: Willpower is like a muscle
Willpower is like a muscle and if you give it regular workouts it gets stronger. People who quit smoking often also start exercising or stop drinking, depressed people who are given a pet to care for often become much happier because the responsibility encourages them to enact changes in their own life, etc.
This implies that once you start changing a little, it will be easier to change more and more. But you can also artificially jump start this process by exercising your willpower. Probably the best willpower exercises are physical exercise and meditation (and they both of course have numerous other benefits), but if you lack the energy/time/desire to do either of those, you could always do something very simple and gradually build. If you have a bad habit like biting your nails, that could be a good starting point.
So yeah, this post is long as fuck, didn't really mean to write that much. Hope it helped, though. Maybe I'll revise this and turn it into a discussion post.
Some people in a similar position recruit other people to police us when our ability to police ourselves is exhausted/inadequate. Of course, this requires some kind of policing mechanism... e.g., whereby the coach can unilaterally withhold rewards/invoke punishments/apply costs in case of noncompliance.
nicotine has been a significant help with motivation. I only vape eliquid with nicotine when I am studying. This seems to have resulted in a large reduction in ugh fields.
Request for practical advice on determining/discovering/deciding 'what you want.'
Find at least one person who you can easily communicate with (i.e., small inferential distances) and whose opinion you trust. Have a long conversation about your hopes and dreams. I recommend doing this in person if at all possible.
A good place to start the search is the intersection of "things I find enjoyable" and "things that are scarce / in demand".
See which time discounts and distance discounts you make for how much you care about others. Compare how much you care about others with how much you care about you. act accordingly.
To know what you care about in the first place, either assess happiness at random times and activities, or go through Connection Theory and Goal factoring.
Why do you recommend Connection Theory?
It's been done to me and I like it.
It's been done to me, too, and as I recall, it didn't do all that much good. The major good effect that I can remember is indirect-- it was something to be able to talk about the inside of my head with someone who found it all interesting and a possibly useful tool for untangling problems-- this helped pull me away from my usual feeling that there's something wrong/defective/shameful about a lot of it.
What did you get out of Connection Theory?
Looki into my eyes. You want to give all your money to the MIRI. You want to give all your money to the MIRI. You want to give all your money to the MIRI.
Sadly I have +2 hypnosis resistance, nice try.
I have a super dumb question.
So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".
So there must be some other reason for the rule, 'don't divide by zero.' What is it?
We don't divide by zero because it's boring.
You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say "you can't do that" or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things - that's called localization - which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It's not that we can't study it, but that we don't want to.
Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I've answered a version of this question there already, I think).
The rule isn't that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.
There are also lots of things logicians can tell you that you're not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A's to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
For the real numbers, the equation a * x = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there's nearly always exactly one solution, it's convenient to have a symbol for "the one solution to the equation a * x = b" and that symbol is b / a; b but you can't write that if a = 0 because then there isn't exactly one solution.
This is true of any field, almost by definition.
Didn't they do the same with set theory? You can derive a contradiction from the existence of "the set of sets that don't contain themselves"... therefore, build a system where you just can't do that.
(of course, coming from the axioms, it's more like "it wasn't ever allowed", like in Kindly's comment, but the "new and updated" axioms were invented specifically so that wouldn't happen.)
I keep accidentally accumulating small trinkets as presents or souvenirs from well-meaning relatives! Can anyone suggest a compact unit of furniture for storing/displaying these objects? Preferably in a way that is scalable, minimizes dustiness and falling-off and has pretty good ease of packing/unpacking. Surely there's a lifehack for this!
Or maybe I would appreciate suggestions on how to deal with this social phenomenon in general! I find that I appreciate the individual objects when I receive them, but after that initial moment, they just turn into ... stuff.
spice racks!
I knew someone had an answer but I would have never thought of that myself; I use like a total of one spices. Thank you!
In that case my further advice is: Cumin! Garlic! Pepper! Coriander!
Ohh yeahh, I guess I also use pepper. And garlic is a veggie. =P
Today, I finally took a racial/sexual Implicit Association Test.
I had always more or less accepted that it was, if not perfect, at least a fairly meaningful indicator of some sort of bias in the testing population. Now, I'm rather less confident in that conclusion.
According to the test, in terms of positive associations, I rank black women above black men above white women above white men. I do not think this is accurate.
Obviously, this is an atypical result, but I believe that I received it due to confounding factors which prevented the test from being an accurate reflection of my associations are likely to affect a large proportion of the testing population.
First, the most significant factor in how successful I was in correctly associating words and faces was simply practice. I made more mistakes in the first phase than the second phase, and more in the second than the third, etc. I believe that my test could have showed significantly different results simply by re-ordering the phases.
Second, I suspect that I was trying harder in the phases where I was matching black faces than white faces. I don't want to corrupt the test, but I also don't want it to tell me I'm a racist; would I have been so enthusiastic about making the final phase my most accurate one of all, if it had been matching white male faces rather than black male faces?
Third, I felt that many of the questions on the survey that followed the matching phase were too loaded to properly answer on their own terms. They presented a series of options from "strongly agree" to "strongly disagree," where I felt that my real answer would most accurately be framed as ADBOC.
If anyone here has access to university resources and would like to collaborate on an experiment which would attempt to discern subjects' associations while correcting for these faults, please let me know.
Academic research tends to randomize everything that can be randomized, including the orders of the different IAT phases, so your first concern shouldn't be an issue in published research. (The keyword for this is "order effect.")
The IAT is one of several different measures of implicit attitudes which are used in research. When taking the IAT it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others, but that is not the case with many of the other tests (many use subliminal priming, e.g. flashing either a black man's face or a white man's face on the screen for 20ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce relatively similar results, which suggests that effort doesn't have that big of an effect (at least for most people). I suspect that this transparency is part of the reason why the IAT has caught on in popular culture - many people taking the test have the experience of it getting harder when they're doing a "mismatched" pairing; they don't need to rely solely on the website's report of their results.
The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).
Article on an attempt to explain intelligence in thermodynamic terms.
Interesting stuff. Some links to the original material:
Original paper (paywalled)
Original paper (free). (Does not include supplementary material.)
Summary paper about the paper.
Their software. Demo video, further details only on application.
Author 1. Author 2.
On the one hand, these are really smart guys, no question. On the other, toy demos + "this could be the solution to AI!" => likely to be a damp squib.
I've skimmed the paper and read the summary publicity, and I don't really get how this could be construed as a general intelligence. At best, I think they may've encoded a simple objective definition of a convergent AI drive like 'keep your options open and acquire any kind of influence' but nothing in it seems to map onto utility functions or anything like that.
I would like to recommend Nick Winter's book, The Motivation Hacker. From an announcement posted recently to the Minicamp Graduates mailing list:
"The book takes Luke's post about the Motivation Equation and tries to answer the question, how far can you go? How much motivation can you create with these hacks? (Turns out, a lot.) Using the example of eighteen missions I pursued over three months, it goes over in more detail how to get yourself to want to do what you always wanted to want to do."
(Disclaimer: I hadn't heard of Nick Winter until a friend forwarded me the email containing that announcement, and I have no interest in promoting the book other than to help folks here attain their goals more effectively.)
Yet another fake number of sex partners self-reported:
Unless, of course, Canadian men tap the border.
Note: it basically evens out if you remove the 20+ partners boasters.
A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I’ve heard some people that discovered my site much later than they otherwise would because the name of the site didn’t communicate what it was about well and sounded unprofessional.
Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I could mold the site as I grew.
Now after >2 years of blogging about basically the same thing, I think my blog will always be about utilitarianism (both practical and philosophical), lifestyle design (my quest to make myself more productive and frugal, mainly so I can be a better utilitarian), political commentary (from a utilitarian perspective), and psychology (of morality and community and that which basically underlies practical utilitarianism).
I probably would want to talk about religion/atheism from time to time, which used to be my biggest interest, but I can already tell it's moderately unpopular with my current readership (yawnnn... we really have to go over why the Bible has errors again?) and I'm already personally getting increasingly bored with it, so I can do away with discussing atheism if I needed to keep to a "topic"-focused blog.
Basically, at this point, I think I stand to gain more by making my blog and domain name more descriptive than I stand to lose by risking my interests shifting away from utilitarianism (or at least the public discussion thereof). But the big question... what should I name my blog?
Option #1: Keep with Greatplay.net: There will be costs with shifting to a new domain name. The monetary cost is mostly insignificant (<$20/yr for a new domain name), but it will take a moderate amount of time to move all the archives over and make sure all the new hyperlinks on the site work. Also, there will be confusion among the readership, and everyone who was linking to my site externally would now be linking to dead stuff. So, if I've misestimated the benefits of moving, I might want to stick with the current name and not incur the costs.
Option #2: Go to PeterHurford.com: I already use this site as an online résumé of sorts, so I wouldn't need to get the domain. This also seems the most descriptive of what the site would be about (a personal blog, about me) and fits in with what the cool kids are doing. However, some of my opinions are controversial relative to the mainstream and I don't know what I'll be doing in my future. Keeping my real name hidden from my website might be an asset (so I don't lose opportunities because of association with unpopular mainstream opinions), though it might also be a drawback (I think I have gotten some recognition and opportunity from those who share my unpopular mainstream opinions).
Option #3: A new name: If Option #1 and #2 don't work, I'd want to just rename the blog to something descriptive of a blog about utilitarianism. Some ideas I've come up with:
Though feel free to suggest your own!
I don't think you need to change the domain name. For marketability, you might wanna have the parts named so that stuff within your site becomes brand in itself, so greatplay.net becomes associated with "<brand name> utilitarianism", "<brand name> design" etc. Say, I read a blog by a chemist who has series of blog posts titled "stuff i won't work with: <stuff name>". I can't remember the domain name, but I know that whenever I want to read about nasty chemical, i google that phrase.
The Girl Scouts currently offer a badge in the "science of happiness." I don't have a daughter, but if you do, perhaps you should look into the "science of style" badge as well.
We totally need rationality badges like that!
Rationalists should win... badges.
The less scouty and more gamery way to describe them is "achievements."
So far, I haven't found a good way to compare organizations for the blind other than reading their wikipedia pages.
And, well, blindness organizations are frankly a political issue. Finding unbiased information on them is horribly difficult. Add to this my relatively weak Google-fu, and I haven't found much.
Conclusions:
I want to find the one with the most to offer, and take advantage of those opportunities.
The difficulty is figuring out which one is the most useful. NFB comes across as cultish and pushing their ideology on anyone who comes to them, and they seem to be ignoring medical professionals advising them against using sleep shades on people with residual sight in their training programs. Also, their specialized cane sounds like an identity symbol more than a utility maximizer; it has better reach, but is flimsy-yet-unfolding and gets in the way. I do like the implication that it optimizes arm usage, but otherwise it sounds annoying.
On the upside, they seem to be the loudest, and as we all know, America is the country where the loudest get large chunks of attention. I've read some of their legal recommendations, and they seem to be the work of someone who knows how to aim for a goal and shoot until they hit it. Also, they're intense about braille.
Meanwhile, I'm imagining AFB being a possible avenue for getting my hands on a blasted tactile display, and possibly other meaningful technology-related projects, without having to put up indoctrination shields. Eah, there doesn't seem to be as much to say on them, which tells me that they have much less to criticize, but at the same time, it makes me wonder if they're powerful enough for the vague notion of whatever nonspecific ideas spawned this investigation.
NFB's sleep shades and specialized cane are rational for their purpose: to force the trainee to strengthen blindness as an identifying quality. They have other excuses--sleep shades prepare people for the possibility of losing what sight they have, the specialized cane provides better reach and is easier on the arms--but in light of the responses to these, and their responses to those responses, it's pretty clear that the identity advertisement is their main purpose. And quite frankly, that's annoying; my vision is not an identifying quality I care much about, so much as it's an obstacle that's made its troubles much clearer to me as of late. None of the other organizations seem to be functionally equivalent to the NFB, minus that element. Their main rival, the ACB, doesn't seem to do much of anything other than have fancy meetings and occasionally talk to legal people.
Gah, I would just continue ignoring them all, as I always have, if I wasn't living in a freakin' box.
Perhaps it would be easier to help if you said what you wanted help with. "The most to offer" in what specific area?
I still can't find much useful information on the AFB, but the NFB publicizes most of their major operations. The only successful one I've come across so far is the cancellation of the ABC sit com "Good and Evil" (it's worth noting that ABC denied that the NFB protests had anything to do with this). They don't seem to be having success at improving Kendel accessibility, which is more a political matter than a technological one (Amazon eventually cut communications with them). They're protesting Goodwill because 64/165 of their stores pay disabled employees less than minimum wage, in a manner that strikes me as poorly thought out (it seems to me that Goodwill has a much better image than the NFB, so this will most likely cost the NFB a lot of political capital).
This isn't really enough for me to determine whether they're powerful, or just loud, but so far it's making me update ever so slightly in favor of just loud.
It is worth noting that all of the above information came from publications written by NFB members, mostly hosted on NFB web sites. If my confidence in their abilities is hurt by writings seemingly designed to favor them, I can only imagine what something more objective would look like.
[edit]Originally typed Givewell instead of Goodwill! Fixed![/edit]
Does anybody on here use at-home EEG monitors? (Something like http://www.emotiv.com/store/hardware/epoc-bci-eeg/developer-neuroheadset/ although that one looks rather expensive)
If you do, do you get any utility out of them?
SDr actually gave me his research-edition Emotiv EPOC, but... I haven't actually gotten around to using it because I've been busy with things like Coursera and statistics. So, eventually! Hopefully.
Is tickling a type of pain?
In case the answer to Qiaochu_Yuan's question is something like “I'm trying to establish the moral status of tickling in my provisional moral system”, note that IIUC the sensation felt when eating spicy foods is also pain according to most definitions, but a moral system according to which eating spicy foods is bad can go #$%& itself for all that I'm concerned.
The simplest way of categorizing this would be based on the biology of which nerves nerves are involved. It appears that the tickle sensation involves signals from nerve fibres associated with both pain and touch. So... "Kind of".
Dissolve the question.
One question I like to ask in response to questions like this is "what do you plan on doing with this information?" I've generally found that thinking consequentially is a good way to focus questions.
Recantation by Gregory Cochran
Do we know any evolutionary reason why hypnosis is a thing?
My current understanding of how hypnosis works is:
The overwhelming majority of our actions happen automatically, unconsciously, in response to triggers. Those can be external stimuli, or internal stimuli at the end of a trigger-response chain started by an external stimulus. Stimulus-response mapping are learnt through reinforcement. Examples: walking somewhere without thinking about your route (and sometimes arriving and noticing you intended to go someplace else), unthinkingly drinking from a cup in front of you. (Finding and exploiting those triggers is incredibly useful if you have executive function issues.)
In the waking state, responses are sometimes vetted consciously. This causes awareness of intent to act. Example: those studies where you can predict when someone will press a button before they can.
This "free won't" isn't very reliable. In particular, there's very little you can do about imagery ("Don't think of a purple elephant"). Examples: advertising, priming effects, conformity.
Conscious processes can't multitask much, so by focusing attention elsewhere, stimuli cause responses more reliably and less consciously. See any study on cognitive load.
Hypnosis works by putting you in a frame of mind where cooperation is easy; that's mostly accomplished by your expectation to be hypnotised. For self-hypnosis you're pretty cooperative already ("I am doing that, therefore it works and it's good."), otherwise rapport with the hypnotist and yes sets (consenting to hypnosis, agreeing to listen/sit/look at something, truisms) help. Inducing trance seems to be mostly a matter of directing attention elsewhere while preserving this frame of mind. Old school hypnotists liked external foci like swinging pocket watches, candle flames and spirals; mindfulness inductions work similarly; Erickson was fond of pleasant imagery; I'm partial to thinking about the process of hypnosis itself.
Modern writers tend to use "trance" to mean a highly suggestible state, whereas older ones just mean a state where you act on autopilot. Flow is the latter kind of trance but not the former, as the thing you're concentrating on does prompt you to take some actions ("play these notes") but not in any form that resembles suggestion. I'm less certain about this than about the rest of my model, the link between trance and suggestibility might be deeper.
So the evolutionary explanation for hypnosis would look something like this:
It's easier to build a reflex agent than a utility maximiser, so evolution did that.
However, conscious decision-making does better, especially if you're going to be all technological and social, so evolution added one on top of the preexisting connectionist idiot.
It is easily disrupted, because evolution is a complete hack and only builds things that are robust as long as you don't do anything unusual.
As far as I can tell, it's more of a spandrel than anything. As a general rule, anything you can do with "hypnosis", you can do without. Depending on what you're doing with it, it can be more of a feature or more of a bug that comes inherent to the architecture.
I could probably give a better answer if you explained exactly what you mean by "hypnosis", since no one can agree on a definition.
Dennett makes a good case for the word "spandrel" not really meaning much in "Darwin's Dangerous Idea".
There's a phenomenon I'd like more research done on. Specifically, the ability to sense solid objects nonvisually without direct physical contact.
I suspect that there might be some association with the human echolocation phenomenon. I've found evidence that there is definitely an audio component; I entirely by accident simulated it in a wav file (It was a long time before I could listen to that all the way through, for the strong sense that something was reaching for my head; system2 had little say in the matter).
I've also done my own experiments involving covering my ears, and have still been able to sense things to some extent, if more weakly. I notice that if I walk around with headphones on, I have a much harder time getting a sense of my surroundings.
The size of the object, and its proximity to my head are related to how well I can sense it (large walls and trees are easier than bike racks or benches. My college had a lot of knee-high brick walls lining its paths, which was hell on my normal navigation methods).
My selfish motivation for researching this is that, if it can be perfectly simulated in audio, then game accessibility has a potential avenue to gain much strength. I would like to understand it even without that perk, though.
If there is, in fact, decent published research on this that I don't know about, I'd be grateful if someone could provide one or more links. Otherwise, I'd like an idea of who I might contact to try and initiate such research; at the moment, I'm considering recommending it to Lighthouse International.
I started following DavidM's meditation technique Is there anything that I should know? Any advice or reasons on why I should choose a different type of meditation?
FWIW adding tags to distracting thoughts and feelings seems like a useful thing (for me) even when not meditating and I haven't encountered this act of labeling in my past short research on meditation.
Does anyone have any real-world, object-level examples of degenerate cases?
I think degeneracy has some mileage in terms of explaining certain types of category error, (eg. "atheism is a religion"), but a lot of people just switch off when they start hearing a mathematical example. So far, the only example I've come up with is a platform pass at a train station, which is a degenerate case of a train ticket. It gets you on the platform and lets you travel a certain number of stops (zero) down the train line.
Anyone want to propose any others?
Cal Newport and Scott H. Young are collobarating to form a start deliberate practice course by email. Here's an excerpt from on Cal's emails to inquiring people:
Does this sound like it's worth $100?
Errh
On an uncharitable reading, this sounds like two wide-eyed broscientist prophets who found The One Right Way To Have A Successful Career (because by doing this their career got successful, of course), and are now preaching The Good Word by running an uncontrolled, unblinded experiment for which you pay 100$ just to be one of the lucky test subjects.
Note that this is from someone who's never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names. The facts that they've sold popular books with "get better" in the description and that they are socially-recognized as scientists are rather impressive, but doesn't substantially raise my priors of this working or not.
So if you've already tried some of their advice in enough quantity that your updated belief that any given advice from them will work is high enough and stable enough, this seems more than worth 100$.
Just the possible monetary benefits probably outweigh the upfront costs if it works, and even without that, depending on the kind of career you're in, the VoI and RoI here might be quite high, so depending on one's career situation this might need only a 30% to 50% probability of being useful for it to be worth the time and money.
I think that the open thread belongs in Discussion, not Main.
It usually goes there, yes - presumably it was put in Main in error.
The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.
Most common in politics, where people describe the unpleasantness of Somalia or North Korea when arguing for more or less government regulation as if it had some kind of relevance. Silliest is when people try to argue over which of the two is worse. Establishing the silliness of this is easy. Somalia beats assimilation by the borg, so government power is bad. North Korea beats the Infinite Layers of the Abyss, so government power is good. Surely no universal principle of government can be changed by which contrived example I pick.
And, with a little thought, it seems clear that there is some intermediate amount of goverment that supports the most eudaemonia. Figuring out what that amount is and which side of it any given goverment lies on are important and hard questions. But looking at the extremes doesn't tell us anything about them.
(Treating "government power" as a scalar can be another fallacy, but I'll leave that for another post.)
More nasty details: An amount of government which supports the most eudaemonia in the short term, may not be the best in the long term. For example, it could create a situation where the government can expand easily and has natural incentives to expand. Also, the specific amount of government may depend significantly on the technological level of society; inventions like internet or home-made pandemic viruses can change it.
I think the "non-scalar" point is a much more important take-away.
Generalizing: "Many concepts which people describe in linear terms are not actually linear, especially when those concepts involve any degree of complexity."
I've seen that applied to all kinds of things, ranging from vitamines to sentences starting with “However”, to name the first two that spring to mind.
Sex. I have a problem with it and would like to solve it. I get seriously anxious every time I'm about to have sex for the first time with a new partner. Further times are great and awesome. But the first time leaves me very anxious; which makes me delay it as much as I can. This is not optimal. I don't know how to fix it, if anyone can help I'd be greatly grateful
--
I notice I'm confused: I always tried to keep a healthy life: sleeping many hours, no alcohol, no smoke. I've just been living 5 days in a different country with some friends. We sleep 7 hours at most, they are smoking all the time, I've drank once. We hardly eat: My face looks better, I feel better, I just look healthier. Also feel like that. Possible confounds: I live mostly alone, now I'm also hanging out with at least 3 people, usually closer to 10. I'm going out and dancing at least 4 hours every night. I'm talking to new people every night. I don't know how I'd go about to test what caused this, but I'd like to know and keep that factor in my life. Any ideas?
Re: sex... is there anyone with whom you're already having great awesome sex who would be willing to help out with some desensitization? For example, adding role-playing "our first time" to your repertoire? If not, how would you feel about hiring sex workers for this purpose?
Re: lifestyle... list the novel factors (dancing 4 hrs/night, spending time with people rather than alone, sleeping <7 hrs/night, diet changes, etc. etc. etc.). When you're back home, identify the ones that are easy to introduce and experiment with introducing them, one at a time, for a week. If you don't see a benefit, move on to the next one. If none of them work, try them all at once. If that doesn't work, move on to the difficult-to-introduce ones and repeat the process.
Personally, I would guess that several hours of sustained exercise and a different diet are the primary factors, but that's just a guess.
I will make the typical recommendation: cognitive behavioral therapy techniques. Try to notice your emotions and responses, and just sort them into helpful or not helpful. Studies also seem to show that this sort of thing works better when you're talking with a professional.
could be a sign of a mold infestation or other environmental thing where you normally live
Are you significantly happier now than before?
I've been reading Atlas Shrugged and seem to have caught a case of Randianism. Can anyone recommend treatment?
Michael Huemer explains why he isn't an Objectivist here and this blog is almost nothing but critiques of Rand's doctrines. Also, keep in mind that you are essentially asking for help engaging in motivated cognition. I'm not saying you shouldn't in this case, but don't forget that is what you are doing.
With that said, I enjoyed Atlas Shrugged. The idea that you shouldn't be ashamed for doing something awesome was (for me, at the time I read it) incredibly refreshing.
Quoting from the linked blog:
"Assume that a stranger shouted at you "Broccoli!" Would you have any idea what he meant? You would not. If instead he shouted "I like broccoli" or "I hate broccoli" you would know immediately what he meant. But the word by itself, unless used as an answer to a question (e.g., "What vegetable would you like?"), conveys no meaning"
I don't think that's true? Surely the meaning is an attempt to bring that particular kind of cabbage to my attention, for as yet unexplained reasons.
That's a possible interpretation, but I wouldn't say "surely."
Some other possibilities.
The person picked the word apropos of nothing because they think it would be funny to mess with a stranger's head.
It's some kind of in-joke or code word, and they're doing it for the amusement of someone else who's present (or just themselves if they're the sort of person who makes jokes nobody else in the room is going to get.)
The person is confused or deranged.
If I heard someone shout "Broccoli" at me without context, my first assumption would be that they'd actually said something else and I'd misunderstood.
Are you looking to treat symptoms? If so, which ones?
My own deconversion was prompted by realizing that Rand sucked at psychology. Most of her ideas about how humans should think and behave fail repeatedly and embarrassingly as you try to apply it to your life and the lives of those around you. In this way, the disease gradually cures itself, and you eventually feel like a fool.
It might also help to find a more powerful thing to call yourself, such as Empiricist. Seize onto the impulse that it is not virtuous to adhere to any dogma for its own sake. If part of Objectivism makes sense, and seems to work, great. Otherwise, hold nothing holy.
Laughs I'm an Objectivist by my own accord, but I may be able to help if you find this undesirable.
The shortest - her derivations from her axioms have a lot of implicit and unmentioned axioms thrown in ad-hoc. One problematic case is her defense of property - she implicitly assumes no other mechanism of proper existence for humans is possible. (And her "proper existence" is really slippery.)
This isn't necessarily a rejection - as mentioned, I am an Objectivist - but it is something you need to be aware of and watch out for in her writings. If a conclusion doesn't seem to be quite right or doesn't square with your own conception of ethics, try to figure out what implicit axioms are being slipped in.
Reading Ayn Rand may be the best cure for Randianism, if Objectivism isn't a natural philosophy for you, which by your apparent distress it isn't. (Honestly, though, I'd stay the hell away from most of the critics, who do an absolutely horrible job of attacking the philosophy. They might be able to cure you of Randianism, but largely through misinformation and unsupported emotional appeals, which may just result in an even worse recurrence later.)
Heinlein? I found Stranger in a Strange Land to be an interesting counterpoint to Atlas Shrugged.
Both feature characters with super-human focus / capability (Rearden and Valentine Micheal Smith). And they have totally different effects on societies superficially similar to each other (and to our own).
There's more to say about Rand in particular, but we should probably move to the media thread for that specifically (Or decline to discuss for Politics is the Mindkiller reasons). Suffice it to say that uncertainty about how to treat the elite productive elements in society predates the 1950s and 1960s.
Think carefully through egoism.
hint: Vs rtbvfg tbnyf naq orunivbef qba'g ybbx snveyl vaqvfgvathvfunoyr sebz gur tbnyf naq orunivbef bs nygehvfgf lbh'ir cebonoyl sbetbggra n grez fbzrjurer va lbhe hgvyvgl shapgvba.
The (libertarian, but not Randian) philosopher Michael Huemer has an essay entitled "Why I'm not an objectivist." It's not perfect, but at least the discussion of Rand's claim that respect for the libertarian rights of others follows from total egoism is good.
Genuine question: What do you find appealing about it? I've always found the writing impenetrable and the philosophy unappealing.
What is the smartest group/cluster/sect/activity/clade/clan that is mostly composed of women? Related to the other thread on how to get more women into rationality besides HPMOR.
Ashkenazi dancing groups? Veterinarian College students? Linguistics students? Lilly Allen admirers?
No seriously, name guesses of really smart groups, identity labels etc... that you are nearly certain have more women than men.
One of my friends has nominated the student body at Bryn Mawr.
Bryn Mawr has gone downhill a lot since the top female students got the chance to go to Harvard, Yale, etc. instead of here. Bryn Mawr does have a cognitive bias course (for undergraduates) but the quality of the students is not that high.
Of course, Bryn Mawr does excellently at the only-women part, and might do well overall once we take into account that constraint.
Academic psychologists are mostly female. That would seem to be a pretty good target audience for LW. There are a few other academic areas that are mostly female now, but keep in mind that many academic fields are still mostly male even though most new undergraduates are female in the area.
There are lists online of academic specialty by average GRE scores. Averaging the verbal and quantitative scores, and then determining which majority-female discipline has the highest average would probably get you close to your answer.
Well, keep in mind that 75% of LWers are under 31 anyway, so it's the sex ratio among the younger cohorts you mainly care about, not the sex ratio overall.
But it isn't the undergrads you're looking for if you want the "smartest mostly female group." Undergrads are less bright on average than advanced degree holders due to various selection effects.
I think we are aiming for "females who can become rationalists" which means that expected smarts are more valuable then real smarts, in particular if the real ones were obtained through decades (implying the person will then be less flexible, since older).
IME, among post-docs there might not be as many females as among freshers, but there definitely are more than among tenured professors.
Professional associations for women in the smartest professions.
Gender studies graduate programs.
I'm not entirely sure that targeted recruitment of feminists is a good idea. It seems to me like a good way to get LW hijacked into a feminist movement.
I agree, and would expand this to any politically motivated movement (including libertarians, moldbuggians etc.). After all, this is the main rationale for our norm of not discussing politics on LW itself.
Political movements in general care more about where you are and your usefulness as a soldier for their movement than how you got there. It's something that we are actively trying to avoid.
LessWrong+?
LessIncorrect
aren't plenty of other arts+humanities fields female-majority now when you look at newly minted phds?
The Care and Feeding of Your Extrovert
[link] XKCD on saving time; http://xkcd.com/1205/ Image URL (for hotlinking/embedding): http://imgs.xkcd.com/comics/is_it_wor Though it will probably be mostly unseen as the month is about to end.
I encountered this cute summary of priming findings, thought you guys might like it, too:
How do you people pronounce MIRI? To rhyme with Siri?
yes
Amanda Knox and evolutionary psychology - two of LessWrong's favorite topics, together in one news article / opinion piece.
The author explains the anti-Knox reaction as essentially a spandrel of an ev. psych reaction. Money quote:
I'm skeptical of the ev. psych because it seems to require a fairly strong form of group selection pressure. But I thought folks might find it interesting.
The phenomenon of altruistic punishment itself is apparently not just a matter of speculation. Another quote from Preston's piece:
He links to this PNAS paper which uses a computer simulation to model the evolution of altruistic punishment. (I haven't looked at it in detail.)
Whatever the explanation for their behavior (and it really cries out for one), the anti-Knox people are truly disturbing, and their existence has taught me some very unpleasant but important lessons about Homo sapiens.
(EDIT: One of them, incidentally, is a mathematician who has written a book about the misuse of mathematics in trials -- one of whose chapters argues, in a highly misleading and even disingenuous manner, that the acquittal of Knox and Sollecito represents such an instance.)
Skimming the PNAS paper, it appears that the conclusion is that evolved group co-operation is not mathematically stable without evolved altruistic punishment. I.e. populations with only evolved co-operation drift towards populations without any group focused evolved traits, but altruistic punishment seems to exclude enough defectors that evolved co-operation maintained frequency in the population.
Which makes sense, but I'm nowhere close to qualified to judge the quality of the paper or its implications for evolutionary theory.
Request for a textbook (or similar) followup to The Selfish Gene and/or The Moral Animal. Preferably with some math, but it's not necessary.
Buss's Evolutionary Psychology is good if you are specially looking for the evolutionary psychology element not so sure about general evolutionary biology books. Also we have a dedicated textbook thread.
I could swear Zach Weiner reads this forum.
He's been asked before and denied it, IIRC.
I have noticed an inconsistency between the number of comments actually present on a post and the number declared at the beginning of its comments section, the former often being one less than the latter.
For example, of the seven discussion posts starting at "Pascal's wager" and working back, the "Pascal's wager" post at the moment has 10 comments and says there are 10, but the previous six all show a count one more than the actual number of visible comments. Two of them say there is 1 comment, yet there are no comments and the text "There doesn't seem to be anything here" appears. These are meetup announcements that I would not expect anyone to be posting banworthy comments to.
There is no sign of comments having being deleted or banned, and even if something of the sort is what has happened, I would expect the comment count displayed on a page to agree with the number of accessible comments.
On the Discussion page itself, the comment count displayed for each post agrees with the comment count displayed within the post.
A short while ago, spam comments in French were posted to a bunch of discussion threads. All of these were deleted. I'm guessing this discrepancy is a consequence of that.
I am aware that there have been several discussions over to what extent x-rationality translates to actual improved outcomes, at least outside of certain very hard problems like metaethics. It seems to me that one of the best ways to translate epistemic rationality directly into actual utility is through financial investment/speculation, and so this would be a good subject for discussion (I assume it probably has been discussed before, but I've read most of this website and cannot remember any in depth-thread about this, except for the mention of markets being at least partially anti-inductive).
Partially the reason for my writing this is that I have been reading about neuroeconomics and doing some academic research of my own (as in actually running experiments), and I am shocked by how near-universal irrational behavior displayed is (and therefore, exploitable by more rational agents). Even professional traders behavior is swayed by things like fluctuating testosterone levels. (Not that I know how to compensate for this!)
On a related note I've also been thinking about:
1) Applications for machine learning/narrow AI to finance.
2) Economic irrationality invalidating the libertarian free-market ideas, and possibly libertarianism in general, seeing as personal decisions can often be conceptualized economically. (I should point out that libertarianism used to appeal to me, and I find this line of reasoning mildly disturbing)
3) Gender relations, and the possibility that men are on average better at maths then women has been discussed here, and so discussion of the possibility that women are generally better at finance (see link above) could be beneficial, both in the context of pointing out opportunities to female rationalists, and to help dispel any appearance of misogyny that this community may have.
Again, I can't remember these being discussed here, and (1) seems very relevant to this community, although (2) is probably mind-killing and not very productive, unless any of us actually have the power to influence politics.
Apologies if this all has been already discussed in-depth somewhere.
Toying around with the Kelly criterion I get that the amount I should spend on insurance increases with my income though my intuition says that the higher your income is the less you should insure. Can someone less confused about the Kelly criterion provide some kind of calculation?
For anyone asking, I wondered if, given income and savings rate how much should be invested in bonds, stocks, etc. and how much should be put into insurance, e.g. health, fire, car, etc. from a purely monetary perspective.
The Kelly criterion returns a fraction of your bankroll; it follows that for any (positive-expected-value) bet whatsoever, it will advise you to increase your bet linearly in your income. Could this be the problem, or have you already taken that into account?
That aside, I'm slightly confused about how you can use the Kelly criterion in this case. Insurance must necessarily have negative expected value for the buyer, or the insurer makes no profit. So Kelly should be advising you not to buy any. How are you setting up the problem?
Well that is exactly the point. It confuses me that the richer I am the more insurance I should buy, though the richer I am the more I am able to compensate the risk in not buying any insurance.
Yes and no. The insurer makes only a profit if the total cost of insurance is lower than the expected value of the case with no insurance. What you pay the insurer for is that the insurer takes on a risk you yourself are not able to survive (financially), that is catastrophically high costs of medical procedures, liabilities or similar. It is easily possible for the average Joe to foot the bill if he breaks a $5 mug but it would be catastrophic for him if he runs into an oil tank and has to foot the $10,000,000 bill to clean up the environment. (This example is not made up but actually happened around here.)
It is here where my intuition says that the richer you are, the less insurance you need. I could also argue that if it was the other way around, that you should insure more the richer you are, insurance couldn't exist, seeing as the insurer is the one who should buy insurance from the poor!
You can use the Kelly criterion in any case, either negative or positive expected value. In the case of negative value it just tells you to take the other side of the bet or to pay to avoid the bet. The latter is exactly what insurance is.
I model insurance from the point of view of the buyer. In any given time frame, I can avoid the insurance case with probability q, saving the cost of insurance b. Or I could lose and have to pay a with the probability p = 1-q. This is the case of not buying insurance, though it is available. So if f = p/a - q/a is negative I should insure, if f is positive, I should take the risk. This follows my intuition insofar that catastrophic but improbable risk (very high a, very low p) should be insured but not probable and cheap liabilities (high p, low a).
The trick is now that f is actually the fraction of my bankroll I have to invest. So the richer I am the more I should insure absolutely but my intuition says I should by less insurance. I know I have ignored something fundamental in my model. Is it the cost of insurance? Is it some hidden assumption in the formulation of the Kelly criterion as applied to bets? Did I accidentally assume that someone knows something the other party doesn't? Did I ignore fixed costs? This eats me up.
Edit: Maybe the results have to be interpreted differently? Of course if I don't pay the insurance, Kelly still says to invest the money somehow, maybe in having a small amount always at hand as a form of personally organized insurance. Intuition again says that this pool should grow with my wealth, effectively increasing the amount of insurance I buy, though not from an insurer but in opportunity cost.
The Kelly formula assumes that you can bet any amount you like, but there are only so many things worth insuring against. Once those are covered, there is no opportunity to spend more, even if you're still below what the formula says.
In addition, what is a catastrophic loss, hence worth insuring against, varies with wealth. If the risks that you actually face scale linearly with your wealth, then so should your expenditure on insurance. But if having ten times the wealth, your taste were only to live in twice as expensive a house, drive twice as expensive a car, etc. then this will not be the case. You will run out of insurance opportunities even faster than when you were poorer. At the Jobs or Gates level of wealth, there are essentially no insurable catastrophes. Anything big enough to wipe out your fortune would also wipe out the insurance company.
You have it backwards. The bet you need to look at is the risk you're insuring against, not the insurance transaction.
Every day you're betting that your house won't burn down today. You're very likely to win but you're not making much of a profit when you do. What fraction of your bankroll is your house worth, how likely is it to survive the day and how much will you make when it does? That's what you need to apply the Kelly criterion to.
I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.
What if we're putting off buying a bitcoin because we, uh, don't want to?
Ok... Well... If that's the case, and if you can tell me why you feel that way, I might have a response that would modify your preference. Then again, your reasoning might modify my own preference. Cryptic non-argument isn't particularly interesting, or helpful for coming to an Aumann Agreement.
Edit: Here is my response.
1) I am not at all convinced that investing in bitcoins is positive expected value, 2) they seem high-variance and I'm wary about increasing the variance of my money too much, 3) I am not a domain expert in finance and would strongly prefer to learn more about finance in general before making investment decisions of any kind, and 4) your initial comment rubbed me the wrong way because it took as a standing assumption that bitcoins are obviously a sensible investment and didn't take into account the possibility that this isn't a universally shared opinion. (Your initial follow-up comment read to me like "okay, then you're obviously an idiot," and that also rubbed me the wrong way.)
If the bitcoin situation is so clear to you, I would appreciate a Discussion post making the case for bitcoin investment in more detail.
The standard advice is that normal people should never try to beat the market by picking any single investment, but rather put their money in index funds. The best publicly available information is already considered to be reflected in the current prices: if you recommend in buying a particular investment, that implies that you have knowledge that the best traders currently on the market do not have. As a friend commented:
So if you think that people should be buying Bitcoins, it's up to you to explain why the standard wisdom on investment is wrong in this case.
(For what it's worth, personally I do own Bitcoins, but I view it as a form of geek gambling, not investment. It's fun watching your coins lose 60% in value and go up 40% from that, all within a matter of a few days.)
Bitcoins are more like investing in a startup. The plausible scenarios to bitcoins netting you a return commensurate with the risk involve it disrupting several 100 billion+ markets (paypal, western union). I think investing in startups that have plausible paths towards such disruptions are worthy of a small portion of your portfolio.
This has most likely been mentioned in various places, but is it possible to make new meetup posts (via the "Add new meetup" button) to only show up under "Nearest Meetups", and not be in Discussion? Also, renaming the link to "Upcoming Meetups" to match the title on that page, and listing more than two - perhaps a rolling schedule of the next 7 days.
Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.
... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)
I don't quite see what you mean here. Do you know that each post has its own comments RSS feed?
Is there anyone going to the April CFAR Workshop that could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that seperately.)
Hey; we (CFAR) are actually going to be running a shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.
More Right
Edit: We reached our deadline on May 1st. Site is live.
Some of you may recall the previous announcement of the blog. I envisioned it as a site that discusses right wing ideas. Sanity but not value checking them. Steelmanning both the ideas themselves and the counterarguments. Most of the authors should be sympathetic to them, but a competent loyal opposition should be sought out. In sum a kind of inversion of the LessWrong demographics (see Alternative Politics Question). Outreach will not be a priority, mutual aid on an epistemically tricky path of knowledge seeking is.
The current core group working on making the site a reality consists of me, ErikM, Athrelon, KarmaKaiser and MichaelAnissimov and Abudhabi. As we approach launch time I've just sent out an email update to other contributors and those who haven't yet contributed but have contacted me. If you are interested in the hard to discuss subjects or the politics and want to join as a coauthor or approved commenter (we are seeking more) send me a PM with an email adress or comment here.
This is a great idea. We should create rationalist blogs for other political factions too, such as progressivism, feminism, anarchism, green politics and others. Such efforts could bring our programme of "raising the sanity waterline" to the public policy sphere -- and this might even lay some of the groundwork for eventually relaxing the "no politics at LW" rule.
As I wrote before:
I don't expect LessWrong itself to become a good venue to discuss politics. I do think LessWrong could keep its spot at the center of a "rationalist" blogosphere that may be slowly growing. Discussions between different value systems part of it might actually be worth following! And I do think nearly all political factions within such a blogosphere would find benefits in keeping their norms as sanity friendly as possible.
James Goulding aka Federico formerly of studiolo has joined us as an author.
"Approved Commenter" sounds pretty thought police-ey
so sign me up!
That would seem to fit with the theme rather well.
I hold more liberal than conservative beliefs, but I'm increasingly reluctant to identify with any position on the left-right "spectrum". I definitely hold or could convincingly steelman lots of beliefs associated with "conservativism", especially if you include criticism of "liberal" positions. Would this be included in the sort of demographic you're seeking?
Sometimes, success is the first step towards a specific kind of failure.
I heard that the most difficult moment for a company is the moment it starts making decent money. Until then, the partners shared a common dream and worked together against the rest of the world. Suddenly, the profit is getting close to one million, and each partner becomes aware that he made the most important contributions, while the others did less critical things which technically could be done by employees, so having to share the whole million with them equally is completely stupid. At this moment the company often falls apart.
When a group of people becomes very successful, fighting against other people within the group can bring higher profit than cooperating against the environment. It is like playing a variant of a Prisonner's Dilemma where the game ends at the first defection and the rewards for defection are growing each turn. It's only semi-iterated; if you cooperate, you can continue to cooperate in the next turn, but if you manage to defect successfully, there may be no revenge, because the other person will be out.
Will something like this happen to the rationalist community one day (assuming the Singularity will not happen soon)? At this moment, there are small islands of sanity in the vast oceans of irrationality. But what if some day LW-style rationality becomes popular? What are the risks of success analogical to a successful company falling apart?
I can imagine that many charismatic leaders will try to become known as the most rational individual on the planet. (If rationality becomes 1000× more popular than it is today, imagine the possible temptations: people sending you millions of dollars to support your mission, hundreds of willing attractive poly partners, millions of fans...) There will be honest competition, which is good, but there will also be backstabbing. Some groups will experiment with mixing 99% rationality and 1% applause lights (or maybe 90% rationality and 10% applause lights), where "applause lights" will be different for different groups; it could be religion, marxism, feminism, libertarianism, racism, whatever. Or perhaps just removing the controversial parts, starting with many-worlds interpretation. Groups which optimize for popularity could spread faster; the question is how quickly would they diverge from rationality.
Do you think an outcome like this is likely? Do you think it is good or bad? (Maybe it is better to have million people with 90% of rationality, than only a thousand with 99% of rationality.) When will it happen? How could we prevent it?
People competing to be known as the most rational?
Er... what's the downside again?
It's much easier to signal rationality than to actually be rational.
True. It's harder to fake rationality than it is to fake the things that matter today, however (say, piety). And given that the sanity waterline has increased enough that "rational" is one of the most desirable traits for somebody to have, fake signaling should be much harder to execute. (Somebody who views rationality as such a positive trait is likely to be trying to hone their own rationality skills, after all, and should be harder to fool than the same person without any such respect for rationality or desire to improve their own.)
Faking rationality would be rather easy: Criticize everything which is not generally accepted and always find biases in people you disagree with (and since they are humans, you always find some). When "rationality" becomes a popular word, you can get many followers by doing this.
Here I assume that the popularity of the word "rationality" will come before there are millions of x-rationalists to provide feedback against wannabe rationalists. It would be enough if some political movement decided to use this word as their applause light.
Do you see any popular people here you'd describe as faking rationality? Do we seem to have good detectors for such behavior?
We're a pretty good test case for whether this is viable or not, after all. (Less so for somebody co-opting words, granted...)
The community here is heavily centered around Eliezer. I guess if someone started promoting some kind of fake rationality here, sooner or later they would get into conflict with Eliezer, and then most likely lose the support of the community.
For another wannabe rationalist guru it would be better to start their own website, not interact with people on LW, but start recruiting somewhere else, until they have greater user base than LW. At the moment their users notice LW, all they have to do is: 1) publish a few articles about cults and mindkilling, to prime their readers, and 2) publish a critique of LW with hyperlinks to all currently existing critical sources. The proper framing would be that LW is a fringe group which uses "rationality" as applause lights, but fails horribly (insert a lot of quotations and hyperlinks here), and discussing them is really low-status.
It would help if the new rationalist website had a more professional design, and emphasised its compatibility with mainstream science, e.g. by linking to high-status scientific institutions, and sometimes writing completely uncontroversial articles about what those institutions do. In other words, the new website should be optimized to get 100% approval of the RationalWiki community. (For someone trying to do this, becoming a trusted member of RationalWiki community could be a good starting point.)
I'm busy having pretty much every function of RW come my way, in a Ponder Stibbons-like manner, so if you can tell me where the money is in this I'll see what I can come up with. (So far I've started a blog with no ads. This may not be the way to fame and fortune.)
The money or lack thereof doesn't matter, since RW is obviously not an implementation of Villam's proposed strategy: it fails on the ugliness with its stock MediaWiki appearance, has too broad a remit, and like El Reg it shoots itself in the foot with its oh-so-hilarious-not! sense of humor (I dislike reading it even on pages completely unrelated to LW). It may be successful in its niche, but its niche is essentially the same niche as /r/atheism or Richard Dawkins - mockery of the enemy leavened with some facts and references.
If - purely hypothetically speaking here, of course - one wished to discredit LW by making the respective RW article as negative as possible, I would expect it to do real damage. But not be any sort of fatal takedown that set a mainstream tone or gave a general population its marching orders, along the lines of Shermer's 'cryonics is a scam because frozen strawberries' or Gould's Mismeasure of Man's 'IQ is racist, involved researchers like Merton faked the data because they are racist, and it caused the Holocaust too'.
Accomplishment is a start. Do the claims match the observable results?
Yeah, because true rationality is going to be supporting something like cryonics that you personally believe in.
In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:
I've always felt that Atlas Shrugged was mostly an annoying ad nauseum attack on the same strawman over and over, but given the recent critique of Google, Amazon and others working to minimize their tax payments, I may have underestimated human idiocy:
On the other hand, these are people wearing their MP hats, they probably sing a different tune as board members. Or maybe Britain is overdue for another Thatcher.
To quote (apparently) Arthur Godfrey,
North Korea is threatening to start a nuclear war. The rest of the world seems to be dismissing this threat, claiming it's being done for domestic political reasons. It's true that North Korea has in the past made what have turned out to be false threats, and the North Korean leadership would almost certainly be made much worse off if they started an all out war.
But imagine that North Korea does launch a first strike nuclear attack, and later investigations reveal that the North Korean leadership truly believed that it was about to be attacked and so made the threats in an attempt to get the U.S. to take a less aggressive posture. Wouldn't future historians (perhaps suffering from hindsight bias) judge us to be idiots for ignoring clear and repeated threats from a nuclear-armed government that appeared crazy (map doesn't match territory) and obsessed with war.
Why do we care what they think, and can you name previous examples of this?
As someone who studies lots of history while often thinking, "how could they have been this stupid didn't they know what would happen?", I thought it useful to frame the question this way.
Hitler's professed intentions were not taken seriously by many.
Taken seriously... when? Back when he was a crazy failed artist imprisoned after a beer hall putsch, sure; up to the mid-1930s people took him seriously but were more interested in accommodationism. After he took Austria, I imagine pretty much everyone started taking him seriously, with Chamberlain conceding Czechoslovakia but then deciding to go to war if Poland was invaded (hardly a decision to make if you didn't take the possibilities seriously). Which it then was. And after that...
If we were to analogize North Korea to Hitler's career, we're not at the conquest of France, or Poland, or Czechoslovakia; we're at maybe breaking treaties & remilitarizing the Rhineland in 1936 (Un claiming to abandon the cease-fire and closing down Kaesŏng).
One thing that hopefully the future historians will notice is that when North Korea attacks, it doesn't give warnings. There were no warnings or buildups of tension or propaganda crescendos before bombing & hijacking & kidnapping of Korean airliners, the DMZ ax murders, the commando assault on the Blue House, the sinking of the Cheonan, kidnapping Korean or Japanese citizens over the decades, bombing the SK president & cabinet in Burma, shelling Yeonpyeong, the attempted assassination of Park Sang-hak... you know, all the stuff North Korea has done before.
To the extent that history can be a guide, the propaganda war and threats ought to make us less worried about there being any attack. When NK beats the war drums, it want talks and concessions; when it is silent, then that is when it attacks. Hence, war drums are comforting and silence worrisome.
Certainly the consequences of us being wrong are bad, but that isn't necessarily enough to outweigh the presumably low prior probability that we're wrong. (I'm not taking a stance on how low this probability is because I don't know enough about the situation.) Presumably people also feel like there are game-theoretic reasons not to respond to such threats.
All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.
Thus, we need 50 shades of Grey Matter.
As well as good marketing designs of things that attract women into rationality.
Which are the bestselling books if you only consider women? What about the best movies for women?
Reminds me of this
We can't afford not to do both
I am hoping for someone to write Anita Blake, Rational Vampire Hunter.
Or the rationalist True Blood (it already has "True" in the title!)
Is anyone working on rationalist stand-alone fiction?
Actually, what I meant was "Is anyone in this community working on rationalist stand-alone fiction?".
Not that I've seen. It'd be cool though. I think maybe you can see traces in people like Peter Watts, but if you take HPMOR as the defining example, I can't think of anything.
I'm always found Stross (and to a lesser extent, Scalzi) to be fairly rationalist - in the sense that I don't see anyone holding the idiot ball all that frequently. People do stupid things, but they tend not to miss the obvious ways of implementing their preferences.
Fanfiction readers tend to be female. HPMoR has attracted mostly men. I'm skeptical that your strategy will influence gender ratio.
Possible data point: are Luminosity fans predominantly female?
Wait, the question isn't in HPMoR attracted more women than men, it's if it the women to man ratio is higher than other things that attracts people.
I'm not sure that's true. When I looked in the 2012 survey, I didn't see any striking gender disparity based on MoR: http://lesswrong.com/lw/fp5/2012_survey_results/8bms - something like 31% of the women found LW via MoR vs 21% of the men, but there are just not that many women in the survey...
That does not factor the main point " that would not otherwise have become rationalist" There are loads of women out there on a certain road into rationalism. Those don't matter. By definition, they will become rationalists anyway.
There are large numbers who could, and we don't know how large, or how else they could, except HPMOR
Leaving aside gwern's rudeness, he is right - if MoR doesn't entice more women towards rationality than the average intervention, and your goal is to change the current gender imbalance among LW-rationalists, then MoR is not a good investment for your attention or time.
I'm sorry, I was just trying to interpret the claim in a non-stupidly unverifiable and unprovable sense.
It is not a claim, it is an assumption that the reader ought to take for granted, not verify. If I thought there were reliable large N data of a double blind on the subject, I'd simply have linked the stats. As I know there are not, I said something based on personal experience (as one should) and asked for advice on how to improve the world, if the world turns out to correlate with my experience of it.
Your response reminds me of Russell's joke about those who believe that "all murderers have been caught, since all muderers we know have been caught"...
The point is to find attractors, not to reject the stats.
ಠ_ಠ All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Baby Eaters in "Three Worlds Collide".
Thus, we need 50 Shades of Cooked Babies.
As well as good marketing designs of things that attract women into rationality.
Does this strike you as dubious? Well, it is not a claim, it is an assumption that the reader ought to take for granted, not verify!
Isn't that a tautology?
Edit: missed this subthread already discussing that; sorry.
In a few places — possibly here! — I've recently seen people refer to governments as being agents, in an economic or optimizing sense. But when I reflect on the idea that humans are only kinda-sorta agents, it seems obvious to me that organizations generally are not. (And governments are a sort of organization.)
People often refer to governments, political parties, charities, or corporations as having goals ... and even as having specific goals which are written down here in this constitution, party platform, or mission statement. They express dismay and outrage when these organizations act in ways that contradict or ignore those stated goals.
Does this really make sense?
It seems to me that just as the art or science of acting like you have goals is "instrumental rationality", it may be that the art or science of causing organizations to act like they have goals is called "management".
Who is the best pro-feminist blogger still active? In the past I enjoyed reading Ozy Frantz, Clarisse Thorn, Julia Wise and Yvain, but none of them post regularly anymore. Who's left?
Yvain still posts regularly (Google slate star codex), but he is not pro-feminist, he is anti-bias.
If you liked Ozy, you might like Pervocracy too.
Is there a secret URL to display the oldest LW posts?
Are you a guy that wants more social interaction? Do you wish you could get complimented on your appearance?
Grow a beard! For some reason, it seems to be socially acceptable to compliment guys on a full, >1", neatly trimmed beard. I've gotten compliments on mine from both men and women, although requests to touch it come mostly from the latter (but aren't always sexual--women with no sexual attraction to men also like it). Getting the compliments pretty much invariably improves my mood; so I highly recommend it if you have the follicular support.
Because of differences in local culture, please list what country you live in, and perhaps what region.
I wrote something on Facebook recently that may interest people, so I'll cross-post it here.
Cem Sertoglu of Earlybird Venture Capital asked me: "will traders be able to look at their algorithms, and adjust them to prevent what happened yesterday from recurring?
My reply was:
Considering making my livejournal into something resembling the rationality diaries (I'd keep the horrible rambling/stupid posts for honesty/archival purposes). I can't tell if this is a good idea or not; the probability that it'd end like everything else I do (quietly stewing where only I bother going) seems absurdly high. On the other hand, trying to draw this kind of attention to it and adding structure would probably help spawn success spirals. Perhaps I should try posting on a schedule (Sunday/Tuesday/Thursday seems good, since weekends typically suck and probably will motivate me to post, but holding off on that until Monday could keep me in a negative mindset that could delay rebounding). I suppose I'll have an answer (to the question that no one asked) by Sunday, then, unless someone convinces me one way or the other before then.
I started browsing under Google Chrome for Android on a tablet recently. Since there's no tablet equivalent of mouse hovering, to see where a link points without opening it I have to press and hold on it. For off-site links in posts and comments, though, LW passes them through api.viglink.com, so I can't see the original URL through press-and-hold. Is there a way to turn that function off, or an Android-compatible browser plugin to reverse it?
(Edit: Posted and discussed here.)
Some folks here might want to know that the Center for Effective Altruism is recruiting for a Finance & Fundraising Manager:
What happened to that article on cold fusion? Did the author delete it?
No, I didn't delete it. It went down to -3 karma, which apparently hides it on the discussion page. That's how I'm assuming it works anyway, given that it reappeared as soon as it went back up to -2. Incidentally, it now seems to be attracting random cold fusion "enthusiasts" from the greater internet, which was not my intention.
The hide / not hide can be set individually by clicking Preferences next to one's name. I think you are seeing the result for the default settings - I changed mine a while ago and don't remember what the default is.
Is there anyway to see authors classified by h-index? Google scholar seems not to have that functionality. And online lists only exist of some topics...
Lewis Dennett and Pinker for instance have nearly the same h-index.
Ed Witten's is much larger than Stephen Hawkings..... etc........
If you know where to find listings of top h-indexes, please let me know!
Art Carden, guest blogger at Econlog, advocates Bayes theorem as a strategy for maintaining serenity here.