Once again I am deeply impressed by how Yvain can explain things that I have vaguely felt for a long time but couldn't quite put into words.
Specifically, the concept of "safe spaces" and whether only some groups deserve them while other groups don't. And more generally, whether only members of some groups have feelings and can be hurt (or perhaps whether only the feelings and pain of some groups matter), or whether we are all to some degree fragile and valuable.
And how the "safe space" of one group sometimes cannot be a "safe space" of another group, and it's okay to simply have both of them. And as a consequence how by insisting that every place must be a "safe space" of group X we de facto say that the group Y should have no "safe space", ever.
A few months ago, I re-read HPMOR in its entirety, and had an insight about the Hermione / feminism issue that I'd previously missed when I wrote this comment. I never got around to saying it anywhere, so I'm saying it here:
I'd previously written:
HPMOR kinda feels off because canonically, Hermione is unambiguously the most competent person in Harry's year, and has a good chance of growing up to be the most competent person in the 'verse. Harry is kept at the center of the story by his magical connection to Voldemort. In HPMOR, in contrast, Harry is kept at the center of the story by competence and drive. It's going to be very hard to do that without it feeling like Hermione is getting shafted.
But actually, HPMOR closely parallels canon on this point: Methods!Hermione got just as much of an intelligence upgrade as Methods!Harry did, so she's still unambiguously more competent than him, at least before repeated use of his mysterious dark side gave him a mental age-up. This is more or less explicitly pointed out in chapter 21:
...She'd done better than him in every single class they'd taken. (Except for broomstick riding which was like gym class, it didn't count.) She'd gotten real
I'm planning to meet with my local Department of Services for the Blind tomorrow; the stated purpose of the meeting is to discuss upcoming life changes/needs/etc. This appears to be exactly what I need at the moment, but I'm concerned that I'm not going to be optimally prepared, so I'd like to post some details here to increase the chances of useful feedback.
(For transparency's sake: I'm legally blind, unemployed, living with my parents until they take the necessary steps to get me moved into the place I own, with student loan payments outpacing my SSI benefits by over $200/month, and stuck in the Bible Belt.)
I was absurdly lucky: the counselor I spoke to is new and motivated to put in the necessary effort for everything, and went to high school with my stepmother; it also turns out that the in-state training center has a thirty-day trial period, during which commitment is a non-issue. They also offered to provide any required technology, be it laptops or note takers or whatever. It could start as early as the first week of February, which is early enough that I wouldn't need to worry about security at my property. So on the whole, a surprisingly good day.
If you have not dealt with something like the DSB before, you're probably drastically overestimating how much mental effort they are willing to expend to help you. (I dealt with a similar agency, the California Department of Rehabilitation, many years ago.)
Although it is of course good for you to try to estimate, in real time during the interview, how much mental effort they are willing to make, I suggest that the plan you go into the meeting with assume it is low. E.g., you might consider just asking for a notetaker over and over again.
Try to appear a little dumber than you actually are.
I would not risk alienating your parents to try for a deeper conversation with DSB staff.
After doing a large amount of research, I feel fairly confident saying that high-dose potassium supplementation was the initial trigger that pushed me into the two-year nightmare struggle with migraines which I am still dealing with. I didn't do anything beyond the recommendations that you can find on gwern's page, and gwern doesn't really recommend anything that is technically unsafe, but the fact is that (apparently!) some people are migraine-prone, and these people should definitely not do what I did. (To be clear, I'm not blaming gwern in any way; that's merely a "community reference" that a lot of folks refer to.)
All the productivity posts on LW that I've read, I found mildly disturbing. They all give a sense of excessive regimentation, as well as of giving up enjoyable activities - sacrificing a lot for a single goal (or a few goals). I'm sure it's good for getting work done, but there's more to life than work - there's actually enjoying life, having fun, etc.
I think you're talking about So8res's recent posts, but I think they're exceptional. Most productivity posts are about avoiding spending time web surfing, particularly during time that has been budgeted for work. They do this partly because fragmenting time is bad and partly because there are better ways to have fun.
My experience is the opposite; productivity generally feels awesome, sitting around doing nothing or wandering around the internet is generally depressing. (This is insufficient as a motivator for behavior.)
If you're expecting the singularity within a century, does it make sense to put any thought into eugenics except for efforts to make it easy to avoid the worst genetic disorders?
I don't see any discussion about this blog post by Mike Travers.
His point is that people trying to solve Friendly AI are doing so because it's an "easy", abstract problem set well into the future. He contends that we are already taking significant damage from artificially created human systems like the financial system, which can be ascribed agency and whose goals are quite different from improving human life. These systems are quite akin to "Hostile AI". This, he contends, is the really hard problem.
Here is a quote from the blogpost (which is from a Facebook comment he made):
I am generally on the side of the critics of Singulitarianism, but now want to provide a bit of support to these so-called rationalists. At some very meta level, they have the right problem — how do we preserve human interests in a world of vast forces and systems that aren’t really all that interested in us? But they have chosen a fantasy version of the problem, when human interests are being fucked over by actual existing systems right now. All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure.
It's a short post, so you can read it quickly. What do you think about his argument?
It's a short post, so you can read it quickly. What do you think about his argument?
I think it's silly. I suspect MIRI and every other singularitarian organization, and every other individual working on the challenges of unfriendly AI, could fit comfortably in a 100-person auditorium.
In contrast, "trying to fix industrial capitalism" is one of the main topics of political dispute everywhere in the world. "How to make markets work better" is one of the main areas of research in economics. The American Economic Association has 18,000 members. We have half a dozen large government agencies, with budgets of hundreds of millions of dollars each, to protecting people from hostile capitalism. (The SEC, the OCC, the FTC, etc etc, are all ultimately about trying to curb capitalist excess. Each of these organizations has a large enforcement bureaucracy, and also a number of full-time salaried researchers.)
The resources and human energy devoted to unfriendly AI are tiny compared to the amount expended on politics and economics. So it's strange to complain about the diversion of resources.
Excellent point. I'm surprised this did not occur to me. This reminds me of Scott Aaronson's reply when someone suggested that quantum computational complexity is quite unimportant compared to experimental approaches to quantum computing and therefore shouldn't get much funding:
I find your argument extremely persuasive—assuming, of course, that we’re both talking about Bizarro-World, the place where quantum complexity research commands megabillions and is regularly splashed across magazine covers, while Miley Cyrus’s twerking is studied mostly by a few dozen nerds who can all fit in a seminar room at Dagstuhl.
Finally have a core mechanic for my edugame about Bayesian networks. At least on paper.
This should hopefully be my last post before I actually have a playable prototype done, even if a very short one (like the tutorial level or something).
I plan to quit my job and move to an Eastern European country with a low cost of living in March. Because of this I am looking for any job that I can do online for around 20 hours a week. I am looking for recommendations on where to look, where to ask, who to contact that might help me, etc. Any help will be appreciated.
In light of gwern's good experiences with one, I too now have an anonymous feedback form. You can use it to send me feedback on my personality, writing, personal or professional conduct, or anything else.
(My thoughts are still not sufficiently organized that I’m making a top level post about this, but I think it’s worth putting out for discussion.)
A couple of years ago, in a thread I can no longer find, someone argued that they valued the pleasure they got from defecation, and that they would not want to bioengineer away the need to do so. I thought this was ridiculous.
At the same time, I see many Lesswrongers view eating as a chore that they would like to do away with. And yet I also find this ridiculous.
So I was thinking about where the difference lay for me. My working hypothesis is that there are two elements of pleasure: relief and satisfaction. Defecation, or a drink of water when you're very thirsty, brings you relief, but not really satisfaction. Eating a gourmet meal, on the other hand, may or may not bring relief, depending on how hungry you are when you eat it, but it's very satisfying. The ultimate pleasure is sex, which culminates in a very intense sense of both relief and satisfaction. (Masturbation, at least from a male perspective, can provide the relief but only a tiny fraction of the satisfaction – hence the difference in pleasure from sex.)
I can understand...
I'm having some trouble keeping myself from browsing to timesink websites at work (and I'm self-employed, so it's not like I'm even getting paid for it). Anyone know of a good Chrome app for blocking websites?
What are you supposed to do when you've nailed up a post that is generally disliked? I figured that once this got to -5 karma it would disappear from view and be forgotten. But it just keeps going down and it's now at -12. This must mean that someone saw the title of it at -11 karma and thought "Sounds promising! Reading this now will be a good use of my time." And then they read it and went: "Arrgh! This turned out to be a disappointing post. Less like this, please. I'd better downvote it to warn others."
What does etiquette suggest I do here? Am I supposed to delete the post to keep people from falling into the trap of reading it? But I like the discussion it spawned and I'd like to preserve it. I'm at a loss and I can't find relevant advice at the wiki.
If we don't have downvoted topics some of the time, it means we are being too conservative about what we judge will be useful to others. Only worry if too large a fraction of your stuff gets downvoted.
This must mean that someone saw the title of it at -11 karma and thought "Sounds promising! Reading this now will be a good use of my time." And then they read it and went: "Arrgh! This turned out to be a disappointing post. Less like this, please. I'd better downvote it to warn others."
Not necessarily. Seeing a heavily downvoted post seems to trigger some kind of group-norm-reinforcement instinct in me: I often end up wanting to read it in the hopes of it being just as bad as the downvotes imply, so that I could join in the others in downvoting it. And I actually get pleasure out of being able to downvote it.
I'm not very proud of acting on that impulse, especially since I'm not going to be able to objectively evaluate a post's merit if I start reading it while hoping it will be bad. But sometimes I do act on it regardless. (I didn't do that with your post, though.)
What are you supposed to do when you've nailed up a post that is generally disliked?
Grin and say "Fuck 'em!"
Eh, if someone clicks on an article at -11, then feels reading it was a waste of time, he should blame himself, not you.
I see from time to time people mention a 'rationalist house' as though it is somewhere they live, and everyone else seems to know what they're talking about. What are they talking about? Are there many of these? Are these in some way actually planned places, or just an inside joke of some kind?
Every single time the subject of overpopulation comes up and I offer my opinion (which is that in some respects the world is overpopulated and that it would benefit us to have a smaller or negative population growth rate), I seem to get one or two negative votes. The negative karma isn't nearly as important to me as the idea that I might be missing some fundamental idea and that those who downvote me are actually right.
In particular, this recent thread: http://lesswrong.com/r/discussion/lw/jgg/we_need_new_humans_please_help/ has highlighted this issue for me again.
So, I'm opening my mind, trying to set aside my biases, and hereby asking all those who disagree with me to give me a rational argument for why I'm wrong and why the world needs more people. If I stray from my objective and take a biased viewpoint, I deserve all the negative karma you can throw at me.
Well, let's try to be a bit more specific about this.
First, what does the claim that "the world is overpopulated" mean? It implies a metric of some sort to which we can point and say "this is too high", "this is too low", "this is just right". I am not sure what this metric might be.
The simplest metric used in biology is an imminent population crash -- if the current count of some critters in an ecosystem is pretty sure to rapidly contract soon we'd probably speak of overpopulation. That doesn't seem to be the case with respect to humans now.
Second, the overpopulation claim is necessarily conditional on a specific level of technology. It is pretty clear that 21st-century technology can successfully sustain more people than, say, pre-industrial technology. One implication is that future technological progress is likely to change whatever number we consider to be the sustainable carrying capacity of Earth now.
Third, and here things get a bit controversial, it all depends (as usual) on your terminal goals. If your wish is for the peace and comfort of Mother Gaia, well, pretty much any number of humans is overpopulation. But let's take a common (thoug...
I don't recall downvoting you, but I think that there is a very high chance technology makes the problem moot - either by killing us or by alleviating scarcity until a superintelligence happens.
(Reposted from the bottom of the last open thread.)
Two unrelated things (should I make these in separate posts or...?):
1.) Given recent discussion on social justice advocates and their... I don't know the best way to describe this, sometimes poor epistemological habits? I thought I would post this
http://geekfeminism.wikia.com/wiki/Concern_troll
Is it just me, or is this, like, literally the worst concept ever? It literally just means "someone slightly to the right of me" or "someone who does anything that could be considered cheering for the other side", backed with a dubious claim tha...
Concern trolling is a widespread phenomenon, not specific to feminist communities. The definition given in the first two sentences of that article is the exact concept that the phrase was coined to name:
A concern troll is a person who participates in a debate posing as an actual or potential ally who simply has some concerns they need answered before they will ally themselves with a cause. In reality they are a critic.
The article does then go on to broaden the concept to the point where it can be used as a club to invalidate anyone:
Concern trolls are not always self-aware, they may also view themselves as potential allies
Well, no. The whole point of the concept is that a concern troll is lying. They are, in fact, an enemy deliberately, consciously, intentionally, posing as a friend in order to undermine discourse. Someone who is actually a friend with genuine questions that they actually want to be constructively discussed is not a concern troll, even if those who do not wish the questions to be raised at all call them that.
I think there's a Poe's law type thing going on here: looking at behavior alone, it's very difficult to tell the difference between a concern troll and a tentative ally with the right ideological background. That's probably especially true for cultures like social justice that use a lot of endogenous concepts and terminology: within those movements, any concerns that don't speak the language are going to pattern-match to "enemy" on linguistic grounds and suffer from the corresponding horns effect.
With that in mind, I suspect they exist but are pretty rare.
Concern trolling in the false-flag political operation sense is a thing that happened:
An example of this occurred in 2006 when Tad Furtado, a staffer for then-Congressman Charles Bass (R-NH), was caught posing as a "concerned" supporter of Bass' opponent, Democrat Paul Hodes, on several liberal New Hampshire blogs, using the pseudonyms "IndieNH" or "IndyNH". "IndyNH" expressed concern that Democrats might just be wasting their time or money on Hodes, because Bass was unbeatable.[37][38] Hodes eventually won the election.
"Concern trolling is frequently banned in feminist communities."
It may help if you consider the possibility that some feminist communities do not exist for the sake of rational dispassionate and balanced discussion of feminism. Rather, a feminist community may be a meeting place for the members of a feminist movement of some kind, which exists to achieve its goals. Like any other political movement.
TL;DR. LW is not the real world. In the real world, arguments are always soldiers (even if you pretend them not to be), discussion requires resources, and resources are finite.
http://mikhailvladimirovich.tumblr.com/post/72908158199/polytheism-as-a-guide-to-morality Some thoughts I had on polytheism as a human-implementable moral system rather than as a factual question.
I just set up the Anki Beeminder plugin plus Beeminder on my Android smartphone. It's all automatic software, and should I forget to do enough Anki, the smartphone app will bug me.
I think for anyone doing Anki there's no reason not to go down that road. If you want to make sure, you can even add a commitment contract on Beeminder.
My friends keep posting videos of Jacob Barnett, a child genius (TEDx video; YouTube channel), on Facebook. I'd like to have your opinion about what kind of a genius precisely he is.
From my short googling, it seems to me that the kid has Asperger syndrome, he probably enjoys reading a lot about maths and physics, he probably does it most of his day, and he seems to have some kind of photographic memory, so he remembers a lot and then goes to impress people. His mother is doing a very good marketing campaign for him. There are videos of him talking about quan...
Less Wrong contains so much advice that it's impossible to follow even a large fraction of it. How should I decide what advice to actually follow?
Recently attempted to read Julian Barbour's The End of Time, primarily on Eliezer's recommendation, and found myself stalling out because it wasn't presenting any information which felt new to me. I am currently weighing whether it is worth pushing onward in the hope of finding meatier material later.
Has anyone else read it after having read the Quantum Physics sequence, and what were their thoughts?
My friend will have one month of unemployment in the SF Bay Area and is looking for projects, experiences, and ideas to make zirself awesome. My friend works in the biological sciences, but plans to apply to medical school. Traits include being multilingual (English, Mandarin, French), very limited Spanish, cooking, and technology use somewhere below power-user level. Not widely read, not x-rational, difficulties with akrasia, drive, self-confidence, public speaking, making friends. No significant knowledge of coding, math beyond calculus II, philosophy, sociology, po...
Is there a reason why a second account I made recently is unable to post comments? The top-level comment box and the reply buttons on comments are missing. I hope this isn't affecting all new users.
I started reading the Culture novels by Iain M. Banks and can't get over the biggest plot hole: where's the flood of people wanting to immigrate into the Culture, and what happens to them?
This Washington Post piece discusses motivated reasoning, and how given a grouping of the exact same reforms, you can strongly influence whether or not people think it is a good policy by changing the affiliation of the group that endorses it.
Ergo: 5 reforms, labeled blue solutions to green problems: blues like them, greens don't. The same 5 reforms, labeled green solutions to blue problems: greens like them and blues don't.
What if lesswrong.org hosted individual users' blogs? They would live on the user profile page, as a separate tab (perhaps the default one). That (and an RSS feed) would be the primary way to get to them. Public homepage would not aggregate from them, Main and Discussion remaining as they are.
Has this been discussed before? Pros/cons? Would you use this mechanism if it were available?
(technically, under the hood, they'd be easy to implement as just separate individual subreddits, I guess)
A query about threads:
I posted a query in discussion because I didn't know this thread exists. I got my answer and was told that I should have used the Open Thread, so I deleted the main post, which the FAQ seems to be saying will remove it from the list of viewable posts. Is this sufficient?
I also didn't see my post appear under discussion/new before I deleted it. Where did it appear so that other people could look at it?
A while back there was a post linking to videos and a paper about an AI which can play arbitrary NES games. Since then two more videos about the AI have been uploaded by the author:
Also, in the second video the author briefly addresses concerns about the AI turning into Skynet.
I am considering a possibly risky financial move, and not sure that it's a good idea.
I am planning on going back to school full time next year (getting a BS in Computer Science, so I expect to have a big pay raise), and I am considering purchasing a 4br house and renting out two of the bedrooms to cover the mortgage payment + maintenance costs. I can do this at well under-market rent pricing (ideally offered to friends or romantic partners), so I don't feel like it'd be taking advantage of people (rent in my city is vastly overpriced and home prices are very che...
You may be underestimating the amount of work involved in being a landlord. If you find financially stable, reliable, non-destructive people, it's a nice income stream. Even so, they will expect you to fix problems sooner than you might get around to doing it for yourself. You also probably shouldn't count on having your rooms rented every month; you might lose a tenant and not have one ready to replace them immediately.
All this is general knowledge. There's probably information somewhere about the expected costs to being a landlord.
It still might make sense to buy the house.
Has there been any update on the Less Wrong survey/census? The original post mentioned something about a "MONETARY REWARD" but it didn't say when to check back for results/etc.
Is anyone aware of research into long-term comas as a potential alternative to cryonics? There are small numbers of examples of people in unresponsive comas for over a decade who then awake and are at least basically functional. It seems like it might be possible, with cooling (lowering the body temperature to reduce metabolism and perhaps disease progression) and heart-lung machines, to keep one's body alive for an indefinite period if normal life were otherwise about to end.
tl;dr, how long can people just stay on life support?
It seems far more li...
The waterbear, a multicellular organism with neurons, can be frozen and revived.
Any thoughts about genetic engineering to make cryonics easier?
Again frustrated with being unable to type properly while standing, and remembering how my (no longer usable) braille devices made doing so trivial, I wrote a comment praising the utility of braille input. Then I realized this was dumb, and did an experiment to put my braille typing speed against my qwerty typing speed, using a braille keyboard simulator.
I found that my qwerty speed was over 100 WPM; there were no typos in the test, but I've been known to double-capitalize, drop 'e's, and misplace 'h's quite frequently in the wild.
My braille typing speed w...
This is a request for information. We all know about the force of a first impression on other people, but here's something I'm extremely confused about: how easy is it to spoil somebody's impression of you when you have already known them for a bit? I'm asking this from a male perspective, but with respect to both inter- and intra-gender interactions. I'd appreciate both scientific studies (I'm not aware of any) and personal experience, because I really have no clue. My past interactions with people have been extremely high-variance in this respect, I don'...
I've been lurking here for a while, but I'd like to get more actively involved.
By the way, are there any other Yale students here? If so, I'd be interested in founding a rationalist group / LW meetup on campus.
The standard advice for starting a physical group is to just pick a timeframe and a nice location, then show up with a good book and stay for the duration. Either other people show up and you've got your meetup, or else you spend a couple hours with a good book.
PM me if you want to talk about founding a group. I ran the Boston community for a while, and it was one of the most rewarding things I've ever done.
English is a second language for me, but I have probably written more words in it than in my native one.
In the last few months I frequently found myself forgetting "'s" after "there" or "it". It not an issue that I remember being there a year ago. Has anyone observed similar things, or know of research that might describe processes like this?
The only explanation I can think of is having reread Korzybski's arguments against the "is of identity". It would be interesting if my unconscious is so opposed to "is" that it censors me from using it whenever I don't pay attention.
In the last few months I frequently found myself forgetting "'s" after "there" or "it". It not an issue that I remember being there a year ago.
I like how you do what you describe with the very next word after the description of the problem.
Looking for advice on cheap / enjoyable caffeine sources.
I currently have a 2-3 energy drink per day caffeine habit, which is a bad thing due to the expense if nothing else. A couple months ago I tried to switch to making my own coffee, but it turns out that's harder than it seems and the drip coffee maker in the App Academy office makes pretty weak coffee that I don't trust to stave off withdrawal symptoms (which I really don't want to have affecting my productivity right now).
So now taking recommendations for caffeine pills / coffee machines / tea brands / whatever else.
For my high-school Chemistry course I need to interview an individual involved with the sciences in some professional capacity. Anyone interested?
You're trying to solve a puzzle. Maybe it's a jigsaw puzzle, maybe it's a Sudoku puzzle, maybe it's an interesting math problem. In any case, it's one of those puzzles where you know a solution when you see it, and once it's almost solved, everything falls into place.
At the moment, you're kind of stumped. You've been unable to figure out any more facts using deductive reasoning, so now it's time to resort to trial and error. You have three independent hypotheses about the puzzle. Hypothesis A seems to have an 80% chance of being right, hypothesis B a 50% c...
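Since the puzzle comment above is cut off, the following is only a hedged sketch of one way such trial-and-error questions are often framed: rank the hypotheses by how much information testing each one would yield. The 80% and 50% figures come from the comment; the value for hypothesis C and the information-gain framing itself are assumptions added purely for illustration.

```python
# Sketch only: rank hypotheses by the expected information (entropy) of
# testing them. The 0.30 for hypothesis C is a made-up placeholder.
import math

def entropy(p):
    """Shannon entropy (bits) of a yes/no hypothesis with P(true) = p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

hypotheses = {"A": 0.80, "B": 0.50, "C": 0.30}

for name, p in sorted(hypotheses.items(), key=lambda kv: -entropy(kv[1])):
    print(f"{name}: P = {p:.2f}, expected information = {entropy(p):.3f} bits")

# By this criterion the hypothesis closest to 50% (B) is the most
# informative one to test first, though test costs would change the ordering.
```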
Assuming the average person's utility function is concave with respect to money, and given the current income distribution, the simplest and highest-utility change is to take a fixed amount from high-income people and give it to low-income people. This follows from simple economics, as the people on the lower end of the distribution know best what they need. GiveDirectly is the charity that pioneered this exact scheme, and that is why I donate to them.
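For concreteness, here is a minimal sketch of the concavity claim, assuming logarithmic utility and made-up income figures (both are illustrative choices, not anything stated in the comment): under any strictly concave utility function, moving a fixed sum from a richer person to a poorer one raises the sum of utilities.

```python
# Minimal sketch, assuming log utility of income (one common concave choice)
# and purely illustrative numbers.
import math

def utility(income):
    """Log utility: each extra dollar matters less the richer you are."""
    return math.log(income)

rich, poor, transfer = 100_000, 1_000, 500

before = utility(rich) + utility(poor)
after = utility(rich - transfer) + utility(poor + transfer)

print(f"total utility before transfer: {before:.4f}")
print(f"total utility after transfer:  {after:.4f}")
print(f"gain from the transfer:        {after - before:.4f}")  # positive
```

The gain is positive because the poorer person's marginal utility of a dollar is higher; as the replies below note, this first-order picture ignores incentive effects and other second-order consequences.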
On the other end of the spectrum, the high income countries, the best people could do is eat healthier an...
Assuming the average person's utility function is concave with respect to money, and given the current income distribution, the simplest and highest-utility change is to take a fixed amount from high-income people and give it to low-income people.
When you consider second order consequences, such as the creation and elimination of certain incentives, the effect of currency transfers on utility is not quite so straightforward. Even without those consequences, it is far from obvious that the statement
This follows from simple economics, as the people on the lower end of the distribution know best what they need.
holds.
There is also the issue of side effects. Forcibly equalizing income (or wealth) has been tried. Many times.
I don't think he advocates equalizing. It's more an argument for unconditional basic income policies. Even Milton Friedman made proposals that went in that direction.
I am trying to formalize what I think should be solvable by some game theory, but I don't know enough about decision theory to come up with a solution.
Let's say there are twins who live together. For some reason they can only eat when they both are hungry. This would work as long as they are both actually hungry at the same time, but let's say that one twin wants to gain weight since that twin wants to be a body builder, or one twin wants to lose weight since that twin wants to look better in a tuxedo.
At this point it seems like they have conflicting goal...
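The comment is cut off before the actual question, so the following is only a hedged sketch of one possible formalization, with made-up payoffs: each twin declares "hungry" or "not hungry", a meal happens only if both declare hungry, and the meal is good for the would-be bodybuilder but bad for the twin trying to lose weight.

```python
# Toy 2x2 game for the twins, with illustrative payoffs only.
actions = ["hungry", "not hungry"]

def payoffs(a1, a2):
    """(twin1 payoff, twin2 payoff): twin1 wants the meal, twin2 doesn't."""
    if a1 == "hungry" and a2 == "hungry":
        return (1, -1)  # they eat
    return (0, 0)       # no meal

# Brute-force search for pure-strategy Nash equilibria.
for a1 in actions:
    for a2 in actions:
        p1, p2 = payoffs(a1, a2)
        best1 = all(p1 >= payoffs(alt, a2)[0] for alt in actions)
        best2 = all(p2 >= payoffs(a1, alt)[1] for alt in actions)
        if best1 and best2:
            print(f"equilibrium: twin1={a1!r}, twin2={a2!r}, payoffs=({p1}, {p2})")
```

In this toy version every equilibrium has the dieting twin declaring "not hungry", so no meal ever happens; that veto structure is one way the conflict of goals shows up, and it suggests the interesting question is about side payments or alternating between the twins' goals over repeated meals rather than a one-shot game.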
Is anybody interested in enactivism? Does anybody think that there is a cognitivist bias in LessWrong?
Anyone else see a problem with this particular statement taken from the Cryonics Institute FAQ?
One thing we can guarantee is that if you don't sign up for cryonics you will have no chance at all of coming back.
I mean, marketing something as a one-shot that might hopefully delay (or prevent) death is hard to swallow, but I can cope with that; this statement, however, reads like cryonics is the one and only possible way to do that.
If it's worth saying, but not worth its own thread even in Discussion, it goes here.