If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Open thread, 11-17 March 2014

With humans like this, we don't need the Unfriendly AI:

Could we condemn criminals to suffer for hundreds of years? Biotechnology could let us extend convicts' lives 'indefinitely'

Last year, a team of scientists led by Rebecca Roache began exploring technologies that could keep prisoners in an artificial hell. Turning to human engineering as a possible solution, Dr Roache looks at the idea of life span enhancements so that a life sentence in prison could last hundreds of years. Another scenario being explored by the group is uploading the criminal's mind to a digital realm to speed up the 1,000 year sentence.

She teaches ethics, bioethics, and rationality. Perhaps she could write for us a new Sequence on Hell Theory.

Did you know that Rebecca Roache used to work at the FHI and even co-authored a paper with Nick Bostrom? Makes you wonder what the rest of humanity is like, doesn't it?

ETA: Although I don't think "artificial hell" is a fair description of her actual proposal, which just involves extended sentences without making them literally hell-like.

2XiXiDu
What do you think are the reasons that humanity has largely started to treat their prisoners and foes in ways that do not involve horrors such as torture? Would it happen again if civilization collapsed, or is there a chance that even an educated civilization like ours could return to treating people the way they were treated in the dark ages?
7Lumifer
Assumption not in evidence. The West started to treat prisoners and foes in a kinda-sorta decent manner as long as it didn't matter. When people thought it was important, torture (e.g. waterboarding) and assassination (e.g. by drone) appeared in the blink of an eye.
8fubarobfusco
Assumption not in evidence. In some of the most egregious cases of torture under the recent administration — Abu Ghraib under Charles Graner and Lynndie England — there was no evidence that "people thought it's important". Therefore, that was not a requirement for the withdrawal of the restriction against torture. Therefore, the actual practice of torture (as opposed to the legal theory presented by, e.g. John Yoo) under the recent U.S. administration, appears to be better explained in terms of dehumanization of the victims as discussed by Rorty — not the "ticking time bomb" scenario of the legal theory. In gist, the culture or doctrine of the torturers declared that the victims were outside the moral consideration accorded to human beings; or even that their well-being was morally negative — that there was an obligation to cause them suffering. The torturers tortured not because they had weighed the consequences and judged that there was a positive expected outcome, but because they did not assign moral significance (or, indeed, assigned negative significance) to some of the humans involved in the outcome. They weren't running the Trolley Problem. They were running a variant where you get to push a horrible mockery of humankind in front of the trolley — and who cares if it saves real humans? (Why some cases of torture were later prosecuted and others have not yet been is a different question.)
0Lumifer
Sure, but now ask yourself "why?" Why did dehumanization of the victims suddenly become acceptable? I'm not talking about a reasoned weighing of pros and cons about the necessity of torture -- that did not happen. What happened was that it was decided (and I am deliberately using the passive voice here) that it's OK to declare some people non-humans and accept that laws, not to mention things like decency, do not apply any more.
7fubarobfusco
Why did dehumanization of Bosniaks "suddenly" become acceptable to Bosnian Serbs after the breakup of Yugoslavia? Dehumanization seems to run on tribal, emotional levels — on sentiment, as Rorty puts it — and not on consequentialism.
-3Eugine_Nier
Compare that with any group besides "The West". They would do much worse things and not even bother angsting about it.
0Chrysophylax
Counterexample: most Buddhists. Your enemies (and, you know, the rest of humanity) are not innately evil: there are very few people who will willingly torture people. There are quite a lot of people who will torture horrible mockeries of humanity / the Enemy, and an awful lot of people who will torture people because someone in authority told them to, but very few people who feel comfortable with torturing things they consider people. The Chinese government does some pretty vile things; I nevertheless doubt that every Party bureaucrat would be happy to be involved in them.
-1Eugine_Nier
Look at what warfare was like in China or Japan before major Western influences (not that it was much better after Western influences).
0Chrysophylax
Vastly inferior to, say, warfare as practiced by 14th-century England, I'm sure. I also point you towards the Rape of Nanking. You are comparing modern westerners with historical Buddhists. Try considering contemporary Buddhists (the group it is blindingly obvious I was referring to, given that the discussion was about the present and whether contemporary non-western groups all lack moral qualms about torture). I observe that you are being defensive.
0Eugine_Nier
I meant the modern West. However, the Rape of Nanking was committed by people who were (at least theoretically) Buddhists. We were? In the comment of mine that started this discussion I wasn't just referring to contemporary groups. However, let's restrict to contemporary states. I take it you count the historically Buddhist countries that are currently under communist regimes as "not really Buddhist" since communism is officially atheist. That leaves Japan, South Korea, Taiwan, Singapore, Cambodia, Thailand, Burma, and Sri Lanka. Japan was rather nasty until being beaten by the West, and there are signs it'll become nasty again given the chance; to the extent it doesn't, that's clearly due to Western influences. South Korea and Taiwan are too weak to do much externally, but are admittedly nice places to live, also highly westernized. Ditto for Singapore, although it has very strict laws (I approve of them but suspect you might not). Cambodia is somewhat of a mess even if you discount the Khmer Rouge as not Buddhist. Thailand is OK although not powerful enough to do much externally, also rather westernized. Burma is in the running for most oppressive government on the planet. Sri Lanka is dealing with its Tamil minority in a somewhat nasty manner.
-4ChristianKl
Quite a few adults in the West still advocate corporal punishment to educate their own children.
2Chrysophylax
If we define all deliberate infliction of pain as torture then we lose the use of a useful concept. You are not cutting reality at the joint.
0ChristianKl
I'm not. I'm defining using physical pain as a means of punishment as torture. That's even fairly conservative. Plenty of people also consider activities such as female circumcision for religious purposes torture.
3Eugine_Nier
That's at the very least a non-central fallacy. Ok, that's just expanding the definition of torture to "anything I disapprove of".
0ChristianKl
What's your preferred definition of torture?
3ChristianKl
Our current relationship to a concept like torture comes out of dualism. It's not cruel to put someone in prison for decades for a nonviolent drug offense, but it is cruel to inflict physical pain, because that violates the sanctity of the body. It's also bad to practice euthanasia for terminally ill patients that are in a lot of pain and suffer a lot. Depending on what memes are around when civilization gets rebuilt, it's not clear that we will get the same position or that the same social consensus will arise. It's also possible that over time the use of capital punishment as practiced by a country like Singapore will spread even without a straight collapse.
2DanielLC
I've heard torture has inconsistent results. It will work fine for some people, but won't have much of an effect on others. It might be simpler than that: if torture doesn't work, you're back to where you were before; if prison doesn't work, at least they're off the streets for a while. I don't see why imprisonment is considered less horrifying than torture. It doesn't suck nearly as bad per unit of time, but it lasts longer. It's just a less dense version of torture.
0asr
The history of this is relatively tangled. In medieval European warfare, knights were routinely taken prisoner, and were treated well. Often they could go home under parole -- basically a promise not to take up arms again until formally exchanged. (Barbara Tuchman has a long discussion of this in A Distant Mirror). Upper-class combatants were basically a transnational military caste who extended professional courtesy to each other. This only applied to nobility, of course. Peasant combatants could be slaughtered out of hand. Likewise today, different prisoners are treated differently. We have some pretty dreadful prisons in the United States. In summary, there's an incredible diversity of how prisoners are treated and there always has been. There may have been a general rise in standards, but it hasn't been systematic and I don't think it's been monotonic.

I initially doubted that the cited individual was actually advocating this atrocity, but from this post on her blog, it sounds like she's at least seriously considering doing so:

As I say at the end of the blog, it is debatable what constitutes humane treatment in relation to such technologies: perhaps it will turn out that, on reflection, some of the techniques I have suggested are inhumane, in which case I do not advocate their implementation. (But I do advocate the debate about them.)

Shades of Banks' Surface Detail ...

2AndekN
This seems to be just another case of journalists exaggerating and misrepresenting a scientist's point in order to create attention-grabbing headlines, at least according to Anders Sandberg's blog post about the issue.

Here's a marvellous memetic hazard that you should avoid clicking on. The 2048 game. Just in case you thought you were getting any work done today. HN discussion. (And HN on its memetically hazardous nature. Creator: "I've been playing this all day today. I basically created my own demise.")

(Someone has, of course, written an AI to try to beat it. HN discussion, though try to figure it out yourself first.)

[-]Metus270

Take the warning seriously.

3TylerJay
I beat it the other day after seeing it on HN. It was really fun and it was incredibly easy to get into flow. Fortunately, it was time I could afford to burn, but I did spend a solid 2 hours on it.
0Stabilizer
This is not a joke.
6NancyLebovitz
Definitely fascinating. It's a shame it doesn't keep track of number of moves-- the fewest number of moves needed to clog up the board might also be interesting.
8Richard_Kennaway
The total of all the numbers on the board goes up by 2 on every move and starts at 4. Therefore the number of moves you've made is (current total - 4)/2. To reach 2048 requires at least 1022 moves. As a rough rule of thumb, look at the biggest number on the board: that's a ballpark estimate of how many moves you've played. I realised this shortly after getting into the flow of my first game, and it entirely extinguished my desire to play a second. My memetic immune system is strong.
2David_Gerard
Actually, I think 90% of the time it adds a 2, 10% it adds a 4.
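A minimal sketch of this estimate in Python, assuming the spawn rule described above (a 2 with 90% probability, a 4 with 10%) and the standard start of two spawned tiles:

```python
def estimated_moves(board_total, p_four=0.1):
    """Rough estimate of moves played, given the current sum of all tiles.

    Each move spawns one new tile worth 2 (probability 1 - p_four) or 4
    (probability p_four), so the expected total added per move is
    2 * (1 - p_four) + 4 * p_four. The game starts with two spawned tiles.
    """
    per_move = 2 * (1 - p_four) + 4 * p_four  # 2.2 under the 90/10 rule
    start = 2 * per_move                      # expected starting total
    return max(0, round((board_total - start) / per_move))

# With only 2s spawning (p_four=0) the estimate matches the figure above:
# (2048 - 4) / 2 = 1022 moves to reach a board total of 2048.
print(estimated_moves(2048, p_four=0.0))  # 1022
print(estimated_moves(2048))              # ~929 with the 90/10 mix
```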
2Gunnar_Zarncke
Set a timer/alarm. I set mine to 5 min. That time was enough to get a feel for how to win (I played up to 512). Initial strategy: 1. Xrrc n ynetre ahzore zber va gur pbearef guna fznyyre ahzoref. 2. Xrrc n ynetre ahzore zber ba gur fvqr guna fznyyre ahzoref. 3. Oevat gur fnzr ahzoref arkg gb rnpu bgure. Xrrcvat zrnaf qba'g zbir njnl sbe n fnsr cynpr hagvy arrqrq.
0[anonymous]
I've won the game several times (created a 2048 tile). As far as I can tell, this is the right strategy, which doesn't win every time but gives you a decent chance:

1) The biggest tile stays in the top right corner.
2) Other big tiles stay in the top row, waiting to be merged with the biggest. They should be roughly sorted from left to right, but that will probably happen by itself anyway.
3) Never press down. Learn to predict and avoid the situation where the top three rows are filled and you have to press down. If you get into that situation, you've lost.
4) If you have no choice except moving the biggest tile out of its corner by pressing left, then put it back in the corner on the very next move. If you can't, you've lost.
5) When there are empty spaces in the top row, get them filled as soon as possible, before doing more horizontal merges. Only when the top row is completely filled and all four tiles are different, you're completely free to play in the bottom three rows.
6) Avoid having bigger tiles below smaller ones. If you have a small tile stuck in the top row with a bigger tile below it, you're in danger of losing.
[-]eeuuah150

I tried the whole Up-Goer Five thing. Here is my attempt at explaining the idea behind spaced repetition. Do you think it comes through clearly?

1witzvo
I think it was clear and good.
[-]Emily140

Is http://lesswrong.com/r/discussion/new/ doing something strange for anyone else? As of today, every time I go there, the page changes automatically to http://lesswrong.com/r/discussion/top/ after a few seconds.

4John_Maxwell
Created an issue and emailed Matt Fallshaw.
1jackk
This was an A/B test gone awry. It has since been turned off.
0Emily
This is now a problem on my end, not the site end, I assume, but it still hasn't gone away for me on Firefox. Clearing the cache has stopped it on Chromium but didn't work on Firefox. :(
0jackk
Can you additionally try clearing cookies and see if that helps?
0Emily
Since that last comment it stopped by itself... I guess something was still cached somewhere. Thanks for the suggestions!
0Emily
Cool, thanks for the update / fix.
0Stabilizer
Yes! I was just going to post the same thing!
0Richard_Kennaway
I saw that as well, but it seems to have stopped now.
0Richard_Kennaway
And now it's started again.
0Stabilizer
It's still happening for me.
0Richard_Kennaway
Clear cache? Try different browser?
0Emily
Still doing it for me too - on both Firefox and Chromium. OK on Chrome on my phone though. shrug

I have a medical issue I'm hoping I can self-medicate with and not have to go to a doctor and get prescribed pills to treat.

When I was in the military, I had my gall bladder removed. It turned out that I actually didn't need my gall bladder removed because the pain that caused me to go to the doctor in the first place persisted after its removal. Since it was during the military there's nothing I can do about this misdiagnosis and now not having a gall bladder has led to another inconvenience: If I'm hungry for too long I get really bad stomach cramps and/or "the runs". I've tried to prevent this by self-medicating with ginger ale or Tums but those don't seem to work; the only thing that does seem to work consistently is drinking liquor before I eat if I've been hungry for an extended period of time.

I don't want to have to drink in order to not get these stomach aches, and I have no clue why drinking liquor would work. Is there something else I could use that would have the same calming effect on my stomach?

6khafra
Even if the VA didn't want to give you partial disability, have you talked with veterans' advocates? They might be able to help you get compensation.
3TylerJay
If ginger ale doesn't work, maybe try a teaspoon of dried ginger root powder dissolved in hot water. Tastes pretty good, but it's more effective for nausea than cramps. For the runs, try Imodium over Tums. Tums will only help with acid reflux, and they don't even do that very well. Imodium will harden your stool. I'd bet that the liquor helps for its muscle relaxant properties. It doesn't meet the no-doctor requirement, but a good friend of mine had the same problem of horrible stomach cramps if he doesn't eat for too long, and it was solved really well with a drug called "Donnatal". It's very low-dose phenobarbital with belladonna alkaloids. It's like a miracle cure for him. He takes it when he feels like it might be starting and it goes away. No side effects for him except slight dry mouth. Also makes it stop after it's started. Might want to try it out.
0JQuinton
When does he take the Donnatal? Is it with meals or any time he is getting hungry?
0TylerJay
He's had it for so long, he's typically able to tell when it's starting, but before it gets very bad. He'll just take it at that point and it stops it from happening. I suppose it would be possible to take it if you were just hungry, but it seems to me like a better option in that case would be just to carry a snack bar or a protein bar around and just use the Donnatal if you feel like your cramps are coming on. A doctor will of course be able to give you real medical advice. This is just anecdotal, but the pattern of not eating causing stomach/intestinal cramps sounds very similar.
2NancyLebovitz
Any thoughts about arranging things so that you can eat often enough?
2JQuinton
The problem consistently happens at breakfast, since sleeping 8 hours is 8 hours of me not eating. The only way this would work is if I wake up in the middle of the night and eat a full meal.
0NancyLebovitz
It might be worth doing some research at Chris Kresser's blog-- he's got respect for science and for individual variation, and a huge commenter base-- that last increases the odds of someone with the same or a similar problem.
0SarahSrinivasan
My wife has this exact symptom set with her removed gallbladder. She has not found any reliable treatments/habits. Not helpful but FYI.
-12ChristianKl

I've started a blog, and I'm kind of unreasonably shy about it. Especially given that it's, you know, a blog.

http://www.somnicule.com/

0Richard_Kennaway
Is there an RSS feed?
2Slackson
Yup. http://www.somnicule.com/feed/
[-]Jaime100

Question: what are the good ways to help a person in a stressful situation (work/relationships/life in general)? What help would a rationalist prefer, and how does that differ from someone who may be less rational in times of emotional turmoil? Thanks!

Letting someone know you like them and that they're cared for is a surprisingly powerful gesture. It's also something people are inclined to lose sight of when they're going through a tough time.

It probably needs tweaking to the specific social circumstances, but simply saying something to the effect of "you are awesome and people care about you; don't forget that" goes a lot further than you might expect.

5cousin_it
There might be a "five love languages" thing going on. Words of appreciation don't do anything for me and can even make me more sad, but any kind of surprise gifts make me happy for a long time, even if they're really crappy. Maybe it's a good idea to ask the person "what kind of caring do you appreciate the most?" (words, gifts, time together, helpful actions, physical contact, what else?) and then try to give them that.
[-]maia110

Have you tried asking them if there's any way you can help, and/or expressing generic sympathy?

"Hey, you seem to be going through a lot lately, are you holding up okay? Anything I can do?"

9[anonymous]
Active listening is by far the best skill you can learn to help someone through a stressful or highly emotional period. A "rationalist" might be more receptive to solutions than others, but will probably still appreciate the emotional catharsis of a good listener.
7Kaj_Sotala
One important thing to remember when being a listener: it's very easy to make the mistake that you're supposed to solve the other person's problems. It might be that the other person isn't actually looking for advice, but rather just sympathy and a reassurance that there's someone who will listen to them. Try to figure out which one they are after, and remember that they may shift from one mode to another and not be fully aware of which one they want, themselves. (I wouldn't recommend explicitly asking them unless they are very, very Tell, for the above reasons as well as for some other reasons of which I have an intuitive hunch but am having difficulty formulating explicitly.) Some caution is warranted even if they are looking for advice - typical mind fallacy means that it's easy to come up with a theory of what's wrong with them and how they should fix it that's vastly overconfident. If you suggest something that you think might help them and they disagree, then even if you were right, getting into an argument over it isn't the way you want to proceed. Instead of trying to come up with a perfect solution, it can be better to just make lots of sympathetic comments and ask clarifying questions that are aimed both at giving you a better understanding of the problem, as well as helping them to think more clearly about it and resolve things where they've gotten stuck. Do offer suggestions, but offer most of them in the spirit of "here's an idea for you to evaluate and think about, that might or might not be useful". I often figure that even if I can't say anything that would help directly, just helping them dissect the problem better and offering them new ways to think about their issue can be just as useful. Recently I had a phone call with my friend where we talked about my problems in general, and afterwards I knew exactly what it was that was the main cause of many of my problems, even though we never said anything about that cause during the conversation. But

One important thing to remember when being a listener: it's very easy to make the mistake that you're supposed to solve the other person's problems. It might be that the other person isn't actually looking for advice, but rather just sympathy and a reassurance that there's someone who will listen to them.

I would add that even if the other person is looking for advice, leading them to a place where they themselves come up with an idea about how to change can often be more effective than giving them a solution from the outside.

If you are talking to a depressed person who would probably benefit from going to the gym, telling him to go to the gym might not be effective because he can't see himself following through.

Asking him about his relationship to sport and to his own body might bring him further.

1[anonymous]
Yes, the main idea of active listening is to echo back the content, emotions, intent, or identity of the person that they're revealing through the conversation. It's definitely not to try and solve their problems for them... although often times it can help them solve their own problems by giving their thoughts and emotions clarity. A key distinction I would make here in terms of language is that you're not trying to be sympathetic when you are active listening. Sympathy is showing that yes, you do indeed feel sorry for them/their circumstances. Rather, I'd say that truly great active listening is about empathy. That is, showing that you can understand and feel their emotions as they do.
0D_Malik
A rationalist would realize that emotions don't necessarily have any deeper meaning, and are often best fixed through mundane non-emotional interventions. For instance, if you're constantly in stressful situations, you might want to try an adaptogen such as rhodiola rosea. You can fix mild depression by taking cold showers. Cut back on the caffeine, get better sleep, get more exercise. And so on. Personally, I suspect, based on armchair evopsych speculation, that softer interventions (e.g. showing people that you care and are sympathetic) are counterproductive, if your goal is for the other person to stop being in "emotional turmoil".
0ChristianKl
One of the things that separates good therapists from ones that don't achieve results is their level of empathy.

Is there an explanation of why games like 2048 are so addictive? I want to say superstimulus, but it's not obviously substituting for something in the ancestral environment, and it's not quite a classical Skinner box setup either.

3Nornagest
I was wondering about that myself after I lost a couple of hours to the damn thing. I think it's mostly that it hits exactly the right balance of complication/reward to get people into a flow state and keep them there, although its exceptional interface qualities probably help: simple, intuitive moves, immediate feedback, no menus or other abstraction layers to get between you and the game.

Request for some career advice:

I am planning on pursuing computer science as a double major (along with art). I'm doing this mainly for practical reasons - right now I feel like I don't really care about money and would rather enjoy my life than be upper-class, but I want to have an option available in case these preferences change. I enjoyed CS classes in high school, but since coming to college, I have found CS classes, while not profoundly unpleasant, to basically be a chore. In addition to this, my university is making it needlessly difficult for me t... (read more)

Your list of preferred careers reminds me of something, maybe relevant for you.

I used to teach in a high school for gifted children, where there were children with high intelligence but different skills. (As opposed to e.g. math-specialized high schools, where even without the IQ test you also get children with high intelligence, but their skills are very similar.) In this school a new computer game programming competition was started, with rules different than usual. In a typical programming competition, the emphasis is completely on the algorithm. It is a competition of students good at writing algorithms. But this competition, called Špongia, was different in two aspects: (1) it was a competition of teams, not individuals, and (2) the games were rated not only by their algorithm, but also by playability, aesthetics, etc. Which in my opinion better corresponds to a possible success in the market.

I mention this, because there was an opportunity for people with various skills to participate in creating the computer game; and they did. Some of them even didn't know programming, but they composed the game music, painted pictures, wrote texts, or invented the ideas. Sometimes the most... (read more)

-1gothgirl420666
This is sort of what I am doing right now, I'm working with two people who are focusing on the programming side of a game while I'm essentially designing it, only it's an unrealistically big project and the other two people don't seem to grasp the idea that if you want to make something it won't magically make itself and you actually have to push yourself to work on it. I realize that I could make a dumb two-week iPhone game if I wanted, only this doesn't really appeal to me at all, to the point where I don't think I could find the motivation to do it. I think what I will do is I will probably wait for the current big project to eventually fall apart, work on a medium sized one until it falls apart, at which point my brain will realize that I actually need to start small.
1Lumifer
That sounds exactly like akrasia and this forum is chock-full of techniques and tools to deal with it.
[-]maia130

Consider reading some of Cal Newport's writing on careers. Here's a possible starting point.

A lot of what he writes boils down to: "Do what you love to do" is a bit of a fallacy. Getting really good at something pretty much always involves putting in a ton of work, not all of which will be pleasant. But if you do that and get extremely good at what you do, then you'll get lots of jobs you'll enjoy, because 1) being good at what you do is fun and 2) if you provide lots of value to other people, they will provide it back.

IOW, just going after what is the most "fun" when you start doing it probably isn't the best idea. I wouldn't take the fact that your CS courses are a bit drudge-y as a slamdunk indicator that you shouldn't do CS by a long shot.

Also, you may have heard this before, but the video game industry for programmers is kind of a shitshow, because lots of people want to do it, enough so that they're willing to be paid less and endure crappy conditions. Being an indie developer might be a better bet, if you can make it work; I have no idea what the odds of success there are.

4gothgirl420666
I did not know that, thanks. Anyway, I would rather be involved on the artistic side, but I don't really know anything about that career path either, so.... ¯\_(ツ)_/¯

It's no better in the art department. In fact it's worse because there are fewer career paths out of the industry.

It works out for some people, but you have to be willing to accept relatively low pay and work a TON at the expense of pretty much every other part of your life - exercise, social time, proper sleep, hobbies, meals away from your desk...

I was a programmer in the game industry for 3.5 years and quit just over a year ago. It was exciting, but it wasn't worth it. I'm much happier now. Let me know if you have questions about my experience.

5maia
The iconic "working in video games is awful" story: EA Spouse

If you want to make games, start doing it now. It's entirely possible for a single person to make great indie games. Working on that would also build skills that are useful for all 4 of the preferred careers you named.

It's okay if you find CS classes boring; the real test is whether you find working on real projects (such as your own indie games) boring.

Having lots of portfolio pieces will also help with finding a job.

2gothgirl420666
Yeah, I am already working on my own games. I worked on one for two hours earlier today. My eternal problem is that I can only think big. When I was a little kid I would constantly envision these 1000-page epics I was going to write, type about seven pages or so, and then get bored and start a new project the next week. I constantly try to come up with ideas for small, fun little games that I could realistically make by myself in a few months but I can never come up with anything that appeals to me even a little. My current project seems like it will take a few years to complete and it will in all probability never see the light of day since I have never actually completed a game before. This is the most irrational habit I have and I hate it but I don't know how to stop. EDIT: I typed this out in the hopes that somehow the act of writing it down and LW users commenting on it would kick my brain into realizing how irrational it was being, and it worked exactly as planned. I will start working on a small project starting tomorrow. Thanks guys.

Hi, I worked in the game industry for a while. I worked on AAA titles, indie stuff and semi-indie. I'm not a designer though.

I would say that the best way to become who you want to be is to make many of your own excellent SMALL indie stuff and work your way up from there. Fortunately you're in the right double major! Build your own games, from scratch, over and over again until you produce something really good. Make little 24 or 48-hour games for hackathons, ludum dare, global game jam, etc. I can't give you better advice than to simply scale down your ambitions a lot. If you've never finished anything then that's your major problem and you desperately need to leverage some success spirals before you can dive into a bigger idea.

If you have a giant idea that you want to implement but it's too big, bite off a tiny chunk. Maybe it's a gameplay mechanic, maybe an art style. If you demonstrate a kernel of something that seems good, then you will be encouraged to spend more time improving it. I think there are good subreddits for indie games where you can get feedback online.

Another way that an artist friend got into the industry was by taking a QA job at an AAA studio. Then he spent a ton... (read more)

2gothgirl420666
I know, I know, I know. I know all of this rationally. I just can't make my brain realize this. All the small ideas I come up with fail to motivate me even a little bit. My current plan is to wait for my current unrealistically big project to inevitably fall apart and then hopefully my brain will finally get the message.
5Kaj_Sotala
Have you considered the possibility that you don't actually want to make games, but do want to think of yourself as a game-maker? Asking this because I have a bit of a "I want to be a fiction writer but don't actually want to write" issue, and what you're describing sounds familiar.
0gothgirl420666
No, I don't think so. Game makers don't really have enough status in society for this to be a problem, I think. Or at least, they don't have the romantic imagery of writers, painters, poets, musicians, etc.
3Kaj_Sotala
Like Viliam Bur pointed out, the general status in society isn't that important: "wanting to be a video game maker" could plausibly follow from just having liked video games enough at some specific age, for example. If you strongly feel like this isn't your issue, I won't argue... but I would point out that if someone did want to have an identity as a game-maker but didn't have an interest in actually making games, then the pattern of "avoids doing small realistic projects, keeps starting big projects and then quickly gives up on them before doing much concrete work" seems almost perfectly optimized for the goal of maintaining the identity with the least amount of effort.
2Viliam_Bur
That's irrelevant. The important thing is whether you, for completely personal reasons, want to have "game maker" a part of your identity. If you could just snap your fingers, and the game would magically appear already completed, according to your specification, with your name on it... how would you feel? If Omega would predict that you will never make a game, or participate in creating one... how would you feel? There may be a big difference between these two feelings, and yet you may dislike programming the game (or even effectively managing the game programming team).
0gothgirl420666
I'm not exactly sure what you're trying to say. Good, obviously. Isn't this every creator's dream, to have their vision realized down to the exact detail without having to put in any of the work? Bad, but I would get over it and find something else to do.
7Viliam_Bur
I'm trying to explain Kaj Sotala's comment, because I think you misunderstood it.

* You want to have your game done.
* You don't like making the game.

Both these statements can be true at the same time, and this seems like a frequent problem. People can like something in far mode, and dislike it in near mode. They can make it a part of their identity, and avoid doing it. (The status in society is irrelevant.)
2gothgirl420666
Oh, okay, I see what you're saying. I don't know. For literally all my life I've felt compelled to work on creative projects in a variety of fields (most of which have never led anywhere, but only really exceptional people have completed major creative projects of their own volition by their freshman year of college, so I don't feel that upset about it). While working on most of these projects, I would say that I am just "grinding" a majority of the time and in a state of flow a sizeable minority of the time. The best part is the feeling you get when you complete something and can look upon your work with satisfaction. I think game making probably has the best flow to frustration ratio of any creative endeavor I've done, followed by visual arts, then writing, then music. The one time I more-or-less completed a game for a month-long open-ended class project I absolutely loved doing it and was in a state of flow almost the whole time. If it turns out that I "don't actually enjoy" game making then I have absolutely no idea what I "actually enjoy".
2Viliam_Bur
Seems to me that you "actually enjoyed" working on this specific game... ...but didn't "actually enjoy" working on these other projects. What specifically made those experiences different? (Maybe the difference was in your mind, how you approached these projects, not in the projects themselves.) If you find out, you could try doing more of the former type. Yeah, but the problem is how to get to this place. :D Off-topic: Do you have some kind of documentation about those projects you have completed? Like a photo and a short description, somewhere on the web. Such things could be useful later in job search.
5Lumifer
LOL. I see the source of some of your problems :-) The answer to your question is "No, it is not".
0Kaj_Sotala
Not necessarily. For many, the actual fun is in the creating: that the act of creating happens to also produce an actual work is only a nice bonus, and something that could be dispensed with. If this seems counter-intuitive, consider e.g. the more story-focused variants of tabletop role-playing games, where the participants create a story together: but the story is almost always ephemeral, and no recording of it survives afterwards. But that's fine, because the actual fun was in the creation. That said, this is certainly not a requirement for being an artist: plenty of creators also find large parts of the whole creative process tedious, and are focused on just the end product.
2NancyLebovitz
Also, creators don't necessarily have a complete dream at the beginning. As Tolkien said, "The tale grew in the telling".
2PECOS-9
I recommend you read The Motivation Hacker for techniques to get yourself to do what you know you should be doing, but can't bring yourself to do. I especially recommend Beeminder, especially this approach to using it.
0Emile
Seconded - I also spent years in the game industry, and know some people who transitioned from QA to what interested them, and others who had a neat indie portfolio to show.
6chaosmage
Can you contribute art to other people's game projects? Battle for Wesnoth proved big games can be made the open source way, so there have got to be projects like that out there, and art isn't a skill many programmers have. You'll build a portfolio, and maybe gain allies for future bigger projects of yours.
9solipsist
Have you looked into user experience design? They're in quite high demand these days. Try making an interface (just the facade, nothing functioning) of an iPhone or Android app. You'll learn a lot and have an immediately marketable skill. Are you good with people? Technical people who communicate effectively with clients are beloved by all. Many of the best and most lucrative jobs in tech are bridging the gap between programmers and other people.
5Kaj_Sotala
I think this is realistic rather than idealistic. College typically offers plenty of chances to procrastinate and fail courses, and if you're not genuinely motivated to study something, forcing yourself to do so anyway over a period of several years seems like a recipe for either failure or burnout. Note that often people have difficulties making themselves study even the topics that they do genuinely enjoy and find interesting, once those topics get challenging enough. Of course, it's always possible that the field does start feeling more interesting to you once you get more into it.

This is really frustrating because I feel like the culture is constantly spamming two contradictory memes. Lumifer even explicitly gave me both of them upthread.

  1. Don't do something you don't truly enjoy, follow your dreams
  2. Don't do something that isn't practical, whatever you do, don't end up working at McDonalds

But in my case (and probably a substantial majority of people) I honestly think that the Venn diagram between one and two might have literally zero overlap. Like, isn't the whole point of a job that it isn't fun, and that's why they have to pay you to do it? I tried to compromise by double majoring in something I am genuinely passionate about (art) and something practical (comp sci), but I feel like this is still not enough somehow...? Sometimes I think the only winning move is to get lucky and be born the type of person who has a natural burning desire to become an engineer.

7Viliam_Bur
Sometimes I feel similarly; except that I used to be passionate about programming. I suspect doing programming for a job somehow beat that out of me. (If you do something as a job, every time there is a problem at job, it is a negative reinforcement towards what you used to like. You get used to getting paid, and somehow other positive reinforcements are rare; maybe this is a cultural thing.) So I didn't start in this situation, but I ended there anyway. These days, I do what I don't enjoy. Not necessarily. It could be something that is fun for you, but for whatever reason other people can't do it, so they have to pay you. For example, if the work requires a long period of learning, so when someone wants to have it done now, it is too late for them to start learning now. Or the costs of learning are so high that it wouldn't make sense to learn it only to do it once, so someone has to specialize. Either way, the idea is to become an expert in what you do. Other people are not paying you only for doing it now, but also for being prepared to do it. By the way, you say you are passionate about art, so... how much art do you do? Are you trying new techniques? Do you have an online gallery? Is someone using your art for something useful (even if they don't pay you yet)? I am asking this because there are people who make money doing art. But it seems like they have to actively advertise themselves. I also know people who have some artistic talent, but they make money doing something else, because (this is just my guess) they don't try new things, don't learn new techniques, don't expand their comfort zones. So they must make money doing something else. For example I know a girl who has a talent for many kinds of visual things: she can draw nicely, make costume jewelry, take interesting photos, many other things. Her boyfriend is doing web pages; and those pages often have some photo at the header; he buys the photos from other people or uses some free templates. So I s
3Kaj_Sotala
I don't have a good answer for you: I struggle with this problem myself. The best I can suggest is to try out a lot of things, to see if you'd find something that was practical and which you did enjoy. Not necessarily: even if a job was fun, you would still need money to live, so your employer would still need to pay you.
0maia
Cal Newport's 'solution' to this is basically: Get good at something and then you'll enjoy it; expecting to enjoy anything that you are not yet good at is unrealistic. I think this probably isn't the entire story, because natural aptitude and enjoyment are real things that can cause you to like things more or less initially... But for me at least, this does explain a lot of my enjoyment of things. I find that there are some programming tasks I used to really hate doing, which I now dig into feeling fine, because I've gotten good at them. It probably depends on your personality and how you react to different incentives, as well.
5ChristianKl
I think if you want to have a career in art, being able to program will help a lot. Few artists can program, and good art is always about moving forward. The kind of people you want to impress as an artist generally don't know how to program. That means that it's relatively easy to impress people. If you know how to sew, creating clothing that uses an Arduino LilyPad to do something shiny is relatively straightforward. If you walk dressed like that into art auctions people will start to notice you and ask you about it, and then you can tell them about your art as an upcoming artist. That said, you don't need to major in computer science to be able to program. If you do a bunch of projects with Arduino, the coding isn't that complicated and you can put up the code on GitHub to show that you can program. As far as game programming goes, in the LW community we have people like Kaj_Sotala who works on creating a game that teaches Bayesian updating. He might benefit from someone doing the necessary artwork for the game for him.
2pianoforte611
Given that I am also pretty young, I'm not exactly qualified to give career advice. That said my sort of Gladwellian position on the matter is that you don't get to be someone like Hideo Kojima or Steve Jobs, or a more local example Yudkowsky by working your way up and taking what people give you. You need to 1) Have some level of natural ability. 2) Choose the right thing to focus on that has a good chance of success. 3) Be willing to fight for your dream. This is the hardest part, because anyone who enjoys self-made success has had to overcome numerous failures, false starts, and periods of time where it just seemed too difficult or hopeless. You have to be persistent, have endurance and never be willing to settle for "just good enough". More specifically to you, you don't seem to want to be a programmer - the guy who contributes to the behind the scenes work of a video game and perhaps creates the AI. You want to be someone whose creative vision for the game turns it into something that lots of people will enjoy. And it doesn't even have to be video games specifically. Is that about right? In that case, start now. Don't focus on making a super fun video game. Just figure out how to make anything. Maybe start with a pong variant. Get in lots of practice. Start getting publicity for your work through maybe newgrounds. Aggressively publicize your work once you feel confident in your abilities. Develop connections with other people in the industry if you can. All of this generalizes to art in general not just video games. Truth be told though, there are many people who share your dream. It's not easy, you can do everything right and still not succeed. Having a job as a programmer or graphics designer is a reasonable "safe" fallback. But if you are serious about desiring greatness it will take a lot of effort and initiative.
2kalium
I expect that if you take enough in-major classes to learn the right skillset the lack of a degree in the field won't be a huge obstacle. Though if your major is totally non-technical this might not hold. Anyway, studying CS but not getting a full second major is an option worth considering.
0gothgirl420666
Thanks for the advice, but it doesn't work that way. The problem is that, for some indiscernible reason, in order to get the art degree I need to take 48 credits in the school of arts and sciences (i.e. an entire year and a half). So it would be much easier to have a second major in the arts and sciences (e.g. math or economics) instead of the school of engineering.
2Lumifer
A CS degree and the ability to write code are different things. Good employers will ask you to show them the code you've written and will be entirely indifferent to your college major.
2gothgirl420666
Can you elaborate on this? Where does "the code you've written" come from? Do you produce it in school projects? Or is it from jobs you might take on the side? Are you expected to be passionate enough about programming to have a bunch of code that you wrote for fun and practice lying around? Is it a mix of all three? What should I be doing with my time?
5Lumifer
Yes, and more than that. You are expected to have written code for fun and personal use and maybe for profit and maybe just to help friends. Your code isn't supposed to be lying around but rather be in a place like GitHub. It is good if you have contributed to an open-source project, preferably a high-profile one. It is even better if you have your own open-source project, especially if it looks cool and attracted other developers. Employers want to look at your code from two angles. One is that good programmers enjoy what they are doing. They like to program. People who like to program do program, and not just on the job because they are paid for it. Two is the ability to code. College degrees are not necessarily indicative of the actual ability to write good code. But being able to show directly that yes, you have written good code, is. Fair warning: the next two pieces of advice contradict each other :-) Piece one says that you don't seem to enjoy coding. If you don't, you are not going to enjoy a job as a programmer. This means you will not be a good programmer and might end up being miserable in a job which consists entirely of doing what you don't like. Find something that you enjoy doing. Piece two says that you need to find something besides your future art degree. Something that is called a marketable skill (a BFA isn't it) which will allow you to become employed after graduation. I know a girl who graduated from an Ivy League school with an art degree last summer. Guess what she is doing now? She is a waitress in a local pizza joint.
0Eugine_Nier
Depends on the employer. There's a lot of demand for programmers who aren't Google-quality. Granted, you'll likely be a corporate code monkey maintaining an accounting system somewhere, but it's a living.
0gothgirl420666
I don't know if this is really true about me. Sometimes I love it and sometimes I hate it, to be honest. I've pretty much hated it in college, but this might just be because of the way the courses are taught. What are other examples of marketable skills to you? As an aside, while I know and accept the fact that statistically a BFA pays pretty poorly and has relatively high unemployment, I don't understand it. Every company in the world needs a designer in some form or another. Who needs an anthropologist, a philosopher, a historian, a sociologist, a psychologist, etc.? And yet we are told that getting a college degree is definitely a good idea. Maybe there is a whole pool of white-collar jobs that have nothing to do with any particular major, but are only available to people who can signal their intelligence in a way that art majors can't?
3Eugine_Nier
People remember that it was good for them, and don't realize that tuition has gone up while quality of instruction has gone down since they graduated. Google "higher education bubble" to see that not all people are saying that anymore.
0jamesf
This was sort of my experience. Buy the right books and build interesting projects in the time you would be spending on classes, and you'll probably enjoy it a lot more. You don't need a degree in computer science to get a job as a software engineer; some experience/projects and the broad, shallow knowledge required to do well in typical interviews (and all those other interviewing skills I suppose) are enough. You sound like you might enjoy Hacker School, by the way.
0Lumifer
An easy test. Do you code on your own, not because something external (like homework) requires it, but of your own volition, because it's a natural thing to do? Do you get into a flow state while coding? In your context just look up post-graduation employment rates by college major. Engineers and accountants will do well. Women's Studies majors, not so much. Most companies need a designer only occasionally and that does not justify keeping one on payroll. If a company needs a new logo it can hire a design company or a freelancer. Yes, they are typically called "administrative assistant" or some other variety of a junior paper-shuffler. They are rarely satisfying and rarely lead to a career.
0gothgirl420666
No. Yes. Yeah, I already did this. Science has always been far and away my least favorite subject in school, so science and engineering are definitely out. Math and economics seem to be the next best things after computer science, but neither of these, while interesting to a certain extent, exactly seem like buckets of fun.
0Lumifer
You need to find something that satisfies three criteria:

* You like it
* You are good at it
* People are willing to pay you money to do it

It's really up to you to figure out what "it" is.
0gothgirl420666
What if "it" doesn't exist?
1ChristianKl
That means you have to change one of the things. Changing what you like is basically about discovering new aspects of an activity. Changing what you are good at is straightforward. It's about learning skills. Changing what people are willing to pay you money for is a lot about going out and meeting the right people. You also don't have to limit yourself to things that other people have as established career paths. There's less competition if you use your creativity to go to a path that has no one else on it.
0Lumifer
Then you have to put on your big-boy pants, suck it up, and deal with it. Note that (1) is adjustable by you, within limits. Note that (2) is also adjustable by you, also within limits.
2roystgnr
Fun, and practice, and there's always someone on github who could use help with their open source thing. Source: myself, and everyone else on github who could use help with our open source things. ;-)
4JayDee
Any chance you could point me at one or two? Background: I enjoy coding, but run into problems with high-level motivation. Point me at something to do, I'll do it (and likely enjoy myself) but when it comes to doing the pointing myself I draw a blank. Most of the code I've written in the last year has come from frustration with inadequate tools at work, which is productive for learning but not for sharing. I'm currently most proficient with Python, have dabbled in C++, and commit to spending an hour each with the first two open source things anyone points me at. (2x 25 minute pomodoros, this weekend.)
0roystgnr
Some of my stuff would be hard to contribute to without a basic background in something like chemical kinetics or partial differential equations, but my main project also kind of has the opposite problem: libMesh has a pretty dated and incomplete unit test suite, and an atrociously dated Debian package, in part because anyone with enough finite elements experience to hear about the project tends to perpetually have more urgent work occupying their time than tedious unit test and dpkg writing. I'm not sure "want to help me write tedious stuff?" is a good solution to your motivation problem, though. If I was looking for something to jump into for fun, I might try MineTest, a Minecraft clone in C++/Lua which is surprisingly complete but still has a lot of serious limitations. If "most proficient with Python" is the deciding factor, maybe take a look at Matplotlib? A friend of mine is one of the major developers there, and I've been impressed by how fast it tends to supplant gnuplot/matlab/etc as the scriptable-graph-generator of choice for researchers who play with it.
0bramflakes
yes
2NancyLebovitz
The ease of getting programming jobs seems to vary, or at least I know people who had a hard time during the recession. On the other hand, I don't know what would be better advice for staying employed during a recession.
1[anonymous]
How soon do you see yourself getting into the industry? Programming is unlike other high-paying jobs like doctor or lawyer, in that it doesn't take a ton of time or money to get a good degree. Couple that with the fact that it's becoming an expected skill, that there are increasingly more avenues for learning programming at increasingly younger ages, AND that it's a job that lends itself well to outsourcing, and you have a recipe for an over-saturated market and declining pay as the next generation enters the workforce.
3Viliam_Bur
This assumes that enough people can learn to program well. Just because they are expected to learn and have many different textbooks and learning websites available, doesn't mean that enough of them will succeed. Maybe only some fraction of population is able to master the necessary skills. Maybe we are already using a significant part of this fraction, so we get diminishing returns on trying to make more people IT-skilled. The field of IT keeps growing, both in scope and in complexity. Twenty years ago, making a static HTML page was a good way to make tons of money; these days everyone wants interaction and database and whatever. Twenty years ago many people didn't know internet even existed; some of them are willing to pay for a website now. Maybe ten or twenty years later they will pay you to create a better algorithm for their vacuum cleaner or refrigerator. Smartphones opened a new platform for making programs; another hardware may open another space tomorrow. Thirty years ago, when you turned on the computer, you were invited by a command line. You had to type a command, to do anything. The inferential distance from typing commands to creating simple programs was extremely short. Also, every computer supported some kind of programming language (e.g. Basic) out of the box. You didn't have to install anything, you had the programming language ready, and it was the same language and the same version as your neighbors had, assuming you had the same kind of the computer. With ownership of computers, programming came relatively easily. These days, the gap between using your computer (clicking on icons, various mouse operations, multimedia support, etc) and programming (typing text) is greater, and the transition is less natural. Beginning programmers these days have a large inferential distance to cross. I am not going to predict which direction the market pay for programmers will go; I just wanted to provide an evidence for the opposite direction. In some aspect
0[anonymous]
Do you have an example of another industry that was high paying, well respected, and cheap to learn, that DIDN'T decline in pay and opportunities? If so, that would allow me to give more credence to your arguments. In my career coaching work, one of the things I try to teach is how to spot these patterns of which way a market is going. This has some classic signs, and I can give plenty of examples of other industries in which this same pattern took place.
4Viliam_Bur
In that case, I guess you are more likely to be correct about this than me. Only the "cheap to learn" part feels wrong to me. I mean, the financial cost of learning programming has been literally zero in recent years, and somehow most people still don't learn one of the highest-paying professions. Why? If they didn't do it during the past five years, why should they do it during the next twenty? Maybe ability is the problem, not the financial cost of learning. I suspect that those other industries either employed fewer people than IT, or were easier to learn. On the other hand, IT has its own specific risk -- the possibility of working remotely, which makes it easier to outsource.
0Lumifer
Because they can't. Go talk to someone from the lower half of the IQ distribution, see if they strike you as someone whose attempts to code will not result in a disaster. Learning to program "Hello, world" is easy. Learning to write good (clear, concise, maintainable, elegant, effective, bug-hostile) code is pretty hard.
2Barry_Cotter
Examples would be appreciated. But this seems to be a case of trying to time the market, and the usual objection applies; if you can time the market to within a year you can make huge piles of money. One of the contributors on HN, lsc of prgrmr.com, talks about how he was calling the property bubble in the Bay Area for years before it popped, and how if he had just got in at the frothy height of the dotcom bubble like everyone else, he'd still be ahead now on property, very far ahead.
0[anonymous]
As maia said, it's not really about trying to time the market down to the year (or even trying to pinpoint it within 5 years)... but rather picking up on trends and trying to invest your time in the right places. I started teaching the basic concepts after working with so many clients who had painted themselves into a corner by building a great career in a dying industry.

Some examples of industries for which the writing was on the wall:

- Print journalism
- Almost any US manufacturing job, especially textiles
- Projectionists

I suppose those are all jobs/industries which declined due to technology, although the example of technical manufacturing of computer hardware shares many similarities with programming/coding jobs today. Here are a few jobs which have declined due to commoditization (is that a word?) of the knowledge:

- Typists
- Data entry specialists
- Computer operators

Those are just the ones that have recently (past 15 years) continued to decline as skills have gone from specialist to commodity. If you go further back, you'll find similar examples for most new technologies that initially pay specialists well to operate them but then become very cheap to learn. And here are some of the jobs and industries which, if my clients insist on taking them, I recommend they leverage into another job title or industry as soon as possible:

- Social media/community manager
- Programmer
- Anything print journalism
0maia
I suspect that predicting trends in the pay for a certain career path doesn't need to be that precise in order to be useful. If you can predict the year in which it'll happen, you make huge piles of money. If you can predict the decade in which it'll happen, maybe you can't do that as well, but you could still make a choice to do something else.
0ChristianKl
It's cheap to learn if you are intelligent and already good at abstract thinking. A lot of Indian programmers get paid quite poorly because they don't have the hacker mindset. Teaching the hacker mindset is not straightforward because there is often a lot of culture in the way. There are Indians who manage to become good programmers, but most people who learn programming at an Indian university don't.

Long shot:

I'm moving to NYC. Any LW NYCers have a room available for <$1,000 per month that I (a friendly self-employed 23-year-old male) might be able to move into within a week or two? Or leads on a 1br/studio for <$1200? I could also go a bit above those prices if necessary.

PM me if so and I'll send more details about myself.

After recommending a couple of Chrome extensions in this comment I realized it may be useful to have a dedicated thread for Chrome extensions (a quick Google search revealed no such threads in the archives).

The extensions I currently use are listed here, with my favorites boldfaced. I'm curious to see what extensions others use.

2primality
I quite like Dictionary of Numbers. It provides comparisons for e.g. lengths and amounts of money. Example: I found $200 on the street --> I found $200 [ ≈ Low-end bicycle] on the street.
0witzvo
I use a subset of the extensions you mentioned. I also use this bookmarklet to hide nested comments in long threaded lesswrong pages like the open thread; then I open only the interesting threads selectively to limit distractions.

While establishing a baseline of my opinions on LW topics, I ended up with the lemma:

"Arguments for the singularity are also (weak) arguments for theism."

I'd want to know whether there is anything wrong with the following reasoning:

If the singularity is likely, then it is (somewhat less) likely that it will be used to run what Bostrom calls an ancestor simulation. As we cannot detect the difference, it follows that it is likely that we are already in a simulation.

If we are in a simulation, then the physical parameters don't necessarily follow simple rules (Occam's razor) but may be altered... (read more)

7Squark
It is not strictly meaningful to ask "are we in a simulation" since there are copies of us both inside simulations and outside of them. However, if it is possible to demonstrate decision problems in which the optimal decision depends on whether the problem is nested in a simulation, then it is meaningful to ask how to make the decision. If all the copies of you that exist in a simulation either exist in complex universes (compared to the universe in which you are not in a simulation) or very late in time (so that they are strongly affected by the temporal discount in the utility function), you should behave as if you are not in a simulation.
0Gunnar_Zarncke
Take my usage to mean: "is our outermost copy in a simulation?"
2Squark
The outermost copy is never in a simulation, since the content of any simulation exists in the Tegmark IV multiverse as a universe in itself.
0Gunnar_Zarncke
I'm not sure whether you are intentionally misunderstanding me - possibly to put me on a more abstract track - or whether you see that I'm discussing the issue on the same level as the simulation argument does: namely, in a scenario where we do have nested structures (simulations, simulated 'sub-universes'; of course all just part of some large mathematical structure) and certain probability relations between these nestings hold.
1Squark
I'm saying that the simulation argument is wrong because it follows from a mistaken epistemic framework (SIA). Once you switch to the correct epistemic framework (UDT) the argument dissolves.
0Gunnar_Zarncke
You might have indicated that you wanted to apply a different framework than the one implied by my reference to the simulation argument. I might agree with your reasoning, but I need more input on this: can you give me a reference? I don't see how it obviously follows. But going back one step: would you agree that my argument is valid in the 'wrong' framework I used?
1Squark
The best ref I could find is this.

Roughly speaking, UDT says you should make decisions as if you decide for all of your copies. So, if there are copies of you inside and outside simulations, you should take all of them into account. Now, if all the copies inside simulations are located in the far future with respect to the copy outside simulations (e.g. because those copies were created by a post-human civilization), you can usually disregard them because of the temporal discount in the utility function. On the other hand, you can consider the possibility that all copies are inside simulations. But once you go to the Tegmark IV multiverse this is not a real possibility, since you can always imagine a universe in which you are not inside a simulation. The only question is the relative weight ("magic reality fluid") of this universe. Since weight is 2^{-Kolmogorov complexity}, if the simplest hypothesis explaining your universe doesn't involve embedding it in a simulation, you should act as if you're not in a simulation. If the simplest hypothesis explaining your universe does involve embedding it in a simulation (e.g. because the Creator just spoke to you yesterday), you should behave as if you're in a simulation. So Egan's law is intact.

I think Occam's razor still applies even if we are in a simulation. It's just more difficult to apply. It would probably involve something like trying to guess the motivation of the Creators and updating on that.
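A compact restatement of the weighting described above (just the Solomonoff-style prior the comment invokes; the notation K(.) for Kolmogorov complexity and the hypothesis labels are mine):

```latex
\[
  w(H) \propto 2^{-K(H)}, \qquad
  \text{act as if unsimulated} \iff K(H_{\text{no-sim}}) < K(H_{\text{sim}}),
\]
```

where H_{no-sim} and H_{sim} are the simplest hypotheses explaining your observations without and with an embedding simulation, respectively.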
0Gunnar_Zarncke
That thread is inconclusive. It basically calls for an explanatory post too. But thanks for giving it. As that is building on the conclusion, I take it to mean that you basically agree. What follows from this result for, e.g., how to act depends on lots of factors I don't want to discuss further on this thread. Tag out.
0D_Malik
Occam still applies to the parent universe (I think). And predictions about the parent universe imply predictions about its child simulations. So a variant of Occam (or at least, a prior over universes) still applies to the simulation. There are 2^100 more possible universes of description length 200 than of description length 100, so each 100-length universe is more probable than each 200-length universe, if the simulators are equally likely to simulate each length of universe. This fails if e.g. the simulators run every possible universe of length <300. It also fails if they try to mess with us somehow, e.g. by only picking universes that superficially look like much simpler universes.
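A toy version of that counting step (a sketch under the assumption stated above, that the simulators spread equal total probability mass over each description length; the function name is mine):

```python
# Toy version of the counting argument above. Assumption (from the comment):
# the simulators assign equal total probability mass to each description length,
# split evenly among the 2**L possible descriptions of that length.

def per_universe_probability(length, mass_per_length=1.0):
    """Probability of one particular universe whose description has `length` bits."""
    return mass_per_length / 2 ** length

p_100 = per_universe_probability(100)
p_200 = per_universe_probability(200)

# Each 100-bit universe is 2**100 (~1.3e30) times more probable than each 200-bit one.
print(p_100 / p_200 == 2 ** 100)  # True
```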
0RowanE
Many would claim that these types of god and the supernatural don't count - a lot of definitions of "supernatural" preclude things that can be reduced and follow natural laws even if those aren't our natural laws.
0Gunnar_Zarncke
If you cannot observably distinguish between these, then the difference in how you represent them is kind of academic, isn't it?

Edit: Problem solved. The e-mail confirmation just needed to be completed.

I received word (see below) from someone who's struggling to comment. Any ideas?

Hi, sorry to bother you. I'm a new user and cannot for the life of me figure out how to comment on a thread (for instance, the welcome thread you posted). I've read the faq which says there should be a comment box below the text of each article but I don't see that. I also don't see the "Reply" button on any comments.

I'd normally try to handle this on my own, but I've already spent an hour or two reading

... (read more)

Genome sequencing for the masses is not quite here yet :-(

A Stanford study reported that at the moment a full sequencing costs about $17,000, requires more than 100 man-hours of analysis per genome and still is "associated with incomplete coverage of inherited disease genes, low reproducibility of detection of genetic variation with the highest potential clinical effects, and uncertainty about clinically reportable findings."

7[anonymous]
Yeah, sequencing is tough, especially for creatures such as us with 3 billion mostly-repetitive nucleotides. The high-throughput methods basically throw the genome into a blender and read out billions of individual ~50-100 base pair reads in one reaction, with a high enough error rate that you need about 10x coverage of the genome before you can be sure you catch most sites with enough reads to avoid making a couple million mistakes. The short read length means that repetitive sequences are particularly hard to sequence, because if the read is shorter than the size of the repeat you don't know where to map your read to. Hence, in our lab (and most labs that are doing something other than cataloguing natural variation and only deal with a few kilobases at a time) we still use the old-school Sanger sequencing, because it produces 800 base pair reads one at a time for on the order of $2 each. The highest-throughput method, Illumina, also produces many terabytes of image data per run from tiny CCDs inside the sequencer, which needs to go through some epic processing to become the actual sequence data.

More importantly, finding a rare or unique variant via sequencing that is something other than 'this vital gene is broken and won't make a protein at all' doesn't necessarily tell you all that much. Every one of us has about 100 new mutations that were not in our parents, and a mutation is likely to have many small impacts rather than one large impact. While we know that, say, height is something like 80% heritable, the best genetic screens so far have found several hundred loci that collectively account for something like 15% of the variation. There is a LOT going on, most individual differences have tiny effects, and our methods thus far can only really find common variants with relatively large impacts. Hence 23andme using microarrays that specifically find known variants rather than actually sequencing.
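Back-of-the-envelope arithmetic for the coverage figures above (a sketch using only the numbers quoted in this comment; nothing here is specific to any particular sequencing platform):

```python
# Rough coverage arithmetic using the figures quoted above:
# ~3 billion bp genome, ~100 bp short reads, ~10x average coverage.

genome_size_bp = 3e9     # human genome, base pairs
read_length_bp = 100     # typical short-read length
target_coverage = 10     # average number of reads covering each position

reads_needed = genome_size_bp * target_coverage / read_length_bp
bases_sequenced = reads_needed * read_length_bp

print("reads per genome: %.1e" % reads_needed)     # ~3e8 reads
print("bases sequenced:  %.1e" % bases_sequenced)  # ~3e10 bp, ten times the genome
```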
0NancyLebovitz
Thanks for the details-- very helpful in keeping the goshwow under control.

I'm looking for a simple and aesthetic symbol for humanism and humanity, from our ancestors looking at the stars and wondering why, telling each other stories, and caring for each other in the distant past, to the invention of agriculture, democracy, civilization, the Enlightenment and the Renaissance, the improvement in the human condition, technology and knowledge and truth.

I think some of you know what I mean. Humanism Pt. 3 style chills.

Ideas I've thought of: hands, sails, brains, seeds, eyes, sprouts, flames. I was looking at getting symbols of bot... (read more)

2polymathwannabe
I like to fantasize that the future One World State will have Da Vinci's Vitruvian Man on its flag.
1TsviBT
I dunno if that meme package is standard enough that you could activate it with one symbol. Maybe a gif would give a little more room to work with?
0dunno
An upward arrow shaped like a human looking up?
0Slackson
I'm not sure I can visualize that very well?
[-][anonymous]60

I'm curious what others' thoughts are on Black Swan Theory/Knightian Uncertainty vs. pure Bayesian reasoning. Do you think there are domains in which Bayesian prediction will tend to do more harm than good?

BrienneStrohl posted something on her Facebook saying she thought the phrase "Knightian Uncertainty" had negative information value, and an interesting conversation ensued between Brienne, myself, Eliezer, Kevin Carlson, and a few others. It's of particular interest to me because Black Swan Theory is so central to how I view the world. If it... (read more)

5Lumifer
I think Knightian uncertainty is a very useful concept. Sometimes "I don't know" is the right answer. I can't estimate the probabilities, I have no evidence, no decent priors -- I just do not know. It's much better to accept that than to start inventing fictional probabilities.

Black Swan isn't a theory, it's basically a correct observation that statistical models of the world are limited in many important ways and depend on many implicit and explicit assumptions (a typical assumption is the stability of the underlying process). When an assumption turns out to be wrong the model breaks, sometimes in a spectacular way. Nassim Taleb tried to make a philosophy out of that observation. I am not particularly impressed by it.

The trouble, of course, is that "I don't know" is not an action. If "I don't know" means "don't deviate from the status quo," that can be a bad plan if the status quo is bad.

0Lumifer
Yes, and why is this "trouble"?

The only point of probabilities is to have them guide actions. How does the concept of Knightian uncertainty help in guiding actions?

0hamnox
More concretely than Lumifer's answer, it would encourage you to diversify your plans and try not to rely on leveraging any one model or enterprise. It also encourages you to play the odds instead of playing it safe, because safe is rarely as safe as you think it is. Try new things regularly, since the cost of doing them is generally linear while the pay-off could easily be exponential. That's what I got out of it, anyways.
0AlexSchell
I'm not actually sure the concept can do all that work, mostly because we don't have plausible theories for making decisions from imprecise probabilities (with probability we have expected utility maximization). See e.g. this very readable paper.
-2Lumifer
I don't agree with that (a quick example is that speculating about the Big Bang is entirely pointless under this approach), but that's a separate discussion. It allows you to not invent fake probabilities and suffer from believing you have a handle on something when in reality you don't.
4ShardPhoenix
Such speculation may help guide actions regarding future investments in telescopes, decisions on whether to try to look for aliens, etc.
0AlexSchell
OK, I'll give you that we might non-instrumentally value the accuracy of our beliefs (even so, I don't know how to unpack 'accuracy' in a way that can handle both probabilities and uncertainty, but I agree this is another discussion). I still suspect that the concept of uncertainty doesn't help with instrumental rationality, bracketing the supposed immorality of assigning probabilities from sparse information. (Recall that you claimed Knightian uncertainty was 'useful'.)
0[anonymous]
When I mention black swan theory, I guess I'm talking more about Taleb's thoughts on the consequences of the fact you mentioned above (mostly covered in [this Wiki page](http://en.wikipedia.org/wiki/Black_swan_theory)). Basic tenets as I understand them:

1. Most historical change is driven by black swans which were unpredictable in their nature or magnitude.
2. In hindsight, it seems like black swans could have been predicted... but they could not have been.
3. The best way to deal with black swans is to:
 3a. Build systems that can handle, and in fact gain from, disorder.
 3b. Invest in systems in such a way that you would gain disproportionately from a black swan event.
 3c. Avoid investing in systems in such a way that you would lose disproportionately from a black swan event.
2Lumifer
I would like to see evidence for (1) which goes beyond "the future is uncertain and large-impact events are important". (2) is just part of the definition of what a black swan is. (3a) is Taleb's idea of antifragility; I am not sure it's practical -- for any system that you can build, I can imagine an improbable event which will smash it. As to (3b), Taleb ran a hedge fund for a while, if I recall correctly. It did badly. Taleb doesn't like to mention it. (3c) is just good risk management and again, see (3a). I don't know what the practical suggestions are beyond diversification. Hedging against disaster (typically by buying volatility or selling short) implies losses if the disaster does not happen.
0ChristianKl
Have you read the book?
0Lumifer
I have read The Black Swan, I have not read Antifragile.
0ChristianKl
Wikipedia says: Without having the numbers for 2001 to 2004 it's hard to say how badly it ran. Universa, the hedge fund Taleb is currently advising, seems to be doing well enough to have $6 billion in assets under management, but I can't easily find numbers on its returns.
0NancyLebovitz
I think anti-fragility makes sense if you think of it as existing over a range of stressors rather than being an absolute quality.
3Shmi
Scott Aaronson has a concrete example of Knightian uncertainty in his paper The Ghost in the Quantum Turing Machine.
0Strilanc
I associate Knightian Uncertainty with Eliezer's description of Expected Creative Surprises. That is to say, I am uncertain what a rival company will do, but I know they will try to achieve a goal. When achieving that goal involves surprising me, I should expect them to surprise me even though I'm using the best model I can to model them.
0ChristianKl
There's only one way to find out. Write your predictions down and calibrate yourself.
0[anonymous]
This actually assumes that the Bayesian model is accurate. Under Black Swan Theory, you can't use past correct predictions to predict future correct predictions. For example, most of the variance in the stock market is distributed over a few days in history. I could have calibrated on every day leading up to one of those days and felt confident in my ability to predict the stock market... but just one of those days could have wiped out my portfolio. In the Black Swan view of the world, calibration actually makes you necessarily overconfident.
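A toy simulation of the "few days carry most of the variance" point (a sketch, not anything from Taleb: it just compares a thin-tailed and a heavy-tailed return distribution, and the distributions are stand-ins rather than market models):

```python
# Toy comparison: how much of the total squared movement comes from the
# largest 1% of days, under a thin-tailed vs. a heavy-tailed distribution.
import numpy as np

rng = np.random.default_rng(0)
n_days = 10000

thin_tailed = rng.normal(size=n_days)              # Gaussian daily "returns"
heavy_tailed = rng.standard_t(df=2, size=n_days)   # fat tails (infinite variance)

def top_share(returns, fraction=0.01):
    """Share of total squared movement contributed by the largest-magnitude days."""
    sq = np.sort(returns ** 2)[::-1]
    k = max(1, int(len(sq) * fraction))
    return sq[:k].sum() / sq.sum()

print("thin-tailed:  %.2f" % top_share(thin_tailed))   # roughly 0.08
print("heavy-tailed: %.2f" % top_share(heavy_tailed))  # much larger, often above 0.4
```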
2ChristianKl
You don't need to assume that things are normally distributed to be a Bayesian. People who calibrate themselves usually don't get more confident through the process but less confident. Don't calibrate on a single variable.
0[anonymous]
But you do need to assume that somehow you can predict novel events based on previous data. Just going back to my stock market example, what variables would I have calibrated on to predict 9/11 and its effects on the stock market?
2ChristianKl
I'm not arguing that you can predict the stock market. What you can do is calibrate yourself enough to see that it's frequently doing things that you didn't predict.

Funny: the NSA Deputy Director Richard Ledgett's TED interview in response to Edward Snowden's unannounced interview a few days ago is rated over 50% "unconvincing" on TED's talk rating system.

This is by far the most unconvincing talk TED ever had, I think. For comparison, pastor Rick Warren's extremely controversial talk stands at only 12% unconvincing.

Public Service Announcement:

Searching for "MIRI" in Bing image search may be NSFW. That is all.

0Richard_Kennaway
A charming young woman, I'm sure. The pictures even show up with SafeSearch set to Strict. I suppose that stripping down no further than one's underwear is what passes for modesty these days. And none of them show up on Google Images. Very odd.
[-][anonymous]40

Question for philosophers: Is it not so that the set of possible actions a Kantian could perform is a subset of the set of possible actions a Utilitarian could perform? If this is true, could not a Utilitarian decide that Kantian behavior is optimal for maximizing utility, and thus emulate a Kantian's behavior in any given situation (similar to Rule Utilitarianism)? Of course, the reverse is not possible: a Kantian would never decide to emulate Utilitarian behavior.

1tut
"Act so as to maximize total utility" (for any specific definition of utility) is just one of the maxims a Kantian could in principle follow.
0TylerJay
This is correct. The Universal Law Formulation of Kant's Categorical Imperative states, roughly: "Act only according to that maxim whereby you can at the same time will that it should become a universal law." In considering the maxim "Act so as to maximize total utility" (for some specific definition of utility), a Kantian would attempt to universalize it and ask "what would the world look like if everybody followed this maxim?" The merits of the maxim would then be judged against other possible maxims (or combinations thereof) by comparing the resulting worlds.

As a contrived example: a Kantian might consider the problem of "lawn crossing". Specifically, is it right or permissible to walk directly across the grass instead of taking the path around it? If we universalize this, then we have a world where everyone crosses the grass, so the grass gets trampled and dies. The Kantian might then conclude that this world is inferior to the one where everybody takes the path around, and therefore that it is wrong for anyone to cross the grass.
1[anonymous]
It depends on how you want to describe actions. So, on the one hand a good Kantian will never lie while a good utilitarian might; on the other hand a good utilitarian will never minimize utility, whereas a good Kantian might. Kant and your average utilitarian will disagree not only about on-the-ground ethical questions, but about questions like 'what is an action', 'how are actions individuated', and 'what constitutes the "consequences" of an action'. This makes translation between the two theories difficult.

Absolutely, though it's hard to see what sort of utility calculation would conclude that Kantianism is going to optimize for utility in any given situation. Kant is explicit that the actual consequences of an action are totally irrelevant to its moral value. So it would be one heck of a coincidence. Needless to say, the Kant-emulating utilitarian would never be fulfilling her moral obligations in Kant's eyes, regardless of how complete the emulation is. For Kant, it's important that actions be motivated (or at least constrained) by a respect for the moral law; returning the jacket out of respect for the moral law and returning it in order to maximize utility don't even count as the same action so far as Kant is concerned, since the maxims differ.
0[anonymous]
Thank you for the cogent response. I believe that answers it quite well.

Anybody else taking HarvardX's Immunity to Change MOOC? (It's still open for enrollment BTW.)

3zedzed
Yes. Study partner?

What are some effective ways to treat internet addiction? Some sort of allowance system, or cold turkey? I'm strictly talking about the goofing-off type of internet use.

2bramflakes
Try the LW study hall and/or find a pomodoro-esque technique that works for you. The trick is to work with your desire to check reddit rather than against it - if you have a 5 minute block where goofing off is a Totally Okay Thing That You Don't Feel Guilty About, you'll be able to get back into the flow of work once your goof-off time is up. Use the various different browser extensions to curtail your access to time-sink websites.

Yet another karma query: yesterday my karma was 37. Today my karma is 12 and I am at -2 karma for the last 30 days. What's going on here?

1gjm
Most likely: Someone has taken exception to something you wrote and decided to go back and downvote a load of your old comments. It's a thing that happens. Victims complain about it from time to time. Eliezer says he's asked someone to look at the LW database for evidence of mass-downvoting but they haven't found anything useful; make of that what you will. There are some grounds for thinking that people who make comments supporting (in rough terms) feminist views, and more generally but also more weakly "non-Reactionary" views, are particularly likely to attract the attention of mass-downvoters. (One might respond to this by avoiding such comments, or by defiantly making more of them.)

Less likely but not impossible: Someone has disliked something you wrote, gone back through your comment history, independently assessed the quality of each comment, and happens to have found ~25 comments that they think deserved downvoting on their own (de)merits.

[EDITED to add: Hello, downvoters! If you think something is bad about the above, please do let me know what. Otherwise, my working hypothesis is that the downvotes on this comment come from mass-downvoters who don't like being talked about, and I'll adopt my usual heuristic of responding to what look like mass-downvotes by posting more.]
7Oscar_Cunningham
This could make it sound like LessWrong is antifeminist. I'll just clarify that this is (very?) false. It's just that the people doing karmassassination are possibly antifeminist.
4gjm
For the avoidance of doubt: Yes, I agree. LW as such has no position on these issues; such evidence as I've seen suggests that the median LW participant is favourably disposed towards feminism[1] and politically liberal; one could easily get a different impression from reading LW, a fact with a variety of possible explanations. [1] The definition of "feminist" and the question of who is entitled to call themselves feminist are subject to some controversy, hence my cautious wording.
1Lumifer
Heh. On LW you can discuss whether feminism makes sense and how much -- that by itself is enough to label it as "antifeminist" in certain circles.
1Chrysophylax
Thank you. I'm still confused, though, because I started out at 0 karma for the month, making the changes in the numbers non-equal. I'm now on {16, 2}, which is consistent with {12,-2}, though.
4gjm
Remember that the change in karma-for-the-month is the sum of two things: changes now and changes a month ago. So it can jump down even when nothing interesting just happened, if you got a bunch of upvotes 30 days ago.
-2Chrysophylax
But the big jump was in karma, not karma-for-the-month. My karma-for-the-month went down by two and my karma went down by 25. I'm now on {20, 5}, which is inconsistent with the {12, -2} and {16, 2} from earlier today.
0Richard_Kennaway
Karma-for-the-month is karma on your last month's postings, not the last month's votes on all your postings. If your total plummets while KftM hardly changes, it means a bunch of old posts got downvoted.

I want to apply to be a conversation notes writer at Givewell, as they have an open position for it. The application seems quite straightforward, but I'm wondering if there is anything I should consider, because I would love to be hired for this job.

Do you have any suggestions for how I could improve an application?

For the application, I must submit a practice transcription of a Givewell conversation. I'm wondering, specifically, if there are any textbooks, guides to style, or ways of writing I should consult in preparation. Obviously, I must write the transcription myself, and not plagiarize, or whatever.

Is this guy a crank? He seems to be claiming that he has found the E=mc^2 for intelligence, artificial or otherwise.

http://www.exponentialtimes.net/videos/equation-intelligence-alex-wissner-gross-tedxbeaconstreet

My alarm bells are going off, but I am interested to hear people's thoughts.

7Douglas_Knight
previous discussion also. He has been mentioned several other times without much discussion.
2dv82matt
Articles:
http://phys.org/news/2013-04-emergence-complex-behaviors-causal-entropic.html
http://www.newyorker.com/online/blogs/elements/2013/05/a-grand-unified-theory-of-everything.html
http://www.bbc.com/news/science-environment-22261742

Paper:
http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf
2Richard_Kennaway
I'm sure he's not a crank. Which leaves the important question: is he right? I don't know, but if he is, it's highly relevant to the question of FAI, and suggests that the MIRI approach of considering an AI as a logical system to be designed to be safe may be barking up the wrong tree. From an interview with Wissner-Gross: ... But as I said on a previous occasion when this came up, the outside view here is that so far it's just a big idea and toy demos.
0brazil84
Thank you for your response. Having thought about it for a while, I think he is wrong. (Whether he is a crank is a different issue, probably not worth worrying about.)

I think it can be illustrated with the following example: suppose you are writing a computer program to find the fastest route between two cities, and the program must select between two possibilities: take the express highway or take local roads. A naive interpretation of Wissner-Gross's approach would be to take the local roads, because that gives you more options. However, this does not seem to be the more intelligent choice in general. So a naive interpretation of the Wissner-Gross approach appears to be basically a heuristic -- useful in some situations but not others.

But is this interpretation of Wissner-Gross's approach correct? I expect he would say "no," that taking the express highway actually entails more options because you get to your destination quicker, resulting in extra time which can be used to pursue other activities. Which is fine, but it seems to me that this is circular reasoning. Of course the more intelligent choice will result in more time, money, energy, health, or whatever, and these things give you more options. But this observation tells us nothing about how to actually achieve intelligence. It's like the investment guru who tells us to "buy low, sell high." He's stating the obvious without imparting anything of substance.

I admit it's possible I have misunderstood Wissner-Gross's claims. Is he saying anything deeper than what I have pointed out?
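A minimal sketch of the "naive interpretation" described above, i.e. counting reachable options a couple of steps ahead (the road network and horizon are made up for illustration; this is not Wissner-Gross's actual causal-entropy method):

```python
# Naive "option-counting" route choice: prefer the action that leaves the most
# reachable states within a short horizon. Hypothetical road network for illustration.

road_graph = {
    "start":       ["highway", "local_1"],
    "highway":     ["destination"],
    "local_1":     ["local_2", "local_3", "shop"],
    "local_2":     ["destination", "park"],
    "local_3":     ["destination"],
    "shop":        [],
    "park":        [],
    "destination": [],
}

def reachable_within(graph, node, horizon):
    """Nodes reachable from `node` in at most `horizon` steps (including `node`)."""
    frontier, seen = {node}, {node}
    for _ in range(horizon):
        frontier = {nxt for cur in frontier for nxt in graph[cur]} - seen
        seen |= frontier
    return seen

# The option-counter prefers the local roads (6 reachable states vs. 2),
# even though the highway is the direct route to the destination.
for choice in road_graph["start"]:
    print(choice, len(reachable_within(road_graph, choice, horizon=2)))
```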
0Manfred
My thoughts: Yeah he's wrong. And he got a paper on this junk published in PRL? Sheesh. He demos a program maximizing some entropy function, and claims intelligent behavior. Well, he could just as easily have made the program try to move everything to the left, and claimed intelligent behavior from that, too. The intelligence was not because of what he maximized, but because of a complex set of behaviors he paid someone to program into the agent but then glossed over.

A couple of months ago I missed out on going to Brazil because I didn't get my visa in time. I got my passport back with the Brazilian visa attached, but I didn't use it. I now have another opportunity to go to Brazil in two months and I'm wondering if the visa that I didn't get to use in November is still valid for me to use in May.

Anyone have any information or know where I can go to get a good answer?

2polymathwannabe
Doesn't your visa specify an expiration date? You can call/visit/browse the Brazilian embassy in your country for that information.
0Lumifer
Your visa itself (that piece of paper that you got) should have the expiration date (often expressed as "valid until").
[-]plex20

Have the issues around logical first movers (brought up in Ingredients of Timeless Decision Theory) been discussed/solved somewhere I've not managed to track down with Google? I've been thinking it over and have some possibly useful things to add, but that discussion is ancient and it seems likely that it's been solved more thoroughly somewhere in the last five years. I've found the posts about Masquerade which seems related, but only relevant to the special case of full source code disclosure.

2Manfred
We've had some attempts to hash it out (for example, comparing round-robin games of chicken to evolutionary equilibria, with niches for a whole spectrum of agents that win against more-cooperative agents but lose against more-defecting agents). Or talking about Schelling points. But I don't think there's been anything really definitive - you should share :)

A while ago I read through a lot of Gallup's Strengths and Wellbeing books. They looked good to me, but I don't know enough statistics to understand their technical reports, so I couldn't even begin to assess how accurate they were. Also, I've never studied positive psychology on anything more than a popular book level, so I can't bring sophisticated domain knowledge to bear either. Could someone look over the following and comment?

StrengthsFinder 2.0 (StrengthsFinder test is available for $10 here. They also have an entrepreneurial-focused version at the ... (read more)

0curiousepic
I took StrengthsFinder 2.0 soon after a new manager was hired for my office. I was skeptical of it, but not negative. The Strengths it gave me were unsurprising. The most use I got out of the exercise was from insights gleaned from a roundtable discussion about these strengths from the outside view of coworkers who had known me for a few months to more than a year.
0iarwain1
Were your colleagues able to understand you better because of the assessment, or was the important part just the fact that you were discussing each other's strengths, with the assessment per se having little to do with it?

When I took the assessments I too found that they didn't tell me all that much about myself that I didn't already know. But they did help me in three ways:

1) I was able to express myself better and more precisely when talking about my strengths with others.
2) It turned vague notions in my head into more precise formulations that I could think about more constructively on my own.
3) Perhaps the most useful part was getting other people to take the test and then discussing their strengths with them. That was a real eye-opener. In many cases I simply could not imagine that someone else could view things so differently than me. So for me the assessment functioned as a terrific antidote to the Typical Mind Fallacy.
1curiousepic
The value was mostly due to hearing others' opinions and perceptions of me, which is the kind of feedback you don't usually get. The assessment really only provided the framework and context. While I didn't really utilize them myself, I'd agree with those benefits.

I was surprised to see that this recent post was voted up even though I pointed out in the comments that it linked to something we had seen before. Someone suggested that reposts of old things are worthwhile so that more people see them. But to me the idea of using reposting to expose people to old stuff seems inelegant. Does anyone have any thoughts?

6David_Gerard
It's weeks of casual reading to go through every posting in Main since 2007 (I know, I did it), and approximately nobody is going to go through every posting in Discussion. When I've done this and had it pointed out, I tend to leave the post up but link to the previous discussion.
5Slackson
Perhaps digests of the most-upvoted posts in a particular time period? Top from week x, top from month y, top from whichever time period? People can archive-binge to the degree that they find most comfortable.
2[anonymous]
The nature of the discussion section and curated blog is to reward recency. Thus, reposts actually are the most elegant way to make sure interesting information is shared in those contexts. There's also the wiki, which is not organized by recency (and thus doesn't allow duplicate posts). Anything that's interesting enough to get popular twice in the discussion section should probably be posted to an "interesting rationality links" page on the wiki.