All of Solvent's Comments + Replies

I have made bootleg PDFs in LaTeX of some of my favorite SSC posts, and gotten him to sign printed-out and bound versions of them. At some point I might make my SSC-to-LaTeX script public...

I feel exactly the same way about the controversial opinions.

I used to work at App Academy, and have written about my experiences here and here.

You will have a lot of LW company in the Bay Area (including me!). There will be another LWer who isn't Ozy in that session too.

I'm happy to talk to you in private if you have any more questions.

Zipfian Academy is a bootcamp for data science, but it's the only non-web-dev bootcamp I know about.

I work at App Academy, and I'm very happy to discuss App Academy and other coding bootcamps with anyone who wants to talk about them with me.

I have previously Skyped LWers to help them prepare for the interview.

Contact me at bshlegeris@gmail.com if interested (or in comments here).

I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.

That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.

Three somewhat disconnected responses —

For a moral realist, moral disagreements are factual disagreements.

I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.

It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Mora... (read more)

I interpreted that bit as "If you're the kind of person who is able to do this kind of thing, then self-administered CBT is a great idea."

Of the people who graduated more than 6 months ago and looked for jobs (as opposed to going to university or something), all have jobs.

About 5% of people drop out of the program.

You make a good point. But none of the people I've discussed this with who didn't want to do App Academy cite those reasons.

3Jiro
I think this falls into the category of not assuming everyone talks like a LWer. Someone who has moved in the past, or known someone who has moved, might not remember (at least without prompting) each of the individual items that make moving costly. They may just retain a generalized memory that moving is something to be avoided without a good reason.

But guess what? When it comes to making decisions that should take into account the cost of moving, remembering "moving should be avoided without a good reason" will, if their criteria for "good reason" are well-calibrated, lead to exactly the same conclusion as having a shopping list of moving costs in their mind and knowing that the movers are $500 and the loss of social links is worth 1000 utilons etc., even if they can't articulate any numbers or any specific disadvantages of moving. Just because the people didn't actually cite those reasons, and wouldn't be able to cite those reasons, doesn't mean that they weren't in effect rejecting it for those reasons. And yes, this generalizes to people being unable to articulate reasons to avoid other things that they've learned to avoid.

I don't think that they're thinking rationally and just saying things wrong. They're legitimately thinking wrong.

If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the jobs statistics. My impression was that they were just reciting their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.

2Jiro
If their point about accreditation was meant to indicate that they are skeptical that the plan leads to useful skills or to getting a job, then having them bring it up when you cite the job statistics is entirely expected. They brought up evidence against getting a job when you gave them evidence for getting one. (And if you're thinking that job statistics are such good evidence that even bringing up something correlated with lack of jobs doesn't affect the chances much, that's not true. There are a number of ways in which job statistics can be poor evidence, and those people were likely aware that such ways exist.)

I suspect that most people don't think of making the switch.

Pretty much all of them, yes. I should have phrased that better.

My experience was unusual, but if they hadn't hired me, I expect I would have been hired like my classmates.

0Said Achmiz
Out of curiosity, why did you take the TA job? Does it pay more than $90k a year?

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Their figures are telling the truth: I don't know anyone from the previous cohort who was dissatisfied with their experience of job search.

0Said Achmiz
Indeed it does not. I don't count your experience as an example of the OP. That's... an awfully strange phrasing. Do you mean they all found a web development job as a result of attending App Academy? Or what?

They let you live at the office. I spent less than $10 a day. Good point though.

5Jiro
Moving to San Francisco has a lot of expenses other than housing expenses, including costs for movers, travel costs (and the costs of moving back if you fail), costs to stop and start utilities, storage costs to store your possessions for 9 weeks if you live in the office, and the excess everyday costs that come from living in an area where everything is expensive. It's also a significant disruption to your social life (which could itself decrease your chances of finding a job, and is a cost even if it doesn't).
1Said Achmiz
... huh. Could you elaborate on this, please? How's that work? Do they have actual housing? What is living at the office like?

ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.

Further edit: I retract the claim that this is strong evidence of rationalists winning. So it doesn't count as an example of this.

I just finished App Academy. App Academy is a 9 week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart f... (read more)

0ChristianKl
What does almost mean in percentages? How many people drop out of the program and how many complete it?
2V_V
This is the first time I've heard about this training program, but my impression (as somebody living outside the US) is that at the moment there is a shortage of programmers in Silicon Valley, and therefore it is relatively easy, at least for people with the appropriate cognitive structure (those who can "grok" programming), to get a relatively high-paying programming job, even with minimal training.

I suppose this is especially true in the web app/mobile app industry, since these tend to be highly commodified, non-critical products, which can be developed and deployed incrementally and often have very short lifecycles; hence a "quantity over quality" production process is used, employing a large number of relatively low-skilled programmers.*

Since the barriers to entry to the industry are low, evaluating the effectiveness of a commercial training program is not trivial: just noting that most people who complete the program get a job isn't great evidence. You would have to check whether people who complete the program are more likely to get a job, or get higher average salaries, than people who taught themselves programming by reading a few tutorials or completing free online courses like those offered by Code.org, Coursera, etc. If there was no difference, or the difference was not high enough to pay back the training program's cost, then paying for it would be sub-optimal.

(* I'm not saying that all app programmers are low-skilled, just that high skill is not a requirement for most of these jobs.)
0Richard_Kennaway
Any comment on this? (News article from a couple of days ago on gummint regulators threatening to shut down App Academy and several similarly named organisations.)
1Jack
App Academy was a great decision for me. Though I've just started looking for work, I've definitely become a very competent web developer in a short period of time. Speaking of which, if anyone in the Bay Area is looking for a Rails or Backbone dev, give me a shout.

I don't know if I agree that my decision to do App Academy had a lot to do with rationalism. 4/40 is a high percentage, but it's a small n, and the fact that it was definitely discussed here, or at least around the community, pretty much means it isn't evidence of much. People in my life I've told about it have all been enthusiastic, even people who are pretty focused on traditional credentialism.
4ChrisHallquist
I'm one of Solvent's App Academy grads here. It's unclear to me whether this is indicative of LWers' superior rationality, and to what extent it's because word about App Academy has gotten around within the LessWrong community. For me, the decision process went something like:

1. Luke recommended it to me.
2. I asked Luke if he knew anyone who'd been through it who could vouch for the program. He didn't, but could recommend someone within the LessWrong community who'd done a lot of research into coding bootcamps.
3. I talked to Luke's contact, and everything checked out.
4. After getting in, I sent the contract to my uncle (a lawyer) to look at. He verified there were no "gotcha" clauses in the contract.

So I don't know how much of my decision was driven by superior rationality and how much was driven by information I had that others might not (due in large part to the LessWrong community), though this certainly played a role. (EDIT: And in case anyone was wondering, it was a great decision and I'd highly recommend it.)
1Jiro
Don't dismiss what non-LWers are trying to say just because they don't phrase it as a LWer would. "Didn't offer real accreditation" means that they 1) are skeptical about whether the plan teaches useful skills (doing a Bayesian update on how likely that is, conditional on the fact that you are not accredited), or 2) are skeptical that the plan actually has the success rate you claim (based on their belief that employers prefer accreditation, which ultimately boils down to Bayesianism as well).

Furthermore, it's hard to estimate the probability that something is a scam. I can't think of any real-world situations where I would estimate (with reasonable error bars) that something has a 50% chance of being a scam. How would I be able to tell the difference between something with a 50% chance of being a scam and a 90% chance of being a scam?
2Aleksander
I've wondered why more people don't train to be software engineers. According to Wikipedia, 1 in 200 workers is a software engineer. A friend of mine who teaches programming classes estimates that 5% of people could learn how to program. If he's right, 9 out of 10 people who could be software engineers aren't, and I'm guessing 8 of them make less in their current job than they would if they decided to switch.

One explanation is that most people would really hate the antisocial aspect of software engineering. We like to talk a lot about how it's critical for that job to be a great communicator etc., but the reality is, most of the time you sit at your desk and don't talk to anyone. It's possible most people couldn't stand it. Most jobs have a really big social factor in comparison: you talk to clients, students, patients, supervisors, etc.
3Said Achmiz
Unrelatedly to my other response: uh, move to San Francisco? That... costs a lot of money. Even if only for nine weeks. Where did you live for the duration?
0Said Achmiz
You have, I take it, already gotten a job as a result of finishing App Academy?

I'm a computer science student. I did a course on information theory, and I'm currently doing a course on Universal AI (taught by Marcus Hutter himself!). I've found both of these courses far easier as a result of already having a strong intuition for the topics, thanks to seeing them discussed on LW in a qualitative way.

For example, Bayes' theorem, Shannon entropy, Kolmogorov complexity, sequential decision theory, and AIXI are all topics which I feel I've understood far better thanks to reading LW.

LW also inspired me to read a lot of philosophy. AFAICT, ... (read more)

The famous example of a philosopher changing his mind is Frank Jackson with his Mary's Room argument. However, that's pretty much the exception that proves the rule.

7Protagoras
Jackson is the first example I thought of. As I understand it, he came to be convinced, particularly by the arguments of David Lewis, that rejecting physicalism made it harder, rather than easier, to explain what was going on. But calling it "the exception that proves the rule" seems lazy and unhelpful, especially in light of other examples people have mentioned here.

Not only do I use that, it means that your comment renders as:

Hermione's body should now be at almost exactly five degrees Celsius [≈ recommended for keeping food cool] [≈ recommended for keeping food cool].

to me.

0DanielLC
You forgot that I use it too. That means that your comment looks like: […] For everyone not using Dictionary of Numbers, that looks like: […]

Basically, the busy beaver function tells us the maximum number of steps that a halting Turing machine with a given number of states and symbols can run for. If we know the busy beaver number for, say, 5 states and 5 symbols, then we can tell whether any given 5-state, 5-symbol Turing machine will eventually halt.

However, you can see why it's impossible in general to compute the busy beaver function: you'd have to know which Turing machines of a given size halt, which is in general impossible.
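Here's a minimal sketch (my illustration, not from the comment) of why knowing the busy beaver number decides halting: any halting machine of the given size must stop within that many steps, so a bounded simulation settles the question. The machine encoding is an assumption made for the example.

```python
# Hypothetical sketch: deciding halting with a busy beaver oracle.
# `transitions` maps (state, symbol) -> (write, move, next_state);
# next_state == "HALT" stops the machine.
def halts(transitions, bb_value, initial_state="A"):
    """Return True iff the machine halts. Correct only if bb_value really
    is the busy beaver number for machines of this size: any halting
    machine of this size must stop within bb_value steps, so simulating
    that long settles the question."""
    tape = {}                      # sparse tape, blank symbol is 0
    head, state = 0, initial_state
    for _ in range(bb_value + 1):
        if state == "HALT":
            return True
        write, move, state = transitions[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return state == "HALT"

# Example: a trivial one-state machine that writes a 1, moves right, halts.
tiny = {("A", 0): (1, 1, "HALT")}
print(halts(tiny, bb_value=1))  # True
```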

Are you aware of the busy beaver function? Read this.

Basically, it's impossible to write down numbers large enough for that to work.

0[anonymous]
Interesting article! What do you mean, though? Are you saying that Knuth's triple up-arrow is uncomputable? (I don't see why that would be the case, but I could be wrong.)
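For what it's worth, here is a small sketch (my own illustration, not from the thread) of Knuth's up-arrow recursion. It is perfectly computable; the point is rather that its values outgrow anything you could physically write down almost immediately.

```python
# A sketch of Knuth's up-arrow (hyperoperation) recursion. Computable,
# just astronomically fast-growing.
def up_arrow(a, n, b):
    """a ↑^n b: n = 1 is exponentiation; each higher n iterates the level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 1, 3))  # 2^3 = 8
print(up_arrow(2, 2, 3))  # 2↑↑3 = 2^(2^2) = 16
print(up_arrow(2, 3, 2))  # 2↑↑↑2 = 2↑↑2 = 4
```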

The most upvoted post of all time on LW is Holden's criticism of SI. How many pageviews has that gotten?

9Kaj_Sotala
Google Analytics says that it got 9,334 unique pageviews in 2012. (Though for some reason GA also seems to be giving me slightly different numbers than it gave me when I composed the post yesterday, so I'm not sure if they're completely reliable? But the difference is pretty slight, and doesn't change the relative rankings of the different posts.)

It's a kind of utilitarianism. I'm including act utilitarianism and desire utilitarianism and preference utilitarianism and whatever in utilitarianism.

0Eugine_Nier
Ok, what is your definition of "utilitarianism"?

What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.

Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?

0Eugine_Nier
Well, even Eliezer's version of consequentialism isn't simple utilitarianism for starters.

I edited my comment to include a tiny bit more evidence.

This seems like it has the makings of an interesting poll question.

I agree. Let's do that. You're consequentialist, right?

I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."

How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."

I'll make a Discussion post about this after I get your refinement of the question?

2ArisKatsaris
I'd suggest the following two phrasings:

* I place terminal value on retribution (inflicting suffering on the causers of suffering), at least for some of the most egregious cases.
* I do not place terminal value on retribution, not even for the most egregious cases (e.g. mass murderers). I acknowledge that sometimes it may have instrumental value.

Perhaps also add a third choice:

* I think I place terminal value on retribution, but I would prefer it if I could self-modify so that I wouldn't.

Here's an old Eliezer quote on this:

4.5.2: Doesn't that screw up the whole concept of moral responsibility?

Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).

The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even A

... (read more)
0Eugine_Nier
What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings. That is most definitely not the main point of that post.
0ArisKatsaris
[citation needed]
0buybuydandavis
Thank you, that's a good start. Yes, I had concluded that EY was anti-retribution. I hadn't concluded that he had carried the day on that point.

I don't think vengeance and retribution are "ideas" that people had to come up with - they're central moral motivations. "A social preference for which we punish violators" gets at 80% of what morality is about. Some may disagree about the intuition, but I'd note that even EY had to "renounce" all hatred, which implies to me that he had the impulse for hatred (retribution, in this context) in the first place.

This seems like it has the makings of an interesting poll question.

Harry's failing pretty badly to update sufficiently on available evidence. He already knows that there are a lot of aspects of magic that seemed nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?), and so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.

I disagree. It seems to me that individual spells and magical items work in the way neurotypical humans expect t... (read more)

0Qiaochu_Yuan
We have some weak evidence, namely McGonagall asserts that new charms and whatnot are created on a regular basis, which puts an upper bound on how difficult the process can be. But point taken.

Yeah, I'm pretty sure I (and most LWers) don't agree with you on that one, at least in the way you phrased it.

-2buybuydandavis
You think they'd prefer that the guy that caused everyone else in the universe to suffer didn't suffer himself?

The author doesn't want to write sports stories. The girls get comic stories about relationships, but the boys don't get comic stories about Quidditch.

This is a very good point. As a reader, I think those 'silly young boy' conversations would probably get old for me faster than the girl ones.

4Bakkot
They'd get old really fast for me, considering that there isn't a good way for sports stories to even be about the main characters.

I'm pretty sure we exactly agree on this. Just out of curiosity, what did you think I meant?

0DaFranker
I believe I went with the wrong interpretation of "solve". I read it as something much more open, in the sense of "figuring out the problem's key elements", under which "needing to solve" Google-scale projects, and their being "reasonable to solve", would mean doing the research and experimentation required to figure out all the important structural elements of the problem, and exactly what kind of team would be working on which problems in order to implement the full solution. If I interpret "solve" as "fully functional" (coded, runs, performs the desired operation without bugging for the standard test cases), then what you said and what I said reduce to approximately the same thing.

I mostly agree with ShardPhoenix. Actually learning a language is essential to learning the mindset which programming teaches you.

I find it's easiest to learn programming when I have a specific problem I need to solve, and I'm just looking up the concepts I need for that. However, that approach only really works when you've learned a bit of coding already, so you know what specific problems are reasonable to solve.

Examples of things I did when I was learning to program: I wrote programs to do lots of basic math things, such as testing primality and approxi... (read more)
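As a concrete illustration (my own example code, not Solvent's), a first stab at the primality exercise might look like this:

```python
# Trial-division primality test: the classic beginner exercise.
def is_prime(n):
    """Return True iff n is a prime number."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # only need to check divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print([n for n in range(20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```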

0DaFranker
This struck me as slightly odd. In my experience, people who do not have at least a decent grasp of the concepts involved in programming will not even be able to imagine the kinds of problems that are not reasonable to solve. They will, on occasion, think up things that they believe are "simple", but would in practice require the equivalent of a whole Google department working on them for years before they can get done. If that's what you meant by knowing what problems are reasonable to solve, then that's fine. However, even such large, Google-scale projects could still present a good way to motivate yourself and start looking up and learning stuff about coding.

It depends on how much programming knowledge you currently have. If you want to just learn how to program, I recommend starting with Python, or Haskell if you really like math, or the particular language which lets you do something you want to be able to do (e.g. Java for making simple games, or JavaScript for web stuff). Erlang is a cool language, but it's an odd choice for a first language.

In my opinion as a CS student, Python and Haskell are glorious, C is interesting to learn but irritating to use too much, and Java is godawful but sometimes necessary. The ... (read more)

2fubarobfusco
For Haskell, I'd strongly suggest Hutton's Programming in Haskell as an introductory text for folks with no functional-programming experience. First, it is slightly cheaper; and second, it has the word λ inscribed in large friendly letters (well, letter) on the cover. But seriously — it presents basic Haskell in a gentle but mathy way, without a lot of the gushing of some of the more recent texts. It's written as a textbook, with topics presented sequentially; with academic exercises rather than pseudo-industrial ones. Rather impressively, it presents Parser and IO monads before introducing the Monad abstraction by name, so one actually has worked some examples of monads before trying to puzzle out what a monad is.

I love what this poll reveals about LW readers. Many sympathise with Batman because of his tech/intellectual angle. The same goes for Iron Man, but he's a bit less cool. Then two have heard of Superman, and most LWers are male. And most of us don't care.

It would be lovely if you'd point that kind of thing out to the nerdy guy. One problem with being a nerdy guy is that a lack of romantic experience creates a positive feedback loop.

So yeah, it's great to point out what mistakes the guy made. See Epiphany's comment here.

(I have no doubt that you personally would do this, I'm just pointing this out for future reference. You might not remember, but I've actually talked to you about this positive feedback loop over IM before. I complimented you for doing something which would go towards breaking the cycle.)

How many people actually have that?

Wouldn't that be a lack of regulation on emigration, not immigration?

0OnTheOtherHandle
At a guess, I would say: looking for recurring patterns in fiction, and extrapolating principles/tropes. It's a very bottom-up approach to literature, taking special note of subversions, inversions, aversions, etc, as opposed to the more top-down academic study of literature that loves to wax poetic about "universal truths" while ignoring large swaths of stories (such as Sci Fi and Fantasy) that don't fit into their grand model. Quite frankly, from my perspective, it seems they tend to force a lot of stories into their preferred mold, falling prey to True Art tropes.

I wonder why it is that so many people get here from TV Tropes.

Also, you're not the only one to give up on their first LW account.

3A1987dM
Because it uses as many examples from HP:MoR as it possibly could?
7shokwave
Possibly: TV Tropes approaches fiction the way LessWrong approaches reality.

You're right. My mistake. The standard "that doesn't really apply to real-world situations" argument of course applies, with the circular preferences and so on.

3prase
I am not sure. Quite a realistic, although a bit different, situation may be this: there are three candidates - White, Gray and Black. White and Black are opposed to each other, while Gray is somewhere in between. Thus the preferences of White supporters are W > G > B and the preferences of Black supporters are B > G > W. The Grays are split equally between G > B > W and G > W > B.

Now suppose that the distribution of supporters is 40 for White and 30-30 for Gray and Black, and you are a White supporter. If you vote according to your real preferences, i.e. first W, second G, you make it likely that Gray makes it to the second round, where he wins due to the transferred Black votes. So you should instead vote tactically, first B, second W, which would help Black into the second round, where he will be eliminated by White, who has stronger overall support.

I just read some of your comment history, and it looks like I wrote that a bit below your level. No offense intended. I'll leave what I wrote above there for reference of people who don't know.

4Decius
No problem. You clearly communicated what you intended to, which is never a problem. From the link, though: 'Die trying' is one moral answer. 'Gain permission from the child' is another. 'Perform an immoral act for the greater good' is a third answer. I choose not to make the claim "In some cases you should non-consensually kick a small child in the face because hurting people is bad."

In case you're wondering why everyone is downvoting you, it's because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don't think there's much of a difference between killing someone and letting them die. See this fantastic essay on the topic.

(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you'll get a more nuanced view.)

-2Decius
I'm aware that my position is unpopular. What proportion of consequentialist LWers have donated a kidney?

Do these systems avoid the strategic voting that plagues American elections? No. For example, both Single Transferable Vote and Condorcet voting sometimes provide incentives to rank a candidate with a greater chance of winning higher than a candidate you prefer - that is, the same "vote Gore instead of Nader" dilemma you get in traditional first-past-the-post.

In the case of the Single Transferable Vote, this is simply wrong. If my preferences are Nader > Gore > Bush, I should vote that way. If neither Bush nor Gore has a majority, and N... (read more)

9prase
What about this situation: you and your friend are the last people to vote (having the same preferences, Nader over Gore over Bush), while the standings are:

* 1,000,001 votes Gore first, Bush second
* 1,000,002 votes Bush first, Nader second
* 1,000,003 votes Nader first, Gore second

Giving your two votes to Nader first + Gore second would mean that Gore is eliminated and his votes now support Bush, which gets Bush elected. If you instead vote Gore first and Nader second, Bush is eliminated and his votes are transferred to Nader, who gets elected, which is a much better outcome regarding your preferences.
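As a sanity check, here is a quick instant-runoff simulation (my sketch, not part of the original exchange) using the ballot counts prase gives above:

```python
# Instant-runoff voting: repeatedly eliminate the candidate with the
# fewest first-choice votes until someone has a majority.
from collections import Counter

def irv_winner(ballots):
    """ballots: list of (count, ranking) pairs; ranking is a preference tuple."""
    remaining = {c for _, ranking in ballots for c in ranking}
    while True:
        tally = Counter()
        for count, ranking in ballots:
            for choice in ranking:       # first still-standing preference
                if choice in remaining:
                    tally[choice] += count
                    break
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        remaining.remove(min(tally, key=tally.get))

# Honest vote: you and your friend rank Nader > Gore > Bush.
honest = [(1_000_001, ("Gore", "Bush")),
          (1_000_002, ("Bush", "Nader")),
          (1_000_003, ("Nader", "Gore")),
          (2, ("Nader", "Gore", "Bush"))]
print(irv_winner(honest))    # Bush: Gore is eliminated, his votes flow to Bush

# Tactical vote: you both rank Gore first instead.
tactical = honest[:-1] + [(2, ("Gore", "Nader", "Bush"))]
print(irv_winner(tactical))  # Nader: Bush is eliminated, his votes flow to Nader
```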

You're confusing a few different issues here.

So your utility decreases when theirs increases. Say that your love or hate for the adult is L1, and your love or hate for the kid is L2. The utility change for each as a result of the adult hitting the kid is U1 for him and U2 for the kid.

If your utility decreases when he hits the kid, then all we've established is that -L2*U2 > L1*U1. You may love them both equally but think that hitting the kid messes him up more than it makes the adult happy; in that case you'd still be unhappy when the guy hits a kid. But we haven't estab... (read more)
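A toy numeric check (the weights here are assumptions purely for illustration, not from the comment): with equal love for both parties, the inequality can still hold.

```python
# Assumed numbers: you love both equally (L1 = L2 = 1); hitting gives the
# adult +5 utility (U1) and costs the kid 10 (U2 = -10).
L1, L2 = 1, 1
U1, U2 = 5, -10
net = L1 * U1 + L2 * U2
print(net)  # -5: your utility drops even though you "love" the adult,
            # because -L2*U2 (= 10) > L1*U1 (= 5)
```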

1asparisi
I am actually using James' definition of hate, which is "when their utility function goes up, mine goes down." I suppose that, trivially, this is not entirely accurate of me and Person X. If Person X eats a sandwich and enjoys it, I don't have a problem with that. But if "hate" is unilateral in that fashion, no one loves or hates anyone: I have yet to encounter any individual who would, for instance, feel worse because someone else is enjoying a tasty sandwich.

So instead, I used a more loosely defined variation on their definition, where "hate" can be allowed to occur on one axis of a person's life and not another. Under this variation, I can hate this person for hitting kids and not along other aspects of their life, which is normal. But hating that person isn't evil, which is part of what I was getting at. I don't feel happier if Person X gets utility from hitting kids, even if I would otherwise value Person X. And I don't think it is evil to hate someone who gets their utility in a really messed-up way.

What might make this more difficult is that I am using a colloquial version of 'evil' but James' particular formulation of 'hate', which may make things confusing since I don't think James' definition of hate maps onto what we normally refer to as hate.

What are you trying to do with these definitions? The first three do a reasonable job of providing some explanation of what love means on a slightly simpler level than most people understand it.

However, the "love=good, hate=evil" can't really be used like that. I don't really see what you're trying to say with that.

Also, I'd argue that love has more to do with signalling than your definition seems to imply.

-1James_Miller
Evil, I believe, is taking pleasure in other peoples' pain. I would exclude signaling concerns when deciding whether someone acted out of love.

He used the opening paragraph as one of the example strings that you were testing your regular expressions on.

This might be a really good idea.

I don't mean attractiveness just in the sense of physical looks. I mean the whole package: my social standing, confidence, and perceived coolness.

But thanks for the advice.
