Zipfian Academy is a bootcamp for data science, but it's the only non-web-dev bootcamp I know about.
I don't know for sure that Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me.
That's a moral disagreement, not a factual disagreement. Alicorn is a deontologist, and you guys probably wouldn't be able to reach consensus on that no matter how hard you tried.
Three somewhat disconnected responses —
For a moral realist, moral disagreements are factual disagreements.
I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.
It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Mora...
I don't think that they're thinking rationally and just saying things wrong. They're genuinely thinking wrong.
If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the jobs statistics. My impression was that they were just reaching for their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.
ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.
Further edit: I retract the claim that this is strong evidence of rationalists winning, so it doesn't count as an example here.
I just finished App Academy. App Academy is a 9-week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart f...
I'm a computer science student. I did a course on information theory, and I'm currently doing a course on Universal AI (taught by Marcus Hutter himself!). I've found both of these courses far easier as a result of already having a strong intuition for the topics, thanks to seeing them discussed on LW in a qualitative way.
For example, Bayes' theorem, Shannon entropy, Kolmogorov complexity, sequential decision theory, and AIXI are all topics which I feel I've understood far better thanks to reading LW.
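To give a flavour of how the formal versions cash out, here's a minimal Python sketch (my own toy example, with made-up numbers): a single Bayes update and the Shannon entropy of a couple of simple distributions.

```python
import math

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(H | E) from P(H), P(E | H) and P(E | ~H)."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

def shannon_entropy(probabilities):
    """Entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(bayes_update(0.01, 0.9, 0.1))  # ~0.083: a 1% prior after a 9:1 likelihood ratio
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally likely outcomes
```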
LW also inspired me to read a lot of philosophy. AFAICT, ...
The famous example of a philosopher changing his mind is Frank Jackson with his Mary's Room argument. However, that's pretty much the exception which proves the rule.
Basically, the busy beaver function tells us the maximum number of steps that a halting Turing machine with a given number of states and symbols can run for. If we knew the busy beaver value for, say, 5 states and 5 symbols, then we could tell whether any 5-state, 5-symbol Turing machine will eventually halt: run it for that many steps, and if it hasn't halted by then, it never will.
However, you can also see why it's impossible to compute the busy beaver function in general: you'd have to know which Turing machines of a given size halt, which is in general impossible.
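To make the argument concrete, here's a minimal Python sketch (using my own toy encoding of a Turing machine, not any standard one). If we somehow had the uncomputable value BB(n, k) to hand, passing it in as the step budget would decide halting for every n-state, k-symbol machine.

```python
def halts_within(transitions, max_steps, start_state="A", blank=0):
    """Simulate a Turing machine for at most max_steps steps.

    transitions: {(state, symbol): (write_symbol, move, next_state)},
    where move is +1 or -1 and the next_state "HALT" stops the machine.
    Returns True if the machine halts within the step budget.
    """
    tape = {}
    head, state = 0, start_state
    for _ in range(max_steps):
        if state == "HALT":
            return True
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return state == "HALT"

# A one-state machine that writes a 1 and halts immediately.
tiny = {("A", 0): (1, +1, "HALT")}
print(halts_within(tiny, max_steps=10))  # True

# If we knew BB(n, k) -- the longest any *halting* n-state, k-symbol machine
# runs for -- then halts_within(machine, BB(n, k)) would settle halting for
# every such machine: anything still running after BB(n, k) steps runs forever.
```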
Are you aware of the busy beaver function? Read this.
Basically, it's impossible to write down numbers large enough for that to work.
What do you mean by "utilitarianism"? The word has two different common meanings around here: any type of consequentialism, and the specific type of consequentialism that uses "total happiness" as a utility function. This sentence appears to be designed to confuse the two meanings.
Yeah, my mistake. I'd never run across any other versions of consequentialism apart from utilitarianism (except for Clippy, of course). I suppose caring only for yourself might count? But do you seriously think that the majority of those consequentialists aren't utilitarian?
This seems like it has the makings of an interesting poll question.
I agree. Let's do that. You're consequentialist, right?
I'd phrase my opinion as "I have terminal value for people not suffering, including people who have done something wrong. I acknowledge that sometimes causing suffering might have instrumental value, such as imprisonment for crimes."
How do you phrase yours? If I were to guess, it would be "I have a terminal value which says that people who have caused suffering should suffer themselves."
Shall I make a Discussion post about this after I get your refinement of the question?
Here's an old Eliezer quote on this:
...4.5.2: Doesn't that screw up the whole concept of moral responsibility?
Honestly? Well, yeah. Moral responsibility doesn't exist as a physical object. Moral responsibility - the idea that choosing evil causes you to deserve pain - is fundamentally a human idea that we've all adopted for convenience's sake. (23).
The truth is, there is absolutely nothing you can do that will make you deserve pain. Saddam Hussein doesn't deserve so much as a stubbed toe. Pain is never a good thing, no matter who it happens to, even A
Harry is failing pretty badly to update on the available evidence. He already knows that there are a lot of aspects of magic that seem nonsensical to him: McGonagall turning into a cat, the way broomsticks work, etc. Harry's dominant hypothesis about this is that magic was intelligently designed (by the Atlanteans?), so he should expect magic to work the way neurotypical humans expect it to work, not the way he expects it to work.
I disagree. It seems to me that individual spells and magical items work in the way neurotypical humans expect t...
I mostly agree with ShardPhoenix. Actually learning a language is essential to learning the mindset which programming teaches you.
I find it's easiest to learn programming when I have a specific problem I need to solve, and I'm just looking up the concepts I need for that. However, that approach only really works when you've learned a bit of coding already, so you know what specific problems are reasonable to solve.
Examples of things I did when I was learning to program: I wrote programs to do lots of basic math things, such as testing primality and approxi...
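For instance, the primality exercise might look something like this minimal Python sketch (in the spirit of those programs, not the actual code I wrote):

```python
def is_prime(n):
    """Return True if n is prime, by trial division up to sqrt(n)."""
    if n < 2:
        return False
    divisor = 2
    while divisor * divisor <= n:
        if n % divisor == 0:
            return False
        divisor += 1
    return True

print([k for k in range(2, 30) if is_prime(k)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```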
It depends on how much programming knowledge you currently have. If you just want to learn how to program, I recommend starting with Python, or Haskell if you really like math, or whichever language lets you do something you want to be able to do (e.g. Java for making simple games, JavaScript for web stuff). Erlang is a cool language, but it's an odd choice for a first language.
In my opinion as a CS student, Python and Haskell are glorious, C is interesting to learn but irritating to use too much, and Java is godawful but sometimes necessary. The ...
It would be lovely if you'd point that kind of thing out to the nerdy guy. One problem with being a nerdy guy is that a lack of romantic experience creates a positive feedback loop.
So yeah, it's great to point out what mistakes the guy made. See Epiphany's comment here.
(I have no doubt that you personally would do this, I'm just pointing this out for future reference. You might not remember, but I've actually talked to you about this positive feedback loop over IM before. I complimented you for doing something which would go towards breaking the cycle.)
In case you're wondering why everyone is downvoting you, it's because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don't think there's much of a difference between killing someone and letting them die. See this fantastic essay on the topic.
(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you'll get a more nuanced view.)
Do these systems avoid the strategic voting that plagues American elections? No. For example, both Single Transferable Vote and Condorcet voting sometimes provide incentives to rank a candidate with a greater chance of winning higher than a candidate you prefer - that is, the same "vote Gore instead of Nader" dilemma you get in traditional first-past-the-post.
In the case of the Single Transferable Vote, this is simply wrong. If my preferences are Nader > Gore > Bush, I should vote that way. If neither Bush nor Gore have a majority, and N...
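For reference, here's a minimal Python sketch of single-winner STV (instant-runoff) counting, just to make the transfer mechanics concrete; the candidate names and ballot counts are made up, and the sketch ignores tie-breaking rules.

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of preference-ordered candidate lists. Returns the winner."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total:
            return leader
        # No majority: eliminate the weakest candidate and transfer their ballots.
        weakest = min(tallies, key=tallies.get)
        candidates.discard(weakest)

# 45 Bush ballots, 40 Gore ballots, 15 Nader > Gore ballots: Nader is eliminated
# first, those ballots transfer to Gore, and Gore wins 55 to 45.
ballots = ([["Bush"]] * 45) + ([["Gore"]] * 40) + ([["Nader", "Gore"]] * 15)
print(instant_runoff(ballots))  # Gore
```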
You're confusing a few different issues here.
So your utility decreases when theirs increases. Say that your love or hate for the adult is L1, your love or hate for the kid is L2, and the utility change for each as a result of the adult hitting the kid is U1 for the adult and U2 for the kid.
If your utility decreases when he hits the kid, then all we've established is that L1*U1 + L2*U2 < 0, i.e. that -L2*U2 > L1*U1. You might love them both equally but think that hitting the kid messes him up more than it makes the adult happy; in that case you'd still be unhappy when the guy hits a kid. But we haven't estab...
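To put made-up numbers on that inequality (my own illustration, not anything from the original exchange):

```python
L1, L2 = 1.0, 1.0    # you care about the adult and the kid equally
U1, U2 = 2.0, -5.0   # the adult enjoys it a bit; the kid is hurt a lot

your_change = L1 * U1 + L2 * U2
print(your_change)         # -3.0: your utility drops despite equal love for both
print(-L2 * U2 > L1 * U1)  # True: the condition derived above (5.0 > 2.0)
```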
What are you trying to do with these definitions? The first three do a reasonable job of explaining what love means at a slightly simpler level than most people understand it.
However, the "love = good, hate = evil" one can't really be used like that. I don't really see what you're trying to say with it.
Also, I'd argue that love has more to do with signalling than your definition seems to imply.
I have made bootleg PDFs in LaTeX of some of my favorite SSC posts, and gotten him to sign printed-out and bound versions of them. At some point I might make my SSC-to-LaTeX script public...