In response to Identity map
Comment author: Manfred 15 August 2016 08:34:06PM 0 points [-]

I don't think this makes sense as a "problem to solve." Identity is a very useful concept for humans and serves several different purposes, but it is not a fundamental facet of the world, and there is no particular reason why the concept, or group of heuristics, that you have learned and call "identity" is going to continue to operate nicely in situations far outside everyday life.

It's like "the problem of hand" - what is a hand? Where is the dividing line between my hand and my wrist? Do I still have a left hand if I cut off my current left hand and put a ball made of bone on the end of my wrist? Thoughts like these were never arrived at by verbal reasoning, and are not very amenable to it.

This is why we should eventually build AI that is much better at learning human concepts than humans are at verbalizing them.

Comment author: Bound_up 28 July 2016 08:55:34PM 1 point [-]

The mainstream LW idea seems to be that the right to life is based on sentience.

At the same time, killing babies is the go-to example of something awful.

Does everyone think babies are sentient, or do they think that it's awful to kill babies even if they're not sentient for some reason, or what?

Does anyone have any reasoning on abortion besides "not a sentient being, so killing it is okay, QED" (wouldn't that apply to newborns, too)?

Comment author: Manfred 28 July 2016 09:34:43PM *  2 points [-]

I don't think this is quite the LW norm. We might distinguish several different meanings of right to life:

1: The moral value I place on other people's lives. In this sense "right to life" is just the phrase I use to describe the fact that I don't want people to kill or die, and the details can easily vary from person to person. If LW users value sentience, this is a fact about demographics, not an argument that should be convincing. This is what we usually mean when we say something is "okay."

2: The norms that society is willing to enforce regarding the value of a life. Usually fairly well agreed upon, though with some contention (e.g. fertilized ova). This is the most common use of the word "right" by people who understand that rights aren't ontologically basic. Again, this is a descriptive definition, not a prescriptive one, but you can see how people might decide what to collectively protect based on compromises between their own individual values.

3: Something we should protect for game-theoretic reasons. This is the only somewhat prescriptive one, since you can argue that it is a mistake in reasoning to, say, pollute the environment if you're part of a civilization of agents very similar to you. Although this still depends on individual values, it's the similarity of people's decisions that does the generalizing, rather than compromise between different people. Values derived in this way can be added to or subtracted from values derived in the other ways. It's unclear how much this applies to the case of abortion - this seems like an interesting argument.

Comment author: MrMind 26 July 2016 07:46:43AM *  1 point [-]

How would you write a better "Probability Theory: The Logic of Science"?

Brainstorming a bit:

  • accounting for the corrections and rederivations of Cox's theorem

  • more elementary and intermediate exercises

  • regrouping and expanding the sections on methods "from problem formulation to prior": uniform, Laplace, group invariance, maxent and its evolutions (MLM and minxent), Solomonoff

  • regrouping and reducing all the "orthodox statistics is shit" sections

  • a chapter about anthropics

  • a chapter about Bayesian networks and causality, flowing into...

  • an introduction to machine learning
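
As a sketch of the kind of material a Bayesian-networks chapter might open with, here is a toy discrete network with posterior inference by brute-force enumeration. The network and all probabilities are the standard illustrative rain/sprinkler example, with made-up numbers, not anything from the book:

```python
# A minimal discrete Bayesian network (rain -> grass_wet <- sprinkler),
# with posterior inference by summing over the joint distribution.
# All probabilities are made-up illustrative numbers.

from itertools import product

P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}

def p_wet(rain, sprinkler):
    """P(grass_wet = True | rain, sprinkler)."""
    if rain and sprinkler:
        return 0.99
    if rain:
        return 0.9
    if sprinkler:
        return 0.8
    return 0.0

def posterior_rain_given_wet():
    """P(rain = True | grass_wet = True), by enumerating the joint."""
    joint = {}
    for rain, sprinkler in product([True, False], repeat=2):
        joint[(rain, sprinkler)] = (
            P_RAIN[rain] * P_SPRINKLER[sprinkler] * p_wet(rain, sprinkler)
        )
    total = sum(joint.values())
    rain_and_wet = sum(p for (rain, _), p in joint.items() if rain)
    return rain_and_wet / total

print(posterior_rain_given_wet())  # about 0.74 - wet grass makes rain much more likely
```

Enumeration is exponential in the number of variables, which is exactly what motivates the graph-structured factorizations such a chapter would then develop.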

Comment author: Manfred 26 July 2016 10:18:02PM 1 point [-]

My perspective on anthropics is somewhat different from most people's, but I think that in a probability theory textbook, anthropics should only be treated as a special case of assigning probabilities to events generated by causal systems, which requires some familiarity with causal graphs. It might be worth thinking about organizing material like that into a second book, which could have causality in an early chapter.

I would include Savage's theorem, which is really pretty interesting. A bit more theorem-proving in general, really.

Solomonoff induction is a bit complicated; I'm not sure it's worthwhile to cover it at more than a cursory level, but it's definitely an important part of a discussion about what properties we want priors to have.

On that note, a subject of some modern interest is how to make good inferences when we have limited computational resources. This means both explicitly using probability distributions that are easy to calculate with (e.g. Gaussian, Cauchy, uniform), and implicitly using easy distributions by neglecting certain details or applying certain approximations.
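
The Laplace approximation is one standard instance of "explicitly using easy distributions": replace an awkward posterior with a Gaussian centered at its mode, with variance set by the curvature there. A minimal sketch on a coin-flip Beta posterior (the counts are made up for illustration; with a Beta the exact posterior is available, so the approximation can be checked):

```python
# Laplace approximation of a Beta(h+1, t+1) posterior (uniform prior,
# h heads and t tails observed) by a Gaussian matched at the mode.

import math

def laplace_approx_beta(h, t):
    """Return (mean, std) of the Gaussian approximating Beta(h+1, t+1)."""
    a, b = h + 1, t + 1
    mode = (a - 1) / (a + b - 2)
    # Second derivative of the log-density at the mode:
    # d^2/dp^2 [(a-1) log p + (b-1) log(1-p)] = -(a-1)/p^2 - (b-1)/(1-p)^2
    curvature = -(a - 1) / mode**2 - (b - 1) / (1 - mode) ** 2
    return mode, math.sqrt(-1.0 / curvature)

mean, std = laplace_approx_beta(h=60, t=40)
print(mean, std)  # roughly 0.6 and 0.049
```

The Gaussian is cheap to propagate through later calculations, which is the whole point; the cost is that it ignores the skew of the true posterior near the boundaries.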

Comment author: Manfred 21 July 2016 09:48:54PM 12 points [-]

Oh my gosh, the negative utilitarians are getting into AI safety. Everyone play it cool and try not to look like you're suffering.

Comment author: polymathwannabe 19 July 2016 03:48:58PM 0 points [-]

Added to my Amazon wish list. Do you know of any other books one should be aware of?

Comment author: Manfred 20 July 2016 03:30:14AM 2 points [-]

There are probably some books by Dan Dennett that the LW articles dealing with philosophy of mind drew from, but I've mostly been exposed to Dennett through articles, like Intentional Systems and Eliminate the Middletoad!

On evolution, essential reading is The Selfish Gene.

On heuristics and biases, Thinking, Fast and Slow (the first half, at least) and Dan Ariely's books are good reads.

I'm not aware of anything similar to the sequences in terms of intersection of Bayesianism, heuristics and biases, and trying to teach how to think about confusing things. Unfortunately.

Comment author: Manfred 18 July 2016 11:53:55PM 8 points [-]

Yes, it's entitled "Good and Real." The shadowy cabal behind LessWrong wrote it under one of their other pseudonyms, "Gary Drescher."

Comment author: Manfred 18 July 2016 11:56:09PM *  8 points [-]

(Note: this is not actually true. But Good and Real recapitulates most of the points you'll see in the philosophy-related sequences, with less focus on the basics and more on elaborating philosophical arguments. If this is the content you want to share, it might be a good choice.)

Comment author: blf 18 July 2016 09:36:32PM 4 points [-]

Does there exist a paper version of Yudkowsky's book "Rationality: From AI to Zombies"? I only found a Kindle version but I would like to give it as a present to someone who is more likely to read a dead-tree version.

Comment author: root 18 July 2016 02:59:31PM 3 points [-]

What are the differences between the 'big names' of higher education, in comparison to other places?

For example, I often hear about MIT, Oxford, and to a lesser extent, Cambridge. Is it just self-selection, or do graduates from there actually have better prospects than graduates of 'University of X, YZ'?

In a little bit of unintended self-reflection I noticed that I have a strange binary way of thinking about higher education. It feels as though if I don't go to one of the top n, my effort is wasted. Not sure why.

I've become somewhat paranoid about the real world after reading HPMOR, because I keep getting a 'how much do I really know?' feeling. I'm not sure how my impressions were formed, and I should double-check how well the ideas in my mind reflect real-world truth, but at the same time I'm not even sure what a reliable indicator would be.

Post-high education LWers, do you think the place you studied at had a significant effect on your future prospects?

Comment author: Manfred 18 July 2016 04:47:50PM 4 points [-]

There's a lot of self-selection, and the classes and extracurricular resources are therefore allowed to be geared towards smarter students, and that's nice. You'll also get more opportunities to learn about current research in your chosen field, which improves your grad school chances.

A lot of the value comes if you plan to get a job straight out of college: going to a top-n school gives you a name-brand advantage (not without reason).

However, controlling for smartness and research experience, I think that where you did your undergrad doesn't matter all that much for grad school.

Comment author: Manfred 30 June 2016 06:14:40PM 0 points [-]

Fun post, thanks!

Comment author: Daniel_Burfoot 29 June 2016 01:22:44PM *  4 points [-]

This comment got 6+ responses, but none that actually attempted to answer the question. My goal of Socratically prompting contrarian thinking, without being explicitly contrarian myself, apparently failed. So here is my version:

  • Most startups are gimmicky and derivative, even or especially the ones that get funded.
  • Working for a startup is like buying a lottery ticket: a small chance of a big payoff. But since humans are by nature risk-averse, this is a bad strategy from a utility standpoint.
  • Startups typically do not create new technology; instead they create new technology-dependent business models.
  • Even if startups are a good idea in theory, currently they are massively overhyped, so on the margin people should be encouraged to avoid them.
  • Early startup employees (not founders) don't make more than large company employees.
  • The vast majority of value from startups comes from the top 1% of firms, like Facebook, Amazon, Google, Microsoft, and Apple. All of those firms were founded by young white males in their early 20s. VCs are driven by the goal of funding the next Facebook, and they know about the demographic skew, even if they don't talk about it. So if you don't fit the profile of a megahit founder, you probably won't get much attention from the VC world.
  • There is a group of people (called VCs) whose livelihood depends on having a supply of bright young people who want to jump into the startup world. These people act as professional activists in favor of startup culture. This would be fine, except there is no countervailing force of professional critics. This creates a bias in our collective evaluation of the culture.
Comment author: Manfred 29 June 2016 08:03:37PM *  2 points [-]

Argument thread!

You should probably stay at your big-company job, because current startup founders are self-selected for, on average, different things than you're selecting yourself for by trying to jump on a popular trend, so their success is only a weak predictor of yours.

Startups often cash out by generating hype and getting bought for ridiculous amounts of money by a big company. But they are very, very often, in more sober analysis, not worth this money. From a societal perspective this is bad because it's not properly aligning incentives with wealth creation, and from a new-entrant perspective this is bad because you likely fail if the bubble pops before you can sell.
