
In response to comment by gjm on Selling Nonapples
Comment author: milindsmart 24 August 2016 06:20:10AM -1 points

Thanks :) Can you elaborate a bit? Are you saying that I overreached, and that largely there should be some transformed domain where the model turns out to be simple, but is not guaranteed to exist for every model?

Comment author: gjm 24 August 2016 01:32:18PM -1 points

I'm not sure "overreached" is quite my meaning. Rather, I think I disagree with more or less everything you said, apart from the obvious bits :-).

And that is the reason linear models are mathematically tractable: they form such a small space of possible models.

I don't think it has anything much to do with the size of the space. Linear things are tractable because vector spaces are nice. The only connection between the niceness of linear models and the fact that they form such a small fraction of all possible models is this: any "niceness" property is a constraint on the models that have it, and therefore for something to be very "nice" requires it to satisfy lots of constraints, so "nice" things have to be rare. But "nice, therefore rare" is not at all the same as "rare, therefore nice".

(We could pick out some other set of models, just as sparse as the linear ones, without the nice properties linear models have. They would form just as small a space of possible models, but they would not be as nice to work with as the linear ones.)
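A minimal sketch (my own illustration, not taken from the thread) of the specific "niceness" at stake: linearity gives you superposition, so solutions of a linear constraint can be freely recombined into new solutions, while an equally sparse nonlinear constraint offers no such guarantee.

```python
# If x1 and x2 both satisfy a linear constraint L(x) = 0, then so does
# any combination a*x1 + b*x2 (superposition).  A nonlinear constraint
# f(x) = 0 gives no such guarantee.  The constraints below are my own
# toy examples.

def L(x):            # a linear constraint on R^3: x0 + x1 - 2*x2 = 0
    return x[0] + x[1] - 2 * x[2]

def f(x):            # a nonlinear constraint: x0*x1 - x2 = 0
    return x[0] * x[1] - x[2]

x1 = (1.0, 1.0, 1.0)                          # L(x1) == 0
x2 = (2.0, 0.0, 1.0)                          # L(x2) == 0
combo = tuple(3 * a - 5 * b for a, b in zip(x1, x2))
print(L(combo))                               # 0.0: still a solution

y1 = (1.0, 1.0, 1.0)                          # f(y1) == 0
y2 = (2.0, 2.0, 4.0)                          # f(y2) == 0
total = tuple(a + b for a, b in zip(y1, y2))
print(f(total))                               # 4.0: the sum fails
```

This is the property that lets you solve a linear system once by finding a basis of solutions and then recombining them; a generic nonlinear equation offers no analogous recipe.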

Of course nonlinear models don't have general formulae that always work: they're just defined as what is NOT linear.

If you mean that being nonlinear doesn't guarantee anything useful, of course that's right (and this is the same point about "nonapples" being made by the original article here). Particular classes of nonlinear models might have general formulae, a possibility we'll come to in a moment.

In other words, linear models are severely restricted in the form they can have.

I'm not sure what that's putting "in other words"; but yes, being linear is a severe restriction.

When we define another subset of models suitable to the specific thing being modelled, then we will just as easily be able to come up with a set of explicit symbolic formulae.

No. Not unless we cheat by e.g. defining some symbol to mean "a function satisfying this funky nonlinear condition we happen to be working with right now". (Which mathematicians sometimes do, if the same funky nonlinear condition comes up often enough. But (1) this is a special case and (2) it still doesn't get you anything as nice and easy to deal with as linearity does.)

In general, having a narrowly specified set of models suitable to a specific physical phenomenon is no guarantee at all of exact explicit symbolic formulae.

Then it will be just as "tractable" as linear models, even though it's nonlinear: simply because it has different special properties

No. Those different special properties may be much less useful than linearity. Linearity is a big deal because it is so very useful. The space of solutions to, I dunno, let's say the Navier-Stokes equations in a given region and with given boundary conditions is highly constrained; but it isn't constrained in ways that (at least so far as mathematicians have been able to figure out) are as useful as linearity.

So I don't agree at all that "largely there should be some transformed domain where the model turns out to be simple". Sometimes that happens, but usually not.

Comment author: gjm 23 August 2016 11:51:50PM -1 points

Some of the stuff about "street epistemology" -- not what you've written yourself, but e.g. those videos by Anthony Magnabosco -- strikes me as weird and creepy and manipulative in almost the same way as much Christian evangelism does. Don't get me wrong; if you're going to be manipulative at people in order to try to manoeuvre them nearer to your religious position, "street epistemology" is probably about the least odious of your options. But that isn't a high bar to clear.

I know that in his videos Magnabosco talks about treating your targets with respect, and really listening to them, and so on. I expect Boghossian's book is the same (but I don't know; I haven't read it). But this gives me much the same impression as I had back in my religious days listening to Christians talking about how you have to love people for themselves in order to Win Them For Christ. I mean, look at the title of Boghossian's book. "A Manual for Creating Atheists". The believers you're talking to, doing "street epistemology", are merely raw material for the creation of atheists.

If it strikes me this way, then I bet religious people encountering this material don't like it any better than I do, and the site is (secondarily) intended to be suitable for believers to read. You might want to consider toning down your praise of "street epistemology".

(Not necessarily. I think I am unusually annoyed by, and perhaps unusually sensitive to, attempts at psychological manipulation. Maybe other potential readers will react more positively.)

Comment author: cody-bryce 23 August 2016 03:02:21PM 4 points

If you're interested in 'balancing' work and pleasure, stop trying to balance them. Instead make your work more pleasurable.

-Donald Trump

Comment author: gjm 23 August 2016 04:11:44PM -7 points

Much easier for some people, and some varieties of work, than others. (That "privilege" thing social-justice types like to talk about? You can see it in Trump's remark here. He has the power and the freedom to change the work he does and how he does it to make it more fun for him, and he shows no sign of awareness that anyone else is in a very different situation.)

Adjusting your work and/or your attitude to your work to make it more pleasurable is a good idea for anyone who can do it, but I think it's only a very small fraction of the population who can do it enough that it no longer makes sense to talk about balancing work with not-work.

Comment author: cody-bryce 23 August 2016 03:03:03PM 10 points

In the second grade I actually gave a teacher a black eye — I punched my music teacher because I didn’t think he knew anything about music and I almost got expelled.

-Donald Trump

Comment author: gjm 23 August 2016 04:08:34PM -7 points

Downvoted. Punching teachers in the eye because you don't think they know enough is generally counterproductive. This mostly tells us that in second grade Donald Trump had an overdeveloped sense of entitlement and an underdeveloped sense of self-control. How could this possibly be a "rationality quote"?

Comment author: cody-bryce 23 August 2016 03:03:32PM 3 points

There are people — I categorize them as life’s losers — who get their sense of accomplishment and achievement from trying to stop others. As far as I’m concerned, if they had any real ability...they’d be doing something constructive themselves

-Donald Trump

Comment author: gjm 23 August 2016 04:07:29PM -7 points

Downvoted. This isn't entirely wrong -- surely there are some people who "get their sense of accomplishment and achievement from trying to stop others" -- but I strongly suspect that Trump is exaggerating how many people are actually in that category and I would expect (for fundamental-attribution-error reasons) that this is a commoner error than its opposite; and this sort of dismissive attitude to other people is usually unhelpful as well as unpleasant.

Comment author: cody-bryce 23 August 2016 03:02:10PM 4 points

I’ve always thought about the issue of nuclear war; it’s a very important element in my thought process. It’s the ultimate, the ultimate catastrophe, the biggest problem this world has, and nobody’s focusing on the nuts and bolts of it. It’s a little like sickness. People don’t believe they’re going to get sick until they do. Nobody wants to talk about it. I believe the greatest of all stupidities is people’s believing it will never happen, because everybody knows how destructive it will be, so nobody uses weapons. What bullshit.

-Donald Trump

Comment author: gjm 23 August 2016 03:58:35PM -7 points

Five Trump quotations in "Rationality Quotes" posted within about a minute of one another. Is this some sort of experiment to see how LW reacts to the name of Trump? (With one possible exception they do not in fact seem to me to be rationality quotes in any useful sense.)

Comment author: JonahSinick 22 August 2016 09:33:02PM 1 point

Hello! I'm a cofounder of Signal Data Science.

Because our students have come into the program from very heterogeneous backgrounds (ranging from high school dropout to math PhD with years of experience as a software engineer), summary statistics along the lines that you're looking for are less informative than might seem to be the case prima facie. In particular, we don't yet have a meaningfully large sample of students who don't fall into one of the categories of (i) people who would have gotten high paying jobs anyway and (ii) people who one wouldn't expect to have gotten high paying jobs by now, based on their backgrounds.

If you're interested in the possibility of attending the program, we encourage you to fill out our short application form. If it seems like it might be a good fit for you, we'd be happy to provide detailed answers to any questions that you might have about job placement.

Comment author: gjm 22 August 2016 10:13:27PM -1 points

Wait, your category (ii) is surely exactly what we care about here. We want to know: For someone whose background would lead you not to expect high-paying data science jobs, is Signal effective in getting them a better chance of a high-paying data science job?

Comment author: TheAncientGeek 22 August 2016 06:46:26PM 0 points

I think he wants a system which works like realism, in that there are definite answers to ethical questions ("fixed", "frozen"), but without spookiness.

Yudkowsky's theory entails the same problem as relativism: if morality is whatever people value, and if what people happen to value is intuitively immoral (slavery, torture, whatever), then there's no fixed standard of morality. The label "moral" has been placed on a moving target. (Standard relativism usually has this problem synchronically, i.e. different communities are said to have different but equally valid moralities at the same time, but it makes little difference if you are asserting that the global community has different but equally valid moralities at different times.)

You can avoid the problems of relativism by setting up an external standard, and there are many theories of that type, but they tend to have the problem that the external standard is not naturalistic: God's commands, the Form of the Good, and so on. I think Yudkowsky wants a theory that is non-arbitrary and also naturalistic. I don't think he arrives at a single theory that does both. If the Moral Equation is just a label for human intuition, then it suffers from all the vagaries of labeling values as moral, like the original theory. If the Moral Equation is something ideal and abstract, why can't aliens partake?

Comment author: gjm 22 August 2016 10:08:33PM -1 points

I agree.

Comment author: entirelyuseless 19 August 2016 04:36:04AM 0 points

I agree that the typical realist theory implies more objectivity than is present in Eliezer's theory. But in the same way, the typical non-realist theory implies less objectivity than is present there. E.g. someone who says that "this action is good" just means "I want to do this action" has less objectivity, because it will vary from person to person, which is not the case in Eliezer's theory.

Comment author: gjm 19 August 2016 05:07:03PM -1 points

I think we are largely agreed as to facts and disagree only on whether it's better to call Eliezer's theory, which is intermediate between many realist theories and many non-realist theories, "realist" or "non-realist".

I'm not sure, though, that someone who says that "this is good" = "I want to do this" is really a typical non-realist. My notion of a typical non-realist -- typical, I mean, among people who've actually thought seriously about this stuff -- is somewhat nearer to Eliezer's position than that.

Anyway, the reason why I class Eliezer's position as non-realist is that the distinction between his position and that of many (other?) non-realists is purely terminological: he agrees that there are all these various value systems, and that if ours seems special to us that's because it's ours rather than because of some agent-independent feature of the universe that picks ours out in preference to others; he just wants to use words like "good" to refer to one particular value system. The distinction between his position and that of most (other?) realists, by contrast, goes beyond terminology: they say that the value system they regard as real is actually built into the fabric of reality in some way that goes beyond the mere fact that it's our (or their) value system.

You may weight these differences differently.

Comment author: entirelyuseless 18 August 2016 03:33:14PM 0 points

I disagree with this objection to Eliezer's ethics because I think the distinction between "realist" and "nonrealist" theories is a confusion that needs to be done away with. The question is not whether morality (or anything else) is "something real," but whether or not moral claims are actually true or false. Because that is all the reality that actually matters: tables and chairs are real, as far as I am concerned, because "there is a table in this room" is actually true. (This is also relevant to our previous discussion about consciousness.)

And in Eliezer's theory, some moral claims are actually true, and some are actually false. So I agree with him that his theory is realist.

I do disagree with his theory, however, insofar as it implies that "what we care about" is essentially arbitrary, even if it is what it is.

Comment author: gjm 18 August 2016 05:21:38PM -1 points

The question is not whether morality (or anything else) is "something real", but whether or not moral claims are actually true or false.

That (whether moral claims are actually true or false) is exactly how I distinguish moral realism from moral nonrealism, and I think this is a standard way to understand the terms.

But any nonrealist theory can be made into one in which moral claims have truth values by redefining the key words; my suggestion is that Eliezer's theory is of this kind. It is nearer to a straightforwardly nonrealist theory (which it becomes if, e.g., you replace his use of terms like "good" with terms that are explicit about what value system they reference, such as "good according to human values") than to typical, more ambitious realist theories that claim that moral judgements are true or false according to some sort of moral authority that goes beyond any particular person's or group's or system's values.
