Comment author: Furcas 24 September 2016 03:39:19PM 17 points [-]

Donated $500!

Comment author: Jiro 15 August 2016 03:20:10AM *  5 points [-]

That demonstrates that Japanese culture has the phrase. Not that Japanese culture has the phrase with the same meaning as Eliezer uses.

And even if Japanese culture has it, there's a difference between having it as a fictional thing and having it as a concept commonly applied to actual people.

Also, in this context, remember that fictional scenarios are often set up to have individuals drastically influence the result where real life scenarios do not. People like reading about Voldemort defeated by Harry Potter, not by 200 wizards doing routine policing missions that are thorough enough that they happen to find all the horcruxes, followed by massive military backup for the squad of identically trained men raiding his compound. That's why fictional characters often have something like tsuyoku naritai; it doesn't carry over to the real world.

By the way:

"Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah."

Obviously Eliezer was not familiar with the concept "asymptote".
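A toy numeric sketch of the asymptote point (illustrative numbers of my own, not anything from the original exchange): one quantity can lose ground in every generation and another gain in every generation, yet the second never surpasses the first if each approaches its own asymptote.

```python
# Toy illustration (hypothetical numbers): a declining quantity approaches an
# asymptote of 60 from above, while a growing quantity approaches an asymptote
# of 50 from below. Both change every generation; the second never catches up.

def declining(n):
    # starts at 100, closes half the remaining distance to 60 each generation
    return 60 + 40 * 0.5 ** n

def growing(n):
    # starts at 0, closes half the remaining distance to 50 each generation
    return 50 - 50 * 0.5 ** n

for n in range(50):
    assert declining(n) > growing(n)  # holds in every generation
print("the growing quantity never surpasses the declining one")
```

The point being only that "loses every generation" plus "gains every generation" does not entail "sooner or later surpasses".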

Comment author: Elo 05 August 2016 05:27:30AM -2 points [-]

Voting is enabled at 10+ karma. Welcome! You managed to make a post, which means you successfully verified your email address (which sometimes stops people).

Comment author: gjm 24 August 2016 01:32:18PM -1 points [-]

I'm not sure "overreached" is quite my meaning. Rather, I think I disagree with more or less everything you said, apart from the obvious bits :-).

And that is the reason linear models are mathematically tractable: they form such a small space of possible models.

I don't think it has anything much to do with the size of the space. Linear things are tractable because vector spaces are nice. The only connection between the niceness of linear models and the fact that they form such a small fraction of all possible models is this: any "niceness" property they have is a constraint on the models that have it, and therefore for something to be very "nice" requires it to satisfy lots of constraints, so "nice" things have to be rare. But "nice, therefore rare" is not at all the same as "rare, therefore nice".

(We could pick out some other set of models, just as sparse as the linear ones, without the nice properties linear models have. They would form just as small a space of possible models, but they would not be as nice to work with as the linear ones.)
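A minimal sketch of what "vector spaces are nice" buys you, using toy recurrences of my own choosing: sums of solutions of a linear equation are again solutions (superposition), while an otherwise similar nonlinear equation has no such structure.

```python
# Toy example (my own, for illustration): superposition holds for a linear
# recurrence x[n+2] = x[n+1] + x[n], but fails for the nonlinear recurrence
# x[n+2] = x[n+1] * x[n]. That structural property, not rarity, is the
# "niceness" that makes linear models tractable.

def satisfies_linear(seq):
    # checks the linear recurrence x[n+2] = x[n+1] + x[n]
    return all(seq[n + 2] == seq[n + 1] + seq[n] for n in range(len(seq) - 2))

def satisfies_nonlinear(seq):
    # checks the nonlinear recurrence x[n+2] = x[n+1] * x[n]
    return all(seq[n + 2] == seq[n + 1] * seq[n] for n in range(len(seq) - 2))

a = [1, 0, 1, 1, 2, 3]     # one solution of the linear recurrence
b = [0, 1, 1, 2, 3, 5]     # another (the Fibonacci numbers)
s = [x + y for x, y in zip(a, b)]
print(satisfies_linear(s))       # True: the sum of solutions is a solution

c = [2, 2, 4, 8, 32, 256]  # a solution of the nonlinear recurrence
d = [1, 3, 3, 9, 27, 243]  # another
t = [x + y for x, y in zip(c, d)]
print(satisfies_nonlinear(t))    # False: superposition fails
```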

Of course nonlinear models don't have general formulae that always work: they're just defined as what is NOT linear.

If you mean that being nonlinear doesn't guarantee anything useful, of course that's right (and this is the same point about "nonapples" being made by the original article here). Particular classes of nonlinear models might have general formulae, a possibility we'll come to in a moment.

In other words, linear models are severely restricted in the form they can have.

I'm not sure what that's putting "in other words"; but yes, being linear is a severe restriction.

When we define another subset of models suitable to the specific thing being modelled, then we will just as easily be able to come up with a set of explicit symbolic formulae.

No. Not unless we cheat by e.g. defining some symbol to mean "a function satisfying this funky nonlinear condition we happen to be working with right now". (Which mathematicians sometimes do, if the same funky nonlinear condition comes up often enough. But (1) this is a special case and (2) it still doesn't get you anything as nice and easy to deal with as linearity does.)

In general, having a narrowly specified set of models suitable to a specific physical phenomenon is no guarantee at all of exact explicit symbolic formulae.

Then it will be just as "tractable" as linear models, even though it's nonlinear: simply because it has different special properties

No. Those different special properties may be much less useful than linearity. Linearity is a big deal because it is so very useful. The space of solutions to, I dunno, let's say the Navier-Stokes equations in a given region and with given boundary conditions is highly constrained; but it isn't constrained in ways that (at least so far as mathematicians have been able to figure out) are as useful as linearity.

So I don't agree at all that "largely there should be some transformed domain where the model turns out to be simple". Sometimes that happens, but usually not.

Comment author: TheAncientGeek 18 October 2016 01:04:49PM *  3 points [-]

I am not taking charity to be a central example of ethics.

Charity, societal improvement, etc. are not centrally ethical, because the dimension of obligation is missing. It is obligatory to refrain from murder, but supererogatory to give to charity. Charity is not completely divorced from ethics, because gaining better outcomes is the obvious flipside of avoiding worse outcomes, but it does not have every component of that which is centrally ethical.

Not all value is morally relevant. Some preferences can be satisfied without impacting anybody else, preferences for flavours of ice cream being the classic example, and these are morally irrelevant. On the other hand, my preference for loud music is likely to impinge on my neighbour's preference for a good night's sleep: those preferences have a potential for conflict.

Charity and altruism are part of ethics, but not central to ethics. A peaceful and prosperous society is in a position to consider how best to allocate its spare resources (and utilitarianism is helpful here, without being a full theory of ethics), but peace and prosperity are themselves the outcome of a functioning ethics, not things that can be taken for granted. Someone who treats charity as the outstanding issue in ethics is, as it were, looking at the visible 10% of the iceberg while ignoring the 90% that supports it.

If you mean conflict between individuals' own values,

I mean destructive conflict.

Consider two stone age tribes. When a hunter of tribe A returns with a deer, everyone falls on it, trying to grab as much as possible, and they end up fighting and killing each other. When the same thing happens in tribe B, they apportion the kill in an orderly fashion according to a predefined rule. All other things being equal, tribe B will do better than tribe A: they are in possession of a useful piece of social technology.

Comment author: Zack_M_Davis 04 October 2016 06:55:36PM 3 points [-]

Again, people sometimes use idiomatic English to describe subjective states of high confidence that do not literally correspond to probabilities greater than 0.999! (Why that specific threshold, anyway?)

You know, I take it back; I actually can see how this might be confusing.

Comment author: ChristianKl 15 August 2016 10:40:21AM 3 points [-]

Obviously Eliezer was not familiar with the concept "asymptote".

When he was a kid at a religious elementary school.

Comment author: hairyfigment 27 July 2016 05:30:18AM 3 points [-]

Khepri Prime, if the sequel to "Worm" goes the way I hope. More seriously, I don't believe any of that, and physics sadly appears to make some of it impossible even in the far future. Most of us would balk at that first word, "perfect," citing logical impossibility results and their relation to idealized induction. So your question makes you seem - let us say disconnected from the discussion. Would you happen to be assuming we reject theism because we see it as low status, and not because there aren't any gods?

Comment author: Gunnar_Zarncke 14 October 2016 10:56:15PM 2 points [-]

I'm not sure this has the best visibility here in Main. I just noted it right now because I haven't looked in Main for ages. And it wasn't featured in discussions, or was it?

Comment author: hairyfigment 14 October 2016 10:37:41AM 2 points [-]

Clearly I should have asked about actions rather than people. But the Babyeaters were not ignorant that they were causing great pain and emotional distress. They may not have known how long it continued, but none of the human characters IIRC suggested this information might change their minds. Because those aliens had a genetic tendency towards non-human preferences, and the (working) society they built strongly reinforced this.

Comment author: CCC 13 October 2016 01:49:46PM 2 points [-]

"Morals" and "goals" are very different things. I might make it a goal to (say) steal an apple from a shop; this would be an example of an immoral goal. Or I might make a goal to (say) give some money to charity; this would be a moral goal. Or I might make a goal to buy a book; this would (usually) be a goal with little if any moral weight one way or another.

Morality cannot be the same as terminal goals, because a terminal goal can also be immoral, and someone can pursue a terminal goal while knowing it's immoral.

AI morals are not a category error; if an AI deliberately kills someone, then that carries the same moral weight as if a person deliberately kills someone.

Comment author: TheOtherDave 13 October 2016 04:05:12AM 2 points [-]

When you see the word "morals" used without further clarification, do you take it to mean something different from "values" or "terminal goals"?

Depends on context.

When I use it, it means something kind of like "what we want to happen." More precisely, I treat moral principles as sort keys for determining the preference order of possible worlds. When I say that X is morally superior to Y, I mean that I prefer worlds with more X in them (all else being equal) to worlds with more Y in them.

I know other people who, when they use it, mean something kind of like that, if not quite so crisply, and I understand them that way.

I know people who, when they use it, mean something more like "complying with the rules tagged 'moral' in the social structure I'm embedded in." I know people who, when they use it, mean something more like "complying with the rules implicit in the nonsocial structure of the world." In both cases, I try to understand by it what I expect them to mean.

Comment author: ChristianKl 10 October 2016 09:17:40AM 1 point [-]

thereby creating a clearer distinction between religious and secular.

Given that Newton was a person who cared about the religious, he would be a bad example. He spent a lot of time on biblical chronology.

You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently from Newton. The interest in numerical reasoning was already there.

To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.

Comment author: So8res 04 October 2016 08:41:49PM 2 points [-]

Huh, thanks for the heads up. If you use an ad-blocker, try pausing that and refreshing. Meanwhile, I'll have someone look into it.

Comment author: Good_Burning_Plastic 29 September 2016 08:03:41AM 2 points [-]

Computing can't harm the environment in any way

Well...

Comment author: Vaniver 27 September 2016 09:38:12PM 2 points [-]

There shouldn't be any conflicts between VoI and Bayesian reasoning; I thought of all of my examples as Bayesian.

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:

I don't think that example describes the situation you're talking about. Remember that VoI is computed in a forward-looking fashion; when one has a (1, 1) beta distribution over the arm, one thinks it is equally likely that the true propensity of the arm is above .5 as that it is below .5.

The VoI comes into that framework by being the piece that agitates for exploration. If you've pulled arm1 seven times and gotten four heads and three tails, and haven't pulled arm2 yet, the expected value of pulling arm1 is higher than pulling arm2, but there's a fairly substantial chance that arm2 has a higher propensity than arm1. Heuristics that say to do something like pull the lever with the higher 95th percentile propensity bake in the VoI from pulling arms with lower means but higher variances.
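A quick check of that "fairly substantial chance" claim, under the (assumed, illustrative) model of Beta posteriors over each arm's propensity: arm1 after four heads and three tails has posterior Beta(5, 4); arm2, never pulled, has the Beta(1, 1) prior, i.e. uniform on [0, 1]. Because p2 is uniform, P(p2 > p1) = E[1 - p1] = 1 - 5/9 = 4/9.

```python
import random

# Assumed toy model: p1 ~ Beta(5, 4) (arm1 after 4 heads, 3 tails),
# p2 ~ Beta(1, 1) (arm2 never pulled). Since p2 is uniform,
# P(p2 > p1) = E[1 - p1] = 1 - 5/9 = 4/9, even though arm1 has the higher mean.

def prob_arm2_beats_arm1(n_samples=200_000, seed=0):
    # Monte Carlo estimate of P(p2 > p1) under the two Beta distributions.
    rng = random.Random(seed)
    wins = sum(rng.betavariate(1, 1) > rng.betavariate(5, 4)
               for _ in range(n_samples))
    return wins / n_samples

print(4 / 9)                    # exact answer, about 0.444
print(prob_arm2_beats_arm1())   # Monte Carlo estimate, close to 4/9
```

So even with arm1's mean at 5/9, arm2 wins about 44% of the time, which is the exploration pressure the VoI term captures.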


If, from a forward-looking perspective, one does decrease their subjective value of decision situation by gaining information, then one shouldn't gain that information. That is, it's a bad idea to pay for a test if you don't expect the cost of the test to pay for the additional value. (Maybe you'll continue to pull arm1, regardless of the results of pulling arm2, as in the case where arm1 has delivered heads 7 times in a row. Then switching means taking a hit for nothing.)

One thing that's important to remember here is conservation of expected evidence--if I believe now that running an experiment will lead me to believe that arm1 has a propensity of .1 and arm2 has a propensity of .2, then I should already believe those are the propensities of those arms, and so there's no subjective loss of well-being.
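A small worked instance of conservation of expected evidence (my own toy numbers): with a uniform Beta(1, 1) prior on an arm's propensity, the probability-weighted average of the posterior means after one pull equals the prior mean, so there is no expected shift in belief.

```python
from fractions import Fraction

# Conservation of expected evidence for a Beta(1, 1) prior on a Bernoulli arm.
# The prior predictive probability of heads equals the prior mean, and the
# expectation of the posterior mean over outcomes equals the prior mean.

a, b = Fraction(1), Fraction(1)              # Beta(1, 1) = uniform prior
prior_mean = a / (a + b)                     # 1/2

p_heads = prior_mean                         # prior predictive of heads
post_mean_heads = (a + 1) / (a + b + 1)      # Beta(2, 1) mean = 2/3
post_mean_tails = a / (a + b + 1)            # Beta(1, 2) mean = 1/3

expected_posterior_mean = (p_heads * post_mean_heads
                           + (1 - p_heads) * post_mean_tails)
print(expected_posterior_mean == prior_mean)  # True: no expected update
```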

Comment author: So8res 26 September 2016 06:39:53PM 2 points [-]

Thanks!

Comment author: gucciCharles 26 September 2016 05:01:11AM 2 points [-]

She gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback, she functions more as a motivator than as a teacher. Her skill is teaching; it's only happenstance that she teaches music. Had she taught shoe polishing or finger painting, she would have produced the best shoe polishers and the most skilled finger painters.

Perhaps she doesn't have many complex skills but has strong fundamentals (think Tim Duncan of the NBA Spurs). She might make her students practice the fundamentals which will allow them to do more complex work as they get older.

Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it's done. Imagine a 5-foot-tall Jewish guy that loves basketball. He's not gonna make the NBA. It's simply not gonna happen. However, he might understand the game better than many NBA players. Likewise he might be the best basketball coach in the world even though his athleticism (and hence his basketball playing skills) is less than that of NBA players. In the same way, the teacher might have had a strong theoretical understanding but not have had the ability to put her theoretical knowledge into practice.

Comment author: Wes_W 14 September 2016 11:29:34PM *  2 points [-]

But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?

That's what the problem is asking!

This is a decision-theoretical problem. Nobody cares about it for immediate practical purpose. "Stick to your decision theory, except when you non-rigorously decide not to" isn't a resolution to the problem, any more than "ignore the calculations since they're wrong" was a resolution to the ultraviolet catastrophe.

Again, the point of this experiment is that we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment. The original motivation is almost certainly in the context of AI design, where you don't HAVE a human homunculus implementing a decision theory, the agent just is its decision theory.

Comment author: Seth_Goldin 08 September 2016 04:46:54PM 2 points [-]

For those interested, Netflix has a new documentary out about the case: https://youtu.be/9r8LG_lCbac
