Posts

Sorted by New

Wiki Contributions

Comments

Sorted by
Jotaf10

The Dark Knight has an even better example - in the bank robbery scene, each subgroup excludes only one more member, until the only man left is... That's enough of a spoiler I guess.

Jotaf00

I don't really wanna rock the boat here, but in the words of one of my professors, it "needs more math".

I predict it will go somewhat like this: you specify the problem in terms of A implies B, etc; you find out there's infinite recursion; you prove that the solution doesn't exist. Reductio ad absurdum anyone?

Jotaf00

I agree with the thesis; the referenced paper is really interesting, but the article on LW is a bit long-winded in trying to convey the notion that "there is no internal model". Amusingly the paper's title is "Internal models in the cerebellum"!

Jotaf10

I'd only like to add a small contribution, concerning Mises's argument that "The human being cannot see the infinitely small step" and thus continuous functions cannot be used as models.

Discretely sampled (digital) signals are used all the time in engineering, and they are analogous to their continuous counterparts (analog). Particular care must be taken when "converting" between one and the other; but for most purposes they're pretty close.

All the appliances you have at home now take discrete samples of continuous quantities (physical quantities like temperature and such). Just because a human can't sample a signal / economic quantity infinitely, it doesn't mean that math must be taken out of the equation entirely. You just need to bend your math a bit to accommodate.

http://en.wikipedia.org/wiki/Sampling_(signal_processing) (And notice how there -is- math despite the fact that such systems cannot discern infinitely small changes in quantity.) (Edit: The link parser doesn't like that ending braces somehow; it should be included in the link.)

As a personal note, some people's narrow views simply blow my mind, how they can dismiss entire subjects with just one fallacious argument.

Jotaf10

I guess the ship's "council" making the decisions helps the argument that Eliezer is making, and can be waived simply because you could have a lengthier story where they traveled back to Earth and then the Earth Government had exactly the same debate. But that's nitpicky, how would it help the story or the argument behind it? IMHO it's good the way it is.

Some of the council's members having these extreme reactions of empathy seems a bit alien to us even, but that is our own bias. We ignore suffering in so-called 3rd world countries every day. "Aliens eating each other? Please! Live and let die." Who's right on this one, us or them?

The council members live in a world where humans (the only sentients they know) don't fight other humans, and that's it. Suffering is considered the most amoral thing by their standards. I think it's entirely in-character and reasonable that they feel compelled to stop it.

Jotaf00

"This talk about "'right' means right" still makes me damn uneasy. I don't have more to show for it than "still feels a little forced" - when I visualize a humane mind (say, a human) and a paperclipper (a sentient, moral one) looking at each other in horror and knowing there is no way they could agree about whether using atoms to feed babies or make paperclips, I feel wrong. I think about the paperclipper in exactly the same way it thinks about me! Sure, that's also what happens when I talk to a creationist, but we're trying to approximate external truth; and if our priors were too stupid, our genetic line would be extinct (or at least that's what I think) - but morality doesn't work like probability, it's not trying to approximate anything external. So I don't feel so happier about the moral miracle that made us than about the one that makes the paperclipper."

Oh my, this is so wrong. So you're postulating that the paperclipper would be extinct too due to natural selection? Somehow I don't see the mechanisms of natural selection applying to that. With it being created once by humans and then exploding, and all that.

If 25% of its "moral drive" is the result of a programming error, is it still "understandable and as much of a worthy creature/shaper of the Universe" as us? This is the cosmopolitan view that Eliezer describes; and I don't see how you're convinced that admiring static is just as good as admiring evolved structure. It might just be bias but the later seems much better. Order > chaos, no?

Jotaf10

Emile and Caledonian are right. Eliezer should've defined exceptions to boredom instead (and more simply) as "activities that work towards your goal". Those are exempt of boredom and can even be quite fun. No need to distinguish between high, low and mid-level.

The page at Lostgarden that Emile linked to is a bit long, so I'll try to summarize the proposed theory of fun, with some of my own conclusions:

You naturally find activities that provide you with valuable insights fun (the "aha!" moment, or "fun"). Tolerance to repetition (actually, finding a repetitive act "fun" as well) is roughly proportional to your expectation of how it will provide you with a future fun moment.

There are terminal fun moments. Driving a car is repetitive, but at high speeds adrenaline makes up for that. Seeing Mario jump for the first time is fun (you' found a way of impacting the world [the computer screen] through your own action). I'm sure you can think of other examples of activities chemically wired to being fun, of course ;)

Working in the financial business might be repetitive and boring (or at least it seems that way at first), but if it yields good paychecks, which give you the opportunity to buy nice things, gain social status, etc, you'll keep doing it.

Jumping in Mario is repetitive, and if jumping didn't do anything, you'd never touch that button again after 10 jumps (more or less). But early on it allows you to get to high platforms, which kinda "rekindles" the jumping activity, and the expectation that it will be useful in the future/yield more fun. Moving from platform to platform gets repetitive, unless it serves yet another purpose.

(The above is all described in Lostgarden and forms the basis of their theory of fun, and how to build a fun game. Following are some of my own conclusions.)

The highest goal of all is usually to "beat the game"/"explore the game world"/"have the highest score", and you set it upon yourself naturally. This is like the goal of jumping over a ledge, even if you don't know what's beyond it (in the Mario world). You ran out of goals so you're exploring, which usually means thinking up an "exploratory goal", ie, trying something new.

You can say that finding goals is fun in itself. If you start in a blank state, nothing will seem fun at first. You might as well just sit down and whither away! So a good strategy is to set yourself a modest goal (an exploratory goal), and the total fun had will be greater than the fun you assigned earlier to the goal in itself, which might be marginal. A more concrete example: The fun in reading "You win!" is marginal, but you play through Mario just to read those words. So I guess that the journey is more important than getting to the end.

Jotaf00

Thanks, I must have missed that in the myriad of information. I'll follow the references but your explanation is certainly more reassuring.

Jotaf00

Is it just me or they completely ignored the following arguments in all reports?

1) Over-reliance on the effects of Hawking radiation, leaving a big hole in their reasoning if it turns out it doesn't exist. 2) Extremely high pressures near the center of the Earth might increase the accretion rate substantially. 3) Any products of cosmic ray interaction with existing stellar bodies are very likely to escape the body's gravitational influence before doing any damage because of their near-c speeds, which is not the case in the LHC.

I read all the reports. I feel a bit better knowing that different commissions analyzed the risks involved, and that at least one of them was supposedly independent from CERN. But the condescending tone of the reports worries me; it seems that these calculations were not given the appropriate attention, rather being little more than a nuisance in the authors' agendas.

Would you write someone in charge about your concerns, or you're betting on the "it's all good" side of things?

Jotaf00

Problem with these AI's is that, in order to do something useful for us, they will for sure have some goals to attain, and be somewhat based on today's planning algorithms. Typical planning algorithms will plan with an eye for the constraints given and, rightfully, ignore everything else.

A contrived example, but something to consider: imagine a robot with artificial vision capabilities and capable of basic manipulation. You tell it to do something, but a human would be harmed by doing that action (by pushing an object, for example). One of the constraints is "do not harm humans", which was roughly translated as "if you see a human, don't exert big forces on it". The robot then happily adjusts its vision software to deliberately not see the human, as it would under any other condition adjust it the other way around to actively look for a human when it is not seen (you can imagine this as an important adjustable threshold for 3D object recognition or whatever, one that has to be adjusted to look for different kinds of objects). Yes this is a contrived example, but it's easy to imagine loopholes in AI design, and anyone who has worked closely with planning algorithms knows that if these loopholes exist, the algorithm will find them. As sad as it might sound, in medieval times, the way that religious people found to justify using slave labor was to classify slaves as "not actual people and thus, our laws are not applicable". It's only reasonable to assume that an AI can do this as well.

Load More