When will Harry tell Hermione the truth? I feel like he should insist she learn occlumency first.

Harry can just claim to have already used it that day for an innocuous purpose, like studying or something. Sure, McGonagall could accuse him of stupidity because that leaves him unprepared for an emergency, but pleading guilty to stupidity is easy. (Well, easier, anyway.)

Don't be too hasty, whatever you end up deciding! It's only been a day. A lot of people put a lot of thought into solving this problem, and it makes sense that their attitudes about whether the problem was too easy, or too hard, or whether they guessed the author's solution, or whether it's unrealistic, would be emotionally charged by the effort they spent.

Take a week, take a month, talk to people you trust.

I'm a postdoc in differential geometry, working in pure math (not applied). The word "engineering" in the title of a forum would turn me away and lead me to suspect that the contents were far from my area of expertise. I suspect (low confidence) that many other mathematicians (in non-applied fields) would feel the same way.

There's also the problem of actually building such a thing.

edit: I should add, the problem of building this particular thing is above and beyond the already difficult problem of building any AGI, let alone a friendly one: how do you make a thing's utility function correspond to the world and not to its perceptions? All it has immediately available to it is perception.

Let me try to strengthen my objection.

Xia: But the 0, 0, 0, ... is enough! You've now conceded a case where an endless null output seems very likely, from the perspective of a Solomonoff inductor. Surely at least some cases of death can be treated the same way, as more complicated series that zero in on a null output and then yield a null output.

Rob: There's no reason to expect AIXI's whole series of experiences, up to the moment it jumps off a cliff, to look anything like 12, 10, 8, 6, 4. By the time AIXI gets to the cliff, its past observations and rewards will be a hugely complicated mesh of memories. In the past, observed sequences of 0s have always eventually given way to a 1. In the past, punishments have always eventually ceased. It's exceedingly unlikely that the simplest Turing machine predicting all those intricate ups and downs will then happen to predict eternal, irrevocable 0 after the cliff jump.
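(For concreteness: the rough shape of the Solomonoff prediction rule behind the "simplest Turing machine" talk, glossing over details like program minimality and normalization, is

$$\Pr\big(x_{t+1} \mid x_{1:t}\big) \;\propto\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x_{1:t}x_{t+1}} 2^{-\ell(p)},$$

where $U$ is the universal machine and $\ell(p)$ is the length of program $p$. Programs are weighted by $2^{-\ell(p)}$, so the shortest programs that reproduce the whole intricate history dominate the prediction, and "eternal 0 after the cliff" only wins if one of those shortest programs happens to predict it.)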

Put multiple AIXItl's in a room together, and give them some sort of input jack to observe each other's observation/reward sequences. Similarly equip them with cameras and mirrors so that they can see themselves. Maybe it'll take years, but it seems plausible to me that after enough time, one of them could develop a world-model that contains it as an embodied agent.

I.e. it's plausible to me that an AIXItl under those circumstances would think: "the Turing machines of smallest complexity which generate BOTH my observations of those things over there that walk like me and talk like me AND my own observations and rewards, are the ones that compute me in the same way that they compute those things over there".

After which point, drop an anvil on one of the machines, and let the others plug into it and read a garbage observation/reward sequence. The AIXItl thinks, "If I'm computed in the same way that those other machines are computed, and an anvil causes garbage observation and reward, I'd better stay away from anvils".

It's really great to see all of these objections addressed in one place. I would have loved to be able to read something like this right after learning about AIXI for the first time.

I'm convinced by most of the answers to Xia's objections. A quick question:

Yes... but I also think I'm like those other brains. AIXI doesn't. In fact, since the whole agent AIXI isn't in AIXI's hypothesis space — and the whole agent AIXItl isn't in AIXItl's hypothesis space — even if two physically identical AIXI-type agents ran into each other, they could never fully understand each other. And neither one could ever draw direct inferences from its twin's computations to its own computations.

Why couldn't two identical AIXI-type agents recognize one another to some extent? Stick a camera on the agents, put them in front of mirrors and have them wiggle their actuators, make a smiley face light up whenever they get rewarded. Then put them in a room with each other.

Lots of humans believe themselves to be Cartesian, after all, and manage to generalize from others without too much trouble. "Other humans" isn't in a typical human's hypothesis space either — at least not until after a few years of experience.

Agreed about Eliezer thinking similar thoughts. At least, he's thinking thoughts which seem to me to be similar to those in this post. See Building Phenomenological Bridges (article by Robby based on Eliezer's Facebook discussion).

That article discusses (among other things) how an AI should form hypotheses about the world it inhabits, given its sense perceptions. The idea "consider all and only those worlds which are consistent with an observer having such-and-such perceptions, and then choose among those based on other considerations" is, I think, common to both these posts.
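A minimal toy sketch of that two-step rule (my own illustration, not code from either post; the hypothesis names and complexity numbers below are invented): filter the candidate worlds down to those consistent with the perceptions so far, then choose among the survivors by some separate criterion such as simplicity.

```python
# Toy sketch: "keep only worlds consistent with the observer's perceptions,
# then pick among those on other grounds." Everything here is made up for
# illustration; only the two-step shape of the rule is the point.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class WorldHypothesis:
    name: str
    complexity: int                          # stand-in for description length
    predicted_perceptions: Tuple[int, ...]   # what an observer in this world would perceive


def choose_world(hypotheses: List[WorldHypothesis],
                 observed: Tuple[int, ...]) -> WorldHypothesis:
    """Step 1: keep only worlds consistent with the perceptions so far.
    Step 2: choose among the survivors by a separate criterion (here: simplicity)."""
    consistent = [h for h in hypotheses
                  if h.predicted_perceptions[:len(observed)] == observed]
    return min(consistent, key=lambda h: h.complexity)


if __name__ == "__main__":
    worlds = [
        WorldHypothesis("all zeros",        complexity=1, predicted_perceptions=(0, 0, 0, 0)),
        WorldHypothesis("alternating bits", complexity=3, predicted_perceptions=(0, 1, 0, 1)),
        WorldHypothesis("lookup table",     complexity=9, predicted_perceptions=(0, 1, 0, 0)),
    ]
    # "all zeros" is ruled out by the second perception; the simplest survivor wins.
    print(choose_world(worlds, observed=(0, 1)).name)  # -> "alternating bits"
```

All the real work, of course, is in what the hypothesis space and the "other considerations" actually are; the sketch only shows the shape of the rule.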

(I haven't seen the LW co-working chat)

If you want to tell people off for being sexist, your speech is just as free as theirs. People are free to be dicks, and you're free to call them out on it and shame them for it if you want.

I think you should absolutely call it out, negative reactions be damned, but I also agree with NancyLebovitz that you may get more traction out of "what you said is sexist" as opposed to "you are sexist".

To say nothing is just as much an active choice as to say something. Decide what kind of environment you want to help create.

Umm... but caffeine is also addictive. This seems like a flaw in the plan.
