All of nigerweiss's Comments + Replies

It's going to be really hard to come up with any models that don't run deeply and profoundly afoul of the Occam prior.

When asked a simple question about broad and controversial assertions, it is rude to link to outside resources tangentially related to the issue without providing (at minimum) a brief explanation of what those resources are intended to indicate.

I don't speak Old English, unfortunately. Could someone who does please provide me with a rough translation of the provided passage?

[This comment is no longer endorsed by its author]
4CAE_Jones
It's at the bottom of the chapter. "Three shall be the Peverelle's sons, and three their devices by which death shall be defeated." [edit] Originally misremembered the last word as "destroyed".

It isn't the sort of bad argument that gets refuted. The best someone can do is point out that there's no guarantee that MNT is possible. In which case, the response is 'Are you prepared to bet the human species on that? Besides, it doesn't actually matter, because [insert more sophisticated argument about optimization power here].' It doesn't hurt you, and with the overwhelming majority of semi-literate audiences, it helps.

Of course there is. For starters, most of the good arguments are much more difficult to concisely explain, or invite more arguments from flawed intuitions. Remember, we're not trying to feel smug in our rational superiority here; we're trying to save the world.

0Shmi
if your bad argument gets refuted, you lose whatever credibility you may have had.

That's... not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT - I'd bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.

All we're doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There's nothing wrong with that. Would we need to do it if people were rational agents? No - but, as you may be aware, we definitely don't live in that universe.

0Shmi
There is no need to use known bad arguments when there are so many good ones.

I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like 'well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity, therefore, it'll never kill us all.' Even if MNT is impossible, that's still true - but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.

0Shmi
This is not a great argument, given that it works equally well if you replace MNT with God/Devil in the above.

There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.

One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI... (read more)

8Vratko_Polak
Yes, CEV is a slippery slope. We should make sure to be as aware of possible consequences as practical, before taking the first step. But CEV is the kind of slippery slope intended to go "upwards", in the direction of greater good and less biased morals. In the hands of superintelligence, I expect CEV to extrapolate values beyond "weird", to "outright alien" or "utterly incomprehensible" very fast. (Abandoning Friendliness on the way, for something less incompatible with The Basic AI Drives. But that is a topic for another discussion.) Thank you for mentioning "childish foolishness". I was not sure whether such suggestive emotional analogies would be welcome. This is my first comment on LessWrong, you know. Let me just state that I was surprised by my strong emotional reaction while reading the original post.

As long as higher versions are extrapolated to be more competent, moral, responsible and so on, they should be allowed to be extrapolated further. If anyone considers the original post to be a formulation of a problem (and ponders possible solutions), and if the said anyone is interested in counter-arguments based on shallow, emotional and biased analogies, here is one such analogy: Imagine children pondering their future development. They envision growing up, but they also see themselves start caring more about work and less about play. Children consider those extrapolated values to be unwanted, so they formulate the scenario as the "problem of growing up" and they try to come up with a safe solution. Of course, you may substitute "play versus work" with any "children versus adults" trope of your choice. Or "adolescents versus adults", and so on. Readers may wish to counter-balance any emotional "aftertaste" by focusing on The Legend of Murder-Gandhi again.

P.S.: Does this web interface have anything like a "preview" button? Edit: typo and grammar.

I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.

Yeah, the glia seem to serve some pretty crucial functions as information-carriers and network support infrastructure - and if you don't track hormonal regulation properly, you're going to be in for a world of hurt. Still, I think the point stands.

Last I checked scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.

Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists w... (read more)

1Baughn
If you find someone to bet against you, I'm willing to eat half the hat.

When I was younger, I picked up 'The Emperor's New Mind' in a used bookstore for about a dollar, because I was interested in AI, and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.

Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.

I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.

-2GeraldMonroe
An optimal de novo AI, sure. Keep in mind that human beings have to design this thing, and so the first version will be very far from optimal. I think it's a plausible guess to say that it will need on the order of the same hardware requirements as an efficient whole brain emulator. And this assumption shows why all the promises made by past AI researchers have so far failed : we are still a factor of 10,000 or so away from having the hardware requirements, even using supercomputers.

Evidence?

EDIT: Sigh. Post has changed contents to something reasonable. Ignore and move on.

Reply edit: I don't have a copy of your original comment handy, so I can't accurately comment on what I was thinking when I read it. However, I don't recall it striking me as a joke, or even an exceptionally dumb thing for someone on the internet to profess belief in.

0Kawoomba
Wrong reference class, "someone on the internet", much too broad. Just as your comment shouldn't usefully be called an exceptionally smart thing for a mammal to say, we should refer to the most applicable reference class -- "someone on LW" -- which screens for most simple "haha, that guy is clearly dumb, damn I'm so smart figuring that out" gotcha moments. Shift gears. The original comment was close to "we'll need quantum and/xor quarks to explain qualia (qualai?)." Not exactly subtle with the "xor" ...
0Kawoomba
I'd really want to know this (no need to pay the karma penalty, just PM or edit your comment): Did you really take the comment at face value? This was the intent of the comment pre-edit. It may be interesting if that's a cultural boundaries thing for humor, or if LW'ers just keep an unusually open mind and are ready to accept others to hold outlandish positions.

Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do, at least, not without a human writing a bunch of new code to add a module that does that new thing. It's not powerful in the way that a true GAI would be.

That said, Watson is a good deal less narrow than, say, for example, Deep Blue. Watson has a great deal of analytic depth in a reasonably ... (read more)

Zero? Why?

At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn't ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I'd be very surprised if nobody... (read more)

1Desrtopa
I think it's more likely than not that simulating a world like our own would be regarded as ethically impermissible. Creating a simulated universe which contains things like, for example, the Killing Fields of Cambodia, seems like the sort of thing that would be likely to be forbidden by general consensus if we still had any sort of self-governance at the point where it became a possibility. Plus, while I've encountered plenty of people who suggest that somebody would want to create such a simulation, I haven't yet known anyone to assert that they would want to make such a simulation. I don't understand why you're leaping from "simulators are not our descendants" to "simulators do not resemble us closely enough to meaningfully call them 'people.'" If I were in the position to create universe simulations, rather than simulating my ancestors, I would be much more interested in simulating people in what, from our perspective, is a wholly invented world (although, as I said before, I would not regard creating a world with as much suffering as we observe as ethically permissible.) I would assign a far higher probability to simulators simulating a world with beings which are relatable to them than a world with beings unrelatable to them, provided they simulate a world with beings in it at all, but their own ancestors are only a tiny fraction of relatable being space.

I know some hardcore C'ers in real life who are absolutely convinced that centrally-planned Marxist/Leninist Communism is a great idea, and they're sure we can get the kinks out if we just give it another shot.

1Richard_Kennaway
As in, line up all those against and shoot them? Do these people see themselves as among the organisers of such a system, or the organised?
-2ThrustVectoring
You also know some people who desperately need a course in computational complexity. Markets aren't perfect, of course, but good luck trying to centrally compute distribution of resources.
2pragmatist
C'ers?

Unless P=NP, I don't think it's obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we're alarmingly close to solipsism again.
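As a toy illustration of "easy to verify, hard to compute" (my own sketch, not from the original comment; integer factoring is used as a stand-in example and the numbers are arbitrary):

import math

def verify_factorization(n, p, q):
    # Verification: one multiplication and a couple of comparisons.
    return p > 1 and q > 1 and p * q == n

def find_factor(n):
    # Search: brute-force trial division; work grows with sqrt(n).
    for p in range(2, int(math.isqrt(n)) + 1):
        if n % p == 0:
            return p, n // p
    return None

n = 1000003 * 1000033                              # a number built from two ~7-digit factors
print(verify_factorization(n, 1000003, 1000033))   # cheap: True
print(find_factor(n))                              # expensive: scans on the order of 10^6 candidates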

I guess one way to test this hypothesis would be to try to construct... (read more)

0Luke_A_Somers
, or the simulating entity has mindbogglingly large amounts of computational power. But yes, it would rule out broad classes of simulating agents.

We can be a simulation without being a simulation created by our descendants.

We can, but there's no reason to think that we are. The simulation argument isn't just 'whoa, we could be living in a simulation' - it's 'here's a compelling anthropic argument that we're living in a simulation'. If we disregard the idea that we're being simulated by close analogues of our own descendants, we lose any reason to think that we're in a simulation, because we can no longer speculate on the motives of our simulators.

0elharo
I think the likelihood of our descendants simulating us is negligible. While it is remotely conceivable that some super-simulators, who are astronomically larger than us and not necessarily subject to the same physical laws, could pull off such a simulation, I think there is no chance that our descendants, limited by the energy output of a star, the number of atoms in a few planets, and the speed of light barrier, could plausibly simulate us at the level of detail we experience. This is the classic fractal problem. As the map becomes more and more accurate, it becomes larger and larger until it is the same size as the territory. The only simulation our descendants could possibly achieve, assuming they don't have better things to do with their time, would be much less detailed than reality.
0Desrtopa
I don't think that the likelihood of our descendants simulating us at all is particularly high; my predicted number of ancestor simulations should such a thing turn out to be possible is zero, which is one reason I've never found it a particularly compelling anthropic argument in the first place. But, if people living in universes capable of running simulations tend to run simulations, then it's probable that most people will be living in simulations, regardless of whether anyone ever chooses to run an ancestor simulation.

That doesn't actually solve the problem: if you're simulating fewer people, that weakens the anthropic argument proportionately. You've still only got so much processor time to go around.

There's a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you're trying to model. Suffice to say that it'll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation - at least, in the domain that we're talking about.

And, you're right - we rely on Solomonoff priors to come to conclusions in science, an... (read more)

I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.

That seems... problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that's con... (read more)

0elharo
I don't think it's that hard to defend. That people like us emerge accidentally is the default assumption of most working scientists today. Personally I find that a lot more likely than that we are living in a simulation. And even if you think that it is more likely that we are living in a simulation (I don't, by the way) there's still the question of how the simulators arose. I'd prefer not to make it an infinite regress. Such an approach veers dangerously close to unfalsifiable theology. (Who created/simulated God? Meta-God. Well then, who created/simulated Meta-God? Meta-Meta-God. And who created/simulated Meta-Meta-God?...) Sometime, somewhere there's a start. Occam's Razor suggests that the start is our universe, in the Big Bang, and that we are not living in a simulation. But even if we are living in a simulation, then someone is not living in a simulation. I also think there are stronger, physical arguments for assuming we're not in a digital simulation. That is, I think the universe routinely does things we could not expect any digital computer to do. But that is a subject for another post.
3Desrtopa
I don't see anything contradictory about it. There's no reason that a simulation that's not of the simulators' past need only contain people incidentally. We can be a simulation without being a simulation created by our descendants. Personally, if I had the capacity to simulate universes, simulating my ancestors would probably be somewhere down around the twentieth spot on my priorities list, but most of the things I'd be interested in simulating would contain people. I don't think I would regard simulating the universe as we observe it as ethically acceptable though, and if I were in a position to do so, I would at the very least lodge a protest against anyone who tried.

Not for the simulations to work - only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn't have had the tools to measure the difference.

Even after the advent of things like particle accelerators, we could still be living in a very similar but-less-expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics are per... (read more)

3elharo
This argument is anthropomorphizing. It assumes that the purpose of the purported simulation is to model humanity. Suppose it isn't? Suppose the purpose of the simulation is to model a universe with certain physical laws, and one of the unexpected outcomes is that intelligent technological life happens to evolve on a small rocky planet around one star out in the spiral arm of one galaxy. That could be a completely unexpected outcome, maybe even an unnoticed outcome, of a simulation with a very different purpose.
8JGWeissman
Ok, before you were talking about "grainier" simulations, I thought you meant computational shortcuts. But now you are talking about taking out laws of physics which you think are unimportant. Which is clever, but it is not so obvious that it would work. It is not so easy to remove "quantum weirdness" because quantum is normal and lots of things depend on it. Like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forget about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those also, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum, and even if you patch that, the engineering of the transistors depends on people understanding quantum mechanics. So now you need to patch things on the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.

The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that, eventually, a very large number of these simulations will exist. Thus, we are more likely to be living in an ancestor simulation than the actual, authentic history that they're based on.

If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they'd run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.

5NancyLebovitz
Thanks for the reminder. I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.

In that case, you've lost the anthropic argument entirely, and whether or not we're a simulation relies on your probability distributions over possible simulating agents, which is... weird.

4NancyLebovitz
How did I lose the anthropic argument? We're still only going to know about the sort of universe we're living in.

Could also be a temporary effect. Your gut flora adjusts to what you're eating, and a sudden shift in composition can cause digestive distress.

Once you have an intelligent AI, it doesn't really matter how you got there - at some point, you either take humans out of the loop because using slow, functionally-retarded bags of twitching meat as computational components is dumb, or you're out-competed by imitator projects that do. Then you've just got an AI with goals, and bootstrapping tends to follow. Then we all die. Their approach isn't any safer, they just have different ideas about how to get a seed AI (and ideas, I'd note, that make it much harder to define a utility function that we like).

I think a slightly sturdier argument is that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower.

There are two obvious answers to this criticism: the first is to raise the possibility that the top level universe has so much computing power that ... (read more)

1elharo
Something doesn't click here. You claim "that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference"; but how do we know that we do live in a computationally expensive universe if we can't recognize the difference between this and a less computationally expensive universe? Almost by definition anything we can measure (or perhaps more accurately have measured) is a necessary component of the simulation.
7AlanCrowe
The human brain is subject to glitches, such as petit mal, transient ischaemic attack, or misfiling a memory of a dream as a memory of something that really happened. There is a lot of scope for a cheap simulation to produce glitches in the matrix without those glitches spoiling the results of the simulation. The inside people notice something off and just shrug. "I must have dreamt it" "I had a petit mal." "That wasn't the simulators taking me off line to edit a glitch out of my memory, that was just a TIA. I should get my blood pressure checked."

And the problem of "brain farts" gives the simulators a very cheap way of protecting the validity of the results of the simulation against people noticing glitches and derailing the simulation by going on a glitch hunt motivated by the theory that they might be living in a simulation. Simply hide the simulation hypothesis by editing Nick Bostrom under the guise of a TIA. In the simulation Nick wakes up with his coffee spilled and his head on the desk. Thinking up the simulation hypothesis "never happened". In all the myriad simulations, the simulation hypothesis is never discussed.

I'm not sure that entirely resolves the matter. How can the simulators be sure that editing out the simulation hypothesis works as smoothly as they expect? Perhaps they run a few simulations with it left in. If it triggers an in-simulation glitch hunt that compromises the validity of the simulation, they have their answer and can turn off the simulation.
5JoshuaZ
The problem is more serious than that, in that not only is our universe computationally expensive, it is set up in a way such that it would (apparently) have a lot of trouble doing universe simulations. You cannot simulate n+1 arbitrary bits with just n qubits. This means that a simulation computer needs to be at least as effectively large as what it is simulating. You can assume that some aspects are more coarse grained (so you don't do a perfect simulation of most of Earth, just say the few kilometers near the surface that humans and other life are likely to be), but this is still a lot of stuff.
4Randaly
A fourth answer is that the entire world/universe isn't being simulated; only a small subset of it is. I believe that most arguments about simulations assume that most simulators wouldn't simulate the entire current population.

It would be trivial for an SI to run a grainy simulation that was only computed out in greater detail when high-level variables of interest depended on it. Most sophisticated human simulations already try to work like this, e.g. particle filters for robotics or the Metropolis transport algorithm for ray-tracing. No superintelligence would even be required, but in this case it is quite probable on priors as well, and if you were inside a superintelligent version you would never, ever notice the difference.
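For concreteness, here is a minimal bootstrap particle filter sketch (my own illustrative example, not code from the comment; the model and parameters are made up). The point is that computation is spent on a cloud of samples covering only the states the observations actually constrain, rather than on a full-fidelity model of everything:

import numpy as np

def particle_filter(observations, n_particles=1000, process_std=1.0, obs_std=1.0):
    # Track a 1-D random-walk state from noisy observations.
    particles = np.zeros(n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in observations:
        # predict: push particles through the (coarse) process model
        particles = particles + np.random.normal(0.0, process_std, n_particles)
        # update: reweight by how well each particle explains the observation
        weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights = weights + 1e-300          # guard against all-zero weights
        weights = weights / weights.sum()
        # resample: drop unlikely particles, duplicate likely ones
        idx = np.random.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights.fill(1.0 / n_particles)
        estimates.append(particles.mean())
    return estimates

print(particle_filter([0.2, 0.4, 0.9, 1.3], n_particles=500)[-1])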

It's clear that we're not living i... (read more)

2roystgnr
We live in something that is experimentally indistinguishable from an unbelievably computationally expensive universe... but there are whole disciplines of mathematics dedicated to discovering computationally easy ways to calculate results which are indistinguishable from unbelievably computationally expensive underlying mathematical models. If we can already do that, how much easier might it be for The Simulators?
4JGWeissman
To the extent that super-intelligent demons / global conspiracies are both required for a grainier simulation to work and unreasonable to include in a simulation hypothesis, this undermines your claim that "We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly take running many orders of magnitude more simulations, over running a single, imperceptibly more accurate simulation, far slower."

Another possibility is that whoever is running the simulation is both computationally very rich and not especially interested in humans, they're interested in the sub-atomic flux or something. We're just a side-effect.

Anti-trust law hasn't (yet!) destroyed Google - however splitting up monopolists like Standard Oil or various cartels seems a clear win.

This has more to do with failure to enforce anti-trust laws in a meaningful way, though. In the case of Oil and most major cartels, these are not natural monopolies: they are monopolies built and maintained with the express help of various world states, which is a somewhat different matter.

Inherited wealth certainly does harm you. You and I are not on a level playing field with the son of some Saudi prince. We cann

... (read more)
6pragmatist
Significant status differences in a society are correlated with all kinds of adverse outcomes. One causal hypothesis (with quite a bit of compelling evidence backing it) is that a lot of this has to do with the neuroendocrinological stress response triggered by the perception that others are higher status than oneself. I don't know if I'd classify this as rich people harming poor people, but (if accurate) it is an example of entrenched social inequality harming poor (and other low-status) people.

I've heard this sort of thing before, and I've never been totally sold on the idea of post-scarcity economics. Mostly because I think that if you give me molecular nanotechnology, I, personally, can make good use of basically as much matter and energy (the only real resources) as I can get my hands on, with only moderately diminishing returns. If that's true for even a significant minority of the population, then there's no such thing as a post-scarcity economy, merely an extremely wealthy one.

In practice, I expect us all to be dead or under the watchful eye of some kind of Friendly power singleton by then, so the point is rather moot anyway.

This seems intuitively likely, but, on the other hand, we thought the same thing about telecommunications, and our early move to nationalize that under the Bell corporation was wholeheartedly disastrous, and continues to haunt us to this day. I... honestly don't know. I suspect that some level of intervention is optimal here, but I'm not sure exactly how much.

In the case of water, if we were required to move water in tanks rather than pipes, water would be more expensive and traffic would be worse, but we'd also probably see far less wasted water and more water conservation.

7Luke_A_Somers
Nationalization is not the only possible government intervention - antitrust regulations have different down-sides but they do mainly work by preserving the market rather than destroying it.

anti-trust law, laws against false advertising, corruption laws.

I'll give you the false advertising. Anti-trust laws do not seem like an obvious win in the case of natural monopolies; for example, destroying Google and giving an equal share of their resources and employees to Bing, Yahoo, and Ask.com does not seem obviously likely to improve the quality of search for consumers. As for anti-corruption laws, I'd need to see a much clearer definition before I gave you an opinion.

Your mention of wanting to "preclude blackmail, theft, and slavery"

... (read more)
1[anonymous]
On my last two sentences - I intended them as somewhat of a cheeky wink, but maybe they're a bit snotty. I'm certainly not trying to convert anyone; and I'm all for being aware of doubts, inconsistencies and hard choices. Political discussion is just great fun. On the particular examples:

* Anti-trust law hasn't (yet!) destroyed Google - however splitting up monopolists like Standard Oil or various cartels seems a clear win.
* I guess anti-corruption laws can be taken as an extension of anti-trust (no bribing the supply manager to get the contract) or as solving problems caused by government problem-solving actions (no bribing the antitrust investigator).
* Inherited wealth certainly does harm you. You and I are not on a level playing field with the son of some Saudi prince. We cannot compete fairly for jobs, or wealth. It's not 'caring for them after they're gone', it's giving them an unfair advantage over the rest of us.
* The education point follows from this, since purchasing better education is perhaps the primary way people inherit privilege. Education in our present society is a positional good - its distribution is zero sum. Some rich woman buys an extra qualification for her daughter, your parents can't buy it for you, she gets the job - not because she's better, but because her parents are richer than yours. Certainly hurts you.
* So perhaps the type of information campaign needs the public backing of government, as this carries the legitimacy of collective action. Also, if we start from now, the 'private organisations' with disproportionate wealth and power will be able to produce more propaganda and preserve the status quo (that benefits them).

That does seem like a better idea, ignoring issues of price setting. Unfortunately, nation states are extremely bad at game theory, and it's difficult to achieve international agreement on these issues, especially when it will impact one nation disproportionately (China would be much harder hit, economically, by cap-and-trade legislation than the US).

I'd disagree pretty strongly with the energy issue, at least for now - but that's a discussion for another time. In politics, as in fighting couples, it is crucial to keep your peas separate from your pudding - one issue at a time.

2Stuart_Armstrong
Without wanting to start a fight, which half do you disagree with? The Moore's law or the nuclear estimate? I'm personally more confident about the first than the second.

Here's a point of consideration: if you take Kurzweil's solution, then you can avoid Pascal's mugging when you are an agent, and your utility function is defined over similar agents. However, this solution wouldn't work on, for example, a paperclip maximizer, which would still be vulnerable - anthropic reasoning does not apply over paperclips.

While it might be useful to have Friendly-style AIs be more resilient to P-mugging than simple maximizers, it's not exactly satisfying as an epistemological device.

I figured it out from context. But, sure, that could probably be clearer.

So, in general, trying to dramatically increase the intelligence of species who lack our specific complement of social instincts and values seems like an astoundingly, overwhelmingly Bad Idea. The responsibilities to whatever it is that you wind up creating are overwhelming, as is the danger, especially if they can reproduce independently. It's seriously just a horrible, dangerous, irresponsible idea.

Related

The baseline inconvenience cost associated with using bitcoins is also really high for conducting normal commerce with them.

The bitcoin market value is predicated mostly upon drug use, pedophilia, nerd paranoia, and rampant amateur speculation. Basically, break out the tea leaves.

drug use, pedophilia, (...), and rampant amateur speculation

Hey, that's almost 2.5% of the world GDP! Can't go wrong with a market this size.

5Tripitaka
As of January, the pizza chain Domino's accepts payment in bitcoins; and as of this week, Kim Dotcom's "Mega" filehosting service accepts them, too.

That would definitely make you one of those tricky people.

0Pentashagon
Does the problem have a time limit? Does the computer you use to write your program have any storage limits? For a non-tricky answer I think the only solution is to repeatedly apply the busy beaver function BB as many times as physically possible before you fall over from exhaustion. You have to be fairly clever about it; first write a macro expander and then run that for as long as possible recursively expanding bb(X) on X = bb(some-big-number). Submit the result when it's physically impossible to continue expanding the macro.

Actually, this can be improved if the lower limit is the length of program that you can submit. Assume there is a maximum of N bits that you can submit. If you have hypercomputation locally available then just find a particular solution to bb(N) and submit it. If you do not have hypercomputation locally available then you have to try your best at writing a solution to bb(N). The solution in the first paragraph is not bad as a starting point, and in fact I was silly and didn't realize I was trying to approximate bb(N) and actually spent some time thinking of other strategies that beat it.

The basic idea is to realize that every solution is (probably!) going to be roughly of the form F(G()) = return BB^G()(3), where G() is the largest function you can think of that fits in the remaining bits. If the notation is confusing, I mean bb(bb(bb ... bb(3) .. ))) as many times as G() returns. Naively I thought that meant I could just treat G() as a sub-problem of the form G(H()) = BB^H()(3) and so on until I ran out of bits. But that can be composed into a meta-function of F(G()) = BB^(BB^H()(3)) = BB^^H()(3). And then BB^^^^^...^^^^H()(3) with H() up-arrows as a meta-meta-solution. Presumably there is a meta-function for that process as well, and so on, meta-function piled upon meta-function. So that means there is probably a procedure to derive new meta-functions and you should just run that derivation procedure until you find a sufficiently large me
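A sketch of the iterated-composition structure described above (my own illustration; bb is a hypothetical stand-in, since the busy beaver function is uncomputable, so this cannot actually be run to completion):

def bb(n):
    # Hypothetical oracle for the busy beaver function; uncomputable in reality.
    raise NotImplementedError("BB(n) cannot be computed by any program")

def iterate_bb(times, seed=3):
    # BB^times(seed): bb(bb(...bb(seed)...)), applied `times` times.
    x = seed
    for _ in range(times):
        x = bb(x)
    return x

def F(G):
    # The comment's F(G()) = BB^G()(3): use the huge number G() as the iteration count.
    return iterate_bb(G(), seed=3)

# The "meta" step: feed the same trick back in as G itself,
# e.g. G = lambda: iterate_bb(10 ** 100), then F(G), and so on up the hierarchy.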

That's fair.

Actually, my secret preferred solution to GAME3 is to immediately give up, write a program that uses all of us working together for arbitrary amounts of time (possibly with periodic archival and resets to avoid senescence and insanity), to create an FAI, then plugging our minds into an infinite looping function in which the FAI makes a universe for us, populates it with agreeable people, and fulfills all of our values forever. Program never halts, return value is taken to be 0, Niger0 is instantly and painlessly killed, and Niger1 (the simulation) eventually gets to go live in paradise for eternity.

How does your proposed solution for Game 1 stack up against the brute-force metastrategy?

Game 2 is a bit tricky. An answer to your described strategy would be to write a large number generator f(1), which produces some R, which does not depend on your opponents' programs, create a virtual machine that runs your opponents' programs for R steps, and, if they haven't halted, swaps the final recursive entry on the call stack with some number (say, R, for simplicity), and iterates upwards to produce real numbers for their function values. Then you just retur... (read more)

0earthwormchuck163
Well the brute force strategy is going to do a lot better, because it's pretty easy to come up with a number bigger than the length of the longest program anyone has ever thought to write, and then plugging that into your brute force strategy automatically beats any specific program that anyone has ever thought to write. On the other hand, the meta-strategy isn't actually computable (you need to be able to decide whether a program produces large outputs, which requires a halting oracle or at least a way of coming up with large stopping times to test against). So it doesn't really make sense to compare them.

Note that the code is being run on a halting oracle hypercomputer, which simplifies your strategy to strategy number two.

0DanielLC
So? I'm not allowed to actually use the oracle. It's just used to make sure my program halts. No. Strategy number two has an upper bound for how high it can answer, where mine does not. For example, it may be that you reach a program that does not halt before you reach one that takes TREE(3) steps to halt. In fact, I'm pretty sure you will. Second, strategy two is highly likely to fail due to reaching an obviously unhalting program. My version would not do so. This was supposed to be an improvement on strategy two.

So, there are compelling reasons that halting oracles can't actually exist. Quite aside from your solution, it's straightforward to write programs with undefined behavior. Ex:

function undef():
    if ORACLE_HALT(undef):
        while 1 != 2:
            print "looping forever"
    else:
        print "halting"
        return 0

For the sake of the gedanken-experiment, can we just assume that Omega has a well-established policy of horribly killing tricky people who try to set up recursive hypercomputational functions whose halting behavior depends on their own halting behavior?

1Pentashagon
Oh, yes please, that makes it easy.

if (halt(other_player_1)) do_not_halt();
if (halt(other_player_2)) do_not_halt();
return 0;

At least in GAME3 this kills off the clever opponents who try to simulate me.

So I guess I should have specified which model of hypercomputation Omega is using. Omega's computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum is involved) - including programs which generate programs. So, the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average utilitarian sense, they could add a parasitic clause to their program that simply created a few million copies of themselves which would work together to implem... (read more)

4Pentashagon
I don't think players actually have hypercomputational abilities, otherwise they could do the following:

function self(param n) {
    if (hypercompute(self(n)) != 0) return hypercompute(self(n+1));
    return n;
}
self(1);

If the recursive function self(n) would not halt for some n then the player's program would halt and return a value < n. But in that case it would halt for self(n) and return a value larger than n. So it must eventually halt and return The Largest Integer. I assume you just mean that any program the players write will either halt or not, and only Omega will know for sure if it doesn't halt.
2earthwormchuck163
In this case, I have no desire to escape from the room.

Playing around with search-space heuristics for more efficiently approximating S-induction.

Which actually sounds a lot more impressive than the actual thing itself, which mostly consists of reading wikipedia articles on information theory, then writing Python code that writes brainfuck (decent universal language).
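As a rough sketch of what that might look like (my own illustration, assuming the usual brainfuck semantics; the step budget, program-length cap, and observed data are arbitrary): enumerate short brainfuck programs, run each under a step limit, and give the ones whose output matches the observed data a weight of 2^-length, in the spirit of Solomonoff induction.

import itertools

def run_bf(code, max_steps=500, tape_len=64):
    # Match brackets up front; reject unbalanced programs.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, ip, steps, out = [0] * tape_len, 0, 0, 0, []
    while ip < len(code) and steps < max_steps:
        c = code[ip]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr = (ptr + 1) % tape_len
        elif c == '<': ptr = (ptr - 1) % tape_len
        elif c == '.': out.append(tape[ptr])
        elif c == '[' and tape[ptr] == 0: ip = jumps[ip]
        elif c == ']' and tape[ptr] != 0: ip = jumps[ip]
        ip += 1
        steps += 1
    return None if steps >= max_steps else bytes(out)   # hitting the step limit is treated as non-halting

def weights_for(observed, max_len=5):
    # Crude Solomonoff-style scoring: shorter matching programs get weight 2^-length.
    w = {}
    for n in range(1, max_len + 1):
        for prog in itertools.product("+-<>[].", repeat=n):
            p = "".join(prog)
            out = run_bf(p)
            if out is not None and out.startswith(observed):
                w[p] = 2.0 ** (-n)
    return w

print(sorted(weights_for(b"\x01").items(), key=lambda kv: -kv[1])[:5])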

EDIT: Also writing a novel, which is languishing at about the 20,000 word mark, and developing an indie videogame parody of Pokemon. Engine is basically done, getting started on content creation.

That's got to be close to a best case suspension. I wish her nothing but the best.

That would make sense. I assume the problem is lotus eating - the system, given the choice between a large cost to optimize whatever you care about, or small cost to just optimize its own sense experiences, will prefer the latter.
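A toy illustration of that failure mode (my own example, with made-up action names and numbers): a utility function defined over sense data prefers the cheap sensor-tampering action, while a utility function over world states does not.

# Two candidate actions with made-up costs and payoffs.
actions = {
    "optimize_the_world":  {"cost": 10.0, "world_value": 10.0, "sensed_value": 10.0},
    "tamper_with_sensors": {"cost": 1.0,  "world_value": 0.0,  "sensed_value": 100.0},
}

def utility_over_sense_data(a):
    # What the lotus-eater actually maximizes: its own sense experiences, minus cost.
    return actions[a]["sensed_value"] - actions[a]["cost"]

def utility_over_world_states(a):
    # What we wanted it to maximize.
    return actions[a]["world_value"] - actions[a]["cost"]

print(max(actions, key=utility_over_sense_data))    # -> "tamper_with_sensors"
print(max(actions, key=utility_over_world_states))  # -> "optimize_the_world"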

I find this stuff extremely interesting. I mean, when we talk about value modelling what we're really talking about isolating some subset of the causal mechanics driving human behavior (our values) from those elements we don't consider valuable. And, since we don't know if that subset is a natural category (or how to define ... (read more)

6Eliezer Yudkowsky
You built the machine to optimize its sense experiences. It is not constructed to optimize anything else. That is just what it does. Not when it's cheaper, not when it's inconvenient to do otherwise, but at all times universally.

The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point, that abstraction becomes pretty meaningless, but in the early days, a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have machines of various types made, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-c... (read more)

0ikrase
That much I do totally agree.

Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there's a logarithmic component to intelligence, as at some point you've already sampled the future outcome space thoroughly enough that most of the new bits of prediction you're getting back are redundant -- but the point of diminishing returns could be very, very high.
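A trivial sketch of why search parallelizes well (my own example; the objective function is a stand-in): split the search space into chunks and farm them out to however many workers the hardware allows.

from multiprocessing import Pool

def best_in_chunk(bounds):
    # Exhaustively score one slice of the search space and return its best candidate.
    lo, hi = bounds
    def score(x):
        return -(x - 123456) ** 2        # stand-in objective with a single peak
    best = max(range(lo, hi), key=score)
    return best, score(best)

if __name__ == "__main__":
    chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
    with Pool(4) as pool:                # more hardware -> more parallel copies of the search
        results = pool.map(best_in_chunk, chunks)
    print(max(results, key=lambda r: r[1]))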

0ikrase
What about manipulators? I haven't, as far as I know, seen much analysis of manipulation capabilities (and counter-manipulation) on Less Wrong. Mostly there is the AI-box issue (a really freaking big deal, I agree) and then it seems to be considered here that the AI will quickly invent super-nanotech, will not be able to be impeded in its progress, and will become godlike very quickly. I've seen some arguments for this, but never a really good analysis, and it's the remaining reason I am a bit skeptical of the power of FOOM.

I believe I saw a post a while back in which Anja discussed creating a variant on AIXI with a true utility function, though I may have misunderstood it. Some of the math this stuff involves I'm still not completely comfortable with, which is something I'm trying to fix.

In any case, what you'd actually want to do is to model your agents using whatever general AI architecture you're using in the first place - plus whatever set of handicaps you've calibrated into it - which presumably has a formal utility function, and is an efficient optimizer.

7Eliezer Yudkowsky
I could be mistaken, but I think this is a case of (unfortunately) several people using the term "utility function" for functions over sensory information instead of a direct reward channel. Dewey has a paper on why such functions don't add up to utility functions over outcomes, IIRC.