It isn't the sort of bad argument that gets refuted. The best someone can do is point out that there's no guarantee that MNT is possible. In which case, the response is 'Are you prepared to bet the human species on that? Besides, it doesn't actually matter, because [insert more sophisticated argument about optimization power here].' It doesn't hurt you, and with the overwhelming majority of semi-literate audiences, it helps.
That's... not a strong criticism. There are compelling reasons not to believe that God is going to be a major force in steering the direction the future takes. The exact opposite is true for MNT - I'd bet at better-than-even odds that MNT will be a major factor in how things play out basically no matter what happens.
All we're doing is providing people with a plausible scenario that contradicts flawed intuitions that they might have, in an effort to get them to revisit those intuitions and reconsider them. There's nothing wrong with that. Would we need to do it if people were rational agents? No - but, as you may be aware, we definitely don't live in that universe.
I don't have an issue bringing up MNT in these discussions, because our goal is to convince people that incautiously designed machine intelligence is a problem, and a major failure mode for people is that they say really stupid things like 'well, the machine won't be able to do anything on its own because it's just a computer - it'll need humanity, therefore, it'll never kill us all.' Even if MNT is impossible, the underlying problem is still real - but bringing up MNT provides people with an obvious intuitive path to the apocalypse. It isn't guaranteed to happen, but it's also not unlikely, and it's a powerful educational tool for showing people the sorts of things that strong AI may be capable of.
There's a deeper question here: ideally, we would like our CEV to make choices for us that aren't our choices. We would like our CEV to give us the potential for growth, and not to burden us with a powerful optimization engine driven by our childish foolishness.
One obvious way to solve the problem you raise is to treat 'modifying your current value approximation' as an object-level action by the AI, and one that requires it to compute your current EV - meaning that, if the logical consequences of the change (including all the future changes that the AI...
I've read some of Dennett's essays on the subject (though not the book in question), and I found that, for me, his ideas did help to make consciousness a good deal less mysterious. What actually did it for me was doing some of my own reasoning about how a 'noisy quorum' model of conscious experience might be structured, and realizing that, when you get right down to it, the fact that I feel as though I have subjective experience isn't actually that surprising. It'd be hard to design a human-style system that didn't have a similar internal behavior that it could talk about.
Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain.
Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists w...
Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.
I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.
Evidence?
EDIT: Sigh. Post has changed contents to something reasonable. Ignore and move on.
Reply edit: I don't have a copy of your original comment handy, so I can't accurately comment on what I was thinking when I read it. However, I don't recall it striking me as a joke, or even an exceptionally dumb thing for someone on the internet to profess belief in.
Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do, at least, not without a human writing a bunch of new code to add a module that does that new thing. It's not powerful in the way that a true GAI would be.
That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably ...
Zero? Why?
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn't ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I'd be very surprised if nobody...
Unless P=NP, I don't think it's obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we're alarmingly close to solipsism again.
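To make 'easy to verify but arbitrarily hard to compute' concrete, here's a toy sketch using subset-sum as a stand-in problem (my own example, not anything from the discussion above): checking a proposed answer takes time linear in its size, while finding one by brute force means scanning an exponentially large space.

from itertools import combinations

def verify(numbers, target, certificate):
    # Checking a claimed solution: linear in the size of the certificate.
    return all(c in numbers for c in certificate) and sum(certificate) == target

def solve_brute_force(numbers, target):
    # Finding a solution naively: up to 2**len(numbers) subsets to try.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve_brute_force(nums, 9)      # exponential-time search
print(answer, verify(nums, 9, answer))   # cheap check of the result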
I guess one way to test this hypothesis would be to try to construct...
We can be a simulation without being a simulation created by our descendants.
We can, but there's no reason to think that we are. The simulation argument isn't just 'whoa, we could be living in a simulation' - it's 'here's a compelling anthropic argument that we're living in a simulation'. If we disregard the idea that we're being simulated by close analogues of our own descendants, we lose any reason to think that we're in a simulation, because we can no longer speculate on the motives of our simulators.
There's a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you're trying to model. Suffice to say that it'll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation - at least, in the domain that we're talking about.
And, you're right - we rely on Solomonoff priors to come to conclusions in science, an...
I can see a case that we're more likely to be living in an ancestor simulation (probably not very accurate) than to be actual ancestors, but I believe strongly that the vast majority of simulations will not be ancestor simulations, and therefore we are most likely to be in a simulation that doesn't have a close resemblance to anyone's past.
That seems... problematic. If your argument depends on the future of people like us being likely to generate lots of simulations, and of us looking nothing like the past of the people doing the simulating, that's con...
Not for the simulations to work - only for the simulations to look exactly like the universe we now find ourselves in. 95% of human history could have played out, unchanged, in a universe without relativistic effects or quantum weirdness, far more inexpensively. We simply wouldn't have had the tools to measure the difference.
Even after the advent of things like particle accelerators, we could still be living in a very similar but-less-expensive universe, and things would be mostly unchanged. Our experiments would tell us that Newtonian mechanics are per...
The original form of the Bostrom thesis is that, because we know that our descendants will probably be interested in running ancestor simulations, we can predict that a very large number of these simulations will eventually exist. Thus, we are more likely to be living in an ancestor simulation than in the actual, authentic history that they're based on.
If we take our simulators to be incomprehensible, computationally-rich aliens, then that argument is gone completely. We have no reason to believe they'd run many simulations that look like our universe, nor do we have a reason to believe that they exist at all. In short, the crux of the Bostrom argument is gone.
Once you have an intelligent AI, it doesn't really matter how you got there - at some point, you either take humans out of the loop because using slow, functionally-retarded bags of twitching meat as computational components is dumb, or you're out-competed by imitator projects that do. Then you've just got an AI with goals, and bootstrapping tends to follow. Then we all die. Their approach isn't any safer, they just have different ideas about how to get a seed AI (and ideas, I'd note, that make it much harder to define a utility function that we like).
I think a slightly sturdier argument is that we live in an unbelievably computationally expensive universe, and we really don't need to. We could easily be supplied with a far, far grainier simulation and never know the difference. If you're interested in humans, you'd certainly prefer running many orders of magnitude more simulations over running a single, imperceptibly more accurate simulation far more slowly.
There are two obvious answers to this criticism: the first is to raise the possibility that the top level universe has so much computing power that ...
It would be trivial for an SI to run a grainy simulation that was only computed out in greater detail when high-level variables of interest depended on it. Most sophisticated human simulations already try to work like this - particle filters in robotics and the Metropolis light transport algorithm in ray-tracing are examples. No superintelligence would even be required, but in this case it is quite probable on priors as well, and if you were inside a superintelligent version you would never, ever notice the difference.
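A toy sketch of that 'compute detail only where it's observed' pattern (illustrative only, my own framing): a world model that serves a coarse answer by default and only runs the expensive fine-grained computation, with caching, for cells an observer actually inspects closely.

class LazyWorld:
    # Toy level-of-detail world model: coarse answers by default,
    # the expensive fine-grained computation only where an observer looks closely.
    def __init__(self, coarse_fn, fine_fn):
        self.coarse_fn = coarse_fn        # cheap approximation
        self.fine_fn = fine_fn            # expensive "ground truth"
        self._fine_cache = {}             # refined only where it's been inspected

    def observe(self, cell, high_precision=False):
        if not high_precision:
            return self.coarse_fn(cell)
        if cell not in self._fine_cache:
            self._fine_cache[cell] = self.fine_fn(cell)   # pay the cost lazily
        return self._fine_cache[cell]

# Stand-in physics: the fine model sums square roots, the coarse model rounds it off.
world = LazyWorld(coarse_fn=lambda cell: float(round(sum(x ** 0.5 for x in cell))),
                  fine_fn=lambda cell: sum(x ** 0.5 for x in cell))
print(world.observe((2, 3)))                        # coarse, cheap
print(world.observe((2, 3), high_precision=True))   # fine, computed once, cached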
It's clear that we're not living i...
Anti-trust law hasn't (yet!) destroyed Google - however, splitting up monopolists like Standard Oil or various cartels seems a clear win.
This has more to do with failure to enforce anti-trust laws in a meaningful way, though. In the case of Standard Oil and most major cartels, these were not natural monopolies: they were built and maintained with the express help of various world states, which is a somewhat different matter.
...Inherited wealth certainly does harm you. You and I are not on a level playing field with the son of some Saudi prince. We cann
I've heard this sort of thing before, and I've never been totally sold on the idea of post-scarcity economics. Mostly because I think that if you give me molecular nanotechnology, I, personally, can make good use of basically as much matter and energy (the only real resources) as I can get my hands on, with only moderately diminishing returns. If that's true for even a significant minority of the population, then there's no such thing as a post-scarcity economy, merely an extremely wealthy one.
In practice, I expect us all to be dead or under the watchful eye of some kind of Friendly power singleton by then, so the point is rather moot anyway.
This seems intuitively likely, but, on the other hand, we thought the same thing about telecommunications, and our early move to nationalize that under the Bell corporation was thoroughly disastrous, and continues to haunt us to this day. I... honestly don't know. I suspect that some level of intervention is optimal here, but I'm not sure exactly how much.
In the case of water, if we were required to move water in tanks rather than pipes, water would be more expensive and traffic would be worse, but we'd also probably see far less wasted water and more water conservation.
anti-trust law, laws against false advertising, corruption laws.
I'll give you the false advertising. Anti-trust laws do not seem like an obvious win in the case of natural monopolies; for example, destroying Google and giving an equal share of their resources and employees to Bing, Yahoo, and Ask.com does not seem obviously likely to improve the quality of search for consumers. As for anti-corruption laws, I'd need to see a much clearer definition before I gave you an opinion.
...Your mention of wanting to "preclude blackmail, theft, and slavery"
That does seem like a better idea, ignoring issues of price setting. Unfortunately, nation states are extremely bad at game theory, and it's difficult to achieve international agreement on these issues, especially when it will impact one nation disproportionately (China would be much harder hit, economically, by cap-and-trade legislation than the US).
I'd disagree pretty strongly with the energy issue, at least for now - but that's a discussion for another time. In politics, as in fighting couples, it is crucial to keep your peas separate from your pudding - one issue at a time.
Here's a point of consideration: if you take Kurzweil's solution, then you can avoid Pascal's mugging when you are an agent and your utility function is defined over similar agents. However, this solution wouldn't work on, for example, a paperclip maximizer, which would still be vulnerable - anthropic reasoning does not apply over paperclips.
While it might be useful to have Friendly-style AIs be more resilient to P-mugging than simple maximizers, it's not exactly satisfying as an epistemological device.
So, in general, trying to dramatically increase the intelligence of species who lack our specific complement of social instincts and values seems like an astoundingly, overwhelmingly Bad Idea. The responsibilities to whatever it is that you wind up creating are overwhelming, as is the danger, especially if they can reproduce independently. It's seriously just a horrible, dangerous, irresponsible idea.
That's fair.
Actually, my secret preferred solution to GAME3 is to immediately give up, write a program that uses all of us working together for arbitrary amounts of time (possibly with periodic archival and resets to avoid senescence and insanity) to create an FAI, then plugs our minds into an infinite looping function in which the FAI makes a universe for us, populates it with agreeable people, and fulfills all of our values forever. The program never halts, the return value is taken to be 0, Niger0 is instantly and painlessly killed, and Niger1 (the simulation) eventually gets to go live in paradise for eternity.
How does your proposed solution for Game 1 stack up against the brute-force metastrategy?
Game 2 is a bit tricky. An answer to your described strategy would be to write a large number generator f(1), which produces some R that does not depend on your opponents' programs, create a virtual machine that runs your opponents' programs for R steps, and, if they haven't halted, swaps the final recursive entry on the call stack with some number (say, R, for simplicity) and iterates upwards to produce real numbers for their function values. Then you just retur...
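Roughly, a sketch of that budget-limited simulation idea might look like the following. This is a toy framing of my own: opponents are modeled as Python generators so that 'steps' can actually be counted, rather than as arbitrary code run inside an instrumented interpreter.

def run_with_budget(program, budget, default):
    # Advance a generator-style 'program' at most `budget` steps.
    # If it finishes, use its return value; otherwise substitute `default`.
    gen = program()
    last = default
    try:
        for _ in range(budget):
            last = next(gen)
    except StopIteration as done:
        return done.value if done.value is not None else last
    return default   # didn't halt within the budget: swap in the fallback value

# Hypothetical opponents: one halts quickly, one loops forever.
def quick():
    yield 1
    return 42

def forever():
    while True:
        yield 0

R = 10 ** 6   # stands in for the 'large number' produced by f(1)
print(run_with_budget(quick, R, default=R))     # -> 42
print(run_with_budget(forever, R, default=R))   # -> 1000000 (the fallback R)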
So, there are compelling reasons that halting oracles can't actually exist. Quite aside from your solution, it's straightforward to write programs with undefined behavior. Ex:

def undef():
    # Ask the oracle whether undef() halts, then do the opposite.
    if ORACLE_HALT(undef):
        while True:
            print("looping forever")
    else:
        print("halting")
        return 0
For the sake of the gedankenexperiment, can we just assume that Omega has a well-established policy of horribly killing tricky people who try to set up recursive hypercomputational functions whose halting behavior depends on their own halting behavior?
So I guess I should have specified which model of hypercomputation Omega is using. Omega's computer can resolve ANY infinite trawl in constant time (assume time travel and an enormous bucket of phlebotinum is involved) - including programs which generate programs. So, the players also have the power to resolve any infinite computation in constant time. Were they feeling charitable, in an average utilitarian sense, they could add a parasitic clause to their program that simply created a few million copies of themselves which would work together to implem...
Playing around with search-space heuristics for more efficiently approximating S-induction.
Which sounds a lot more impressive than it actually is: mostly it consists of reading Wikipedia articles on information theory, then writing Python code that writes brainfuck (a decent universal language).
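For the curious, a toy version of that kind of search might look like this (an illustrative sketch only, not the actual code): enumerate short brainfuck programs in length order, run each under a step budget, and weight the ones whose output matches the observed bytes by 2^-length - ignoring the prefix-free coding and choice-of-universal-machine subtleties the real formalism cares about.

from itertools import product

OPS = "+-<>.[]"   # output-only brainfuck (no ',' since there's no input here)

def run_bf(code, steps=200, tape_len=64):
    # Tiny brainfuck interpreter with a step budget; returns output bytes,
    # or None for unbalanced brackets / programs that don't halt in time.
    stack, jumps = [], {}
    for i, op in enumerate(code):
        if op == "[":
            stack.append(i)
        elif op == "]":
            if not stack:
                return None
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    for _ in range(steps):
        if pc >= len(code):
            return bytes(out)
        op = code[pc]
        if op == "+":   tape[ptr] = (tape[ptr] + 1) % 256
        elif op == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif op == ">": ptr = (ptr + 1) % tape_len
        elif op == "<": ptr = (ptr - 1) % tape_len
        elif op == ".": out.append(tape[ptr])
        elif op == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif op == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return None

def next_byte_weights(observed, max_len=6):
    # Crude Solomonoff-style predictor: for each candidate next byte, sum
    # 2**-len(p) over all short programs p whose output extends `observed`.
    weights = {}
    for n in range(1, max_len + 1):
        for prog in product(OPS, repeat=n):
            out = run_bf("".join(prog))
            if out and out.startswith(observed) and len(out) > len(observed):
                nxt = out[len(observed)]
                weights[nxt] = weights.get(nxt, 0.0) + 2.0 ** (-n)
    return weights

print(next_byte_weights(b"\x01\x02"))   # which byte do short programs 'predict' next?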
EDIT: Also writing a novel, which is languishing at about the 20,000 word mark, and developing an indie videogame parody of Pokemon. Engine is basically done, getting started on content creation.
That would make sense. I assume the problem is lotus-eating - the system, given the choice between a large cost to optimize whatever you care about and a small cost to just optimize its own sense experiences, will prefer the latter.
I find this stuff extremely interesting. I mean, when we talk about value modelling, what we're really talking about is isolating some subset of the causal mechanics driving human behavior (our values) from those elements we don't consider valuable. And, since we don't know if that subset is a natural category (or how to define ...
The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point, that abstraction becomes pretty meaningless, but in the early days, a powerful, bootstrapping optimization agent could still incorporate, hire, or persuade people to do things for it, make rapid innovations in various fields, have machines of various types built, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-c...
Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there's a logarithmic component to intelligence, as at some point you've already sampled the future outcome space thoroughly enough that most of the new bits of prediction you're getting back are redundant -- but the point of diminishing returns could be very, very high.
I believe I saw a post a while back in which Anja discussed creating a variant on AIXI with a true utility function, though I may have misunderstood it. Some of the math this stuff involves I'm still not completely comfortable with, which is something I'm trying to fix.
In any case, what you'd actually want to do is to model your agents using whatever general AI architecture you're using in the first place - plus whatever set of handicaps you've calibrated into it - which presumably has a formal utility function and is an efficient optimizer.
It's going to be really hard to come up with any models that don't run deeply and profoundly afoul of the Occam prior.