Penrose would claim not to understand how 'collapse' occurs.
When I was younger, I picked up 'The Emperor's New Mind' in a used bookstore for about a dollar, because I was interested in AI, and it looked like an exciting, iconoclastic take on the idea. I was gravely disappointed when it took a sharp right turn into nonsense right out of the starting gate.
These people's objections are not entirely unfounded. It's true that there is little evidence the brain exploits QM effects (which is not to say that it is completely certain it does not). However, if you try to pencil in real numbers for the hardware requirements for a whole brain emulation, they are quite absurd. Assumptions differ, but it is possible that to build a computational system with sufficient nodes to emulate all 100 trillion synapses would cost hundreds of billions to over a trillion dollars if you had to use today's hardware to do it.
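To make the arithmetic explicit, here's a back-of-the-envelope sketch. Every figure except the 100-trillion-synapse count is an assumption penciled in for illustration (updates per synapse, FLOPs per update, dollars per GFLOPS), not an established number:

```python
# Back-of-the-envelope estimate of whole-brain-emulation hardware cost.
# All figures below except the synapse count are illustrative assumptions.

SYNAPSES = 100e12          # ~100 trillion synapses
UPDATES_PER_SYNAPSE = 1e3  # assumed synaptic events per second to simulate
FLOPS_PER_UPDATE = 1e4     # assumed ops per synaptic update (model-dependent)

required_flops = SYNAPSES * UPDATES_PER_SYNAPSE * FLOPS_PER_UPDATE  # ~1e21

COST_PER_GFLOPS = 1.0      # assumed dollars per GFLOPS of installed hardware
cost_dollars = required_flops / 1e9 * COST_PER_GFLOPS

print(f"Required compute: {required_flops:.1e} FLOPS")
print(f"Hardware cost:    ${cost_dollars:.1e}")   # ~a trillion dollars
```

Under these assumptions the total lands around 10^21 FLOPS and a trillion dollars of hardware; more charitable per-synapse models shave that down into the hundreds of billions, which is where the range in the text comes from.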
The point is: you can simplify people's arguments to "I'm not worried about the imminent existence of AI because we cannot build the hardware to run one". The fact that a detail of their argument is wrong doesn't change the conclusion.
Building a whole brain emulation right now is completely impractical. In ten or twenty years, though... well, let's just say there are a lot of billionaires who want to live forever, and a lot of scientists who want to be able to play with large-scale models of the brain.
I'd also expect de novo AI to be capable of running quite a bit more efficiently than a brain emulation for a given amount of optimization power. There's no way simulating cell chemistry is a particularly efficient way to spend computational resources to solve problems.
Watson is pretty clearly narrow AI, in the sense that if you called it General AI, you'd be wrong. There are simple cognitive tasks (like making a plan to solve a novel problem, modelling a new system, or even just playing Parcheesi) that it just can't do, at least, not without a human writing a bunch of new code to add a module that does that new thing. It's not powerful in the way that a true GAI would be.
That said, Watson is a good deal less narrow than, say, Deep Blue. Watson has a great deal of analytic depth in a reasonably broad domain (structured knowledge extraction from unformatted English), which is a major leap forward. You might say that Watson is a rough analog to a language center connected to a memory system sitting in a box. It's not a GAI by itself, but it could be a substantial component of one down the line.
I don't think that the likelihood of our descendants simulating us at all is particularly high; my predicted number of ancestor simulations, should such a thing turn out to be possible, is zero, which is one reason I've never found it a particularly compelling anthropic argument in the first place.
But, if people living in universes capable of running simulations tend to run simulations, then it's probable that most people will be living in simulations, regardless of whether anyone ever chooses to run an ancestor simulation.
Zero? Why?
At the fundamental limits of computation, such a simulation (with sufficient graininess) could be undertaken with on the order of hundreds of kilograms of matter and a sufficient supply of energy. If the future isn't ruled by a power singleton that forbids dicking with people without their consent (i.e. if Hanson is more right than Yudkowsky), then somebody (indeed, many people) with access to that much wealth will exist, and some of them will run such a simulation, just for shits and giggles. Given no power singleton, I'd be very surprised if nobody decided to play god like that. People go to Renaissance fairs, for goodness' sake. Do you think that nobody would take the opportunity to bring back whole lost eras of humanity in bottle-worlds?
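For a rough sense of why hundreds of kilograms would be ample at the fundamental limits, here's a sketch using Bremermann's limit (roughly mc²/h operations per second per kilogram of matter). The per-brain and population figures are illustrative assumptions, not measured values:

```python
# Sanity check on "hundreds of kilograms suffice at the fundamental limits
# of computation", via Bremermann's limit (~ m*c^2/h ops/sec per kilogram).
# The per-brain and population figures are assumptions for illustration.

C = 3.0e8        # speed of light, m/s
H = 6.626e-34    # Planck constant, J*s

bremermann = C**2 / H      # ~1.36e50 ops/s per kilogram of matter
ops_per_brain = 1e18       # assumed upper-end estimate for one human brain
population = 1e10          # assumed simulated population

total_ops = ops_per_brain * population  # 1e28 ops/s for all the minds
kg_needed = total_ops / bremermann      # mass at the theoretical limit

print(f"{kg_needed:.1e} kg")  # ~7e-23 kg: a vanishing fraction of one kilogram
```

Even granting many orders of magnitude of overhead for the environment and for practical hardware falling short of the theoretical limit, a hundreds-of-kilograms budget leaves enormous headroom.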
As for the other point, if we decide that our simulators don't resemble us, then calling them 'people' is spurious. We know nothing about them. We have no reason to believe that they'd tend to produce simulations containing observers like us (the vast majority of computable functions won't). Any speculation, if you take that approach, that we might be living in a simulation is entirely baseless and unfounded. There is no reason to privilege that cosmological hypothesis over simpler ones.
Socialist: 326, 27.5%
I wish the next census would taboo "socialism". In my experience people use this word to describe three rather different things.
a) An imaginary post-scarcity utopia where money is not necessary, work is voluntary, and all people are educated to love each other.
b) Sweden -- either the real one, or its idealized imaginary version.
c) The political and economic system led by Communist parties in the 20th century.
And I hope the majority of those people meant something like (a) and (b), because honestly, I can't imagine how (c) could be related to rationality or truth-seeking or altruism or ethics.
I know some hardcore C'ers in real life who are absolutely convinced that centrally-planned Marxist/Leninist Communism is a great idea, and they're sure we can get the kinks out if we just give it another shot.
It would be trivial for an SI to run a grainy simulation that was only computed out in greater detail when high-level variables of interest depended on it. The most sophisticated human simulations already try to work like this: particle filters in robotics and Metropolis light transport in ray-tracing are examples. No superintelligence would even be required, but in this case it is quite probable on priors as well, and if you were inside a superintelligent version you would never, ever notice the difference.
It's clear that we're not living in a set of physical laws designed for cheapest computation of intelligent beings, i.e., we are inside an apparent physics (real or simulated) that was chosen on other grounds than making intelligent beings cheap to simulate (if physics is real, then this follows immediately). But we could still, quite easily, be cheap simulations within a fixed choice of physics. E.g., the simulators grew up in a quantum relativistic universe, and now they're much more cheaply simulating other beings within an apparently quantum relativistic universe, using sophisticated approximations that change the level of detail when high-level variables depend on it (so you see the right results in particle accelerators) and use cached statistical outcomes for proteins folding instead of recomputing the underlying quantum potential energy surface every time, or even for whole cells when the cells are mostly behaving as a statistical aggregate, etc. This isn't a conspiracy theory; it's a mildly-more-sophisticated version of what sophisticated simulation algorithms try to do right now - expend computational power where it's most informative.
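A toy sketch of the lazy level-of-detail idea, with invented names: fine-grained state is materialized only when an observation forces it, and everything else runs off cached statistical summaries:

```python
# Toy level-of-detail simulation: expensive fine-grained state is computed
# only when a high-level "observation" depends on it; otherwise a cheap
# cached statistical summary is used. All names/models here are invented
# for illustration.

import random

class LazyRegion:
    def __init__(self, seed, mean=0.0):
        self.seed = seed
        self.summary = mean       # cheap statistical aggregate
        self.fine_state = None    # materialized only on demand

    def coarse_value(self):
        # Fast path: return the cached summary, no fine-grained work.
        return self.summary

    def observe_in_detail(self):
        # Slow path: lazily materialize fine-grained state, deterministically
        # from the seed so repeated observations stay consistent.
        if self.fine_state is None:
            rng = random.Random(self.seed)
            self.fine_state = [rng.gauss(self.summary, 1.0) for _ in range(1000)]
        return sum(self.fine_state) / len(self.fine_state)

world = [LazyRegion(seed=i) for i in range(10_000)]

# Only one region is ever examined closely (the "particle accelerator");
# the other 9,999 never pay for fine-grained computation at all.
detailed = world[42].observe_in_detail()
materialized = sum(r.fine_state is not None for r in world)
print(materialized)  # 1
```

The deterministic seeding is what keeps the cheap world self-consistent: look twice and you see the same detail, so nothing gives the trick away from inside.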
Unless P=NP, I don't think it's obvious that such a simulation could be built to be perfectly (to the limits of human science) indistinguishable from the original system being simulated. There are a lot of results which are easy to verify but arbitrarily hard to compute, and we encounter plenty of them in nature and physics. I suppose the simulators could be futzing with our brains to make us think we were verifying incorrect results, but now we're alarmingly close to solipsism again.
I guess one way to test this hypothesis would be to try to construct a system with easy-to-verify but arbitrarily-hard-to-compute behavior ("Project: Piss Off God"), and then scrupulously observe its behavior. Then we could keep making it more expensive until we got to a system that really shouldn't be practically computable in our universe. If nothing interesting happens, then we have evidence that either we aren't in a simulation, or P=NP.
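The asymmetry the test leans on can be illustrated with the stock example of integer factoring, where verification is cheap but (as far as anyone knows) classically computing the answer is not. The helper names here are my own:

```python
# "Easy to verify, hard to compute": factoring as the stock example.
# Checking a claimed factorization is cheap; finding one for a large
# semiprime is believed to be classically hard.

def verify_factorization(n, factors):
    """Cheap direction: multiply the claimed factors and compare."""
    product = 1
    for f in factors:
        if f <= 1:
            return False
        product *= f
    return product == n

def factor_by_trial_division(n):
    """Expensive direction: naive search, exponential in the bit-length of n."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

n = 2017 * 2027  # tiny semiprime; a real test would use numbers far too
                 # large to factor, then watch whether nature "shows" answers
claimed = factor_by_trial_division(n)
print(claimed, verify_factorization(n, claimed))  # [2017, 2027] True
```

Scaled up, the proposed experiment amounts to building a physical system whose evolution encodes such a hard computation and checking whether its observed behavior stays consistent with the (verifiable) right answer.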
I don't see anything contradictory about it. A simulation that isn't of the simulators' past can still contain people non-incidentally. We can be a simulation without being a simulation created by our descendants.
Personally, if I had the capacity to simulate universes, simulating my ancestors would probably be somewhere down around the twentieth spot on my priorities list, but most of the things I'd be interested in simulating would contain people.
I don't think I would regard simulating the universe as we observe it as ethically acceptable though, and if I were in a position to do so, I would at the very least lodge a protest against anyone who tried.
We can be a simulation without being a simulation created by our descendants.
We can, but there's no reason to think that we are. The simulation argument isn't just 'whoa, we could be living in a simulation' - it's 'here's a compelling anthropic argument that we're living in a simulation'. If we disregard the idea that we're being simulated by close analogues of our own descendants, we lose any reason to think that we're in a simulation, because we can no longer speculate on the motives of our simulators.
A fourth answer is that the entire world/universe isn't being simulated; only a small subset of it is. I believe that most arguments about simulations assume that the simulators wouldn't simulate the entire current population.
That doesn't actually solve the problem: if you're simulating fewer people, that weakens the anthropic argument proportionately. You've still only got so much processor time to go around.
Ok, before you were talking about "grainier" simulations, I thought you meant computational shortcuts. But now you are talking about taking out laws of physics which you think are unimportant. Which is clever, but it is not so obvious that it would work.
It is not so easy to remove "quantum weirdness", because quantum is normal and lots of things depend on it, like atoms not losing their energy to electromagnetic radiation. You want to patch that by making atoms indivisible and forgetting about the subatomic particles? Well, there goes chemistry, and electricity. Maybe you patch those too, but then we end up with a grab bag of brute facts about physics, unlike the world we experience, where if you know a bit about quantum mechanics, the periodic table of the elements actually makes sense. Transistors also depend on quantum mechanics, and even if you patch that, the engineering of the transistors depends on people understanding quantum mechanics. So now you need to patch things on the level of making sure inventors invent the same level of technology, and we are back to simulator-backed conspiracies.
There's a sliding scale of trade-offs you can make between efficiency and Kolmogorov complexity of the underlying world structure. The higher the level your model is, the more special cases you have to implement to make it work approximately like the system you're trying to model. Suffice it to say that it'll always be cheaper to have a mind patch the simpler model than to just go ahead and run the original simulation - at least, in the domain that we're talking about.
And, you're right - we rely on Solomonoff priors to come to conclusions in science, and a universe of that type would be harder to do science in, and history would play out differently. However, I don't think there's a good way to get around that (that doesn't rely on simulator-backed conspiracies). There are never going to be very many fully detailed ancestor simulations in our future - not when you'd have to be throwing the computational mass equivalents of multiple stars at each simulation, to run them at a small fraction of real time. Reality is hugely expensive. The full quantum description of even a single hydrogen atom in a vacuum is essentially computationally intractable.
To sum up:
If our descendants are willing to run fully detailed simulations, they won't be able to run very many for economic reasons - possibly none at all, depending on how many optimizations to the world equations wind up being possible.
If our descendants are unwilling to run fully detailed simulations, then we would either be in the past, or there would be a worldwide simulator-backed conspiracy, or we'd notice the discrepancy, none of which seem true or satisfying.
Either way, I don't see a strong argument that we're living in a simulation.
There are a lot of things we simply don't know about the brain, and even less about consciousness and intelligence in the human sense. In many ways, I don't think we even have the right words to talk about this. Last I checked, scientists were not sure that neurons were the right level at which to understand how our brains think. That is, neurons have microtubule substructures several orders of magnitude smaller than the neurons themselves that may (or may not) have something significant to do with the encoding and processing of information in the brain. Thus it's conceivable that a whole-brain emulation at the level of individual neurons might be insufficient to produce human-type intelligence and consciousness. If so, we'd need quite a few more generations of Moore's law than we're currently estimating before we could expect to finish a whole brain emulation.
Furthermore, the smaller structures would be more susceptible to quantum effects. Then again, maybe not. Roger Penrose and Stuart Hameroff have developed this idea as Orchestrated Objective Reduction (Orch-OR) theory. The theory has been hotly disputed, but so far I don't think it's been conclusively proven or disproven. However, it is experimentally testable and falsifiable. I suspect it's too early to claim definitively that quantum effects either are or are not required for human-type intelligence and consciousness, but more research will likely help us answer this question one way or the other.
I will say this: there is a lot of bad physics and philosophy out there that has been misled by bad popular descriptions of quantum mechanics and how the conscious observer collapses the wave function, and thus came to the conclusion that consciousness is intimately tied up with quantum mechanics. I feel safe ruling that much out. However, it still seems possible that our consciousness and intelligence are routinely or occasionally susceptible to quantum randomness, depending on the scale at which they operate.
Even if Penrose's ideas about how human intelligence arises from quantum effects are all true, that still does not prove that all intelligence requires quantum randomness. If you want to answer that question, the first thing you need to do is define what you mean by "intelligence". That's trickier than it sounds at first, but I think it can be usefully done. In fact, there are multiple possible definitions of intelligence, useful for different purposes. For instance, one is the ability to formulate plans that enable one to achieve a goal. Consciousness is a much thornier nut to crack. I don't know that anyone has a good handle on that yet.
Sure? No. Pretty confident? Yeah. The people who think microtubules and exotic quantum-gravitational effects are critical for intelligence/consciousness are a small minority of (usually) non-neuroscientists who are, in my opinion, allowing some very suspect intuitions to dominate their thinking. I don't have any money right now to propose a bet, but if it turns out that the brain can't be simulated on a sufficient supply of classical hardware, I will boil, shred, and eat my entire (rather expensive) hat.
Daniel Dennett's papers on the subject seem to make a lot of sense to me. The details are still fuzzy, but I find that having read them, I am less confused on the subject, and I can begin to see how a deterministic system might be designed that would naturally behave in ways that cause it to say the sorts of things about consciousness that I do.