I hadn't seen this before. Hanson's conception of intelligence actually seems much simpler and more plausible than how I had previously imagined it. I think 'intelligence' can easily act as a Semantic Stopsign because it feels like a singular entity through the experience of consciousness, but actually may be quite modular as Hanson suggests.
Intelligence must be very modular - that's what drives Moravec's paradox (problems like vision and locomotion that we have good modules for feel "easy", problems that we have to solve with "general" intelligence feel "hard"), the Wason Selection task results (people don't always have a great "general logic" module even when they could easily solve an isomorphic problem applied to a specific context), etc.
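To make the Wason point concrete (toy cards of my own choosing, not anything from Hanson or the original studies): the logic that decides which cards must be turned over takes only a few lines, yet the abstract version trips people up while the isomorphic social-rules version does not.

```python
# Wason selection task, abstract version. Each card has a letter on one side
# and a number on the other. Rule to test: "if a card shows a vowel on one
# side, it has an even number on the other side."
cards = ["A", "K", "2", "7"]

def is_vowel(face):
    return face in "AEIOU"

def is_odd_number(face):
    return face.isdigit() and int(face) % 2 == 1

def must_flip(face):
    # A card can only falsify the rule if its visible side is a vowel
    # (the hidden number might be odd) or an odd number (the hidden
    # letter might be a vowel). Everything else is irrelevant.
    return is_vowel(face) or is_odd_number(face)

print([c for c in cards if must_flip(c)])  # -> ['A', '7']
# The isomorphic "if drinking beer, then over 21" version has exactly the
# same structure, yet people solve it far more reliably.
```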
Does this greatly affect the AGI takeoff debate, though? So long as we can't create a module which is itself capable of creating modules, what we have doesn't qualify as human-equivalent AGI. But if/when we can, then it's likely that it can also create an improved version of itself, and so it's still an open question as to how fast or how far it can improve.
Would there be any unintended consequences? I'm worried that possessing an incorrect belief may lead the Oracle to lose accuracy in other areas.
For instance, if accuracy is defined in terms of the reaction of the first person to read the output, and that person is isolated from the rest of the world, then we can get the Oracle to act as if it believed a nuclear bomb was due to go off before the person could communicate with the rest of the world.
In this example, would the imminent nuclear threat affect the Oracle's reasoning process? I'm sure there are some questions whose answers could vary depending on the likelihood of a nuclear detonation in the near future.
Regardless of the mechanism for misleading the oracle, its predictions for the future ought to become less accurate in proportion to how useful they have been in the past.
"What will the world look like when our source of super-accurate predictions suddenly disappears" is not usually the question we'd really want to ask. Suppose people normally make business decisions informed by oracle predictions: how would the stock market react to the announcement that companies and traders everywhere had been metaphorically lobotomized?
We might not even need to program in "imminent nuclear threat" manually. "What will our enemies do when our military defenses are suddenly in chaos due to a vanished oracle?"
I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.
I don't know if it's the mainstream of transhumanist thought but it's certainly a significant thread.
Information hazard warning: if your state of mind is again closer to "panic attack" and "grief" than to "calmer", or if it's not but you want to be very careful to keep it that way, then you don't want to click this link.
Isn't using a laptop as a metaphor exactly an example of "most often reasoning by analogy"?
I think one of the points being made was that because we have this uncertainty about how a superintelligence would work, we can't accurately predict anything without more data.
So maybe the next step in AI should be to create an "Aquarium," a self-contained network with no actuators and no way to access the internet, but enough processing power to support a superintelligence. We then observe what that superintelligence does in the aquarium before deciding how to resolve further uncertainties.
Isn't using a laptop as a metaphor exactly an example
The sentence could have stopped there. If someone makes a claim like "∀ x, p(x)", it is entirely valid to disprove it via "~p(y)", and it is not valid to complain that the first proposition is general but the second is specific.
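To spell that one step out (just the quantifier logic, nothing specific to laptops), a minimal Lean sketch:

```lean
-- One counterexample ¬p(y) suffices to refute the universal claim ∀ x, p(x).
example {α : Type} (p : α → Prop) (y : α) (h : ¬ p y) : ¬ ∀ x, p x :=
  fun hAll => h (hAll y)
```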
Moving from the general to the specific myself, that laptop example is perfect. It is utterly baffling to me that people can insist we will be able to safely reason about the safety of AGI when we have yet to do so much as produce a consumer operating system that is safe from remote exploits or crashes. Are Microsoft employees uniquely incapable of "fully general intelligent behavior"? Are the OpenSSL developers especially imperfectly "capable of understanding the logical implications of models"?
If you argue that it is "nonsense" to believe that humans won't naturally understand the complex things they devise, then that argument fails to predict the present, much less the future. If you argue that it is "nonsense" to believe that humans can't eventually understand the complex things they devise after sufficient time and effort, then that's more defensible, but that argument is pro-FAI-research, not anti-.
Many libertarians and conservatives have been calling for a free market in water in California. I agree that would likely be the best solution overall. However, that solution will face inevitable pushback from farmers, who benefit from their existing usage rights. My understanding is that California farmers have a "use it or lose it" right to water resources: they can use the water or not use it, but they can't re-sell it. This leads to a lot of waste, including absurdities like planting monsoon crops in a semi-arid region. If farmers could simply resell the water they don't use (at or near the residential water price), there would be more water to go around, and the farmers themselves would probably come out ahead. While less beneficial overall than a full free market, this might be politically easier to implement.
If everybody understood the problem, then allowing farmers to keep their current level of water rights but also allowing them to choose between irrigation and resale would be a Pareto improvement. "Do I grow and export an extra single almond, or do I let Nestle export an extra twenty bottles of water?" is a question which is neutral with respect to water use but which has an obvious consistent answer with respect to profit and utility.
But as is typical, beneficiaries of price controls benefit from not allowing the politicians' electorate to understand the problem. If you allow trade and price equilibration to make subsidies transparent and efficient, you risk instead getting the subsidies taken away. That extra single almond is still more profitable than nothing.
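To put toy numbers on the almond-versus-bottles choice (every figure below is an invented placeholder, not a sourced price or yield), the incentive flip from a resale right looks roughly like this:

```python
# Illustrative only: compare irrigating a marginal, low-value acre against
# reselling the same water at something near the residential price.
# All numbers are made-up placeholders, not real California figures.

ACRE_FEET_PER_ACRE = 4.0            # assumed water need for a thirsty crop
PROFIT_PER_IRRIGATED_ACRE = 500.0   # assumed net profit from that acre
RESALE_PRICE_PER_ACRE_FOOT = 400.0  # assumed resale price per acre-foot

profit_if_irrigating = PROFIT_PER_IRRIGATED_ACRE
profit_if_reselling = ACRE_FEET_PER_ACRE * RESALE_PRICE_PER_ACRE_FOOT

print(f"Irrigate the marginal acre: ${profit_if_irrigating:,.0f}")
print(f"Resell the water instead:   ${profit_if_reselling:,.0f}")

# Under "use it or lose it" the resale option is worth $0 to the farmer, so
# the water goes to the low-value crop even when resale would leave both the
# farmer and downstream users better off.
```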
We haven't seen anything like evidence that our laws of physics are only approximations at all.
And we shouldn't expect to, as that is an inherent contradiction. Any approximation crappy enough that we can detect it doesn't work as a simulation - it diverges vastly from reality.
Maybe we live in a simulation, maybe not, but this is not something we can detect; we can never prove whether or not we are in one.
However, we can design a clever experiment that would at least prove that it is rather likely that we live in a simulation: we can create our own simulations populated with conscious observers.
On that note - go back and look at Pong, one of the earliest video games, from the early 1970s, and compare to the state of the art roughly forty years later. Now project that into the future. I'm guessing that we are a little more than half way towards Matrix-style simulations which essentially prove the simulation argument (to the limited extent possible).
If we're in a simulation, this implies that with high probability either a) the laws of physics in the parent universe are not our own laws of physics (in which case the entire idea of ancestor simulations fails) or
Depends what you mean by 'laws of physics'. If we are in a simulation, then the code that creates our observable universe is a clever efficient approximation of some simpler (but vastly less efficient) code - the traditional 'laws of physics'.
Of course many simulations could be of very different physics, but those are less likely to contain us. Most of the instrumental reasons to create simulations require close approximations. If you imagine the space of all physics for the universe above, it has a sharp peak around physics close to our own.
b) they are engaging in an extremely detailed simulation.
Detail is always observer-relative. We only observe a measly few tens of millions of bits per second, which is nothing for a future superintelligence.
The limits of optimal approximation appear to be linear in observer complexity - using output sensitive algorithms.
I'm not sure what you mean by this. Can you expand?
Consider simulating a universe of size N (in mass, bits, whatever) which contains M observers of complexity C each, for T simulated time units.
Using a naive regular grid algorithm (of the type most people think of), simulation requires O(N) space and O(NT) time.
Using the hypothetical optimal output sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time. In other words the size of the universe is irrelevant and the simulation complexity is only output dependent - focused on computing only the observers and their observations.
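A toy restatement of that comparison (arbitrary placeholder numbers, and the output-sensitive bound is of course the hypothesis being argued for here, not an established result):

```python
# Two cost models for simulating a universe of size N containing M observers
# of complexity C each, for T time steps.

def naive_grid_cost(N, T):
    # Regular grid simulation: track the whole universe at every step.
    return {"space": N, "time": N * T}

def output_sensitive_cost(M, C, T):
    # Hypothetical optimum: only the observers and their observations are
    # ever computed, so the size of the universe N drops out entirely.
    return {"space": M * C, "time": M * C * T}

# Example: a huge simulated volume with a comparatively tiny observer population.
N, M, C, T = 10**40, 10**10, 10**15, 10**9
print(naive_grid_cost(N, T))
print(output_sensitive_cost(M, C, T))
```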
We can already simulate entire planets using the tiny resources of today's machines. I myself created several SOTA real-time planetary renderers back in the day.
Again, the statistical artifact problem comes up, especially when there are extremely subtle issues going on, such as the different (potential) behavior of neutrinos.
What is a neutrino such that you would presume to notice it? The simulation required to contain you - and indeed has contained you your entire life - has probably never had to instantiate a single neutrino (at least not for you in particular, although it perhaps has instantiated some now and then inside accelerators and other such equipment).
Your basic point, that I may be overestimating the difficulty of simulations, may well be valid. But since simulations don't explain the Great Filter (for the other reasons I discussed), this updates me in the direction of us being in a simulation without really helping to explain the Great Filter at all.
I agree that the sim arg doesn't explain the Great Filter, but then again I'm not convinced there even is a filter. Regardless, the sim arg - if true - does significantly affect ET considerations, but not in a simple way.
The hypothesis of lots of aliens with lots of reasons to produce sims certainly gains strength, but models in which we are alone can still produce lots of sims, and so on.
Using the hypothetical optimal output sensitive approximation algorithm, simulation requires ~O(MC) space and ~O(MCT) time.
For any NP problem of size n, imagine a universe of size N = O(2^n), in which computers try to verify all possible solutions in parallel (using time T/2 = O(n^p)) and then pass the first verified solution along to a single (M=1) observer (of complexity C = O(n^p)) who then repeats that verification (using time T/2 = O(n^p)).
Then simulate the observations, using your optimal (O(MCT) = O(n^{2p})) algorithm. Voila! You have the answer to your NP problem, and you obtained it with costs that were polynomial in time and space, so the problem was in P. Therefore NP is in P, so P=NP.
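Collecting the bookkeeping from that construction in one place:

\[
N = O(2^n), \qquad M = 1, \qquad C = O(n^p), \qquad T = O(n^p),
\]
\[
\text{naive: } O(NT) = O(2^n \cdot n^p) \qquad \text{vs.} \qquad \text{output-sensitive: } O(MCT) = O(n^{2p}).
\]

If the simulation cost really were independent of N, the exponentially large universe of parallel verifiers would come for free, which is exactly the reductio.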
Dibs on the Millennium Prize?
The last time I read about this was here:
https://www.cs.drexel.edu/~sa499/papers/adversarial_stylometry.pdf
A quick Google search (for stylometry, fingerprinting) turns up summaries, e.g.
Thank you both!
Hm. That's indeed plausible. More so in our age, where software can reliably identify authors based on their writing fingerprint. I wonder what will become of pseudonyms in the future.
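For a sense of what such software is doing under the hood, here is a toy sketch (function-word frequencies plus a similarity score; real stylometry systems use far richer feature sets and classifiers, so treat this purely as an illustration):

```python
from collections import Counter
import math

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "but"]

def fingerprint(text):
    # Relative frequency of each common function word: a crude style signature.
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Higher similarity between an anonymous text and a known author's corpus is
# (weak) evidence of shared authorship.
known = "the cat sat on the mat and it was happy but it did not purr"
anon = "the dog ran in the park and it was muddy but it did not mind"
print(cosine(fingerprint(known), fingerprint(anon)))
```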
Link, please? I seem to be failing at Google.
The last time I saw "writing fingerprint" software it was being used to "prove" that The Book of Mormon's purported authors were real, in a study whose designers clearly would have failed at the 2-4-6 task. I'm afraid I tossed the idea in a mental box alongside "phrenology" after that.
The fact that he's wearing it at all stuns me. It needs to be maintained by a coven of the greatest wizards around.
Yes.
But Harry tends not to see other people as PCs, or as able to add anything to his plots.
Kind of an interesting mirror to Voldemort, yes? The one Tom has trouble thinking of ideas that involve him being helpful to other people; the other has trouble thinking of ideas that involve other people being helpful to him.
A voice of reason.
Against Musk, Hawking and all other "pacifists".
Where did "pacifists" and the scare quotes around it come from?