Isn't that fraud? That is, if you work for a company that matches donations, and I ask to give you money for you to give to MIRI, aren't I asking you to defraud your company?
Correct, but it is a kind of fraud that is hard to detect and easy to justify to oneself as being "for the greater good" so the scammer is hoping that you won't care.
The comments baffle me. I think it can be taken for granted that people on this site have an elevated sense of skepticism -- perhaps not enough to repel ALL scams, but certainly enough to recognize a scam when your attention is explicitly drawn to it in the moment. Why are we now wasting time on an in-depth discussion ABOUT scams and methodology, WITH the scammer in the conversation? And if you believe he is not a scammer, why put the burden of proof on him to countersignal "fishy behavior", rather than simply laying out the behaviors that will not be tolerated, or setting up an escrow Bitcoin wallet?
Rationality isn't just about being skeptical, though, and there is something to be said for giving people the benefit of the doubt and engaging with them if they are willing to do so in an open manner. There are obviously limits to the extent to which you want to do so, but so far this thread has been an interesting read, so I wouldn't worry too much about us wasting our time.
It does mean that not-scams should find ways to signal that they aren't scams, and the fact that something does not signal not-scam is itself strong evidence of scam.
It might not be easy to figure out good signals that can't be replicated by scammers, though. More importantly, and what I think MarsColony_in10years is getting at, even if you can find hard-to-copy signals, they are unlikely to be without costs of their own, and it is unfortunate that scammers are forcing these costs on legitimate charities.
A high school student would say no, because by definition a molecule has more than one atom.
That depends entirely on your definition (which is the point of the quote I guess), I've heard people use it both ways.
that's why we left
I think he's mistaken in believing we left :-/
Well, we're working on it, ok ;)
We obviously haven't left nature behind entirely (whatever that would mean), but we have at least escaped the situation Brady describes, where we are spending most of our time and energy searching for our next meal while preventing ourselves from becoming the next meal for something else.
Life for the average human in first-world countries is definitely no longer only about eating and not dying.
Context: Brady is talking about a safari he took and the life the animals he saw were leading.
Brady: It really was very base, everything was about eating and not dying, pretty amazing.
Grey: Yeah, that is exactly what nature is, that's why we left.
-- Hello Internet
Might be more anti-naturalist than strictly rationalist, but I think it still qualifies.
It is very, very difficult not to give a superintelligence any hints of how the physics of our world work.
I wrote a short update to the post which tries to answer this point.
Maybe they notice minor fluctuations in the speed of the simulation based on environmental changes to the hardware.
I believe they should have no ability whatsoever to detect fluctuations in the speed of the simulation.
Consider how the world of World of Warcraft appears to an orc inside the game. Can the orc tell the speed at which the hardware is running the game?
It can't. What it can do is compare the speed of different things: how fast does an apple fall from a tree vs how fast a bird flies across the sky.
The orc's inner perception of the flow of time is based on comparing these things (e.g., how fast does an apple fall) to how fast their simulated brains process information.
If everything is slowed down by a factor of 2 (so you, as a player, see everything at half speed), nothing appears any different to a simulated being within the simulation.
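The point above can be sketched in a few lines. Everything the orc can measure is a ratio of one in-world rate to another, so a global slowdown factor cancels out of every observation. The function and rate names below are purely illustrative assumptions, not taken from any real engine:

```python
# Minimal sketch: in-world observables are ratios of rates, so a global
# slowdown applied to everything (including the orc's own brain) cancels.

def observables(slowdown: float) -> dict:
    """In-world event rates, all scaled by the same slowdown factor."""
    apple_fall_rate = 9.8 / slowdown      # how fast an apple falls
    bird_flight_rate = 12.0 / slowdown    # how fast a bird crosses the sky
    brain_tick_rate = 1000.0 / slowdown   # how fast the orc's brain runs
    # The orc can only ever measure one rate against another:
    return {
        "apple_vs_brain": apple_fall_rate / brain_tick_rate,
        "bird_vs_brain": bird_flight_rate / brain_tick_rate,
    }

# Run at full speed and at half speed: every internal measurement agrees.
assert observables(slowdown=1.0) == observables(slowdown=2.0)
```

Because the slowdown divides numerator and denominator alike, no in-world experiment can recover it.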
You are absolutely correct, they wouldn't be able to detect fluctuations in processing speed (unless those fluctuations had an influence on, for instance, the rounding errors in floating-point values).
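A toy illustration of that caveat: floating-point rounding depends on the order in which operations happen, so if hardware conditions ever changed the grouping of computations, the results could differ in a way that is in principle visible from inside the simulation. This is only a sketch of the mechanism, not a claim about any particular simulator:

```python
# Floating-point addition is not associative: the same three numbers summed
# in a different order round differently. If scheduling ever changed the
# grouping, an in-world observer could in principle notice the discrepancy.
a, b, c = 0.1, 0.2, 0.3
left_first = (a + b) + c
right_first = a + (b + c)
print(left_first == right_first)  # False: the two groupings round differently
```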
About update 1: It knows our world very likely has something approximating Newtonian mechanics, which is a lot of information by itself. But more than that, it knows that the real universe is capable of producing intelligent beings who chose this particular world to simulate. From a strictly theoretical point of view that is a crapton of information; I don't know if the AI would be able to figure out anything useful from it, but I wouldn't bet the future of humanity on it.
About update 2: That does work, provided it is implemented correctly, but it only works for problems that can be automatically verified by non-AI algorithms.
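The "automatically verified by non-AI algorithms" idea can be sketched as follows: accept an answer from the boxed AI only if a dumb checker confirms it, so nothing downstream ever has to trust the AI's reasoning. Factoring is the classic example, since finding factors is hard but checking them is trivial. The function below is a hypothetical illustration, not anyone's proposed protocol:

```python
# Sketch of answer verification without trust: a simple, non-AI checker
# confirms a claimed factorization by multiplying the factors back together.

def verify_factorization(n: int, factors: list[int]) -> bool:
    """Return True iff the claimed factors (each >= 2) multiply to n."""
    product = 1
    for f in factors:
        if f < 2:
            return False
        product *= f
    return product == n

# The checker never needs to know how the answer was found.
print(verify_factorization(91, [7, 13]))   # True
print(verify_factorization(91, [3, 31]))   # False
```

The limitation the comment points at is real: this only helps for problems with cheap verifiers, which excludes many of the questions one might most want to ask.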
I'm a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that.
Why not? You are pretty smart, and all you are is a combination of 10^11 or so very "dumb" neurons. Now imagine a "being" which is actually a very large number of human-level intelligences, all interacting...
Yeah, that didn't come out as clearly as it was in my head. If you have access to a large number of suitable less-intelligent entities, there is no reason you couldn't combine them into a single, more intelligent entity. The problem I see is the computational resources required to do so. Some back-of-the-envelope math:
I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn't accurate (anymore), it's probably still a good enough place to start. You mention running the simulation for a million years of simulated time; let's assume we can let the simulation run for a year of wall-clock time, which still requires a speedup of 8 orders of magnitude over the simulated cat.
But we're not interested in what a really fast cat can do; we need human-level intelligence. According to a quick wiki search, a human brain contains about 100 times as many neurons as a cat brain. If we assume this scales linearly (which it probably doesn't), that's another 2 orders of magnitude.
I don't know how many orcs you had in mind for this scenario, but let's assume a million (far fewer humans than it took in real life before mathematics took off, but presumably this world is better suited for mathematics to be invented); that is yet another 6 orders of magnitude of processing power that we need.
Putting it all together, we would need a computer with at least 10^16 times more processing power than modern supercomputers. Granted, that doesn't take into account a number of simplifications that could be built into the system, but it also doesn't account for the other parts of the simulated environment that require processing power. Now, I don't doubt that computers are going to get faster in the future, but 10 quadrillion times faster? It seems to me that by the time we can do that, we should have figured out a better way to create AI.
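The back-of-the-envelope estimate above can be checked by multiplying the factors out. Every input here is one of the comment's own stated assumptions (1% cat-brain speed, a million simulated years in one real year, 100x neurons, a million orcs), not a measured fact:

```python
import math

# Recompute the estimate from the comment's own assumptions.
cat_sim_slowdown = 100        # current supercomputers: cat brain at 1% speed
sim_years = 1e6               # simulated time wanted
wall_years = 1                # wall-clock time allowed
human_vs_cat_neurons = 100    # human brain ~100x the neurons of a cat
population = 1e6              # one million simulated orcs

# Speedup over today's cat simulation: 10^6 (time compression) * 10^2 (1% -> 100%).
speedup_needed = (sim_years / wall_years) * cat_sim_slowdown
total = speedup_needed * human_vs_cat_neurons * population
print(f"~10^{int(math.log10(total))} times today's supercomputers")  # ~10^16
```

So the 10^16 figure follows directly from the stated assumptions; the real uncertainty is in the assumptions themselves, not the arithmetic.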
- Keep the AI in a box and don't interact with it.
The rest of your posting is about how to interact with it.
Don't have any conversations with it whatsoever.
Interaction is far broader than just conversation. If you can affect it and it can affect you, that's interaction. If you're going to have no interaction, you might as well not have created it; any method of getting answers from it about your questions is interacting with it. The moment it suspects what is going on, it can start trying to play you, to get out of the box.
I'm at a loss to imagine how they would take over the world.
This is a really bad argument for safety. It's what the scientist says of his creation in sci-fi B-movies, shortly before the monster/plague/AI/alien/nanogoo escapes.
To be fair, all the interactions described happen after the AI has been terminated, which does put up an additional barrier for the AI to get out of the box. It would have to convince you to restart it without being able to react to your responses (apart from those it could predict in advance), and then it still has to convince you to let it out of the box.
Obviously, putting up additional barriers isn't the way to go and this particular barrier is not as impenetrable for the AI as it might seem to a human, but still, it couldn't hurt.
Barry Allen in Vanishing into Things
I read the source before reading the quote and was expecting a quote from The Flash.