You put them into a social environment where the high-status people value logic and evidence. You give them the plausible promise that they can increase their status in that environment by increasing the amount that they value logic and evidence.
How would this encourage them to actually value logic and evidence instead of just appearing to do so?
I have a question about timeless physics. If the future state of the universe is based only on the current state, with no reference to time, then what determines how much the universe changes from state to state? Removing time seems to reintroduce Zeno's paradox: either the universe changes in discrete steps, or something else has to keep track of how much the universe changes at each step, and the only way I can think of to measure how much it changes is a derivative with respect to "time".
Any better insights?
Do you think continuous spatial + temporal dimensions have problems that continuous spatial dimensions alone lack? If so, what and why?
There's a question in OkCupid that asks "In some sense, wouldn't nuclear war be exciting?" which [I immediately answered no and rated everyone who said yes as completely undateable] I think falls into this same class of bug, but I can't quite put my finger on how to describe it.
Wouldn't the failure to acknowledge all the excitement nuclear war would cause be an example of the horns effect?
I immediately answered no and rated everyone who said yes as completely undateable
I can understand answering no for emotional or political reasons, but rating the epistemically correct answer as undateable? That's... a good reason for me to answer such questions honestly, actually.
Is it "awesome" to crush your enemies, see them driven before you, and hear the lamentations of the women?
Is it "awesome" to be the one who gets crushed?
Given you have enemies you hate deeply enough? Yes.
Having such enemies in the first place? Definitely not.
I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.
Take a modern company with a broad reach. The convenience store, CVS, say. Yes, there is a big organizational hierarchy staffed by people. But there is also a massive data collecting and business intelligence aspect. Every time they try to get you to swipe your CVS card when you buy toothpaste, they are collecting information which they then mine for patterns on how they stock shelves and price things.
That's just business. It's also a sophisticated execution of intelligence that is far beyond the capacity of an individual person.
I don't understand your point about specialization. Can you elaborate?
Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.
I think that you are underestimating the efficiency of intersystem communication in a world where a lot of organizational communication is handled through information technology.
Speech and reading seem to be at most 60 bits per second. A single neuron is faster than that.
Compare that to the human brain: the optic nerve transmits about 10 million bits per second, and I'd expect interconnections between brain areas to generally fall within a few orders of magnitude of that.
I'd call five orders of magnitude a serious bottleneck and don't really see how it could be significantly improved without cutting humans out of the loop. That's what your data mining example does, but it's only as good as the algorithms behind it. And when those approach human level we get AI.
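To make the "five orders of magnitude" claim concrete, here's a quick back-of-the-envelope calculation using the rough figures above (60 bits/s for speech and reading, 10 million bits/s for the optic nerve):

```python
import math

# Rough estimates from the discussion above.
speech_bps = 60                # upper bound on speech/reading, bits per second
optic_nerve_bps = 10_000_000   # optic nerve throughput, bits per second

ratio = optic_nerve_bps / speech_bps
orders_of_magnitude = math.log10(ratio)

print(f"ratio: {ratio:,.0f}x")                           # ~166,667x
print(f"orders of magnitude: {orders_of_magnitude:.1f}")  # ~5.2
```

So the gap between intra-brain and human-to-human bandwidth really does come out to roughly five orders of magnitude, given these estimates.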
I don't understand your point about specialization. Can you elaborate?
Individual humans have ridiculous amounts of overlap in skills and abilities. Basic levels of housekeeping, social skills etc. are pretty much assumed. A lot of that is necessary given our social instincts and organizational structures: a savant may outperform anyone in a specific field, but good luck integrating them in an organization.
I'm not sure how much specialization can be improved with baseline humans, but relaxing the constraint that everyone should be able to function independently in the wider society might help. Also, focused training from a young age could be useful in creating genius-level specialists, but that takes time.
Also, I don't understand what the difference between a 'superintelligence' and a 'sped-up human' would be that would be pertinent to the argument.
Given a large enough speedup and indefinite lifespan, pretty much none. The analogy may have been poorly chosen.
At a glance this seems pretty silly, because the first premise fails. Organizations don't have goals. That's the main problem. Leaders have goals, which frequently conflict with the goals of their followers and sometimes with the existence of the organization.
Do humans have goals in this sense? Our subsystems seem to conflict often enough.
An organization could be viewed as a type of mind with an extremely redundant, modular structure. Human minds contain a large number of interconnected specialized subsystems; in an organization, humans would be the subsystems. Comparing the two seems illuminating.
Individual subsystems of organizations are much more powerful and independent, making them very effective at scaling and multitasking. This is of limited value, though: it mostly just means organizations can complete parallelizable tasks faster.
Intersystem communication is horrendously inefficient in organizations: bandwidth is limited to speech/typing and latency can be hours. There are tradeoffs here: military and emergency response organizations cut the latency down to seconds, but that limits the types of tasks the subsystems can effectively perform. Humans suck at multitasking and handling interruptions. Communication patterns and quality are more malleable, though. Organizations like Apple and Google have had some success in creating environments that leverage human social tendencies to improve on-task communication.
Specialization seems like a big one. Most humans are to some degree interchangeable: what one can do, most others can do less effectively, or at least learn given time. There are ways to improve individual specialization, but barring radical cultural or technological change, we're pretty much stuck on that front.
Mostly organizations seem limited by the competence of their individual members. They do more, not better. Specialization and communication seem to be the limiting factors and I'm not sure if they can make enough of a difference even in theory to qualify as a superintelligence, except in the sense a sped-up human would.
Thoughts?
This assumes there is such a thing as a particular stream of consciousness, rather than your brain retconning a stream of consciousness to you when you bother to ask it (which is what appears to happen).
Yes it does assume that. However, we have plenty of evidence for this hypothesis.
My memory, and the memory of humans and higher mammals alike, has tremendous predictive power. For example, I remember a particular National Lampoon magazine cartoon from about 40 years ago, with a topless boxer chanting "I am the queen of england, I like to sing and dance, and if you don't believe me, I will punch you in the pants." I recently saw a DVD purporting to have all National Lampoons recorded digitally on it; I bought it, and sure enough, the cartoon was there.
It seems clear to me that if conscious memory is predictive of future physical experience, it is drawn from something local to the Everett Branch my consciousness is in.
Let me design an experiment to test this. Set up a Schrodinger's cat experiment, including a time display that will show the time at which the cat was killed, if in fact the cat is killed. Once I open the lid of the box and find the cat dead, I look at the time it was killed, record the time on a piece of paper which I put in a box on the table next to me, and then close the box. I reopen it many subsequent times, and each time I record the time on a piece of paper and put it in the box, or record "N/A" on the paper if the cat is still alive.
My prediction is that every time I open the box with the memory of seeing the dead cat, I will still see the dead cat. Further, I predict that the time on the decay timer will be the same every time I reopen the box. This in my opinion proves that memory sticks with the branch my consciousness is in. Even if we only saw the same time 99 times out of 100, it would still prove that memory sticks, but not perfectly, with the branch my consciousness is in, which would then be a fact that physics explaining what I experience of the world would have to explain.
Having not explicitly done this experiment, I cannot claim for sure that we will conclude my consciousness is "collapsing" onto an Everett branch just as in the Copenhagen interpretation it was the wave function that collapsed. But I will bet $100 against $10,000 if anybody wants to do the experiment. The terms of the bet: if you have a set-up that shows the contrary result, that consciousness apparently dredges up memories of different nearby Everett branches by seeing different times on the timer, then I will come to where you are with your set-up, and if you can show me it working for both you and me, you get the $10,000; otherwise I get the $100 to defray my travel expenses. I reserve the right to pass on checking your set-up out if travel costs would be over $600, but for me that covers a good fraction of the world (I am in Sandy Eggo in this Everett branch).
Fortunately for you cat lovers, the experiment can be done without the cat. You simply need to measure the time of a radioactive decay; killing a cat with cyanide on detection of the decay is not necessary to win or lose the bet (or to prove the point).
Note that the box of papers with recorded times in it can also be used as evidence. If I open that box and all the papers have the same time written on them, and that is the time I remember, then I take this as strong evidence that my memory has been returning memories from only the current Everett branch. If my memory were unhooked from this Everett branch, then one would expect the physical evidence of what I had previously remembered, which is in this Everett branch, to include times from other Everett branches. If it does not, then I think we can conclude that human consciousness, including its memories, is branch-local: that a "collapse" occurs in MWI when we attempt to use it to predict what we will experience in this universe.
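The predicted outcome can be sketched as a toy simulation. To be clear, the class name and the uniform decay-time model below are my own inventions, purely illustrative: in a branch-local world, the decay time is fixed the first time the observer opens the box, and every later observation in that branch returns the same value, so all the papers match by construction. The experimental claim is that physical reality behaves like this model rather than resampling the time on each observation.

```python
import random

class BranchLocalBox:
    """Toy model: once the box is first opened, this branch's history is fixed."""

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # Decay time (minutes), sampled once when the branch decoheres.
        self._decay_time = round(rng.uniform(0, 60), 2)

    def observe(self):
        # Same branch => same history: every observation re-reads the same value.
        return self._decay_time

box = BranchLocalBox(seed=42)
papers = [box.observe() for _ in range(100)]  # 100 re-openings, 100 papers

# The prediction: every paper in the box records the same time.
print(len(set(papers)))  # 1
```

If consciousness instead sampled a fresh nearby branch on each opening, the model would resample `_decay_time` in `observe()`, and the set of recorded times would grow with each observation.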
And indeed, I think predicting what we will experience is the hallmark of all good theories of how the universe works. We may say we want to predict "what will happen," but I believe by this we mean "what I will see happen."
I haven't seen one example of a precise definition of what constitutes an "observation" that's supposed to collapse the wavefunction in the Copenhagen interpretation. Decoherence, OTOH, seems to perfectly describe the observed effects, including the consistency of macro-scale history.
This in my opinion proves that memory sticks with the branch my consciousness is in.
Actually it just proves that memory sticks with the branch it's consistent with. For all we know, our consciousnesses are flitting from branch to branch all the time and we just don't remember because the memories stay put.
We may say we want to predict "what will happen," but I believe by this we mean "what I will see happen."
Yeah, settling these kinds of questions would be much easier if we weren't limited to the data that manages to reach our senses.
In MWI the definition of "I" is not quite straightforward: the constant branching of the wavefunction creates multiple versions of everyone inside, producing indexical uncertainty that we experience as randomness.
Sometimes I still marvel about how in most time-travel stories nobody thinks of this.
The alternate way of computing this is to not actually discard the future, but to split it off to a separate timeline so that you now have two simulations: one that proceeds normally aside from the time-traveler having disappeared from the world, and one that's been restarted from an earlier date with the addition of the time traveler. Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead for your loved ones in the timeline that you just left - but generally smaller than erasing a universe entirely.
Of course, this has its own moral dilemmas as well - such as the fact that you're as good as dead for your loved ones in the timeline that you just left - but generally smaller than erasing a universe entirely.
You could get around this by forking the time traveler with the universe: in the source universe it would simply appear that the attempted time travel didn't work.
That would create a new problem, though: you'd never see anyone leave a timeline, but every attempt would result in the creation of a new one with a copy of the traveler added at the destination time. A persistent traveler could generate any number of timelines differing only by the number of failed time travel attempts made before the successful one.
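A minimal sketch of this forking model (the class and function names are mine, purely illustrative): each time-travel attempt leaves the source timeline untouched and spawns a new timeline containing a copy of the traveler at the destination time.

```python
from dataclasses import dataclass, field

@dataclass
class Timeline:
    year: int
    people: list = field(default_factory=list)

def attempt_time_travel(timelines, source_idx, traveler, dest_year):
    """Fork the traveler with the universe: the source timeline is untouched;
    a new timeline is created with a copy of the traveler added."""
    src = timelines[source_idx]
    fork = Timeline(year=dest_year, people=src.people + [traveler + " (copy)"])
    timelines.append(fork)
    return timelines

timelines = [Timeline(year=2024, people=["Alice", "Bob"])]
for _ in range(3):  # a persistent traveler tries three times
    attempt_time_travel(timelines, 0, "Bob", 1955)

print(len(timelines))       # 4: the original plus one fork per attempt
print(timelines[0].people)  # ['Alice', 'Bob'] -- from inside, nothing happened
```

From inside the source timeline every attempt looks like a failure (Bob never leaves), yet each one adds a fresh timeline to the ensemble, which is exactly the proliferation described above.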
(Except when it's a novel and the text on the back cover spoils events from the middle of the book or later, which I would have preferred not to read until the right time.)
Spoilers matter less than you think.