It doesn't preclude Scenario B. It just makes it unlikely.
I have a "Many Worlds/QM" style interpretation of time turner mechanics. Basically, all of the possible interpretations of the information+metainformation you have transmitted via time turner "exists" or is in a kind of superposition, until receiving information precludes them. Making Scenario B overwhelmingly unlikely is precluding it.
No, because if she was able to provide that much information as a conscious communication, she would have provided enough information to fix her departure at a specific time.
In any case, there's probably some reason that would make it impossible for her to convey that much information within 6 hours anyhow.
I am going to have to accuse you of committing a grave Mind Projection Fallacy.
Apparently black holes preserve information. There are other connections between physics and information theory, such as the theoretical computers that can use ever smaller quantities of energy, so long as all of their operations are reversible. Given that, it doesn't seem unreasonable that there would be an information-theoretic component to the rules of magic. My formulation doesn't require a human mind. If I talk about minds or arbiters, or use language suggesting that, then that's just lazy writing on my part.
The problem here is that even if Scenario A and Scenario B are indistinguishable, Amelia's words still constitute Bayesian evidence on which Dumbledore can update his beliefs.
In my formulation, that's "side information." Really, my gedankenexperiment doesn't work unless Amelia Bones happens to be very ditzy concerning time.
I'm inclined to believe that whatever intelligence is behind capital-T Time is enforcing an intuitive definition of information, in the same way that brooms work off of Aristotelian mechanics.
So then, this is a limitation in the "interface" that the Atlantean engine is following. I think my hypothesis is testable.
This is precisely what I meant when I mentioned the empirical side information detector. The "informational point of view of Dumbledore" is "whatever-it-is that keeps histories consistent," and the indistinguishability only has to come into play in the local context of whenever Dumbledore uses the Time-Turner. As I've envisioned it working, Dumbledore can only use your algorithm to detect leaked information, or side information that was available to him which he might not be aware of.
Your formulation of "indistinguishable" was already invalidated on reddit.com/r/hpmor by a different objection to my hypothesis. When you lie, you leak information. That information just puts the situation into the 6-hour rule. This cuts off the rest of your reasoning below. It also shows how hard the 6-hour rule is to "fool," which in turn explains why it hasn't been figured out yet.
EDIT: Rewrote one sentence to put the normal 6-hour rule back.
EDIT: Basically, if all of the information Dumbledore can receive from Amelia Bones could lo...
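To put the "when you lie, you leak information" point in concrete terms, here is a minimal Bayesian update, with all the numbers invented for illustration. Unless Amelia's words are exactly as probable under Scenario A as under Scenario B, Dumbledore's posterior moves, and information has been transmitted:

```python
# Minimal Bayes update; all numbers are invented for illustration.
prior_A, prior_B = 0.5, 0.5

# Probability that Amelia says these exact words in each scenario.
# If the two likelihoods were equal, the statement would carry no information.
p_words_given_A = 0.30
p_words_given_B = 0.10

evidence = prior_A * p_words_given_A + prior_B * p_words_given_B
posterior_A = prior_A * p_words_given_A / evidence
posterior_B = prior_B * p_words_given_B / evidence

print(posterior_A, posterior_B)  # 0.75, 0.25: the words shifted Dumbledore's beliefs
```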
From Chapter 6:
...Harry was examining the wizarding equivalent of a first-aid kit, the Emergency Healing Pack Plus. There were two self-tightening tourniquets. A Stabilisation Potion, which would slow blood loss and prevent shock. A syringe of what looked like liquid fire, which was supposed to drastically slow circulation in a treated area while maintaining oxygenation of the blood for up to three minutes, if you needed to prevent a poison from spreading through the body. White cloth that could be wrapped over a part of the body to temporarily numb pain.
What the hell is green tech? Is it just more efficient tech? Or does it have less to do with the technology and more to do with economic agents acknowledging externalities, consciously choosing to internalize some of that cost?
I'll take that as an analogy for what it means to be a moral person. (It's another way of talking about Kant's Categorical Imperative.)
You're telling us that everyone should party with the million dollars for three days, and then die.
[Citation Needed] Ahem.
No, I'm not saying that. I'm painting the other position in a light that makes it understandable. Your analogy is incomplete. What if they could also donate that million dollars to other research that could increase the life expectancy of 1000 people by 1 year with 90% certainty?
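For concreteness, the expected-value arithmetic behind that alternative (only the 90% figure and the setup come from the comment above; the code just runs the numbers):

```python
# Expected life-years from the donation option described above.
# The 90% certainty, 1000 people, and 1 year each are the stated setup.
people, years_each, p_success = 1000, 1, 0.90
expected_life_years = people * years_each * p_success
print(expected_life_years)  # 900.0 expected life-years, versus a 3-day party
```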
Science is much worse at figuring out what is right, because its method of determining what is right is "Of all the possible hypotheses, we'll eliminate the wrong ones and choose the most probable of those that remain."
Someone should write a Sherlock script where someone uses Sherlock's principle, "when you have eliminated the impossible, whatever remains, however improbable, must be the truth," against him, so that he decisively takes the wrong action.
"Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".
There was a time when expectant mothers did the rational thing by not going to the maternity ward. http://www.ehso.com/ehshome/washing_hands.htm#History
Resources to be devoted to cryonics and a future lifespan could also be devoted to the lifespan you are fairly sure you have right now. The situation would be more like getting into an ambulance, whe...
It is important to be rational about charity for the same reason it is important to be rational about Arctic exploration: it requires the same awareness of opportunity costs and the same hard-headed commitment to investigating efficient use of resources.
In his Mars Direct talks, Robert Zubrin cited the shoestring-budget Amundsen expedition through the Northwest Passage, in comparison to around 30 contemporary government-funded expeditions with state-of-the-art steam frigates and huge logistics trains. The Amundsen expedition traveled in a cheap little sea...
There are massive intractable problems with human society on Earth at the moment which lack easy solutions (poverty, AIDS, overpopulation, climate change, social order).
Poverty - has always been with us. Many, many people are better off.
AIDS - we will solve this.
Overpopulation - population will stabilize at 10 billion. See 2nd link.
Climate change - see below.
Social order - so long as we don't extinguish ourselves, this will work itself out.
http://www.gapminder.org/videos/hans-rosling-ted-2006-debunking-myths-about-the-third-world/
For the longer term, it is hugely beyond our technological abilities
We could start colonizing Mars using nuclear rockets in 20 years, if we wanted to. Heck, if we wanted to badly enough, we could start it in 20 years with chemical rockets.
whatever determines our survival as a species for the next millennium will be decided on Earth. And we are struggling with that right now.
Certain things will be decided in the next century. We could colonize Mars, with agriculture but without terraforming, well inside that timeframe. When it comes to an issue like "specie...
How about large stations with artificial gravity and zero-G? We were launching 747-sized hulls 97% of the way into orbit, only to dispose of them, about once or twice a year for many, many years. (The Shuttle's external tank.) Large trampoline-sided spaces would result in really cool new sports and forms of art.
The problem with this (and related theories) is that soul believers hold that the soul itself can live and think without the body. Much of thinking is mediated by language. I don't think a believer in the soul would accept that their soul after death will be incapable of thought until God provides it a substitute pineal gland.
Actually, the concept of a soul without language makes more sense on its own, and fits more religious traditions (especially if you abandon literal translations), than that of souls that have language.
So, a little background- I've just come out as an atheist to my dad, a Christian pastor, who's convinced he can "fix" my thinking and is bombarding me with a number of flimsy arguments that I'm having trouble articulating a response to
Being articulate has nothing to do with the truth. If your dad isn't willing to explore where he's wrong, then you shouldn't be talking about your world views with him. If you can't establish your world view without him, then you're not ready to establish it at all.
I'd advise not worrying about "the big que...
The approximately 2% figure is interesting to me. This seems to be about the right frequency to be related to the small minority of jerks who will haze strangers out of sexist and/or racist motivations.
http://news.ycombinator.com/item?id=3736037
This might be related to the differences in the perception of the prevalence of racism between minorities and mainstream members of society. If one stands out in a crowd, then one can be more easily "marked" by individuals seeking to victimize someone vulnerable. This is something that I seem to have observed over ...
It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness. ... I could as easily reply that AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically.
A summary of your points: while conceivable, there's no reason to think it's at all likely. Ok. How...
To write a culture that isn't just like your own culture, you have to be able to see your own culture as a special case - not as a norm which all other cultures must take as their point of departure.
Most North Americans who fall into the rather arbitrary "white" category do not see their culture as a special case. "White" North Americans tend to see themselves as the "plain vanilla" universal human. Everyone else is a "flavor." In truth, vanilla is also a flavor, of course.
How do I know this? Because I'm of Kore...
If within our own lifetime we undergo such alien thought changes, alien thoughts in actual aliens will be alien indeed.
Indeed. However, I am beginning to think that by emphasizing the magnitude of the alienness of alien thought, we are trying to avoid complacency but also creating another kind of "woo."
Reason: cockroaches and the behavior of humans. We can and do kill individuals and specific groups of individuals. We can't kill all of them, however. If humans can get into space, the lightspeed barrier might let far-flung tribes of "human fundamentalists," to borrow a term from Charles Stross, survive, though individuals would often be killed and would never stand a chance in a direct conflict with a super AI.
Sounds silly, and it's not very hip, but FlyLady has worked very well for my girlfriend. Basically, they send you messages giving you mostly short (like 3-minute) tidying and cleaning missions. Your place gets messy a minute at a time, so they keep you cleaning in short intervals to counteract that.
When my girlfriend is participating, the difference is dramatic, and it stays that way for weeks at a time.
Which god? If by "God" you mean "something essentially perfect and infallible," then yes.
That one. The big man in the sky invented by shepherds doesn't interest me much. Just because I'm a better optimizer of resources in certain contexts than an amoeba doesn't make me perfect and infallible. Just because X is an orders-of-magnitude better optimizer than Y doesn't make X perfect and infallible. Just because X can rapidly optimize itself doesn't make it infallible, either. Yet when people talk about post-singularity super-optimizers, they seem to be talking about some sort of sci-fi God.
In a practical sense, I think this means you want to put yourself in situations where success is the default, expected result.
This is a little like "burning the boats."
http://techcrunch.com/2010/03/06/andreessen-media-burn-boats/
Isn't it almost certain that super-optimizing AI will result in unintended consequences? I think it's almost certain that super-optimizing AIs will have to deal with their own unintended consequences. Isn't the expectation of encountering intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?
Isn't the expectation of encountering intelligence so advanced that it's perfect and infallible essentially the expectation of encountering God?
Which god? If by "God" you mean "something essentially perfect and infallible," then yes. If by "God" you mean "that entity that killed a bunch of Egyptian kids" or "that entity that's responsible for lightning" or "that guy that annoyed the Roman empire 2 millennia ago," then no.
Also, essentially infallible to us isn't necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).
True story. Some years back, I was having trouble sleeping and decided I was getting too much light in the mornings. So I measured my bedroom windows, which were all different, odd widths, and went to Lowe's, where they sell nicely opaque vinyl blinds. I picked out the blinds I wanted, then went to the cutting machine and pressed the button to summon store help. The cutting machine turned the blinds, which were cut by a blade that screw-clamps to a metal bar marked off like a ruler. There were no detents or slots, so any width could be cut by simply moving the b...
http://www.crinfo.org/articlesummary/10594/
...Bushman society is fairly egalitarian, with power being evenly and widely dispersed. This makes coercive bilateral power-plays (such as war) less likely to be effective, and so less appealing. A common unilateral power play is to simply walk away from a dispute which resists resolution. Travel among groups and extended visits to distant relatives are common. As Ury explains, Bushmen have a good unilateral BATNA (Best Alternative to a Negotiated Agreement). It is difficult to wage war on someone who can simply wa
Computation market prices can and do go down. But since society can grow almost infinitely quickly (by copying ems), from an em's POV it's more relevant to say that everything else's price goes up.
A society of super-optimizers had better have a darn good reason for allowing resource use to outstrip N^3 (see the sketch below). (And no doubt, they often will.)
A society of super-optimizers that regulates itself in a way resulting in mass death either isn't so much super-optimized, or has a rather (to me) unsavory set of values.
...Otherwise we might as well talk about a society of &
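Reading N^3 as the light-cone ceiling on resource growth, a minimal sketch (assuming expansion at some speed v ≤ c through space of roughly uniform resource density; the uniform-density assumption is mine):

```latex
% Resources reachable by time t, expanding at speed v \le c through
% space of roughly uniform resource density \rho:
R(t) \approx \rho \cdot \tfrac{4}{3}\pi (v t)^3 = O(t^3)
% Any demand curve growing faster than t^3 (e.g., exponential em copying,
% which doubles on a fixed timescale) must eventually outstrip R(t).
```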
Instead of the deletion or killing of uploads that want to live but can't cut it economically, why not slow them down? (Perhaps to the point where they are only as "quick" and "clever" as an average human being is today.) Given that the cost of computation keeps decreasing, this should impose a minimal burden on society going forward. This could also be an inducement to find better employment, especially if employers can temporarily grant increased computation resources for the purposes of the job.
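A toy sketch of the throttling idea, purely illustrative (the class, the prices, and the budget numbers are all made up):

```python
# Toy model: throttle an em that can't pay its way, rather than deleting it.
class Em:
    def __init__(self, name, speed=1.0):
        self.name = name
        self.speed = speed  # subjective speed relative to a baseline human

def rebalance(em, budget, price_per_speed_unit, min_speed=1.0):
    """Slow an em down to what it can afford, but never below human baseline."""
    affordable = budget / price_per_speed_unit
    em.speed = max(min_speed, min(em.speed, affordable))

worker = Em("worker", speed=1000.0)
rebalance(worker, budget=50.0, price_per_speed_unit=1.0)
print(worker.speed)  # 50.0: slowed, not deleted; still 50x baseline

# As computation cheapens, price_per_speed_unit falls, so keeping everyone at
# min_speed costs ever less, which is the "minimal burden" point above.
```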
From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, and possibly captured and enslaved.
Perhaps this varies because of local environmental/economic conditions. From my undergraduate studies, I seem to remember that !Kung Bushmen would sometimes walk away from conflicts into another group.
In my experience, Pandora simply doesn't tend to give me music that I like even when I put in an artist that I like.
Yes, Pandora does give me music with qualities in common with the music I like. It's just that those aren't the qualities that make me really like the music. Instead, I just get ho-hum doppelgangers of bands that I like.
Perhaps we should view our moral intuitions as yet another evolved mechanism: imperfect and arbitrary, though they worked well enough for hunter-gatherers.
When we lived as hunter-gatherers, an individual could find a group with compatible moral intuitions or walk away from a group with incompatible ones. The chance that an unpleasant individual's moral intuitions one valley over would affect you was minimal.
One should note, though, that studies of murder rates amongst hunter-gatherer groups found that they were on the high side compared to industrialized societies.
Conversations seem to occur on several levels simultaneously. There's a level of literal truth. There are also multiple dimensions of politics (what I call "micro" and "macro," analogous to their use in economics). There's even a meta-level that consists of just trying to overwhelm people with verbiage.