All of stcredzero's Comments + Replies

Conversations seem to occur on several levels simultaneously. There's a level of literal truth. There are also multiple dimensions of politics. (What I call "micro" and "macro," analogous to the way those terms are used in economics.) There's even a meta-level that consists of just trying to overwhelm people with verbiage.

Well, I note in a comment somewhere that it would have to be a version of Amelia who was rather ditzy about time.

It doesn't preclude scenario B. It just makes it unlikely.

I have a "Many Worlds/QM" style interpretation of time turner mechanics. Basically, all of the possible interpretations of the information+metainformation you have transmitted via time turner "exists" or is in a kind of superposition, until receiving information precludes them. Making Scenario B overwhelmingly unlikely is precluding it.

0DanielLC
How unlikely does it need to be to be precluded? The given Scenario A is pretty unlikely.

It's very possible to distinguish the two situations. The same probabilistic mechanism that determines the arrow of time precludes scenario B. Also, it's not really that Dumbledore is actually making the distinction; it's more a matter of whether he could.

0DanielLC
It doesn't preclude scenario B. It just makes it unlikely. The same could be said about the original scenario A. It's possible that Amelia Bones was mistaken about when she came back, but it's unlikely. The probability is more extreme, but the information is still there.

No, because if she was able to provide that much information as a conscious communication, she will have provided enough information to fix her departure at a specific time.

In any case, there's probably some reason that would make it impossible for her to convey that much information inside 6 hours, anyhow.

0DanielLC
How about this: Scenario A: Amelia Bones comes back from six hours in the future and provides large amounts of evidence of this fact, and of what happens. Scenario B: Due to quantum randomness, a large number of particles happen to jump into the spot to create a clone of Amelia Bones who believes she is from the future and carries evidence of this. It is, again, impossible for Dumbledore to tell which of these situations happens. Yet the time turner does not work.

I am going to have to accuse you of making a grave Mind Projection Fallacy

Apparently black holes preserve information. There are other connections between physics and information theory, such as the theoretical computers that can use ever smaller quantities of energy, so long as all of their operations are reversible. Given that, it doesn't seem unreasonable that there would be an information-theoretic component to the rules of magic. My formulation doesn't require a human mind. If I talk about minds or arbiters, or use language suggesting that, then that's just lazy writing on my part.

1[anonymous]
The most obvious instance of Information Theory/Bayesian Statistics overlapping physics is Thermodynamics and Statistical Mechanics, which both deal with the notion of Entropy (all physical entropy is Shannon Entropy). The Information Problem of black holes is a question of their entropic behaviour, and really, black holes are sort of a grey area of our map of reality. Your formulation exactly hinges on information, not observables. I read it twice. It is much more likely that the 6 hours is a conceptual limitation, and that the HPMOR-verse is consistent if not causal, either by recomputation or by being the solution to an equation, and just has time travel built in. Conceptual limitations are apparent in other branches of magic, and I would hazard a guess that time travel requires energy scaling with the amount of time jumped back. This would put a practical limit on it too.
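A minimal sketch of the overlap being described, assuming nothing beyond the standard definitions: Gibbs entropy and Shannon entropy have the same functional form, differing only in the base of the logarithm and a constant factor. The microstate distribution below is an arbitrary illustrative example.

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def gibbs_entropy(probs):
    """Gibbs entropy S = -k_B * sum(p * ln p), in J/K -- same form, rescaled."""
    return -K_B * sum(p * math.log(p) for p in probs if p > 0)

# An arbitrary microstate distribution, purely for illustration.
microstates = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(microstates))  # 1.75 bits
print(gibbs_entropy(microstates))    # the same quantity in thermodynamic units
```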

I only saw the 91-92 thread and didn't think it fit there. Other threads that I found were marked as superseded.

All information is probabilistic, Bayesian.

Is there a rigorous argument for this, or is this just a very powerful way of modeling the world?

0TobyBartels
In discussions here (ETA: meaning, in the Less Wrong community), I mostly take it for granted that people have adopted the Bayesian perspective promoted in Eliezer's sequences. I think that one can make a pretty good argument (although mathematical rigour is too much to ask for) that receiving information through one's senses can never be enough to justify absolute certainty about anything external. But I'd rather not try to make it here (ETA: meaning, in this discussion thread).
0linkhyrule5
It's more that Bayesian Analysis is a technique you can apply to anything, and under certain conditions is useful.

The problem here is that even if Scenario A and Scenario B are indistinguishable, Amelia's words still constitute Bayesian evidence on which Dumbledore can update his beliefs.
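A toy sketch of that update, with made-up numbers (the priors and likelihoods below are assumptions for illustration, not anything from the story): if Amelia's testimony is equally likely under both scenarios, the likelihoods cancel and the vastly different priors do all the work. "Unlikely" is not "zero," which is DanielLC's point.

```python
def bayes_update(prior_a, prior_b, lik_a, lik_b):
    """Posterior probabilities over two competing hypotheses,
    renormalized over just these two."""
    joint_a, joint_b = prior_a * lik_a, prior_b * lik_b
    total = joint_a + joint_b
    return joint_a / total, joint_b / total

# Scenario A: Amelia genuinely returned from six hours in the future.
# Scenario B: a thermodynamic miracle assembled a clone who believes she did.
prior_a, prior_b = 1e-6, 1e-60  # B needs a fantastically improbable particle jump

# Her detailed testimony is (by stipulation) equally probable under both.
p_a, p_b = bayes_update(prior_a, prior_b, lik_a=0.9, lik_b=0.9)
print(f"P(A|testimony) = {p_a:.6g}")  # ~1: the prior dominates
print(f"P(B|testimony) = {p_b:.6g}")  # ~1e-54: precluded in practice, not in principle
```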

In my formulation, that's "side information." Really, my gedankenexperiment doesn't work unless Amelia Bones happens to be very ditzy concerning time.

I'm inclined to believe that whatever intelligence is behind capital-T Time is enforcing an intuitive definition of information, in the same way that brooms work off of Aristotelian mechanics.

So then, this is a limitation in the "interface" that the Atlantean engine is following. I think my hypothesis is testable.

I don't think the path of a single neutrino could do it. Answer this: from the informational POV of Dumbledore's location in space-time, is path P of that neutrino any less consistent with Scenario A than with Scenario B?

3DanielLC
I don't think you quite get what I'm saying. You have given two scenarios that Dumbledore cannot distinguish. This proves that he has incomplete information. It does not prove that he has no information. That would require that he be unable to distinguish any two scenarios. Imagine that Amelia Bones tells Dumbledore everything that happened up until her departure from the future, except the path of some neutrino. Furthermore, thanks to Dumbledore's Legilimency, he knows she's telling the truth. Can he distinguish between Scenario C, what actually happened, and Scenario C', which is just like C, except that the neutrino went left instead of right? Is the inability to distinguish C and C' enough for Dumbledore to be able to go back another six hours? If not, how is distinguishing C and C' different from distinguishing A and B?

This is precisely what I meant when I mentioned the empirical side information detector. The "informational point of view of Dumbledore" is "whatever-it-is that keeps histories consistent," and the indistinguishability only has to come into play in the local context of whenever Dumbledore uses the time turner. In the way I've envisioned it to work, Dumbledore can only use your algorithm to detect leaked information or side-information that was available to him which he might not be aware of.

Your formulation of "indistinguishable" was already invalidated on reddit.com/r/hpmor by a different objection to my hypothesis. When you lie, you leak information. That information just puts the situation into the 6-hour rule. This cuts off the rest of your reasoning below. It also shows how hard the 6-hour rule is to "fool," which in turn explains why it hasn't been figured out yet.

EDIT: Rewrote one sentence to put the normal 6-hour rule back.

EDIT: Basically, if all of the information Dumbledore can receive from Amelia Bones could lo... (read more)

From Chapter 6:

Harry was examining the wizarding equivalent of a first-aid kit, the Emergency Healing Pack Plus. There were two self-tightening tourniquets. A Stabilisation Potion, which would slow blood loss and prevent shock. A syringe of what looked like liquid fire, which was supposed to drastically slow circulation in a treated area while maintaining oxygenation of the blood for up to three minutes, if you needed to prevent a poison from spreading through the body. White cloth that could be wrapped over a part of the body to temporarily numb pain.

... (read more)
6[anonymous]
Huh, reading that quote again, it occurs to me that Harry doesn't reach for the oxygenating potion; he reaches for the syringe of glowing orange liquid that was the oxygenating potion. A truly prepared murderer would merely have to replace the syringe with... something else.
3William_Quixote
Man, that's brutal
0[anonymous]
How bad is it for someone's legs to be missing?

Yes, but instead of the mechanism making the beliefs more radical in the context of the whole society, it acts to make beliefs more mainstream. Though one could argue that a more jingoistic China would be more radical in the analogous larger context.

What the hell is green tech? Is it just more efficient tech? Or does it have less to do with the technology and more to do with economic agents acknowledging externalities, consciously choosing to internalize some of that cost?

I'll take that as an analogy for what it means to be a moral person. (It's another way of talking about Kant's Categorical Imperative.)

A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.

What does it mean when people hold difficult to understand moral opinions?

You're telling us that everyone should party with the million dollars for three days, and then die.

[Citation Needed] Ahem.

No, I'm not saying that. I'm painting the other position in a light so it's understandable. Your analogy is incomplete. What if they could also donate that million dollars to other research that could increase the life expectancy of 1000 people by 1 year with 90% certainty?

0DaFranker
Ah, yes, of course. I hadn't included any opportunity costs in the calculation, and (perhaps deliberately, though if so I can't remember why) framed the problem as a two-option dilemma when in real life it's obvious to most that this is a false dilemma. As I stated in response to another comment, these were rough same-ballpark-expected-utility numbers. My response was attempting to make a closer-to-real-world referent available as contrast to the ambulance situation, and illustrate the other numbers of the equation as proportionally as possible (to the resulting EU; the individual numbers aren't nearly in the right orders of magnitude for real cryo). I'm not claiming that I have an actual solution to the problem or know which is the right thing to do out of all the many options (there are more than the three we've said here, I'm rather confident we agree on that), even for my own utility function, partially because of the black box problem but also because of a lack of information and credence in my current estimates of the various numbers.

Science is much worse at figuring out what is right, because its method of determining what is right is "Of all the possible hypotheses, we'll eliminate the wrong ones and choose the most probable of what remains".

Someone should write a Sherlock script, where someone uses Sherlock's principle: "when you have eliminated the impossible, whatever remains, however improbable, must be the truth," against him, so that he decisively takes the wrong action.

6Pudlovich
It was done by Doyle himself. In 1898 he published two short stories - "The Lost Special" and "The Man with the Watches" - where "an amateur reasoner of some celebrity" participates in solving a crime mystery and fails. It was written after Doyle killed off Sherlock, so he is probably parodying the character - he was quite tired of him at the time.

"Call me when cryonicists actually revive someone," they say; which, as Mike Li observes, is like saying "I refuse to get into this ambulance; call me when it's actually at the hospital".

There was a time when expectant mothers did the rational thing by not going to the maternity ward. http://www.ehso.com/ehshome/washing_hands.htm#History

Resources to be devoted to cryonics and a future lifespan could also be devoted to the lifespan you are fairly sure you have right now. The situation would be more like getting into an ambulance, whe... (read more)

0DaFranker
Ahem. Am I reading this right? There's a 20-year-old human with three days left to live. They have a choice: Either they spend a million dollars having fun during those three days, or invest that million dollars in research to find a cure for their unique illness and put themselves on life support in the meantime. There is only 10% chance that a cure will be found within <10 years (after which life support fails), but if it is found, they gain all of their remaining life expectancy, which is probably more than 50 years. You're telling us that everyone should party with the million dollars for three days, and then die.

It is important to be rational about charity for the same reason it is important to be rational about Arctic exploration: it requires the same awareness of opportunity costs and the same hard-headed commitment to investigating efficient use of resources

In his Mars Direct talks, Robert Zubrin cited the shoestring-budget Amundsen expedition through the Northwest Passage, in comparison to around 30 contemporary government-funded expeditions with state-of-the-art steam frigates and huge logistics trains. The Amundsen expedition traveled in a cheap little sea... (read more)

So the real threat to humanity is the machines that humanity will become. (Is in the process of becoming.)

There are massive intractable problems with human society on earth at the moment which lack easy solutions (poverty, AIDS, overpopulation, climate change, social order).

Poverty - has always been with us. Many, many people are better off.
AIDS - We will solve this.
Overpopulation - Population will stabilize at 10 billion. See 2nd link.
Climate change - see below.
Social order - so long as we don't extinguish ourselves, this will work itself out.

http://www.gapminder.org/videos/hans-rosling-ted-2006-debunking-myths-about-the-third-world/

http://www.ted.co... (read more)

For the longer term, it is hugely beyond our technological abilities

We could start colonizing Mars using nuclear rockets in 20 years, if we wanted to. Heck, if we wanted to badly enough, we could start it in 20 years with chemical rockets.

whatever determines our survival as a species for the next millennium will be decided on earth. And we are struggling with that right now.

Certain things will be decided in the next century. We could colonize Mars, with agriculture but without terraforming, well inside that. When it comes to an issue like "specie... (read more)

How about large stations with artificial gravity and zero-G? We were launching 747-sized hulls 97% of the way into orbit, only to dispose of them, about once or twice a year, for many, many years. (Shuttle main tank.) Large trampoline-sided spaces would result in really cool new sports and forms of art.

The problem with this (and related theories) is that the soul believers believe that the soul itself can live and think without the body. Much of thinking is mediated by language. I don't think a believer in soul would accept that their soul after death will be incapable of thought until God provides it a substitute pineal gland.

Actually, the concept of soul without language makes more sense on its own and fits more religious traditions (especially if you abandon literal translations) than souls that have language.

So, a little background- I've just come out as an atheist to my dad, a Christian pastor, who's convinced he can "fix" my thinking and is bombarding me with a number of flimsy arguments that I'm having trouble articulating a response to

Being articulate has nothing to do with the truth. If your dad isn't willing to explore where he's wrong, then you shouldn't be talking about your world views with him. If you can't establish your world view without him, then you're not ready to establish it at all.

I'd advise not worrying about "the big que... (read more)

You're assuming that there's always an answer for the more intelligent actor. Only happens that way in the movies. Sometimes you get the bear, and sometimes the bear gets you.

Sometimes one can pin one's hopes on the laws of physics in the face of a more intelligent foe.

There's lots of scope for great adventure stories in dystopian futures.

The approx 2% figure is interesting to me. This seems to be about the right frequency to be related to the small minority of jerks who will haze strangers out of sexist and/or racist motivations.

http://news.ycombinator.com/item?id=3736037

This might be related to the differences in the perception of the prevalence of racism between minorities and mainstream members of society. If one stands out in a crowd, then one can be more easily "marked" by individuals seeking to victimize someone vulnerable. This is something that I seem to have observed over ... (read more)

It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness. ... I could as easily reply that an AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically.

A summary of your points is that: while conceivable, there's no reason to think it's at all likely. Ok. How... (read more)

2DanArmak
Regarding lasers: I could list things the attackers might do to succeed. But I don't want to discuss it because we'd be speculating on practically zero evidence. I'll merely say that I would rather that my hopes for the future do not depend on a failure of imagination on part of an enemy superintelligent AI.
0DanArmak
It's more fun to me to think about pleasant extremely improbable futures than unpleasant ones. To each their own.

To write a culture that isn't just like your own culture, you have to be able to see your own culture as a special case - not as a norm which all other cultures must take as their point of departure.

Most North Americans that fall into the rather arbitrary "white" category do not see their culture as a special case. "White" North Americans tend to see themselves as the "plain vanilla" universal human. Everyone else is a "flavor." In truth, vanilla is also a flavor, of course.

How do I know this? Because I'm of Kore... (read more)

If within our own lifetime we undergo such alien thought changes, alien thoughts in actual aliens will be alien indeed.

Indeed. However, I am beginning to think that by emphasizing the magnitude of the alienness of alien thought, we intend to avoid complacency but are also creating another kind of "woo."

Reason: Cockroaches and the behavior of humans. We can and do kill individuals and specific groups of individuals. We can't kill all of them, however. If humans can get into space, the lightspeed barrier might let far-flung tribes of "human fundamentalists," to borrow a term from Charles Stross, survive, though individuals would often be killed and would never stand a chance in a direct conflict with a super AI.

0DanArmak
In itself that doesn't seem to be relevant evidence. "There exist species that humans cannot eradicate without major coordinated effort." It doesn't follow either that the same would hold for far more powerful AIs, or that we should model the AI-human relationship on humans-cockroaches rather than humans-kittens or humans-smallpox. It's easy to imagine specific scenarios, especially when generalizing from fictional evidence. In fact we don't have evidence sufficient to even raise any scenario as concrete as yours to the level of awareness. I could as easily reply that an AI that wanted to kill fleeing humans could do so by powerful enough directed lasers, which will overtake any STL ship. But this is a contrived scenario. There really is no reason to discuss it specifically. (For one thing, there's still no evidence human space colonization or even solar system colonization will happen anytime soon. And unlike AI it's not going to happen suddenly, without lots of advance notice.)

What if the AI are advanced over us as we are over cockroaches, and the superintelligent AI find us just as annoying, disgusting, and hard to kill?

0DanArmak
What reason is there to expect such a thing? (Not to mention that, proverbs notwithstanding, humans can and do kill cockroaches easily; I wouldn't want the tables to be reversed.)

I wonder if a DDR version of Dual-N-Back could be devised?

3shokwave
It could! Exactly like normal DDR, except you only stomp a pad if it's the same direction as the N-back arrow was. The distribution of arrows would have to change somewhat.
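A sketch of that rule, under stated assumptions: the direction set, the 30% forced-match rate, and all names here are invented for illustration. Generate an arrow stream, then stomp only when the current arrow matches the one N steps back; the match_rate parameter reflects shokwave's note that the arrow distribution would have to change.

```python
import random

DIRECTIONS = ["up", "down", "left", "right"]

def arrow_stream(length, n, match_rate=0.3):
    """Arrow sequence with roughly match_rate deliberate N-back matches."""
    seq = [random.choice(DIRECTIONS) for _ in range(n)]
    for i in range(n, length):
        forced = random.random() < match_rate
        seq.append(seq[i - n] if forced else random.choice(DIRECTIONS))
    return seq

def should_stomp(seq, i, n):
    """Stomp the pad only if the current arrow equals the arrow N steps back."""
    return i >= n and seq[i] == seq[i - n]

seq = arrow_stream(20, n=2)
for i, arrow in enumerate(seq):
    print(i, arrow, "STOMP" if should_stomp(seq, i, n=2) else "-")
```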

Sounds silly, and it's not very hip, but Fly Lady has worked very well for my girlfriend. Basically, they send you messages giving you mostly short (like 3 minute) tidying and cleaning missions. Your place gets messy a minute at a time, so they keep you cleaning for short intervals to counteract that.

http://flylady.net/

When my girlfriend is participating, the difference is dramatic, and it stays that way for weeks at a time.

2[anonymous]
flylady does more than that, too. A whole bunch of principles and habits for keeping the house in order. Also the products are good. I'll second flylady.
2CronoDAS
My mom's used that too.

Which god? If by "God" you mean "something essentially perfect and infallible," then yes.

That one. Big man in sky invented by shepherds doesn't interest me much. Just because I'm a better optimizer of resources in certain contexts than an amoeba doesn't make me perfect and infallible. Just because X is orders of magnitude a better optimizer than Y doesn't make X perfect and infallible. Just because X can rapidly optimize itself doesn't make it infallible either. Yet when people talk about the post-singularity super-optimizers, they seem to be talking about some sort of Sci-Fi God.

0faul_sname
Y'know, I'm not really sure where that idea comes from. The optimization power of even a moderately transhuman AI would be quite incredible, but I've never seen a convincing argument that intelligence scales with optimization power (though the argument that optimization power scales with intelligence seems sound).

In a practical sense, I think this means you want to put yourself in situations where success is the default, expected result.

This is a little like "burning the boats."

http://techcrunch.com/2010/03/06/andreessen-media-burn-boats/

Isn't it almost certain that super-optimizing AI will result in unintended consequences? I think it's almost certain that super-optimizing AI will have to deal with their own unintended consequences. Isn't the expectation of encountering intelligence so advanced, that it's perfect and infallible essentially the expectation of encountering God?

0DanArmak
What's unintended consequences? An imperfect ability to predict the future? Read strictly, any finite entity's ability to predict the future is going to be imperfect.

Isn't the expectation of encountering intelligence so advanced, that it's perfect and infallible essentially the expectation of encountering God?

Which god? If by "God" you mean "something essentially perfect and infallible," then yes. If by "God" you mean "that entity that killed a bunch of Egyptian kids" or "that entity that's responsible for lightning" or "that guy that annoyed the Roman empire 2 millennia ago," then no.

Also, essentially infallible to us isn't necessarily essentially infallible to it (though I suspect that any attempt at AGI will have enough hacks and shortcuts that we can see faults too).

Simply switch to using it as a punishment on the days that you have little appetite. :)

See if I can free up more time and energy.

4sixes_and_sevens
You might have more luck regulating or rationing yourself, as opposed to going cold turkey. Unless you're planning on giving it up permanently.

The administrative assistant of the group I was working with told me something that started my habit of brushing and flossing: "It's simple. You only have to floss between the teeth you want to keep." This evokes lots of images for me.

That was 15 years ago, and my habit is still strong to this day.

1sixes_and_sevens
Can I ask what your motives are?

True story. Some years back, I was having trouble sleeping and decided I was getting too much light in the mornings. So I measured my bedroom windows, which were all different, odd widths, and went to Lowe's, where they sell nicely opaque vinyl blinds. So I pick out the blinds I want, and go to the cutting machine and press the button to summon store help. The cutting machine trimmed the blinds, which were cut by a blade that screw-clamps to a metal bar marked off like a ruler. There were no detents or slots, so any width could be cut by simply moving the b... (read more)

[This comment is no longer endorsed by its author]

http://www.crinfo.org/articlesummary/10594/

Bushman society is fairly egalitarian, with power being evenly and widely dispersed. This makes coercive bilateral power-plays (such as war) less likely to be effective, and so less appealing. A common unilateral power play is to simply walk away from a dispute which resists resolution. Travel among groups and extended visits to distant relatives are common. As Ury explains, Bushmen have a good unilateral BATNA (Best Alternative to a Negotiated Agreement). It is difficult to wage war on someone who can simply wa

... (read more)
2taw
Bushmen lived in contact with pastoralist and then agricultural societies nearby for millennia. The idea that they represent some kind of pre-contact human nature is baseless. "Industrialized" or not isn't relevant.

Computation market prices can and do go down. But since society can grow almost infinitely quickly (by copying ems), from an em's POV it's more relevant to say that everything else's price goes up.

A society of super-optimizers better have a darn good reason for allowing resource use to outstrip N^3. (And no doubt, they often will.)

A society of super-optimizers that regulates itself in a way resulting in mass death either isn't so much super-optimized, or has a rather (to me) unsavory set of values.

Otherwise we might as well talk about a society of <10 planet-sized Jupiter brains

... (read more)
0DanArmak
Yes, that's exactly the point of this discussion.

Instead of the deletion or killing of uploads that want to live but can't cut it economically, why not slow them down? (Perhaps to the point where they are only as "quick" and "clever" as an average human being is today.) Given that the cost of computation keeps decreasing, this should impose a minimal burden on society going forward. This could also be an inducement to find better employment, especially if employers can temporarily grant increased computation resources for the purposes of the job.
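A back-of-the-envelope sketch of why the burden could be minimal, assuming (purely for illustration; all numbers are invented) that computation prices fall by a constant factor each year: the cost of keeping a fixed, human-speed em running is then a geometric series with a finite limit.

```python
# All numbers here are invented assumptions for illustration.
initial_cost = 1000.0  # cost per year to run one human-speed em in year 0
decline = 0.7          # each year costs 70% of the previous year's price

total_100_years = sum(initial_cost * decline**year for year in range(100))
limit = initial_cost / (1 - decline)  # geometric-series limit

print(f"Cost over 100 years: {total_100_years:.0f}")  # ~3333
print(f"Limit as time -> infinity: {limit:.0f}")      # 3333: a bounded burden
```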

0RobinHanson
This is close to the me-now immortality that I have said can be possible: http://www.overcomingbias.com/2011/12/the-immortality-of-me-now.html
0DanArmak
If you assume resources will be spent on the happiness/continued life/etc. of uploads, you might as well stipulate they'll have simulated off-hours at home instead of being actually Malthusian. This discussion is about whether, as Hanson suggests, natural economic evolution - with no extra protection provided by law - might result in not-entirely-awful lives for futures ems. In a computation-intensive society, demand is almost certainly infinite. If the cost of computation decreases, the amount of computation done increases. More em (upload) copies are created, or existing ones run faster; either way, carrying out more work. Society grows. Computation market prices can and do go down. But since society can grow almost infinitely quickly (by copying ems), from an em's POV it's more relevant to say that everything else's price goes up. This relies on the crucial assumption that there's a limit to how much you can speed up an em relative to the physical universe. If not a hard limit, some other reason speeding them up has diminishing returns. Otherwise we might as well talk about a society of <10 planet-sized Jupiter brains, each owning its physical computing substrate and so immortal short of violent death.

From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, and possibly captured and enslaved.

Perhaps this varies because of local environmental/economic conditions. From my undergraduate studies, I seem to remember that !Kung Bushmen would sometimes walk away from conflicts into another group.

1[anonymous]
Yes. That's true of many other mobile forager societies as well.

In my experience, Pandora simply doesn't tend to give me music that I like even when I put in an artist that I like.

Yes, Pandora does give me music with qualities in common with the music I like. It's just that those aren't the qualities that make me really like the music. Instead, I just get ho-hum doppelgangers of bands that I like.

Perhaps we should view our moral intuitions as yet another evolved mechanism, in that they are imperfect and arbitrary, though they work well enough for hunter-gatherers.

When we lived as hunter-gatherers, an individual could find a group with compatible moral intuitions or walk away from a group with incompatible ones. The possibility that an unpleasant individual's moral intuitions would affect you from one valley over was minimal.

One should note, though, that studies of murder rates amongst hunter-gatherer groups found them to be on the high side compared to industrialized societies.

3taw
Dear everyone, please stop talking about "hunter-gatherers". We have precisely zero samples of any real Paleolithic societies unaffected by extensive contact with Neolithic cultures.
2mwengler
I suspect that this was much less true among hunter-gatherers than it is now. From what I have read of groups in the Amazon and New Guinea, if you were to walk away from your group and try to walk into another, you would most likely be killed, and possibly captured and enslaved.