HPMOR related: why aren't assassinations common?

1 Jan_Rzymkowski 17 April 2016 05:40PM

In HPMOR Harry quickly learns (amongst other things) that if one applies some creativity and scientific knowledge to magic, one can trivially assassinate Bad Guys. He doesn't really make use of that, but considers it doable if the necessity arises.

But this is also true in our world. If one wants to kill a Bad Guy (a dangerous president, PM, or some other sort of Dark Lord), there are easy ways (though not completely risk-free). Off the top of my head: smear something that the Bad Guy will touch with dimethylmercury. It permeates the skin in seconds and kills within months. Chelation therapy can help, but it's probably effective only within the first hours after contact. Synthesis isn't terribly problematic. You can put it on some paper and smuggle it in as a used tissue. Of course you can't handle it with gloves (it gets through latex, and gloves would be suspicious), but a crumpled tissue will do well: your fingers would simply be physically too far away for the dimethylmercury to diffuse. The biggest risk would be poisoning yourself. That's a fair risk, though you can diminish it by starting chelation early.

It seems simple. It is definitely risky, but not certain death. So if assassination can be done this easily, why aren't there more assassinations? Any ideas?


Comment author: Jan_Rzymkowski 22 August 2015 09:43:25PM 0 points

Does anybody know of a mood-tracking app that asks you about your mood at a random time of day? (A simple rating of your mood, and maybe a small question about whether something happened that day that influenced it.) All the apps I found required me to open them, which meant I would forget to rate my mood, or when I was down I just couldn't be bothered. It would be perfect if it would just pop up a daily alert, make me choose something, and then disappear.
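For what it's worth, the desired behavior is simple enough to sketch yourself: a loop that picks a random time each day inside a waking-hours window and pops a prompt. A minimal console-only sketch in Python (the `mood_log.csv` file name and the 9-to-21 window are arbitrary assumptions, not any existing app):

```python
# Sketch of a random-time daily mood prompt. Assumptions: a console
# prompt stands in for a real notification; log goes to a CSV file.
import datetime
import random
import time

def next_prompt_time(start_hour=9, end_hour=21):
    """Pick a random moment within the window, today or tomorrow."""
    now = datetime.datetime.now()
    second = random.randint(start_hour * 3600, end_hour * 3600 - 1)
    when = (datetime.datetime.combine(now.date(), datetime.time())
            + datetime.timedelta(seconds=second))
    if when <= now:  # today's random slot already passed
        when += datetime.timedelta(days=1)
    return when

def run():
    while True:
        when = next_prompt_time()
        time.sleep((when - datetime.datetime.now()).total_seconds())
        rating = input("Mood right now, 1-5? ")
        note = input("Anything notable today? ")
        with open("mood_log.csv", "a") as f:
            f.write(f"{datetime.datetime.now().isoformat()},{rating},{note}\n")
```

A real app would replace the `input()` calls with a system notification, but the scheduling logic is the whole trick.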

Comment author: Jan_Rzymkowski 18 August 2015 08:11:02PM 1 point
  1. It must kill you (at least make you unconscious) on a timescale shorter than that on which you can become aware of the outcome of the quantum coin-toss
  2. It must be virtually certain to really kill you, not just injure you.

Both seem to be at odds with the Many-Worlds Interpretation. In an infinite number of those worlds it will merely injure you, and/or you will become aware of the outcome first, due to some malfunction.

Comment author: Jan_Rzymkowski 12 August 2015 09:36:21PM 2 points

Isn't this a formalization of Pascal's mugging? It also reminds me of the human-sacrifice problem: if we don't sacrifice a person, the Sun won't come up the next day. We have no proof, but how can we check?

Comment author: Jan_Rzymkowski 12 August 2015 07:47:10PM 0 points

A good AI (not only Friendly, but useful to the fullest extent) would understand the intention, and hence answer that luminiferous aether is not a valid way of explaining the behavior of light.

Comment author: Jan_Rzymkowski 06 August 2015 09:14:12PM 5 points

After years of confusion and lengthy hours of figuring it out, in a brief moment I finally understood how it is possible for cryptography to work, and how Alice and Bob can share secrets despite a middleman listening from the start of their conversation. And of course now I can't imagine not getting it earlier.
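The insight being described is presumably a key-exchange protocol in the Diffie–Hellman family: both parties derive the same shared secret even though every message between them is public. A toy sketch with deliberately tiny numbers (real deployments use primes of 2048 bits or more; these values are purely illustrative):

```python
# Toy Diffie-Hellman key exchange. Everything the eavesdropper sees
# (p, g, A, B) is public, yet Alice and Bob end up with the same
# secret, because recovering a or b from A or B is hard at scale.

p = 23   # public prime modulus (tiny, for illustration only)
g = 5    # public generator

a = 6    # Alice's private exponent, never transmitted
b = 15   # Bob's private exponent, never transmitted

A = pow(g, a, p)  # Alice sends A = g^a mod p over the open channel
B = pow(g, b, p)  # Bob sends B = g^b mod p over the open channel

# Each side combines the other's public value with its own secret:
secret_alice = pow(B, a, p)  # (g^b)^a mod p
secret_bob   = pow(A, b, p)  # (g^a)^b mod p

assert secret_alice == secret_bob  # both hold g^(ab) mod p
```

The middleman sees A and B but, lacking a or b, cannot cheaply compute g^(ab) mod p; that asymmetry is the whole "aha".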

Comment author: Jan_Rzymkowski 06 August 2015 03:21:09PM 1 point

Is there a foundation devoted to the promotion of cryonics? If not, it would probably be very desirable to create one. Popularizing cryonics could save an incredible number of existences, and so many people who support cryonics would probably be willing to donate money toward more organized promotion. Not to mention the personal gains: the more popular cryonics becomes, the lower the costs and the better the logistics.

If you are, or know, someone who supports cryonics and has experience or knowledge in non-profit organisations or professional promotion, please consider this.

Comment author: jacob_cannell 13 July 2015 02:51:07AM 2 points

In my opinion this idea isn't so terrible as to justify such a high downvoting percentage, but perhaps you could improve the presentation.

I think the idea would be more plausible if it were tied to some compelling rationale: what is the motivation for posthumans to expend computation on human dreams? That computation has an opportunity cost in other experiences and lives that could be lived.

Comment author: Jan_Rzymkowski 13 July 2015 12:30:45PM 0 points

I'm sorry for the overly light-hearted presentation. It seemed suited to presenting what is, to simplify greatly, a form of fun.

The Waker's reality doesn't really rely on dreams, but on waking into new realities, and on a form of paradoxical commitment, held equally, to the reality she currently lives in and to the random reality she would wake up in.

Its rationale is purely a step in exploring new experiences, a form of meta-art. Once human and transhuman needs have been fulfilled, posthumans would (and here, at the least, I expect my future self to) search for entirely new ways of existing, new subjectivities. That is what I consider posthumanism: meddling with the most basic imperatives of conscious existence.

I see it as just one possibility to explore, something to let copies of myself experience. (Those are not independent copies, however; I imagine a whole cluster of myselves, interconnected and gathering understanding of each other's perceived realities. Those living Wakers' lives would be less concerned with the existence of other copies; rather, their experiences would be watched by higher-level copies.)

Comment author: Baughn 13 July 2015 10:13:11AM 2 points

But you're always stuck in one reality.

Let's take a step back, and ask ourselves what's really going on here. It's an interesting idea, for which I thank you; I might use it in a story. But...

By living your life in this way, you'd be divorcing yourself from reality. There is a real world, and if you're interacting solely with these artificial worlds you're not interacting with it. That's what sets off my "no way, no how" alert, in part because it seems remarkably dangerous; anything might happen, your computing infrastructure might get stolen from underneath you, and you wouldn't necessarily know.

Comment author: Jan_Rzymkowski 13 July 2015 12:16:38PM 0 points

Disclaimer: This comment may sound very crackpottish. I promise the ideas in it aren't as wonky as they seem, but it would be too hard to explain them properly in such a short space.

"By living your life in this way, you'd be divorcing yourself from reality."

Here comes the notion that in posthumanism there is no definite reality. Reality is a product of experiences and of how your choices influence those experiences; in posthumanism, however, you can modify it freely. What we call reality is a very local phenomenon.

Anyhow, it's not the case that your computing infrastructure would be in danger: it would either be protected by some powerful AI, much better suited to protecting your infrastructure than you are, or there would be other copies of you handling maintenance in "meatspace". (Again, I strongly believe it's only our contemporary perspective that makes us feel that the reality in which the computations are performed is more real than the virtual reality.)

What's more, a Waker can be perfectly aware that there is a world beyond her experience, and may occasionally leave her reality.

Comment author: Tem42 13 July 2015 03:38:30AM 1 point

"...she would have investment in life goals, relationships with other people, she'll be capable of real love..."

If she is choosing to spontaneously give up these life goals, relationships, and love, perhaps she is not experiencing them fully. I understand that posthuman life isn't necessarily a bed of roses, but at the point that you are able to create these new realities at will, shouldn't we also expect these new realities to be pretty good? At least good enough that we won't feel any need to abandon everything and start anew very often. Of course, it might be that the human brain needs a refresh every century or so, so I won't take any bets against never wanting it.

Comment author: Jan_Rzymkowski 13 July 2015 11:59:23AM 0 points

Well, creating new realities at will and switching between them is an example of a Hub World, and I expect that would indeed be the first thing new posthumans would go for. But that type of existence is stripped of many of the restrictions which, in a way, make life interesting and give it structure. So I expect some posthumans (amongst them, my future self) to create curated copies of themselves which would gather entirely new experiences, such as the Waker's subjectivity. (Its experiences would be reported to some top-level copy.)

You see, a Waker doesn't consider waking to be abandoning everything, the way we would. She doesn't feel abandonment, in the same way we don't feel we have abandoned everything and everyone in a dream. She is perfectly aware that the current world and the world to come feel exactly equally real.

To state it another way: for a Waker, staying in one reality forever feels like what staying in a dream and never waking up to experience the actual reality would feel like to us.
