Afterthought
When I was actually suicidal, what kept me from going through with it was:
1) Although my plan had three separate ways it could kill me, it was possible that all of them would fail, leaving me still in all the pain that was driving me to kill myself, plus on life-support machines with people hovering over me, annoying me.
2) I would actually have to get up and do it, which was effort.
When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I'd been. Somebody who tells you they're suicidal isn't asking you to talk him out of it; he's asking you to listen. That's why the advice you were taught works. Someone who listens is a precious gift in that place where you can still feel the pull of suicide, even someone you suspect is listening only because they're socialized or paid to do it.
On the other hand, when you're out there feeling the pull, you've had lots of (people you perceive as) idiots giving you (seemingly) bad advice and (seemingly) pointless arguments. I, at least, didn't want to hear yet another theory as to why suicide was a bad idea; frustration at such yammerers made suicide look like a better idea the longer they talked.
The advice you were given back in high school was distilled professional expertise. Evaluate carefully before you dismiss it.
When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I'd been. Somebody who tells you they're suicidal isn't asking you to talk him out of it; he's asking you to listen.
Just wanted to say that I relate very strongly to this. When I was heavily mentally ill and suicidal, I was afraid of reaching out to other people precisely because that might mean I only wanted emotional support rather than being serious about killing myself. People who really wanted to end their lives, I reasoned, would avoid deliberately setting off alarm bells in others that might lead to interference. That I eventually chose to open up about my psychological condition at all (and thereby deviate from the "paradigmatic" rational suicidal person) gave me evidence that I didn't want to kill myself and helped me come to terms with recovering. Sorry if this is rambling.
A lot of people are suggesting something like "SIAI should publish more papers", but I'm not sure anyone (including those who are making the suggestion) would actually change their behavior based on that. It sounds an awful lot like "SIAI should hire a PhD".
Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.
Evangelion
Maybe someone should do a study on that peculiar group of depressed and/or psychopathological people who were significantly mentally kicked by NGE. Of course it's all anecdotal right now, but I really have the impression (especially after spending some time at EvaGeeks...) that NGE produces a recurring pattern of effect on a cluster of people, and moreover that this effect is much more dramatic than what is usual in art.
I don't imagine it would have nearly as much of an effect on people who aren't familiar with anime. But I would read that study in a heartbeat if it existed.
Rather than unfriendly AI, I think he means a Friendly AI that's only Friendly to one person (or very few people). If we're going to be talking about this concept then we need a better term for it. My inner nerd prefers Suzumiya AI.
I don't think this is true. Benatar's position is that any being that ever suffers is harmed by being created. This is not something that technological progress is very likely to relieve.
He has two main arguments. One is the asymmetry, which is the better one, but it has weird assumptions about personhood - reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living) or need to rely on consent, but I see no reason why consent can't be given without instantiating a person. (But I'm still confused about consent.)
The other argument is based on the low expected value of any life. Specifically, he argues that life is much worse than commonly thought (plausible) and addresses why common approaches can't justify the harm anyway. This relies on the assumption that the status quo will more-or-less continue. Justifiable, but unless he provides an argument to the contrary, transhumanists can still argue that you only need to engineer a world in which humans don't suffer (or even can't - the wireheading solution). If we all lived in a Post-Singularity Utopia, I'm sure his justifications for his specific comparison of harms and benefits would look much stranger to us.
One is the asymmetry, which is the better one, but it has weird assumptions about personhood - reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living)
I'm not sure I remember his arguments relying on those assumptions in his asymmetry argument. Maybe he needs them to justify not committing suicide, but I thought the badness of suicide wasn't central to his thesis.
I'm currently working on several FAQs/overviews.
I'm reading Benatar's Better Never To Have Been and I noticed that the actual arguments for categorical antinatalism aren't as strong as I thought and seem to hinge on either a pessimistic view of technological progress (which might well be justified) or confusions about identity and personhood. But I'm currently confused about the relevant philosophy, so I'm collecting the arguments and justifications, and will turn this basically into an antinatalism FAQ and reference. A lot of people seem to dismiss Benatar without actually reading him, so writing a more accurate overview of his and related arguments might help.
Of more questionable importance, I'm also finally writing an Early Christianity FAQ. I'm working through Price's work on the New Testament and have a hard time keeping all the different characters and weird theories straight, so I'm writing an overview from a perspective of higher criticism. It seems to me that a lot of good insights in the field are too strongly compartmentalized, so just getting them all into one place should make things clearer. It's mostly for my own use, though, and intended as supplement material for my own crackpot theories, but it's a lot of fun.
I'm also writing/researching an introduction to Solomonoff Induction etc. and hope to get my first draft of a (German) student paper done by the end of the year. I'll probably write a less formal (English) version at the same time. I can't think without writing, so I basically get a rough draft for free and might as well clean it up a little. (However, I have abandoned overviews before once I understood everything and got bored, so yeah.)
(I'm also working on a student project about performance estimates of embedded systems, but I'm kinda tired of SystemC and wrestling with weird platforms. Should've stayed with that theology degree after all.)
I'm reading Benatar's Better Never To Have Been and I noticed that the actual arguments for categorical antinatalism aren't as strong as I thought and seem to hinge on either a pessimistic view of technological progress (which might well be justified)
I don't think this is true. Benatar's position is that any being that ever suffers is harmed by being created. This is not something that technological progress is very likely to relieve. Or are you thinking of some sort of wireheading?
or confusions about identity and personhood.
That sounds like an interesting criticism.
Of course, if I am inconsistent, I can be Dutch booked. If I believe that P(tautology) = 0.8 because I haven't realised it is a tautology, somebody who knows that will offer me a bet and I will lose. But, well, lack of knowledge leads to sub-optimal decisions - I don't see it as a fatal flaw.
I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my "degree of belief" in a possible statement A is 2, I can be Dutch booked. But now that I'm licensed to disbelieve entailments (so long as I take myself to be ignorant that they're entailments), perhaps I justifiably believe that I can't be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, ..., Pn, I can always potentially justifiably believe the conditional "If the premises P1, ..., Pn are true, then C is correct" has low probability - even if the argument is purely deductive.
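To make the bet in the parent comment concrete, here is a minimal sketch (with hypothetical numbers) of how the bookie guarantees a profit against an agent who assigns probability 0.8 to a tautology: the agent treats 0.8 per unit stake as a fair price at which to sell a bet on the tautology, but since a tautology is true in every possible world, the payout always happens.

```python
def dutch_book_loss(p_tautology, stake=1.0):
    """Guaranteed loss for an agent selling a bet on a tautology T.

    The agent accepts a premium of p_tautology * stake up front and
    agrees to pay out `stake` if T is true. Since T is a tautology,
    it holds in every possible world, so the payout always occurs.
    """
    premium = p_tautology * stake  # what the agent receives up front
    payout = stake                 # always paid: T holds in every world
    return payout - premium        # net loss, regardless of the world

# An agent with P(tautology) = 0.8 loses 0.2 per unit stake in every
# possible world; only P(tautology) = 1 avoids the sure loss.
print(round(dutch_book_loss(0.8), 2))
```

The sure loss vanishes exactly when the agent's credence in the tautology is 1, which is the coherence constraint the Dutch book argument is meant to motivate.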
Is there more to it than that it is the definition of Bayesian epistemology?
Logical omniscience with respect to propositional logic is necessary if we require that p(A|B) = 1 whenever A is deducible from B. Relaxing this requirement still leaves us with a working system. Of course, the reasoner should update his p(A|B) to somewhere close to 1 after seeing the proof that B⇒A, but he needn't hold this belief a priori.
Logical omniscience comes from probability "statics," not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
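To spell out the "statics" point, here is the standard derivation for the simplest tautology, $A \lor \lnot A$, using only the Kolmogorov axioms and no conditionalization:

$$
P(A \lor \lnot A) = P(A) + P(\lnot A) = P(A) + \bigl(1 - P(A)\bigr) = 1,
$$

where the first step uses finite additivity ($A$ and $\lnot A$ are mutually exclusive) and the second uses the complement rule, itself a consequence of additivity together with normalization ($P(\Omega) = 1$). So any assignment obeying all three axioms is already forced to give every propositional tautology probability 1, before any evidence is conditioned on.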