Comment author: Normal_Anomaly 07 December 2011 09:58:05PM 5 points [-]

Cryonics - lower

Cryonics Status - More likely to be signed up or to be considering it, less likely to be not planning to or to not have thought about it

So long-time participants were less likely to believe that cryonics would work for them but more likely to sign up for it? Interesting. This could be driven by any of: fluke, greater rationality, greater age and income, less akrasia, or more willingness to take long-shot bets based on shutting up and multiplying.

Comment author: Randolf 08 December 2011 01:07:21AM 1 point [-]

I think the main reason for this is that these people have simply spent more time thinking about cryonics than others. By spending time on this forum they have had a good chance of running into a discussion which inspired them to read about it and sign up. Or perhaps people who are interested in cryonics are also interested in other topics LW has to offer, and hence stay here. In either case, it follows that they are probably also more knowledgeable about cryonics, and hence understand what cryotechnology can realistically offer now or in the near future. In addition, these long-time participants might be more open to things such as cryonics on an ethical level.

In response to comment by Randolf on Existential Risk
Comment author: NickiH 22 November 2011 04:39:26PM 0 points [-]

I would count myself among "general people". I didn't get it at all. In fact, having read the comments, I'm still not sure I get it. It's a pretty picture and all, but why is it there?

In response to comment by NickiH on Existential Risk
Comment author: Randolf 23 November 2011 02:45:36PM *  0 points [-]

The first picture is a dark image of a planet with a slightly threatening atmosphere. It looks like the upper half of a mushroom cloud, but it could also be seen as the earth violently torn apart. This is why I think, given the context, that it symbolises the threat of a nuclear war and, more universally, the threat of a dystopia.

The last picture shows a beautiful utopia. I thought it's there to give a message of the type: "If everything goes well, we can still achieve a very good future." That is, while the first picture symbolises the threat of a dystopia, the last one symbolises the hope and possibility of a utopia.

Of course, this is merely my interpretation. There are many ways one can interpret these pictures.

Comment author: thomblake 16 November 2011 03:46:49PM 6 points [-]

The overall connotations and message are clear.

I'm a genius transhumanist who likes sci-fi, and the connotations and message of the image were not clear to me. I wasn't even sure what it was supposed to be a picture of (my first guess was something from the Halo games, though I couldn't imagine the relevance). Is this more something that would be clear to the general populace and not folks like me, and thus should be included in a post to appeal to the general populace?

In response to comment by thomblake on Existential Risk
Comment author: Randolf 17 November 2011 03:03:57PM 1 point [-]

Strange enough. After all, while I am a transhumanist to some degree and also enjoy sci-fi, I am far from being a genius. Still, the message of the pictures was immediately obvious to me. This would suggest what you said: they may be appealing to general people, while not necessarily as appealing to those already very familiar with sci-fi and transhumanism.

In response to comment by Randolf on Existential Risk
Comment author: juliawise 16 November 2011 09:11:14PM 5 points [-]

This is why you would not have been hired to sit in front of the button, even given the Soviets' dubious hiring techniques. Also, if you had been raised in Soviet Russia, your thoughts on the topic might have been different.

In response to comment by juliawise on Existential Risk
Comment author: Randolf 17 November 2011 02:47:14PM *  0 points [-]

I could indeed simply lie and play the role of an obedient soldier to get the position I was looking for. However, it is of course true that if I had been born and lived in a country where people are continuously fed nationalist propaganda, I would be less likely to disobey the rules or to think it's wrong to retaliate.

In response to Existential Risk
Comment author: Randolf 15 November 2011 07:48:48PM *  2 points [-]

If I had been one of those people with the missile warning and red button, I wouldn't have pressed it even if I knew the warning was real. What use would it be to launch a barrage of nuclear weapons against ordinary citizens simply because their foolish leaders did so to you? It would only make things worse, and certainly wouldn't save anyone. The primitive urge for revenge can be extremely dangerous with today's technology.

Comment author: AlexM 12 July 2010 04:34:25PM 2 points [-]

|Interesting.

as interesting as picking up rocks and observing insects crawling under them, IMHO

|Never heard of this guy. Link?

http://en.wikipedia.org/wiki/Yury_Ignatyevich_Mukhin

most of his works are online, in Russian of course, links from Russian wiki page

Comment author: Randolf 14 November 2011 09:27:07PM *  1 point [-]

as interesting as picking up rocks and observing insects crawling under them, IMHO

What, insects are fascinating!

Comment author: Randolf 14 November 2011 12:40:21AM *  0 points [-]

Rationality can be useful when drawing. It allows you to avoid simple mistakes which you could otherwise make. I think this is especially true when you are, for example, inking your work, or doing some other task which is mostly mechanical. However, sometimes following mere feelings can provide very interesting results. I am not a good artist, nor do I actually know anything about drawing, but I draw a little every now and then. I find drawing most enjoyable when I draw guided by intuition, just letting the pen draw curve after curve the way it feels. I have found that when I do this, I achieve results more to my liking than when I actually think about what to draw and how. Maybe this is simply because I don't have much actual knowledge about drawing; I don't know.

Anyway, interesting post, thanks.

Comment author: potato 12 November 2011 09:27:25AM *  8 points [-]

It's not as if a star would have absolutely no effect from a Boltzmann cake suddenly appearing inside of it. A civilization with a good enough model of how this star zigs and zags would be able to find facts about the star which would force a Bayesian to move from the ridiculously tiny prior probability of the hypothesis:

On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved.

to some posterior distribution. Some pieces of evidence might increase the probability of the hypothesis, some might decrease it.
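To make that concrete, here is a minimal sketch of such an update in odds form. The prior and likelihood ratio are made-up illustrative numbers of mine, not anything from the thread:

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# Both numbers below are hypothetical, chosen only to show the mechanics.
prior = 1e-30            # ridiculously tiny prior for the Boltzmann-cake hypothesis
likelihood_ratio = 1e6   # P(evidence | cake) / P(evidence | no cake), assumed

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"posterior: {posterior:.3e}")  # ~1.000e-24: shifted upward, but still tiny
```

The point is only that the posterior moves; evidence with a likelihood ratio below 1 would move it the other way.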

This is not a cheap objection in any way. To misinterpret verificationists such as the early Wittgenstein and W.V. Quine as claiming that only those sentences which we can currently test are meaningful is a mistake. A common mistake, and one that some who use the term positivist to describe themselves have made.

If logical positivism / verificationism were true, then the assertion of the spaceship's continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence. I don't see how this is compatible with a correspondence theory of truth.

This is another sort of mistake. That a hypothesis can't be tested by me does not mean that it is meaningless. Verificationists would agree with this, because they think verification works everywhere, even on the other side of the universe. If some alien race over there could have seen the spaceship, or seen something which made the probability of there being a spaceship there high, or could have failed to, then the claim is not meaningless.

What verificationists like Quine are saying is that science is done through the senses. In the Matrix code, way above the level of the machine language, our senses are the evidence nodes of our Bayes nets, and our hypotheses are the last nodes. The top layer of nodes consists of the complete set of states that some being's sensory apparatus can be in; any node in this mind containing a belief which is independent of all of the evidence nodes contains a belief which is meaningless for that mind. But showing the subjective meaninglessness of some hypothesis for one being is not enough to show that a belief/hypothesis is meaningless for all minds.

I think the critiques of this article apply to the worst of the worst of positivism. Indeed, many of those critiques were made by hard verificationists such as Quine. The simplest form of verificationism can be traced to Edmund Husserl, believe it or not. The core of what the first movement of phenomenologists, and Quine, were saying is that only stimulus sentences can ever be used as initial evidence. Some stimulus may increase the probability of some other belief, which may then be used as evidence for some other belief in turn, but without evidence from stimulus there wouldn't be enough useful shifting about of probability to do anything. Certainly a human brain, or even a replica of Einstein's brain, would have a hard time figuring out the theories of relativity if it only had a 4-by-4 binary black-and-white pixel view of the world, even if it could move the camera providing that input around freely.

If no constructible mind could ever get any result from any instrument (natural, current, or wildly advanced) that would force a rational mind to update its probability of a given sentence, à la Bayes, then that sentence is not a scientifically meaningful belief. For Wittgenstein this is to be senseless, or literally meaningless; for Quine it is only to be scientifically meaningless. Both positions have been called verificationism, and I think both are useful, and at least true-ish.

Lastly, I've always thought of positivism as going perfectly with a correspondence theory of truth. We can treat "senseless" or "meaningless" as just meaning "un-entangle-able beliefs", that is, beliefs which place no restrictions on experience.

It seems to me that Yudkowsky and the whole lot of LW staples are plainly positivists. And I have always thought of this as a good thing. Positivism, plus LW-style Bayesianism, plus effort, forms an epistemology which at least gives you a stronger fighting chance than you would have otherwise. Forming stupid beliefs is harder after reading Less Wrong, and harder after reading Quine, or Goodman, or even the most basic verificationist texts. Many people have made philosophical mistakes which can be avoided by reading verificationists, just as they can by reading LW. Give credit where it is due: to yourself and to Quine.

Comment author: Randolf 12 November 2011 12:49:20PM *  1 point [-]

This is another sort of mistake. That a hypothesis can't be tested by me does not mean that it is meaningless. Verificationists would agree with this, because they think verification works everywhere, even on the other side of the universe. If some alien race over there could have seen the spaceship, or seen something which made the probability of there being a spaceship there high, or could have failed to, then the claim is not meaningless.

I don't think I understand. If it isn't possible to ever verify the existence of these aliens, what does it matter that they could have seen the spaceship? Essentially, how does it help that some being A could verify a phenomenon if I can't ever verify that this is indeed the case?

Comment author: Friendly-HI 04 November 2011 12:59:26AM *  10 points [-]

"fixed". I'm genuinely sorry for being inconsiderate, I'm young and still have a tendency to use provocative language if I feel emotionally stimulated.

On a lighter note... I'm curious how some of you may have estimated a very low probability of say... the likelihood that one religion is a very good approximation to the truth. I doubt that there really is any way in which someone could give a sensible estimate, unless one were to put years of work into it to weigh all the (non)evidence meticulously (and as we know religions tend to dress their stories in a LOT of colorful detail, because hearing details makes things appear more true, since they assist our human imagination).

How could one of us, in a practical way, come up with a roughly realistic number? I used something like 0.0000000000000000001% probability because that's what it -feels- like to me. I can only imagine how unlikely it would be by comparing it to something very unlikely, like winning the lottery twice in a row. Which still doesn't feel as surprising as discovering that our world is formed out of the body of a slain giant. But then again, my feeling of surprise upon winning the lottery (I'm not actually playing) is of course in no way directly proportional to the actual odds of winning either. What kind of thought process went through your head when you had to answer this question? (I'm asking everyone in general, not just Alicorn.)
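For what it's worth, the two numbers in that comparison can be put side by side. Assuming a hypothetical 1-in-14-million jackpot (roughly typical national-lottery odds; my assumption, not from the comment), winning twice in a row is still far more likely than a 10^-21 estimate:

```python
import math

# Hypothetical 1-in-14-million jackpot odds (an assumption for illustration).
p_single = 1 / 14_000_000
p_twice = p_single ** 2          # two independent wins in a row
claimed = 1e-21                  # the ~10^-21 figure, expressed as a fraction

print(f"two wins in a row: {p_twice:.2e}")   # ~5.10e-15
print(f"claimed estimate:  {claimed:.2e}")
# Winning twice is still roughly 7 orders of magnitude more likely:
print(f"gap in orders of magnitude: {math.log10(p_twice / claimed):.1f}")
```

Which suggests the felt comparison and the stated number pull in opposite directions, exactly the calibration problem the comment is pointing at.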

Comment author: Randolf 10 November 2011 12:03:54AM *  0 points [-]

I left that field blank because I don't think the question is well defined. It has very little meaning to assign probabilities to the existence of something as vague as a god. Maybe there is a god, maybe there isn't; it's entirely beyond my scope.

Comment author: jkaufman 15 October 2011 05:00:05PM 5 points [-]

Many fantasy stories are about an ordinary person who suddenly finds themself in a world of myth, magic, and sorcery. This seems to be in part about openness: the protagonist has to realize that their model of the world was way off, and come to understand a new and different world.

Comment author: Randolf 15 October 2011 07:01:51PM 2 points [-]

Yes, indeed. The ratio of open to closed may be higher in sci-fi books than in fantasy books, but there are still many open fantasy books and closed sci-fi books. In the end it depends only on the individual book. This is why I don't think it's really safe to label fantasy as a closed genre.
