You're calling a rationality club the "Bayesian Conspiracy"? What are you gonna do if some statistics professor shows up, expecting it to be about their field?
Then he could give a guest lecture, and that'd be pretty cool.
Is everyone on Less Wrong an atheist?
It certainly can seem that way at times! Religion tends to get heavily criticised around here, as many atheists see it as a major source of irrationality. There are a few religious members, however, and by no means is being an atheist a prerequisite to being a part of our group. As long as you’re interested in honing your beliefs to match the truth (whatever it may be), you’re welcome here.
As one of the religious folk on LW, if your goal is to get people to still show up, I might tweak it to:
LW is definitely disproportionately atheist, but there are some religious members. We're interested in honing our beliefs to match the truth (whatever that may be), and many of our members have concluded that religion is false and a major source of irrationality. But the existence of god(s) isn't the only claim about the world we discuss: our focus isn't on settling any one claim, but on getting better at making sense of evidence so we can deal with all sorts of claims. If that's a project you're interested in, you're very welcome at our meetings.
That way the focus is on method, not on "we spend a lot of time heavily criticizing religion."
In our club, we've decided to assume atheism (or, at minimum, deism) on the part of our membership. Our school has an extremely high percentage of atheists and agnostics, and we really don't feel it's worth arguing over that kind of inferential distance. We'd rather it be the 'discuss cool things' club than the 'argue with people who don't believe in evolution' club.
Short introductory materials for a rationality meetup
So, I and a few other people are starting a Bayesian Conspiracy chapter at my university (New Mexico Tech). We're trying to put together a short (three page) introductory packet to give to new members. We'd like the packet to introduce people to what rationality is, what it's useful for, and some of the basic techniques. We'd like it to be as readable and palatable as possible, to avoid the intimidation factor of simply pointing people at the Sequences, which are not particularly friendly to a casual reader.
I'm compiling some materials of my own for this purpose, but before I get too excited, I thought I ought to check if any of the other meetups had or knew of something along these lines already created. If not, we'll post our packet on our website for other meetups to use as they see fit.
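One of the basic techniques an introductory packet like this usually covers is Bayesian updating. As a minimal sketch (the medical-test numbers here are purely illustrative, not from the packet):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers: a test with 90% sensitivity and a 5%
# false-positive rate, for a condition with a 1% base rate.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

p = posterior(0.01, 0.90, 0.05)
print(round(p, 3))  # a positive test raises the 1% prior to ~15.4%
```

The counterintuitive result (a positive result on a fairly accurate test still leaves the hypothesis unlikely) is exactly the kind of worked example that makes such a packet more approachable than pointing newcomers straight at the Sequences.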
It would be wrong to round up and exterminate a billion people in order to ensure that one billion and one babies are born.
I agree, but I don't think that cuts to the point. The process of rounding up and killing a billion people, the sadness of people left behind, the skill loss, and the change of the age distribution, would all have large negative effects, and a billion and one babies would be a heck of a baby boom.
While practical issues mean that killing people is just about never the right thing to do, I don't agree that "creating new people is much less valuable than preserving old ones". See my response to Nisan.
This perspective looks deeply insane to me.
I would not kill a million humans to arrange for one billion babies to be born, even disregarding the practical considerations you mentioned, and, I suspect, neither would most other people. This perspective more or less requires anyone in a position of power to oppose birth control availability and to mandate breeding.
I would be about as happy with a human population of one billion as a hundred billion, not counting the number of people who'd have to die to get us down to a billion. I do not have strong preferences over the number of humans. The same does not go for the survival of the living.
There would be some number of digital people that could run simultaneously on whatever people-emulating hardware they have.
I expect this number to become unimaginably high in the foreseeable future, to the point that it is doubtful we'll be able to generate enough novel cognitive structures to make optimal use of it. The tradeoff would be more like 'bringing back dead people' v. 'running more parallel copies of current people.' I'd also caution against treating future society as a monolithic Entity with Values that makes Decisions - it's very probably still going to be capitalist. I expect the deciding factor regarding whether or not cryopatients are revived to be whether or not Alcor can pay for the revival while remaining solvent.
Also, I'm not at all certain about your value calculation there. Creating new people is much less valuable than preserving old ones. It would be wrong to round up and exterminate a billion people in order to ensure that one billion and one babies are born.
As you'll see if you read his text, he's responding to proposals to emulate a brain without understanding how it all works, and is noting just how fine you'd need to actually go to do that.
If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expressions and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.
I've heard the case made at length, but I haven't seen, e.g., a C. elegans that has learnt something, been frozen, and shown that it still remembers it after being unfrozen (to name one obvious experiment that, apparently, no one had ever done) or anything of similar evidentiary value. Experiment beats arguing about why you don't need an experiment. Edit: Not the last time this Myers article was discussed, but the discussion of kalla724's "what on earth" neuroscientist's opinion on cryonics practice.
Right, but (virtually) nobody is actually proposing doing that. It's obviously stupid to try from chemical first principles. Cells might be another story. That's why we're studying neurons and glial cells to improve our computational models of them. We're pretty close to having adequate neuron models, though glia are probably still five to ten years off.
I believe there's at least one project working on exactly the experiment you describe. Unfortunately, C. elegans is a tough case study for a few reasons. If it turns out that they can't do it, I'll update then.
Per PZ Myers, the state of the art in neural preservation doesn't recoverably preserve usable amounts of state even in zebrafish brains, which are a few hundred microns on a side. How thin were the slices you had in mind? And how fast were you going to be slicing?
I’ve worked with tiny little zebrafish brains, things a few hundred microns long on one axis, and I’ve done lots of EM work on them. You can’t fix them into a state resembling life very accurately: even with chemical perfusion with strong aldehydes of small tissue specimens that takes hundreds of milliseconds, you get degenerative changes. There’s a technique where you slam the specimen into a block cooled to liquid helium temperatures — even there you get variation in preservation, it still takes 0.1ms to cryofix the tissue, and what they’re interested in preserving is cell states in a single cell layer, not whole multi-layered tissues. With the most elaborate and careful procedures, they report excellent fixation within 5 microns of the surface, and disruption of the tissue by ice crystal formation within 20 microns. So even with the best techniques available now, we could possibly preserve the thinnest, outermost, single cell layer of your brain…but all the fine axons and dendrites that penetrate deeper? Forget those.
Which is obvious nonsense. PZ Myers thinks we need atom-scale accuracy in our preservation. Were that the case, a sharp blow to the head or a hot cup of coffee would render you information-theoretically dead. If you want to study living cell biology, frozen to nanosecond accuracy, then, no, we can't do that for large systems. If you want extremely accurate synaptic and glial structural preservation, with maintenance of gene expressions and approximate internal chemical state (minus some cryoprotectant-induced denaturing), then we absolutely can do that, and there's a very strong case to be made that that's adequate for a full functional reconstruction of a human mind.
In the new sequence Highly Advanced Epistemology 101 for Beginners, EY has made use of exercise questions / statements intended to be pondered before continuing. He has labeled these "koans" but is open to suggestions for a better word, since a koan means something a bit more specific than that to Zen practitioners. Any ideas? Here are the "koans" from this sequence in order of appearance:
If the above is true, aren't the postmodernists right? Isn't all this talk of 'truth' just an attempt to assert the privilege of your own beliefs over others, when there's nothing that can actually compare a belief to reality itself, outside of anyone's head?
If we were dealing with an Artificial Intelligence that never had to argue politics with anyone, would it ever need a word or a concept for 'truth'?
What rule could restrict our beliefs to just propositions that can be meaningful, without excluding a priori anything that could in principle be true?
"You say that a universe is a connected fabric of causes and effects. Well, that's a very Western viewpoint - that it's all about mechanistic, deterministic stuff. I agree that anything else is outside the realm of science, but it can still be real, you know. My cousin is psychic - if you draw a card from his deck of cards, he can tell you the name of your card before he looks at it. There's no mechanism for it - it's not a causal thing that scientists could study - he just does it. Same thing when I commune on a deep level with the entire universe in order to realize that my partner truly loves me. I agree that purely spiritual phenomena are outside the realm of causal processes, which can be scientifically understood, but I don't agree that they can't be real."
"Does your rule there forbid epiphenomenalist theories of consciousness - that consciousness is caused by neurons, but doesn't affect those neurons in turn? The classic argument for epiphenomenal consciousness has always been that we can imagine a universe in which all the atoms are in the same place and people behave exactly the same way, but there's nobody home - no awareness, no consciousness, inside the brain. The usual effect of the brain generating consciousness is missing, but consciousness doesn't cause anything else in turn - it's just a passive awareness - and so from the outside the universe looks the same. Now, I'm not so much interested in whether you think epiphenomenal theories of consciousness are true or false - rather, I want to know if you think they're impossible or meaningless a priori based on your rules."
Does the idea that everything is made of causes and effects meaningfully constrain experience? Can you coherently say how reality might look, if our universe did not have the kind of structure that appears in a causal model?
I propose that we continue to call them koans, on the grounds that changing the name involves a number of small costs, and it really, fundamentally, does not matter in any meaningful sense.
Suggested title: The Tao of Bayes
Ideally it should not be significantly longer than "The Tao of Pooh"
I'd be half-tempted to try my hand at it myself...
So far, I'm twenty pages in, and getting close to being done with the basic epistemology stuff.
D'you mean you've found the topic of religion to be mindkilling, so all discussions in your group need to work within the majority framework of atheism/deism to be productive or that you restrict your membership?
Nothing so drastic. Just a question of the focus of the club, really. Our advertising materials will push it as a skeptics / freethinkers club, as well as a rationality club, and the leadership will try to guide discussion away from heated debate over basics (evolution, old earth, etc.).