Meetup : Berkeley meetup
This week's meetup will be at Zendo, as usual. For directions, see the mailing list at http://groups.google.com/group/bayarealesswrong or call me at four-zero-eight-nine-six-six-nine-two-seven-four.
Meetup : Berkeley meetup: board game night
Nisan is out of town, so I will be hosting Wednesday's meetup at Zendo. We will be having a board game night. Zendo's game library has Robo Rally, Smallworld, Dominion, Ticket to Ride, Settlers of Catan, Tigris & Euphrates, Set, and some others. If you have a game that you'd like to play, bring it along! Doors open at 7pm, and games start at 7:30. For directions to Zendo, see the mailing list at http://groups.google.com/group/bayarealesswrong or call me at four-zero-eight-nine-six-six-nine-two-seven-four.
Friendship and happiness generation
Happiness and utility are different things: happiness (measured in hedons) refers to the desirability of an agent's current mental state, while utility (measured in utils) refers to the desirability, from the point of view of some agent, of the configuration of the universe.
Naively, one could model caring about another person as having a portion of your utility function allocated to mimicking their utility (me.utility(universe) = caring_factor*friend.utility(universe) + me.utility(universe excluding value of friend's utility function)) or their happiness (me.utility(universe) = caring_factor*friend.happiness + me.utility(universe excluding friend's happiness)). However, I think these are bad models of how caring for people actually works in humans.
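As a minimal sketch of those two formulas (every function name, the toy universe, and the caring factor value here are illustrative placeholders of my own, not anything specified above):

```python
# Illustrative sketch of the two naive models of caring described above.
# Every function and number is a toy placeholder, not a proposed framework.

CARING_FACTOR = 0.5  # arbitrary weight placed on the friend's welfare

def friend_utility(universe):
    """Toy stand-in for the friend's utility over the state of the universe."""
    return universe["friend_goal_progress"]

def friend_happiness(universe):
    """Toy stand-in for the friend's current happiness, in hedons."""
    return universe["friend_hedons"]

def my_utility_excluding_friend(universe):
    """Toy stand-in for everything I value apart from the friend's welfare."""
    return universe["my_hedons"]

def caring_via_friends_utility(universe):
    """Model 1: part of my utility function mimics the friend's utility."""
    return CARING_FACTOR * friend_utility(universe) + my_utility_excluding_friend(universe)

def caring_via_friends_happiness(universe):
    """Model 2: part of my utility function tracks the friend's happiness."""
    return CARING_FACTOR * friend_happiness(universe) + my_utility_excluding_friend(universe)

universe = {"friend_goal_progress": 2.0, "friend_hedons": 1.0, "my_hedons": 3.0}
print(caring_via_friends_utility(universe))    # 0.5 * 2.0 + 3.0 = 4.0
print(caring_via_friends_happiness(universe))  # 0.5 * 1.0 + 3.0 = 3.5
```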
I've noticed that I often gladly give up small amounts of hedons so that someone I care about can gain a similar amount of hedons. Extrapolating this, one might conclude that I care about plenty of other people nearly as much as I care about myself. However, I would be much less likely to give up a large amount of hedons for someone I care about unless the ratio of hedons that they could gain over the hedons I would have to give up is also fairly large.
While trying to figure out why this is, I realized that whenever I think I'm sacrificing hedons for someone, I usually don't actually lose any hedons because I enjoy the feeling associated with knowing that I helped a friend. I expect that this reaction is fairly common. This implies that by doing small favors for each other, friends can generate happiness for both of them even when the amount of hedons sacrificed by one (not counting the friend-helping bonus) is similar to the amount of hedons gained by the other. However, this happiness bonus for helping a friend is bounded, and grows sublinearly with respect to the amount of good done to the friend. In terms of evolutionary psychology, this makes sense: seeking out cheap ways to signal loyalty sounds like a decent strategy for getting and keeping allies.
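One functional form with both of those properties (this particular shape is my own illustration, not something the argument commits to) is a saturating exponential:

bonus(g) = B(1 - e^{-g/s})

where g is the amount of good done for the friend, B is the cap on the bonus, and s sets how quickly it saturates: the bonus grows roughly linearly for small favors and levels off near B for large ones.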
I don't think that this tells the whole story. If a friend had enough at stake, I would sacrifice much more for them than could be reimbursed by the happiness bonus for helping a friend (plus the happiness penalty I would otherwise absorb from knowing I had abandoned a friend), because I do actually care about people. Again, I would expect that most other people would act this way as well. But it seems likely that most favors people do for each other are primarily motivated by the personal happiness they can get from knowing that they've helped a friend, rather than by directly caring about how happy their friends are.
Silicon Valley AI/Machine Learning study group
I am organizing a study group for people in the Silicon Valley area who are taking the Stanford online AI or Machine Learning classes. I posted this to the Tortuga LW meetup google group, but in case there are people here who are interested in such a study group but are not subscribed to it, I'm cross-posting it here. Anyway, I've made a google group for the study group. Please join if you are interested: http://groups.google.com/group/silicon-valley-ai-ml
Fiction: Letter from the End
http://alex.mennen.org/LetterFromTheEnd.pdf
I thought some LW-ers might find this interesting.
Assumption of positive rationality
Let's pretend, for the sake of simplicity, that all belief-holding entities are either rational or irrational. Rational entities have beliefs that correlate well with reality and update their beliefs properly on evidence. Irrational entities have beliefs that do not correlate with reality at all and update their beliefs randomly. Now suppose Bob wants to know the probability that he is rational. He estimates that someone whose thought process seems, from the inside, the way his does is 70% likely to be rational and 30% likely to be irrational. Unfortunately, this does not help much. If Bob is irrational, then his estimate is useless. If Bob is rational, then, after updating on the fact that a randomly selected Bob-like entity is rational, we can estimate that the probability of another randomly selected Bob-like entity being rational is higher than 70% (the exact value depending on the uncertainty about what percentage of Bob-like entities are rational). But Bob doesn't care whether a randomly selected Bob-like entity is rational; he wants to know whether he is rational. And conditional on Bob's attempts to figure that out being effective, the probability of that is 1 by definition. Conditional on Bob being irrational, he cannot give meaningful estimates of the probability of much of anything. Thus, even if we ignore the difficulty of coming up with a prior, if Bob tries to evaluate evidence regarding whether or not he is rational, he ends up with:
P(evidence given Bob is rational) = x (he can figure it out)
P(evidence given Bob is irrational) = ?
I am not aware of any good ways to do Bayesian reasoning with question marks. It seems that Bob cannot meaningfully estimate the probability that he is rational. However, in a decision theoretic sense, this is not really an issue for him, because Bob cannot be an effective decision agent if his beliefs about how to achieve his objectives are uncorrelated with reality, so he has no expected utility invested in the possibility that he is irrational. All he needs are probabilities conditional on him being rational, and that's what he has.
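To make the obstruction concrete, here is the update Bob would like to perform, written as an ordinary application of Bayes' theorem (using x and the question mark from the two lines above):

P(rational | evidence) = x * P(rational) / (x * P(rational) + ? * P(irrational))

With the likelihood under the irrational hypothesis undefined, the posterior is undefined no matter what prior Bob chooses.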
This does not seem to extend well to further increases in rationality. If you act on the assumption that you are immune to some common cognitive bias, you will just fail at life. However, I can think of one real-life application of this principle: the possibility that you are a Boltzmann brain. A Boltzmann brain would have no particular reason to have correct beliefs or good algorithms for evaluating evidence. When people talk about the probability that they are a Boltzmann brain, they often mention things like the fact that our sensory input is far more well-organized than it should be for almost all Boltzmann brains, but if you are a Boltzmann brain, then how are you supposed to know how well-organized your visual field should be? Is there any meaningful way someone can talk about the probability of em being a Boltzmann brain, or does ey just express all other probabilities as conditional on em not being a Boltzmann brain?
Creationism's effect on the progress of our understanding of evolution
Lynn Margulis argues that natural selection cannot provide a powerful enough evolutionary force to account for the punctuated equilibrium demonstrated in the fossil record. She proposes as an alternative that evolution is driven by changes in symbiotic relationships. I'm not a biologist, and I don't understand what exactly her theory means, so I'm not going to try to argue for or against it, but it got me thinking:
Evolutionary biologists cannot afford to let Margulis's theory become well-known and accepted as a mainstream theory, because that would create a rift in the pro-evolution camp, and creationists would be able to exploit this by combining Margulis's argument that natural selection cannot account for punctuated equilibrium with arguments by Neo-Darwinists against Margulis's theory to support their claim that evolution is false. This would be effective because many people would not understand that "we do not understand everything about how evolution works" does not imply "creationism is correct". Thus, many evolutionary biologists might feel that they have to be very careful to look like they do know everything about how evolution works. This could make it more difficult for them to spot aspects in which their assumptions about evolution are mistaken. Maybe the biggest damage caused by creationism is that it suppresses legitimate criticism of the current accepted models of evolution, besides spreading false information to the general public.
Again, I'm not arguing in favor of Margulis's theory in particular, but the statement "There exists at least one false fact about evolutionary biology that is accepted as true by a consensus of researchers in that field" seems fairly likely to be true.
Social ethics vs decision theory
It seems to me that when someone says "ethics" on lesswrong, ey usually means something along the lines of decision theory. When an average person says "ethics", ey is usually referring to a system of intuitions and social pressures designed to influence the behavior of members of a group. I think that a lot of the disagreement regarding ethics (e.g. consequentialism vs deontology) is rooted in a failure to properly distinguish between decision theory and what society pressures people to do. Most lesswrong users probably understand the distinction fairly clearly, but we only ever talk about decision theory. Why don't we talk about the social meaning of ethics?
Resolving the unexpected hanging paradox
The unexpected hanging paradox: The warden tells a prisoner on death row that he will be executed on some day in the following week (last possible day is Friday) at noon, and that he will be surprised when he gets hanged. The prisoner realizes that he will not be hanged on Friday, because that being the last possible day, he would see it coming. It follows that Thursday is effectively the last day that he can be hanged, but by the same reasoning, he would then be unsurprised to be hanged on Thursday, and Wednesday is the last day he can be hanged. He follows this reasoning all the way back and realizes that he cannot be hanged any day that week at noon without him knowing it in advance. The hangman comes for him on Wednesday, and he is surprised.
Supposedly, even though the warden's statement to the prisoner was paradoxical, it ended up being true anyway. However, if the prisoner is no better at making inferences than he is in the problem, the warden's statement is true and not paradoxical; the prisoner was executed at noon within the week, and was surprised. This just shows that you can mess with the minds of people who can't make inferences properly. Nothing new there.
If the prisoner can evaluate the warden's statement properly, then the prisoner follows the same logic, realizes that he will not be hanged at noon within the week, remembers that the warden told him that he would be, and concludes that the warden's statements must be unreliable, and does not use them to predict actual events with confidence. If the hangman comes for him at noon any day that week, he will be unsurprised, even though he is not confident that he will be executed that week at all either. The warden's statement is then false and unparadoxical. This is similar to the one-day analogue, where the warden says "You will be executed tomorrow at noon, and will be surprised" and the prisoner says "wtf?".
Now let's assume that the prisoner can make these inferences, the warden always tells the truth, and the prisoner knows this. Well then, yes, that's a paradox. But assigning 100% probability to each of two propositions that contradict each other completely destroys any probability distribution, making the prisoner still unable to make predictions, and thus still not letting the warden's statement be both paradoxical and correct.
If someone actually tried the unexpected hanging paradox, the closest simple model of what would actually be going on is probably that the warden chose a probability distribution so that, if the prisoner knew what the distribution was, the prisoner's average expected assessment of the probability that he is about to get executed on the day that he does is minimized. This is a solvable and unparadoxical problem.
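Here is a minimal numerical sketch of that optimization, under my own way of making "average expected assessment" precise (the prisoner's conditional probability of execution on the day it actually happens, given that it hasn't happened yet); the formalization and the code are an assumption, not part of the original puzzle:

```python
# Sketch: the warden picks a distribution p over the 5 days to minimize the
# prisoner's expected assessed probability of being executed on the day he
# actually is, assuming the prisoner knows p.
import numpy as np
from scipy.optimize import minimize

DAYS = 5  # Monday through Friday

def expected_assessment(p):
    p = np.asarray(p, dtype=float)
    tails = np.cumsum(p[::-1])[::-1]        # P(not yet executed by the start of day d)
    hazards = p / np.maximum(tails, 1e-12)  # prisoner's assessment each day, given he is still alive
    return float(np.sum(p * hazards))       # averaged over the day the execution actually happens

result = minimize(
    expected_assessment,
    x0=np.full(DAYS, 1.0 / DAYS),           # start from a uniform distribution over the week
    bounds=[(0.0, 1.0)] * DAYS,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
print(np.round(result.x, 3))   # the warden's least-predictable schedule
print(round(result.fun, 3))    # the minimized expected assessment
```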
Could/would an FAI recreate people who are information-theoretically dead by modern standards?
If someone gets cremated or buried long enough for eir brain to fully decompose into dirt, it becomes extremely difficult to revive em. Nothing short of a vastly superhuman intelligence would have a chance of doing it. I suspect that it would be possible for a superintelligence to do it, but unless there's a more efficient way to do it, it would require recomputing the Earth's history from the time the AGI is activated back to the death of the last person it intends to save. Not only does this require immense computational resources that could be used to the benefit of people who are still alive, it also requires simulating people experiencing pain (backwards). On the other hand, this saves people's lives. Does anyone have any compelling arguments on why an FAI would or would not recreate me if I die, decompose, and then the singularity occurs a long time after my death?
Why do I want to know? Well, aside from the question being interesting in its own right, it is an important factor in deciding whether or not cryonics is worthwhile.