Open Thread: May 2010
You know what to do.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I am thinking of making a top-level post criticizing libertarianism, in spite of the current norm against discussing politics. Would you prefer that I write the post, or not write it?
Upvoted your comment for asking in the first place.
If your post was a novel explanation of some aspect of rationality, and wasn't just about landing punches on libertarianism, I'd want to see it. If it was pretty much just about criticizing libertarianism, I wouldn't.
I say this as someone very unsympathetic to libertarianism (or at least what contemporary Americans usually mean by 'libertarianism') - I'm motivated by a feeling that LW ought to be about rationality and things that touch on it directly, and I set the bar high for mind-killy topics, though I know others disagree with me about that, and that's OK. So, though I personally would want to downvote a top-level post only about libertarianism, I likely wouldn't, unless it were obnoxiously bare-faced libertarian baiting.
I agree on most counts.
However, I'd also enjoy reading it if it were just a critique of libertarianism but done in an exceptionally rational way, such that if it is flawed, it will be very clear why. At minimum, I'd want it to explicitly state what terminal values or top-level goals it is assuming we want a political system to maximize, consider only the least convenient possible interpretation of libertarianism, avoid talking about libertarians too much (i.e. avoid speculating on their motives and their psychology; focus as much as possible on the policies themselves), separate it from discussion of alternatives (except insofar as is necessary to demonstrate that there is at least one system from which we can expect better outcomes than libertarianism), not appear one-sided, avoid considering it as a package deal whenever possible, etc.
That standard sounds pretty weird. If it is so clear that it is flawed, wouldn't you expect it to be clear to the author and thus not posted? Perhaps you mean clear what your core disagreement is?
Not enough information to answer. I will upvote your post if I find it novel and convincing by rationalist lights. Try sending draft versions to other contributors that you trust and incorporate their advice before going public. I can offer my help, if being outside of American politics doesn't disqualify me from that.
I will vote it down unless you say something that I have not seen before. I think that it was a good idea to not make LW a site for rehearsing political arguments, but if you have thought of something that hasn't been said before and if you can explain how you came up with it then it might be a good reasoning lesson.
I will vote it up to cancel the above downvote, to encourage you to make the post in case the threat of downvoting scares you off.
I will only vote it up if there's something I haven't seen before, but will only vote it down if I think it's dreadful.
We may not be ready for it yet, but at some point we need to be able to pass the big test of addressing hard topics.
ergh.... after the recent flamewar I was involved in, I had resolved to not allow myself to get wrapped up in another one, but if there's going to be a top level post on this, I don't realistically see myself staying out of it.
I'm not saying don't write it though. If you do, I'd recommend you let a few people you trust read it over first before you put it up, to check for anything unnecessarily inflammatory. Also, what Blueberry said.
I'd love to read it, though I may well disagree with a lot of it. I'd prefer it if it were kept more abstract and philosophical, as opposed to discussing current political parties and laws and so forth: I think that would increase the light-to-heat ratio.
I'm interested.
If the future of the universe is a 'heat death' in which no meaningful information can be stored, and in which no meaningful computation is possible, what will it matter if the singularity happens or not?
Ordinarily, we judge the success of a project by looking at how much positive utility has come of it.
We can view the universe we live in as such a project. Engineering a positive singularity looks like the only really good strategy for maximizing the expression of complex human values (simplified as 'utility') in the universe.
But if the universe reaches a final heat death, so that no intelligent life exists, and there is no memory and no record of anything, what do the contents of the antecedent eons count for? There is no way to tell if the-universe-which-resulted-in-heat-death saw the rise of marvelous intelligence and value or remained empty and unobserved.
What is the utility of a project after all of its participants, and all records and memory of it, are utterly destroyed?
The pragmatic answer is simply 'carpe diem': make the best of this finite existence. This is what people did long before the ideas of the singularity and transhumanism were formulated.
Transhumanist beliefs, including the prospect of 'immortality' or transcendence seem to be a way in which some cope with their fear of death. But I fail to see why death should be any less gloomy a prospect for a 3^^^3 year old being than it is for a 30 year old. By definition, one cannot 'reminisce' about one's accumulated positive experiences after death, so in one sense a 3^^^3 year old has actually lost more: vastly more information has been destroyed!
So, in short, I struggle to see a rationale for my intuitive belief that surviving into deep time is truly better than a natural human lifespan, for if heat death is inevitable, as seems to be the case, the end result--the final tally of utils accumulated--is exactly the same. 0.
The problem with this is that it assumes we only care about the end state.
Is it rational for a decision procedure to place great value on the interim state, if the end state contains absolutely no utility?
Does caring about interim states leave you open to Dutch books?
This is a philosophical question, not a rational one. Terminal values are not generated by rational processes; that's why they're terminal values. The metaethics sequence, especially existential angst factory and the moral void, should expand on this sufficiently.
Has anyone read "Games and Decisions: Introduction and Critical Survey" by R. Duncan Luce and Howard Raiffa? Any thoughts on its quality?
I have a cognitive problem and I figured someone might be able to help with it.
I think I might have trouble filtering stimuli, or something similar. A dog barking, an ear ache, loud people, or a really long day can break me down. I start to have difficulty focusing. I can't hold complex concepts in my head. I'll often start a task, and quit in the middle because it feels too difficult and try to switch to something else, ultimately getting nothing done. I'll have difficulty deciding what to work on. I'll start to panic or get intimidated. It's really an issue.
I've found two things that help:
Music is good at filtering out noise and helping me focus. However, sometimes I can't listen to it, or it isn't enough.
The other thing is to make an extremely granular task list and then follow it without question. The tasks have to be really small and seem manageable.
Anyone have any suggestions? I'm not neurotypical in the broader sense, but I don't believe I fall on the autism spectrum.
I have similar sensory issues on occasion and believe them to be a component of my autism, but if you don't have other features of an ASD then this could just be a sensory integration disorder. When it's an auditory processing issue, I find that listening to loud techno or other music with a strong beat helps more than other types of music, and ear-covering headphones help filter out other noise. I'm more often assaulted by textures, which I have to deal with by avoiding contact with the item(s).
As for the long day, that sounds like a matter of running out of (metaphorical) spoons. Paying attention to what activities drain or replenish said spoons, and choosing spoon-neutral or spoon-positive activities whenever they're viable options, is the way to manage this.
Thanks for the advice. The only other symptom I have is some problems with my social coprocessor, but it doesn't feel like it fits an ASD.
This seems like a potentially significant milestone: 'Artificial life' breakthrough announced by scientists
Given that this now opens the door to artificially designed and deployed harmful viruses, perhaps unfriendly AI falls a few notches on the existential risk ladder.
I remember hearing a few anecdotes about abstaining from food for a period of time (fasting) and improved brain performance. I also seem to recall some pop-sci explanation involving detoxification of the body and the like. Today something triggered my interest in this topic again, but a quick Google search did not return much on the topic (fasting is drowned in religious references).
I figure this is well within LW scope, so does anyone have any knowledge or links that offer more concrete insight into (or rebuttal of) this notion?
Rolf Nelson's AI deterrence doesn't work for Schellingian reasons: the Rogue AI has incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.
I agree that AI deterrence will necessarily fail if:
All AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and
any deterrence simulation counts as a threat.
Why do you believe that both or either of these statements are true? Do you have some concrete definition of 'threat' in mind?
I don't believe statement 1 and don't see why it's required. After all, we are quite rational, and so is our future FAI.
The notion of "first mover" is meaningless, where the other player's program is visible from the start.
In another comment I coined (although not for the first time, it turns out) the expression "Friendly Human Intelligence". Which is simply geekspeak for how to bring up your kids right and not make druggie losers, wholesale killers, or other sorts of paperclippers. I don't recall seeing this discussed on LessWrong. Maybe most of us don't have children, and Eliezer has said somewhere that he doesn't consider himself ready to create new people, but as the saying goes, if not now, when, and if not this, what?
I don't have children and don't intend to. I have two nephews and a niece, but have not had much to do with their lives, beyond sending them improving books for birthdays and Christmas. I wonder if LessWrongers, with or without children, have anything to say on how to raise children to be rational non-paperclippers?
I think that question is a conversation stopper because those who do not have children don't feel qualified, and those that do have children know what a complex and tricky question it is. Personally I don't think there is a method that fits all children and all relationships with them. But... you might try activities rather than presents. 'Oh cool, uncle is going to make a video with us and we're going to do it at the zoo.' If you get the right activity (it depends on the child), they will remember it and what you did and said for years. I had an uncle that I only saw a few times, but he showed me how to make and throw a boomerang. He explained why it returned. I have been thanking him for that day for 60 years.
I don't have children and I didn't try answering the question because I knew what a complex and tricky question it is - I don't expect it to be much different than the bigger question of how to improve human rationality for people in general.
Impossible motion: magnet-like slopes
http://illusioncontest.neuralcorrelate.com/2010/impossible-motion-magnet-like-slopes/
http://www.nature.com/news/2010/100511/full/news.2010.233.html
Criminal profiling, good and bad
Article discusses the shift from impressive-looking guesswork to use of statistics. Also has an egregious example of the guesswork approach privileging the hypothesis.
HELP NEEDED Today if at all possible.
So I'm working on a Bayesian approach to the Duhem-Quine problem. Basically, the problem is that an experiment never tests a hypothesis directly but only the conjunction of the hypothesis and other auxiliary assumptions. The standard method for dealing with this is the expansion
P(h|e) = P(h & a|e) + P(h & -a|e) (so if e falsifies h&a, you just use the h&-a term)
So if e falsifies h&a you end up with:
P(h|e) = P(e|h&-a) * P(h&-a) / P(e)
This guy Strevens objects on the grounds that e can impact h without impacting a. His example:
Am I crazy, or shouldn't that information already be contained in the above formula? Specifically, the term P(e|h&-a) should be higher than it would otherwise be.
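A quick toy computation (all priors and likelihoods made up) shows what I mean: when e falsifies h&a, all of e's impact on h flows through the P(e|h&-a) term, exactly as the formula above says.

```python
# Toy Bayes update for the Duhem-Quine case where evidence e falsifies
# the conjunction h&a. All numbers are illustrative, not from Strevens.

# Joint priors over hypothesis h and auxiliary assumption a.
p = {
    ("h", "a"): 0.4,
    ("h", "-a"): 0.1,
    ("-h", "a"): 0.4,
    ("-h", "-a"): 0.1,
}

# Likelihoods P(e | .). e falsifies h&a, so that entry is 0. If e bears
# on h independently of a (Strevens's worry), that shows up here as a
# larger P(e | h&-a).
like = {
    ("h", "a"): 0.0,
    ("h", "-a"): 0.8,
    ("-h", "a"): 0.3,
    ("-h", "-a"): 0.3,
}

p_e = sum(like[k] * p[k] for k in p)
posterior_h = sum(like[k] * p[k] for k in p if k[0] == "h") / p_e
posterior_a = sum(like[k] * p[k] for k in p if k[1] == "a") / p_e

# Since P(h&a|e) = 0, P(h|e) collapses to P(e|h&-a) * P(h&-a) / P(e).
print(posterior_h, posterior_a)
```

With these numbers P(h|e) is about 0.35 while P(a|e) is about 0.52, and varying P(e|h&-a) moves the two differently - which seems to be the asymmetric impact Strevens is after.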
Today the Pope finally admitted there has been a problem with child sex abuse by Catholic priests. He blamed it on sin.
What a great answer! It covers any imaginable situation. Sin could be the greatest tool for bad managers everywhere since Total Quality Management.
"Sir, your company, British Petroleum, is responsible for the biggest environmental disaster in America this decade. How did this happen, and what is being done to prevent it happening again?"
"Senator, I've made a thorough investigation, and I'm afraid there has been sin in the ranks of British Petroleum. BP has a deep need to re-learn penance, to accept purification, to learn on one hand forgiveness but also the need for justice."
"Thank you, Mr. Hayward. I'm glad you're on top of the situation."
I wonder if I can use this at work.
That sounds like the kind of remark that goes out of its way to offend several categories of people at once. :)
But in that category the gold standard remains Evelyn Waugh's “now that they no longer defrock priests for sexual perversities, one can no longer get any decent proofreading.”
— Chad Orzel
This seems possibly broadly applicable to me; e.g. replace “crank” with “fanboy”.
Science.
To me it is a process, a method, an outlook on life. But so often it is invoked as an authority: "Science says tomatoes are good for you".
It should be used to encourage rational thinking, clarity of argument and assumption, and rigorous unbiased testing. The pursuit of knowledge and truth. Instead it is often seen as a club, to which you either belong by working in a scientific profession, or you do not.
As a child of a mixed-religion household I felt like an outcast from religion from an early age - it didn't matter that I had beliefs of my own; if I didn't belong to a specific club then I didn't belong at all. Very few religious people I met encouraged me to have faith regardless of what that faith was.
I see a scientific approach to life and its mysteries as my way of forming my own "map of the territory", as others perhaps use religion, and I hope that as promoters of rationality we can encourage scientific principles in others rather than making them feel like outcasts for not being in our "club".
"Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners"
Likely the effects were due to the fish oil. This study was replicating similar results seen in a UK youth prison.
http://www3.interscience.wiley.com/journal/123213582/abstract?CRETRY=1&SRETRY=0
Also see this other study of the use of fish oil to prevent the onset of schizophrenia in a population of youth that had had one psychotic episode or a similar reason to seek treatment. The p-values they got are ridiculous -- fish oil appears to be way more effective in reality than I would have expected.
http://archpsyc.ama-assn.org/cgi/content/short/67/2/146
Take your fish oil, people.
What about snake oil?
I don't know if you're kidding or scoffing, but I will give a straight answer.
Richard Kunin, M.D., once analyzed snake oil and found that it is 20 or 25% omega-3 fatty acids.
The link is giving me trouble. Can you paste the whole abstract?
Effects of nutritional supplements on aggression, rule-breaking, and psychopathology among young adult prisoners
ABSTRACT Objective: In an earlier study, improvement of dietary status with food supplements led to a reduction in antisocial behavior among prisoners. Based on these earlier findings, a study of the effects of food supplements on aggression, rule-breaking, and psychopathology was conducted among young Dutch prisoners.
Methods: Two hundred and twenty-one young adult prisoners (mean age=21.0, range 18-25 years) received nutritional supplements containing vitamins, minerals, and essential fatty acids or placebos, over a period of 1-3 months.
Results: As in the earlier (British) study, reported incidents were significantly reduced (P=.017, one-tailed) in the active condition (n=115), as compared with placebo (n=106). Other assessments, however, revealed no significant reductions in aggressiveness or psychiatric symptoms. Conclusion: As the incidents reported concerned aggressive and rule-breaking behavior as observed by the prison staff, the results are considered to be promising. However, as no significant improvements were found in a number of other (self-reported) outcome measures, the results should be interpreted with caution. Aggr. Behav. 36:117-126, 2010. © 2009 Wiley-Liss, Inc.
Rationality comix!
Hover over the red button at the bottom (to the left of the RSS button and social bookmarking links) for a bonus panel.
Edit: "Whoever did the duplication" would be a better answer than "The guy who came first", admittedly. The duplicate and original would both believe themselves to be the original, or, if they are a rationalist, would probably withhold judgment.
More importantly, the question is terribly phrased - or just terrible. The philosopher could have started with "If you met the 'twins' afterwards, could someone tell them apart without asking anyone?", which has an obvious response of "no", and then followed up with actually interesting questions about, for example, what "memory" exactly is.
That version is a lot funnier, though!
Speaking as an engineer, I'd think he wasn't talking about subjective aspects: "The guy who came first" is the one which was copied (perfectly) to make the clone, and therefore existed before the clone existed.
Kaj_Sotala is doing a series of interviews with people in the SIAI house. The first is with Alicorn.
Edit: They are tagged as "siai interviews".
There's an article in this month's Nature examining the statistical evidence for universal common descent. This is the first time someone has taken the massive amounts of genetic data and applied a Bayesian analysis to determine whether the existence of a universal common ancestor is the best model. Most of what we generally think of as evidence for evolution and shared ancestry is evidence for shared ancestry of large collections, such as mammals or birds, or for smaller groups; some of the evidence is for common ancestry of a phylum. There is prior evidence for universal shared ancestry based on primitive fossils, on the shared genetic code, and on the extreme similarity of genomes across very different species, but this is the first paper to make that last argument mathematically rigorous. Taken in this fashion, the paper more or less concludes that a Bayesian analysis using just the known genetic and phylogenetic data puts the universal common ancestor model as overwhelmingly more likely than other models. (The article is behind a paywall, so until I get back to the university tomorrow I won't be able to comment in any substantial detail, but this looks pretty cool and is a good example of how careful Bayesianism can help make something more precise.)
Ok. Reading the paper now. Some aspects are a bit technical and so I don't follow all of the arguments or genetic claims other than at a broad level. However, the money quote is "Therefore, UCA is at least 10^2,860 times more probable than the closest competing hypothesis." (I've replaced the superscript with a ^ because I don't know how to format superscripts). 10^2860 is a very big number.
What were they using for prior probabilities for the various candidate hypotheses? Uniform? Some form of complexity weighting? Other?
They have hypotheses concerning whether Eukarya, Archaea and Bacteria share a common ancestor or not, or possibly in pairs. All hypotheses were given equal prior likelihood.
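With equal priors, the posterior odds between hypotheses reduce to a plain likelihood ratio, so the quoted 10^2,860 figure is just the gap in log-likelihoods between the best and second-best models. A minimal sketch (the log-likelihood values below are illustrative placeholders, not the paper's actual numbers):

```python
# Under a uniform prior over candidate ancestry hypotheses, the prior
# terms cancel and posterior odds equal likelihood ratios.
# These log10-likelihoods are made up for illustration.
log10_lik = {
    "UCA": -1000.0,        # single universal common ancestor
    "two_trees": -3860.0,  # e.g. one domain with a separate origin
}

# log10 Bayes factor for UCA over the closest competing hypothesis.
log10_bf = log10_lik["UCA"] - max(
    v for k, v in log10_lik.items() if k != "UCA"
)
print(f"UCA is 10^{log10_bf:.0f} times more probable")
```

The paper's actual number comes out of fitting the competing phylogenetic models to the sequence data; the arithmetic above is only the final "equal priors, compare likelihoods" step.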
I take it "a universal common ancestor" doesn't mean a universal common ancestor, but means a universal common ancestral group?
As I said, I haven't had a chance to actually read the article itself, but as I understand it, this would indicate a universal common ancestor group of nearly genetically identical organisms. While there is suspicion that horizontal gene transfer was more common in the past than it is now, this supports the general narrative of all life arising from a single organism. These sorts of techniques won't distinguish between that and life arising from several genetically identical organisms.
I don't think that the math in Aumann's agreement theorem says what Aumann's paper says that it says. The math may be right, but the translation into English isn't.
Aumann's agreement theorem says:
Let N1 and N2 be partitions of Omega ... Ni is the information partition of i; that is, if the true state of the world is w [an element of Omega], then i is informed of that element Pi(w) of Ni that contains w.
Given w in Omega, an event E is called common knowledge at w if E includes that member of meet(N1, N2) that contains w.
Let A be an event, and let Qi denote the posterior probability p(A|Ni) of A given i's information; i.e., Qi(w) [ = p(A | Pi(w)) ] = p(A ^ Pi(w)) / p(Pi(w)).
Proposition: Let w be in Omega ... If it is common knowledge at w that Q1 = q1 and Q2 = q2, then q1 = q2.
Proof: Let P be the member of meet(N1, N2) that contains w. Write P = union over all j of Pj, where the Pj are disjoint members of P1. Since Q1 = q1 throughout P, we have p(A ^ Pj) / p(Pj) = q1 for all j; hence p(A ^ Pj) = q1p(Pj), and so by summing over j we get p(A^P) = q1p(P). Similarly p(A^P) = q2p(P), and so q1=q2.
meet(N1, N2) is not an intersection; it's a very aggressive union of the subsets in the partitions N1 and N2 of Omega. It's generated this way:
P := P1(w)
repeat until no new block is added:
    for each block B in N1 or N2 with B ∩ P ≠ ∅: P := P ∪ B
Note in particular that P is a member of meet(N1, N2) that contains elements of Omega taken from H2, that are not in H1. To say that Q1 = q1 throughout P means that, for every x in P, Q1(x) = p(A | P1(x)) = q1. This is used to infer that p(A ^ Pj) = p(Pj) q1 for every Pj in N1.
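For concreteness, here is a small sketch of that closure process (the Omega and partitions are toy values I made up, not anything from Aumann's paper):

```python
# Sketch of generating the member of meet(N1, N2) containing w:
# start from {w}, then repeatedly absorb any block of either
# partition that intersects what we have so far.

Omega = set(range(6))
N1 = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
N2 = [frozenset({0}), frozenset({1, 2}), frozenset({3, 4}), frozenset({5})]

def meet_member(w, partitions):
    """Return the member of the meet of `partitions` containing w."""
    P = {w}
    changed = True
    while changed:
        changed = False
        for part in partitions:
            for block in part:
                # Absorb any block that overlaps P but isn't inside it yet.
                if block & P and not block <= P:
                    P |= block
                    changed = True
    return P

P = meet_member(0, [N1, N2])
print(P)
```

With these toy partitions, N2 chains all of N1's blocks together, so the meet member containing w = 0 is all of Omega - which illustrates the point above that P can contain elements well outside P1(w).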
This is a very strange thing to believe, given the initial conditions. The justification (as Robin Hanson pointed out to me) is that "common knowledge at w that Q1=q1" is defined to mean just that: Q1(x) = q1 for all x in the member P of the meet(N1,N2) containing w.
Now comes the translation into English. Aumann says that this technical definition of "common knowledge at w of the posteriors" means the same as "agent 1 and agent 2 both know both of their posteriors". And the justification for that is this: "Suppose now that w is the true state of the world, P1 = P1(w), and E is an event. To say that 1 'knows' E means that E includes P1. To say that 1 knows that 2 knows E means that E includes all P2 in N2 that intersect P1. ..." et cetera, to closure.
And this, I think, is wrong. If 1 knows that 2 knows E, 1 knows that E includes P1 union some P2 that intersects with P1, not that E includes P1 union all P2 that intersect with P1. So the "common knowledge" used in the theorem doesn't mean the same thing at all that we mean in English when we say they "know each others' posteriors".
Also, Aumann adds after the proof that it implicitly assumes that the agents know each others' complete partition functions over all possible worlds. Which is several orders of magnitude of outlandish; so the theorem can never be applied to the real world.
I have an idea that may create a (small) revenue stream for LW/SIAI. There are a lot of book recommendations, with links to amazon, going around in LW, and many of them do not use an affiliate code. Having a script add a LessWrong affiliate code to those links that don't already have one may lead to some income, especially given that affiliate codes persist and may get credited for unrelated purchases later in the day.
I believe Posterous did this, and there was a minor PR hubbub about it, but the main issue was that they did not communicate the change properly (or at all). Also, given that LW/SIAI are not-for-profit endeavours, this is much easier to swallow. In fact, if it can be done in an easy-to-implement way, I think quite a few members with popular blogs may be tempted to apply this modification to their own blogs.
Does this sound viable?
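The rewrite step itself looks simple, for what it's worth. A hypothetical sketch - the "lesswrong-20" tag is a made-up placeholder, not an actual SIAI affiliate code, and the host check is deliberately rough:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Placeholder affiliate code - not a real LW/SIAI tag.
AFFILIATE_TAG = "lesswrong-20"

def add_affiliate_tag(url: str) -> str:
    """Add our affiliate tag to Amazon links that don't already have one."""
    parts = urlparse(url)
    host = parts.netloc.lower()
    if host != "amazon.com" and not host.endswith(".amazon.com"):
        return url  # not an Amazon link; leave untouched
    query = parse_qs(parts.query)
    if "tag" in query:
        return url  # respect an existing affiliate code
    query["tag"] = [AFFILIATE_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

print(add_affiliate_tag("https://www.amazon.com/dp/0465026567"))
```

Leaving any existing tag parameter untouched, and announcing the change, would go a long way toward avoiding the Posterous-style complaints.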
Yes, under two conditions:
It is announced in advance and properly implemented.
It does not delete other affiliate codes if links are posted with affiliate codes.
Breaking both these rules is one of the many things which Livejournal has done wrong in the last few years, which is why I mention them.
The moral life of babies. This is an article that also recently appeared in the New York Times Magazine.
It covers various scientific experiments to explore the mental life of babies, finding evidence of moral judgements, theory of mind, and theory of things (e.g. when two dolls are placed behind a screen, and the screen is removed, 5-month-old babies expect to see two dolls).
Unlike many psychological experiments which produce more noise than signal, "these results were not subtle; babies almost always showed this pattern of response."
It also discusses various responses to the existence of innate morality, and the existence of "higher" adult morality -- caring about people who cannot possibly be of any benefit to oneself.
How should one reply to the argument that there is no prior probability for the outcome of some quantum event that already happened and split the world into two worlds, each with a different outcome to some test (say, a "quantum coin toss")? The idea is that if you merely sever the quantum event and consider different outcomes to the test (say, your quantum coin landed heads), and consider that the outcome could have been different (your quantum coin could have landed tails), there is no way to really determine who would be "you." Is it necessary to apply the SSA or some form of the SSSA? To me it seems that it should be allowed to rigidly maintain your identity while allowing the outcome of the quantum coin toss to vary across those two worlds. One could then base the prior probability of the coin landing heads in your world on the empirical evidence that quantum coin tosses of that type land heads with frequency 0.5 in any particular instance of a world history.
Cool paper: When Did Bayesian Inference Become “Bayesian”?
http://ba.stat.cmu.edu/journal/2006/vol01/issue01/fienberg.pdf
Most people's intuition is that assassination is worse than war, but simple utilitarianism suggests that war is much worse.
I have some ideas about why assassination isn't a tool for getting reliable outcomes-- leaders are sufficiently entangled in the groups that they lead that removing a leader isn't like removing a counter from a game, it's like cutting a piece out of a web which is going to rebuild itself in not quite the same shape-- but this doesn't add up to why assassination could be worse than war.
Is there any reason to think the common intuition is right?
TLDR: “War” is the inter-group version of “duel” (ie, lawful conflict). “Assassination” is the inter-group version of “murder” (ie, unlawful conflict).
My first “intuition about the intuition” is that it’s a historical consequence: During most of history, things like freedom, and power and responsibility for enforcement of rules when conflicts (freedom vs. freedom) occur, were stratified. Conflicts between individuals in a family are resolved by the family (e.g. by the head thereof), conflicts between families (or individuals in different families) by tribal leaders or the like. During feudalism the “scale” was formalized, but even before that we had a long series: family → group → tribe → city → barony → kingdom → empire.
The key about this system is that attempts to “cross the borders” in this system, for instance punishing someone from a different group directly rather than invoking punishment from that group’s leadership is seen as an intrusion in that group’s affairs.
So assassination becomes seen as the between-group version of murder: going around the established rules of society. That’s something that is selected against in social environments (and has been discussed elsewhere).
By contrast, war is the “normal” result when there is no higher authority to recurse to, in a conflict of groups. Note that, analogously, for much of history duels were considered correct methods of conflict resolution between some individuals, as long as they respected some rules. So as long as, at least in theory, there are laws of war, war is considered a direct extension of that instinct. Assassination is seen as breaking rules, so it’s seen differently.
A few other points:
What an excellent analysis. I voted up. The only thing I can think of that could be added is that making a martyr can backfire.
Who thinks assassination is worse than war?
I could make an argument for it, though: If countries engaged regularly in assassination, it would never come to a conclusion, and would not reduce (and might increase) the incidence of war. Phrasing it as "which is worse" makes it sound like we can choose one or the other. This assumes that an assassination can prevent a war (and doesn't count the cases where it starts a war).
It seems to me that the vast majority of people think of war as a legitimate tool of national policy, but are horrified by assassination.
I've always assumed that the norm against assassination, causally speaking, exists mostly due to historical promotion by leaders who wanted to maintain a low-assassination equilibrium, now maintained largely by inertia. (Of course, it could be normatively supported by other considerations.)
It makes sense to me that people would oversimplify the effect of assassination in basically the way you describe, overestimating the indispensability of leaders. I know I've seen a study on the effects of assassination on terrorist groups, but can't find a link or remember the conclusions.
You know, lots of people claim to be good cooks, or know good cooks, or have an amazing recipe for this or that. But Alicorn's cauliflower soup... it's the first food that, upon sneakily shoveling a fourth helping into my bowl, made me cackle maniacally like an insane evil sorcerer high on magic potions of incredible power, unable to keep myself from alerting three other soup-enjoying people to my glorious triumph. It's that good.
Does Alicorn's presence prohibit me from applying for an SIAI fellowship?
I second Anna, but I will also note that we plan on moving into a biggg house or possibly two big houses, and this would hopefully minimize friction in the event that two Visiting Fellows don't quite get along. I hope you apply!
Nope. All applications are welcome.
Awwwww :D
PS: If this endorsement of house food quality encourages anyone to apply for an SIAI fellowship, note your inspiration in the e-mail! We receive referral rewards!
Would you be willing to post the recipe?
http://improvisationalsoup.wordpress.com/2009/05/31/cream-of-cauliflower-soup/
I have taken to also adding two or three parsnips per batch.
Can you describe that “better than bouillon” thing, for us non-US (I assume) readers?
Also, how much cream do you use, and what’s “a ton” of garlic? (In my kitchen, that could mean half a pound — we use garlic paste as ketchup around here...)
Better than Bouillon is paste-textured reduced stock. It's gloopy, not very pretty, and adds excellent flavor to just about any savory dish. Instead of water and BTB, you could use a prepared stock, or instead of just the BTB, use a bouillon cube, but I find they have dramatically inferior flavors unless you make your own stock at home. I haven't tried cooking down a batch of homemade stock to see if I could get paste, but I think it would probably take too long.
I guess on the cream until the color looks about right. I use less cream if I overshot on the water when I was cooking the veggies, more if it's a little too thick.
"A ton" of garlic means "lots, to taste". I'd put one bulb in a batch of cauliflower soup mostly because it's convenient to grab one bulb out of a bag of garlic bulbs. If you're that enthusiastic about garlic, go ahead and use two, three, four - it's kind of hard to overdo something that wonderful.
How long are the fellowships for?
As long as three months (and the possibility of sticking around after if everything goes swimmingly), but you could come for considerably shorter if you have scheduling constraints. We've also been known to have people over for a day or two just to visit and see how cool we are. Totally e-mail Anna if you have any interest at all! Don't be shy! She isn't scary!
Entertainment for out-of-work Judea Pearl fans: go to your local job site and search on the word "causal", and then imagine that all those ads aren't just mis-spelling the word "casual"...
No-name terrorists now CIA drone targets
http://www.cnn.com/2010/TECH/05/07/wired.terrorist.drone.strikes/index.html?hpt=C1
Is there a consensus on whether or not it's OK to discuss not-specifically-rationality-related politics on LW?
Doesn't bother me. I think the consensus is that we should probably try and stay at a meta-political level, looking at a much broader picture than that which is discussed on the nightly news. The community is now mature enough that anything political is not automatically taboo.
I posted this not to be political, but because people here are generally interested in killer robots and their escalation of use.
This looks like a very expensive way to kill terrorists: something like $100k per militant, not counting sunk costs such as the $4.5 million price tag per drone. And that's without trying to estimate the cost of civilian deaths.
Related, Obama authorizes assassination of US citizen. I'm amazed how little anybody seems to care.
The people who care are poorly represented by the news and by our political institutions. But they're out there.
I care, and approve, provided that Al-Awlaki can forestall it if he chooses by coming to the US to face charges.
I don't believe in treating everything with the slippery-slope argument. That way lies the madness I saw at the patent office, where every decision had to be made following precedent and procedure with syntactic regularity, without any contaminating element of human judgement.
Something problematic: if you're a cosmopolitan, as I assume most people here are, can you consistently object to assassinations of citizens if you don't object to assassinations of non-citizens?
Probably not, though you might be able to make a case that if a particular non-citizen is a significant perceived threat but there is no legal mechanism for prosecuting them then different rules apply. Most people are not cosmopolitan however and so I am more surprised at the lack of outrage over ordering the assassination of a US citizen than by the lack of outrage over the assassination of non-US citizens.
The drone targeting is worrisome in the very big picture and long term sense of establishing certain kinds of precedents for robotic warfare that might be troubling. The fact that it is happening in Pakistan honestly seems more problematic to me in terms of the badness that comes with not having "clearly defined parties who can verifiably negotiate". Did the US declare war on Pakistan without me noticing? Is Pakistan happy that we're helping them "maintain brutal law and order" in their country by bombing people in their back country? Are there even functioning Westphalian nation states in this area? (These are honest questions - I generally don't watch push media, preferring instead to formulate hypotheses and then search for news or blogs that can answer the hypothesis.)
The assassination story, if true, seems much more worrisome because it would imply that the fuzziness from the so-called "war on terror" is causing an erosion of the rule of law within the US. Moreover, it seems like something I should take responsibility for doing something about because it is happening entirely within my own country.
Does anyone know of an existing political organization working to put an end to the imprisonment and/or killing of US citizens by the US government without formal legal proceedings that include the right to a trial by jury? I would rather coordinate with other people (especially competent experts) if such a thing is possible.
I would ask Amnesty International.
I don't know if they have responded to this specific issue, but the ACLU is working against the breakdown of rule of law in the name of national defense.
Thanks for the link. I have sent them an email asking for advice as to whether this situation is as bad as it seems to be, and if so, what I can do to make things less bad. I have also added something to my tickler file so that on May 21 I will be reminded to respond here with a followup even if there is no response from the ACLU's National Security Project.
I think I have done my good deed for the day :-)
ETA: One thing to point out is that before sending the email I tried googling "Presidential Assassination Program" in Google News, and the subject seems to have had little coverage since then. This was the best followup I could find in the last few days, and it spoke of general apathy on the subject. This led me to conclude that "not enough people had noticed" yet, so I followed through with my email.
Following up for the sake of reference...
I did not get a reply from the ACLU on this subject and just today sent a second email asking for a response. If the ACLU continues to blow me off by June 1st I may try forwarding my unanswered emails to several people at the ACLU (to see whether the blowoff was simply due to incompetence on the part of the one person monitoring the email).
If that doesn't work then I expect I'll try Amnesty International as suggested by Kevin. There will be at least one more comment with an update here, whatever happens, and possibly two or three :-)
This will be my final update on this subject. I received an email from a representative of the ACLU. He apologized for the delayed response and directed me to a series of links that I'm passing on here for the sake of completeness.
First, there is an April 7th ACLU press release about extra-judicial killings of US citizens, that press release notes that an FOIA request had already been filed which appears to ask for the details of the program to see specifically how it works in order to find out if it really violates any laws or not, preparatory to potential legal action.
Second, on April 19th the Washington Post published a letter for the ACLU's Executive Director on the subject. This confirms that the issue is getting institutional attention, recognition in the press, and will probably not "slip through the cracks".
Third, on April 28th the ACLU sent an open letter to President Barack Obama about extrajudicial killings which is the same date that the ACLU's update page for "targeted killings" was last updated. So it seems clear that steps have been taken to open negotiations with an individual human being who has the personal authority to cancel the program.
This appears to provide a good summary of the institutional processes that have already been put in motion to fix the problems raised in the parent posts. The only thing left to consider appears to be (1) whether violations of the constitution will be adequately prevented and (2) to be sure that we are not free riding on the public service of other people too egregiously.
In this vein, the ACLU has a letter writing campaign organized so that people can send messages to elected officials asking that they respect the rule of law and the text of treaties that the US has signed, in case the extra-judicial killings of US citizens are really being planned and accomplished by the executive branch without trial or oversight by the courts.
Sending letters like these may help solve the problem a little bit, is very unlikely to hurt anything, and may patch guilt over free riding :-)
In the meantime I think "joining the ACLU as a dues paying member" just bumped up my todo list a bit.
No, in general I think they are about as unhappy as you might expect US citizens to be if the Chinese government were conducting drone attacks on targets in the US with heavy civilian casualties. This was part of the basis for my prediction last year that there will be a major terrorist attack in the US with a connection to Pakistan. Let's hope that all would-be attackers are as incompetent as Faisal Shahzad.
I don't believe anyone has challenged the truth of the story, it has just not been widely reported or received the same level of scrutiny as the extra-judicial imprisonment and torture conducted by the last administration. The article I linked links to a New York Times piece on the decision. The erosion of the rule of law within the US in response to supposed terrorist threats has been going on ever since 9/11 and Obama has if anything accelerated rather than slowed that process.
I imagine the assassination story would be a bigger deal if the target was still in the US.
It wouldn't happen. They'd arrest him.
Or, to put it another way - it would happen; it just wouldn't be called assassination, because it would be done using standard police procedure, and because other people would get killed. It would be like the standoffs with MOVE, or David Koresh's organization in Waco, or Ruby Ridge.
The word assassination is wrong for all these cases. These kinds of "assassination" are just the logical result of law enforcement. If you're enforcing the law, and you have police and courts and so on; and someone refuses to play along, eventually you have to use force. I don't see that the person being outside or inside America makes a big moral difference, when their actions are having effect inside America. A diplomatic difference, but not a moral difference.
I also think it's funny for people to have moral arguments in a forum where you get labeled an idiot if you admit you believe there are such things as morals.
Perhaps we should be grateful that technology hasn't advanced to the point where we can take these people out non-violently, because then we'd do it a lot more, for more trivial reasons.
Why shouldn't people argue over morals? The mainstream view here is that each person is arguing about what the fully-informed, fully-reflected-upon output of the other person's moral-evaluating computation would be. The presumption is that all of our respective moral-evaluating computational mechanisms would reach the same conclusion on the issue at hand in the limit of information and reflection.
Pakistan does not have anything close to a force monopoly in the region we're attacking. They've as much as admitted that, I believe. I actually think I'm okay with the attacks as far as international law goes.
I always hear this but no one ever tells me just what precedents for robotic warfare they find troubling.
It is a further dehumanization of the process of killing and so tends to undermine any inbuilt human moral repugnance produced by violence. To the extent that you think that killing humans is a bad thing I suggest that is something that should be of concern. It is one more level of emotional detachment for the drone operators beyond what can be observed in the Apache pilots in the recent Wikileaks collateral murder video.
ETA: This Dylan Rattigan clip discusses some of the concerns raised by the Wikileaks video. The same concerns apply to drone attacks, only more so.
I'm looking at the forecast for the next year on CNN Money for Google stock (which will likely be an outdated link very soon). But while it's relevant...
I don't know much economics, but this forecast looks absurd to me. What are the confidence intervals? According to this graph, am I pretty much guaranteed to make vast sums of money simply by investing all of what I have in Google stock? (I'm assuming that this is just an example of the world being mad. Unless I really should buy some stock?) What implications does this sort of thing have on very unsavvy investors who look at graphs like that and instantly invest thousands of dollars? Do they win at everything forever? What am I missing?
It's fairly well established that actively managed funds on average underperform their benchmarks. I'm not aware of specific research on investing based solely on analyst forecasts but I imagine performance would be even worse using such a strategy. Basically, you are right to be skeptical. All the evidence indicates that the best long term strategy for the average individual investor is to invest in a low cost index fund and avoid trying to pick stocks.
ETA: This recent paper appears relevant. They do indeed find that analysts' target prices are inaccurate and appear to suffer from consistent biases.
If we get forums, I'd like a projects section. A person could create a project, which is a form centered around a problem to work on with other people over an extended period of time.
This seems like the sort of activity Google Wave is (was?) meant for.
Self-forgiveness limits procrastination
Pre-commitment Strategies in Behavioral Economics - PowerPoint by Russell James. Not deep, which is sometimes a good thing.
First step in the AI take-over: gather funds. Yesterday's massive stock market spike took place in a matter of minutes, and it looks like it was in large part due to "glitches" in automatic trading programs. Accenture opened and closed at $41/share, but at one point was trading for $0.01/share. Anyone with $1000, lightning reflexes, and insider knowledge could've made $4.1M yesterday. For every $1000 they had.
http://www.npr.org/blogs/money/2010/05/the_market_just_flipped_out_ma.html
Next month: our new overlords reveal themselves?
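A quick back-of-the-envelope check of the $4.1M figure in the comment above (round numbers only; in reality the exchanges cancelled trades filled at those extreme prices, so nobody actually kept this money):

```python
# Hypothetical flash-crash arbitrage: buy Accenture at its momentary
# $0.01 low and sell at its normal ~$41 price. All numbers are the
# round figures from the comment, not actual fills.

stake = 1_000.00   # dollars available
low = 0.01         # momentary crash price per share
high = 41.00       # price before and after the glitch

shares = stake / low        # 100,000 shares for $1000
proceeds = shares * high    # value if sold back at the normal price

print(f"{shares:,.0f} shares -> ${proceeds:,.0f}")  # 100,000 shares -> $4,100,000
```

So the claimed multiple is simply high/low = 4100x per dollar staked, which matches the "$4.1M for every $1000" line.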
In the same vein, a note from the machines:
Many of those trades will be cancelled.
A clever idea, but won't work: "The New York Stock Exchange and the Nasdaq OMX Group say they will cancel trades involving stocks that saw sharp volatility at the height of the market’s steep intraday decline Thursday afternoon."
No idea about the time lag-- my posts show up quickly-- but my intuition says that a fair coin has a 1/2 probability of being heads, and nothing about the experiment changes that.
Nope, new posts should show up immediately (or maybe with a half hour delay or so; I seem to recall that the sidebars are cached, but for far less than two days). Did it appear to post successfully, just not showing up? The only thing I can think of is that you might not have switched the "Post to" menu from "Drafts for neq1" to "LessWrong".
Ah, I think that's it (posted to drafts). Thanks. Not sure how I missed that.
Tough financial question about cryonics: I've been looking into the infinite banking idea, which actually has credible supporters, and basically involves using a mutual whole life insurance policy as a tax shelter for your earnings, allowing you to accumulate dividends thereon tax free ("'cause it's to provide for the spouse and kids"), and to withdraw from your premiums and borrow against yourself (and pay yourself back).
Would having one mutual whole life insurance policy keep you from having a separate policy of the kind of life insurance needed to fund a cryonic self-preservation project? Would the mutual whole life policy itself be a way to fund cryopreservation?
Don't know if anyone else was watching the stock market meltdown in realtime today, but as the indices plunged down the face of what looked a bit like an upside-down exponential curve driven by HFT algorithms gone wild, and the financial news sites started going down under the traffic, I couldn't help thinking that this is probably what the singularity would look like to a human. Being invested in VXX made it particularly compelling viewing.
To save everyone the googling: VXX is an exchange traded fund (basically a stock) whose value tracks the level of the VIX index. The VIX index is a measure of the volatility of the markets, with higher values indicating higher volatility (volatility here generally implying lost market value). VIX stands at about 33 now, and was around 80 during the '08 crisis.
Does that mean VXX stock becomes more expensive/valuable when the volatility grows, or when it goes down?
VXX becomes more expensive when volatility grows.
Thanks, I meant to include a link to that. I'll edit it.
Neanderthal genome reveals interbreeding with humans:
http://www.newscientist.com/article/dn18869-neanderthal-genome-reveals-interbreeding-with-humans.
Whoooohooo! Awesomest thing in the last ten years of genetics news for me! YAAY! WHO HOO!!! /does a little dance/ I want to munch on that delicious data!
Ahem.
Sorry about that.
But people, 1 to 4% admixture! This is big! This gets an emotional response from me! That survived more than a thousand generations of selection; the bulk of it is probably neutral, but think about how many perfectly useful and working alleles we may have today (since the Neanderthals were close to us to start with). 600,000 or so years of separation: these guys evolved separately from us for nearly as long as the fictional vampires in Blindsight.
It seems some of us have, in our genes, a bit that our ancestors picked up from another species! Could this have anything to do with the behavioural modernity that started off at about the same time the populations crossbred in the Middle East, ~100,000 years ago? Which adaptations did we pick up? Think of the possibilities!
Ok, I'll stop the torrent of downvote magnet words and get back to reading about this. And then everything else about Neanderthals my grubby little paws can get hold of. I need to brush up!
Edit: I just realized part of the reason why I got so excited is that it shows I may have a bit of exotic ancestry. Considering how much people, all else being equal, like to play up their "foreign" or "unusual" semimythical ancestors or roots in conversation, national myths, or on the census, instead of the ethnicity of the majority of their ancestors, this may be a more general bias. I could of course quickly justify it with an evo psych "just so" story, but I'll refrain from that and search for what studies have to say about this.
I definitely think this is top-level post material, but I didn't have enough to say without pissing off the people who think all top-level posts need to be at least 500 words long.
I think this is very interesting but I'm not sure it should be a top-level post. Not due to the length but simply because it isn't terribly relevant to LW. Something can be very interesting and still not the focus here.
There is interesting discussion to be had that is relevant to LW.
How so? I'm not seeing it.
That's because there isn't a top-level post yet! :P
The point being that many, many more people read /new than read 400 comments deep in the open thread.
It is easier to convince people that there is an interesting discussion to be had relevant to LW if you can discuss its relevance to LW in an interesting fashion when you post it.
More seriously, if there isn't some barrier to posting, /new will suffer a deluge of marginally interesting material, and after the transients die out nobody will be watching the posts there, either. I read most new posts because most new posts are substantive.
And the paper:
http://www.sciencemag.org/cgi/content/full/328/5979/710
The Cognitive Bias song:
http://www.youtube.com/watch?v=3RsbmjNLQkc
Not very good, but, you know, it's a song about cognitive bias, how cool is that?
The unrecognized death of speech recognition
Interesting thoughts about the limits encountered in the quest for better speech recognition, the implications for probabilistic approaches to AI, and "mispredictions of the future".
What do y'all think?
As noted in the comments, machine recognition of natural speech is in the 85-95% accuracy range, and human performance on the same task is also around 95%. I was skeptical of that article when I first read it because it did not even mention how good humans are at it, for comparison with the machines.
Convergence: Threat or Menace? : How to Create the Ultimate TED Talk.
At 4:06 we see a slide that provides less than overwhelming support for the Red Bias hypothesis.
Apparently it is all too easy to draw neat little circles around concepts like "science" or "math" or "rationality" and forget the awesome complexity and terrifying beauty of what is inside the circles. I certainly did. I recommend all 1400 pages of "Molecular Biology Of The Cell" (well, at least the first 600 pages) as an antidote. A more spectacularly extensive, accessible, or beautifully illustrated textbook I have never seen.
Curiously, what happens when I refresh LW (or navigate to a particular LW page like the comments page) and I get the "error encountered" page with those little witticisms? Is the site 'busy' or being modified or something else ...? Also, does everyone experience the same thing at the same moment or is it a local phenomenon?
Thanks ... this will help me develop my 'reddit-page' worldview.
This has happened twice in the past two days - generally there is some specific comment which is broken and causes pages which would display it to crash. My analysis of the previous and current pattern here.
To test this hypothesis, the Recent Comments page should work as soon as the bad comment moves to a new page.
I predict that it will with confidence - it has in previous instances.
And http://www.rokomijic.com/ doesn't work as well...
Is it possible to change the time zone in which LW displays dates/times?
I noticed something recently which might be a positive aspect of akrasia, and a reason for its existence.
Background: I am generally bad at getting things done. For instance, I might put off paying a bill for a long time, which seems strange considering the whole process would take < 5 minutes.
A while back, I read about a solution: when you happen to remember a small task, if you are capable of doing it right then, then do it right then. I found this easy to follow, and quickly got a lot better at keeping up with small things.
A week or two into it, I thought of something evil to do, and following my pattern, quickly did it. Within a few minutes, I regretted it and thankfully, was able to undo it. But it scared me, and I discontinued my habit.
I'm not sure how general a conclusion I can draw from this; perhaps I am unusually prone to these mistakes. But since then I've considered akrasia as a sort of warning: "Some part of you doesn't want to do this. How about doing something else?"
Now when the part of you protesting is the non-exercising part or the ice-cream eating part, then akrasia isn't being helpful. But... it's worth listening to that feeling and seeing why you are avoiding the action.
Continuing on the "last responsible moment" comment from one of the other responders - would it not be helpful to consider putting off a task until the last moment as an attempt to gather the largest amount of information pursuant to the task without incurring any penalty?
Having poor focus and attention span, I use an online todo-list for work and home life where I list every task as soon as I think of it, whether it is to be done within the next hour or year. The list soon mounts up, occasionally causing me anxiety, and I regularly have cause to carry a task over to the next day for weeks at a time. But what I have found is that a large number of tasks get removed because a change makes them no longer necessary, and a small proportion accumulate notes while they stay on the list, so that by the time the task gets actioned it has been enhanced by the extra information.
By having everything captured I can be sure no task will be lost, but by procrastinating I can ensure the highest level of efficiency in the tasks that I do eventually perform.
Thoughts?
I suspect it’s just a figure of speech, but can you elaborate on what you meant by “evil” above?
the most extreme example is depressed people having an increased risk of suicide if an antidepressant lifts their akrasia before it improves their mood.
I've also read that people with bipolar disorder are more likely to commit suicide as their depression lifts.
But antidepressant effects can be very complicated. I know someone who says one med made her really really want to sleep with her feet where her head normally went. I once reacted to an antidepressant by spending three days cycling through the thoughts, "I should cut off a finger" (I explained to myself why that was a bad idea) "I should cut off a toe" (ditto) "I should cut all the flesh from my ribs" (explain myself out of it again), then back to the start.
The akrasia-lifting explanation certainly seems plausible to me (although "mood" may not be the other relevant variable--it may be worldview and plans; I've never attempted suicide, but certainly when I've self-harmed or sabotaged my own life it's often been on "autopilot", carrying out something I've been thinking about a lot, not directly related to mood--mood and beliefs are related, but I've noticed a lag between one changing and the other changing to catch up to it; someone might no longer be severely depressed but still believe that killing themself is a good course of action). Still, I would also believe an explanation that certain meds cause suicidal impulses in some people, just as they can cause other weird impulses.
My antidepressant gave me a sweet tooth.
Interesting. Are you sure that is going on when antidepressants have paradoxical effects?
My mom is a psychiatrist, and she's given an explanation basically equivalent to that one - that people with very severe depression don't have the "energy" to do anything at all, including taking action to kill themselves, and that when they start taking medication, they get their energy back and are able to act on their plans.
Not absolutely certain. It's an impression I've picked up from mass media accounts, and it seems reasonable to me.
It would be good to have both more science and more personal accounts.
Thanks for asking.
Good observations.
Sometimes I procrastinate for weeks about doing something, generally non-urgent, only to have something happen that would have made the doing of it unnecessary. (For instance, I procrastinate about getting train tickets for a short trip to visit a client, and the day before the visit is due the client rings me to call it off.)
The useful notion here is that it generally pays to defer action or decision until "the last responsible moment"; it is the consequence of applying the theory of options valuation, specifically real options, to everyday decisions.
A top-level post about this would probably be relevant to the LW readership, as real options are a non-trivial instance of a procedure for decision under uncertainty. I'm not entirely sure I'm qualified to write it, but if no one else steps up I'll volunteer to do the research and write it up.
I work in finance (trading) and go through my daily life quantifying everything in terms of EV.
I would just caution in saying that, yes procrastinating provides you with some real option value as you mentioned but you need to weigh this against the probability of you exercising that option value as well as the other obvious costs of delaying the task.
Certain tasks are inherently valuable to delay as long as possible and can be identified as such beforehand. As an example, work-related emails that require me to make a decision or choice I put off as long as is politely possible, in case new information comes in which would influence my decision.
On the other hand, certain tasks can be identified as possessing little or no option value when weighted with the appropriate probabilities. What is the probability that delaying the payment of your cable bill will have value to you? Perhaps if you experience an emergency cash crunch. Or the off chance that your cable stops working and you decide to try to withhold payment (not that this will necessarily do you any good).
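The weighing described above can be sketched as a toy expected-value comparison. The function names and all the numbers below are illustrative assumptions, not anything from the thread:

```python
# Toy EV comparison: act on a task now vs. defer it.

def ev_act_now(value_now):
    """Value of just doing the task immediately."""
    return value_now

def ev_defer(value_now, p_new_info, value_with_info, delay_cost):
    """Value of deferring: with probability p_new_info, new information
    arrives and we capture the higher informed value; otherwise we get
    the same value as acting now. Either way we pay the delay cost."""
    return (p_new_info * value_with_info
            + (1 - p_new_info) * value_now
            - delay_cost)

# Example: a decision currently worth $100, a 20% chance that waiting
# surfaces information worth an extra $50, and a $5 cost of delay.
now = ev_act_now(100.0)                      # 100.0
later = ev_defer(100.0, 0.20, 150.0, 5.0)    # 0.2*150 + 0.8*100 - 5 = 105.0
print("defer" if later > now else "act now")  # prints "defer"
```

The point of the comment holds: for a cable bill, p_new_info is near zero and the delay cost (late fees) is positive, so the option value of procrastinating is negative.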
I'd be interested in reading it.
Geocities Less Wrong
Can you control the colors? Dark red on black is hard to read.
Nah. I just used this
Oh my dear God. Indeed, human values differ as much as values can differ.
If I hadn't started with sites that did quiet, geekish design, I would have fled the net and never come back.
Is Eliezer alive and well? He's not said anything here (or on Hacker News, for that matter) for a month...
Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.
After Yudkowsky at the beginning gives three different definitions of "the singularity", they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that, they discuss whether simulated intelligence is actually intelligence. Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between those and say "well, it may then just simulate intelligence, but maybe it is not actually having it". (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)
There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky's definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.
Pigliucci's point seemed to be something like this: for the only intelligence that we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. About consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we do not know whether or not, e.g., a computer simulating the brain on a different substrate will also be conscious. Yudkowsky seemed to think this very likely, while Pigliucci seemed to think it very unlikely. But what I lacked in that discussion was: what do we know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that for the only intelligence we know of so far (the human brain) intelligence and consciousness come together. But to me (who does not know much about this subject matter) that seems a weak argument for discussing them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about the connection (or lack of one) between intelligence and consciousness? For a naive non-expert like me, whether something has intelligence seems (rather) easy to test: just test how good it is at solving general problems. Whereas to test whether anything has consciousness, I would guess that a working theory of consciousness would have to be developed before a test could be designed.
This was the second recent BHTV dialogue in which Pigliucci discussed singularity/transhumanism-related questions. I mentioned the previous one here. As noted there, it seems to have started with a blog post of Pigliucci's criticizing transhumanism. I find it interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishing of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I was also pleased to notice that Pigliucci's blog has recently linked now and then to LessWrong/Eliezer Yudkowsky (mostly to Julia Galef, if I remember correctly; too lazy to locate the exact links right now). I would very much like to see this continue (e.g., Yudkowsky in discussion with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, or Victor Stenger, realizing of course that they are probably too busy for it to happen).
Previous BHTV dialogues with Eliezer Yudkowsky have been noticed here on LessWrong, but not this one (I hope I have not simply missed the post). That is why I posted this here; I did not find a perfect place for it, and this was the least bad I noticed. Although my post is only partly about "Is Eliezer alive and well" (he certainly looked so on BHTV), I hope it is not considered too far off-topic.
I'm going to have to remember to use the word cishumanism more often.
Welcome back.
I found this diavlog entertaining, but not particularly enlightening - the two of them seemed to mostly just be talking past each other. Pigliucci kept on conflating intelligence and consciousness, continually repeating his photosynthesis analogy, which makes sense in the context of consciousness, but not intelligence, and Eliezer would respond by explaining why that doesn't make sense in the context of intelligence, and then they'd just go in circles. I wish Eliezer had been more strict about forcing him to explicitly differentiate between intelligence/consciousness. Frustrating.... but worth watching regardless.
Note that I'm not saying I agree with Pigliucci's photosynthesis analogy, even when applied to consciousness, just that it seems at least coherent in that context, unlike in the context of intelligence, where it's just silly. Personally, I don't see any reason for consciousness to be substrate-dependent, but I feel much less confident in asserting that it isn't, just because I don't really know what consciousness is, so it seems more arrogant to make any definitive pronouncement about it.
That diavlog was a total shocker!
Pigliucci is not a nobody: he is a university professor, has authored several books, and holds three PhDs.
Still, he made an utterly confused impression on me. I don't think people must agree on everything, especially when it comes to hard questions like consciousness, but his views were so weak and incoherent that it was just too painful to watch. My head still aches... :(
You should post this as a top-level post for +10x karma.
Random, possibly off-topic question:
Is there an index somewhere of all of Eliezer's appearances on BHTV? Or a search tool on the BHTV site that I can use to find them?
Direct link: http://bloggingheads.tv/search/?participant1=Yudkowsky,%20Eliezer
Thanks! I had tried using the search tool before, but I guess I hadn't tried searching for "Yudkowsky, Eliezer"
... oh, and it turns out that there was a note right beside the search box saying "NAME FORMAT = last, first". oops...
anyway, now I know, thanks :)
In general, Google's site: operator is great for websites with missing or uncooperative search functionality:
site:bloggingheads.tv eliezer
Orange button called "search" in the upper right hand corner.
SIAI may have built an automaton to keep donors from panicking
You can tell he's alive and well because he's posted several chapters of his Harry Potter fanfiction in that time; his author's notes lead me to believe that, as he stated long ago, he's letting LW drift so he has time to write his book.
Anyway, he can't be hurt; "Somebody would have noticed."
Well, he would've noticed, but he's not us...
Question: Who is moderating if Eliezer isn't?
The other moderators appear to be Robin Hanson, matt, and wmoore. None of them have posted in the past few days, but maybe at least one of them has been popping in to moderate from time to time. And/or maybe Eliezer is too, just not posting.
Harry Potter and the Methods of Rationality updated on Sunday; it could be that writing that story is filling much of his off time.
He's writing his book.
I have a (short) essay, 'Drug heuristics', in which I take a crack at combining Bostrom's evolutionary heuristics with nootropics - both topics I consider quite LW-germane but underdiscussed.
I'm not sure, though, that it's worth pursuing in any greater depth and would appreciate feedback.
Interesting essay.
I'd like to see this pursued further. In particular, I'd like to hear your thoughts on modafinil.
JustinShovelain's post on caffeine was similar, and upvoted.
Modafinil is now done: http://www.gwern.net/Drug%20heuristics#modafinil
As of the time I reply there is nothing about modafinil on that page.
I use aggressive caching settings on gwern.net since most of the content doesn't change very often. Force-refresh, and you'll see it.
Anything besides modafinil? In part I'm stuck because I don't know what else to discuss; Justin's post was similarly short, but it was mainly made of links.
I'd like to see it pursued further. Where does alcohol fit in your schema?
You might find this paper interesting.
In a sentence, it suggests that people drink to signal trustworthiness.
I don't know terribly much about alcohol, so take this with a grain of salt.
I think I would probably classify it as an out-of-date adaptation: my understanding is that alcoholic beverages would have been extremely energy-rich and also hard to come by, putting them in the same category as sugars and fats - things that used to be good for us but are now bad. ('Superstimulus', I think, is the term.)
Given that, it's more harmful than helpful and to be avoided.
I'll admit that the issue of resveratrol confuses me. But assuming that it has any beneficial effect in humans, AFAIK one should be able to get it just by drinking grape juice - resveratrol is not created in the fermentation process.
Fermented beverages also had the advantage of usually being free of dangerous bacteria; ethanol is an antiseptic that kills the bacteria that cause most water-borne diseases. (And water-borne disease used to be very common.)
That's a good second way it's an out-of-date optimization.
By the way: getting crashes on the comments page again. Prior to 1yp8 works and subsequent to 1yp8 works; I haven't found the thread with the broken comment.
Edit: It's not any of the posts after 23andme genome analysis - $99 today only in Recent Posts, I believe.
Edit 2: Recent Comments still broken for me, but ?before=t1_1yp8 is no longer showing the most recent comments to me - ?before=t1_1yqo continues where the other is leaving off.
Edit 3: Recent Comments has now recovered for me.
Having Recent Comments problems again: after 1yyu and before 1yyu work. The sidebar "Recent Comments" circa 1yyw does not include 1yyu - skips straight from 1yyv to 1yyt.
No crashes are observed in the comment threads of "Antagonizing Opioid Receptors for (Prevention of) Fun and Profit" through "Possibilities for converting useless fun into utility in Online Gaming".
Edit: byrnema has discovered the guilty comment - it appears to have been on this post.
Having similar problems. Getting error messages when I click "Recent comments."
Usually the way these work is that any page which would include a specific comment fails with an error message. The "before 1yyu" page should show more recent comments than the broken one; if the most recent comments in the sidebar don't appear on that page, replace the "1yyu" at the end of the string with the identifier of a more recent comment, or see if the plain old "Recent Comments" page has fixed itself.
What's the coding system for urls for the recent comments pages? Why "1yyu"?
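As far as I know (LessWrong runs on a fork of the Reddit codebase, so this is an inference from Reddit's conventions rather than anything documented for this site), IDs like "1yyu" are just the comment's integer database key written in base 36 (digits 0-9 then a-z), and the "t1_" prefix marks the object type as a comment. A minimal sketch of the round trip, assuming that scheme:

```python
# Sketch assuming Reddit-style IDs: a comment URL fragment like "t1_1yyu"
# is the type prefix "t1" (comment) plus the database ID in base 36.
# Illustration only, not official site documentation.

def base36_decode(s: str) -> int:
    """Turn a short ID like '1yyu' back into its integer database key."""
    return int(s, 36)  # Python's int() accepts bases up to 36

def base36_encode(n: int) -> str:
    """Turn an integer database key into the short ID used in URLs."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(digits[r])
    return "".join(reversed(out))

comment_key = base36_decode("1yyu")          # the underlying integer ID
assert base36_encode(comment_key) == "1yyu"  # round-trips
```

So successive comments get successive integers, which is why slightly "later" IDs like 1yqo sort after 1yp8.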