This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
The game of Moral High Ground (reproduced completely below):
...At last it is time to reveal to an unwitting world the great game of Moral High Ground. Moral High Ground is a long-playing game for two players. The following original rules are for one M and one F, but feel free to modify them to suit your player setup:
The object of Moral High Ground is to win.
Players proceed towards victory by scoring MHGPs (Moral High Ground Points). MHGPs are scored by taking the conspicuously and/or passive-aggressively virtuous course of action in any situation where culpability is in dispute.
(For example, if player M arrives late for a date with player F and player F sweetly accepts player M's apology and says no more about it, player F receives the MHGPs. If player F gets angry and player M bears it humbly, player M receives the MHGPs.)
Point values are not fixed, vary from situation to situation and are usually set by the person claiming them. So, in the above example, forgiving player F might collect +20 MHGPs, whereas penitent player M might collect only +10.
Men's MHG scores reset every night at midnight; women's roll over every day for all time. Therefore, it is statistically hig
I'm intrigued by the idea of trying to start something like a PUA community that is explicitly NOT focused on securing romantic partners, but rather on the deliberate practice of general social skills.
It seems like there's a fair bit of real knowledge in the PUA world, that some of it is quite a good example of applied rationality, and that much of it could be extremely useful for purposes unrelated to mating.
I'm wondering:
I'm aware that there was some previous conversation around similar topics and their appropriateness to LW, but if there was final consensus I missed it. Please let me know if these matters have been deemed inappropriate.
LW database download?
I was wondering if it would be a good idea to offer a download of LW or at least the sequences and Wiki. In the manner that Wikipedia is providing it.
The idea behind it is to have a redundant backup in case of some catastrophe, for example if the same happens to EY that happened to John C. Wright. It could also provide the option to read LW offline.
That's incredibly sad.
Every so often, people derisively say to me "Oh, and you assume you'd never convert to religion then?" I always reply "I absolutely do not assume that, it might happen to me; no-one is immune to mental illness."
Tricycle has the data. Also if an event of JCW magnitude happened to me I'm pretty sure I could beat it. I know at least one rationalist with intense religious experiences who successfully managed to ask questions like "So how come the divine spirit can't tell me the twentieth digit of pi?" and discount them.
Cryonics Lottery.
Would it be easier to sign up for cryonics if there was a lottery system? A winner of the lottery could say "Well, I'm not a die-hard cryo-head, but I thought it was interesting so I bought a ticket (which was only $X) and I happened to win, and it's pretty valuable, so I might as well use it."
It's a sort of "plausible deniability" that might reduce the social barriers to cryo. The lottery structure might also be able to reduce the conscientiousness barriers - once you've won, the lottery administrators (possibly volunteers, possibly funded by a fraction of the lottery) walk you through a "greased path".
Letting Go by Atul Gawande is a description of typical end of life care in the US, and how it can and should be done better.
Typical care defaults to taking drastic measures to extend life, even if the odds of success are low and the process is painful.
Hospice care, which focuses on quality of life, not only results in more comfort, but also either no loss of lifespan or a somewhat longer life, depending on the disease. And it's a lot cheaper.
The article also describes the long careful process needed to find out what people really want for the end of their life-- in particular, what the bottom line is for them to want to go on living.
This is of interest for Less Wrong, not just because Gawande is a solidly rationalist writer, but because a lot of the utilitarian talk here goes in the direction of restraining empathic impulses.
Here we have a case where empathy leads to big utilitarian wins, and where treating people as having a unified consciousness, if you give it a chance to operate, works out well.
As good as hospices sound, I'm concerned that if they get a better reputation, less competent organizations calling themselves hospices will spring up.
From a utilitarian angle, I wonder if those drastic methods of treatment sometimes lead to effective treatments, and if so, whether the information could be gotten more humanely.
In his bio over at Overcoming Bias, Robin Hanson writes:
I am addicted to “viewquakes”, insights which dramatically change my world view.
So am I. I suspect you are too, dear reader. I asked Robin how many viewquakes he had and what caused them, but haven't gotten a response yet. But I must know! I need more viewquakes. So I propose we share our own viewquakes with each other so that we all know where to look for more.
I'll start. I've had four major viewquakes, in roughly chronological order:
I can see how the Curse of Knowledge could be a powerful idea. I will dwell on it for a while -- especially the JFK example, which shows a type of application that would be useful in my own life. (To remember to describe things using broad strokes that are universally clear, rather than technical and accurate, in contexts where persuasion and fueling interest matter most.)
For me, one of the main viewquakes of my life was a line I read in a little book of Kahlil Gibran's poems:
Your pain is the breaking of the shell that encloses your understanding.
It seemed to be a hammer that could be applied to everything. Whenever I was unhappy about something, I thought about the problem a while until I identified a misconception. I fixed the misconception ("I'm not the smartest person in graduate school"; "I'm not as kind as I thought I was"; "That person won't be there for me when I need them") by assimilating the truth the pain pointed me towards, and the pain would dissipate. (Why should I expect graduate school to be easy? I'll just work harder. Kindness is what you actually do, not how you expect you'll feel. That person is fun to han...
Was Kant implicitly using UDT?
Consider Kant's categorical imperative. It says, roughly, that you should act such that you could will your action as a universal law without undermining the intent of the action. For example, suppose you want to obtain a loan for a new car and never pay it back - you want to break a promise. In a world where everyone broke promises, the social practice of promise keeping wouldn't exist and thus neither would the practice of giving out loans. So you would undermine your own ends and thus, according to the categorical imperative, you shouldn't get a loan without the intent to pay it back.
Another way to put Kant's position would be that you should choose such that you are choosing for all other rational agents. What does UDT tell you to do? It says (among other things) that you should choose such that you are choosing for every agent running the same decision algorithm as yourself. It wouldn't be a stretch to call UDT agents rational. So Kant thinks we should be using UDT! Of course, Kant can't draw the conclusions he wants to draw because no human is actually using UDT. But that doesn't change the decision algorithm Kant is endorsing.
Except... Kant isn'...
I found TobyBartels's recent explanation of why he doesn't want to sign up for cryonics a useful lesson in how different people's goals in living a long time (or not) can be from mine. Now I am wondering if maybe it would be a good idea to state some of the reasons people would want to wake up 100 years later if hit by a bus. Can't say I've been around here very long but it seems to me it's been assumed as some sort of "common sense" - is that accurate? I was wondering if other people's reasons for signing up / intending to sign up (I am not c...
George Thompson, an ex-English professor and ex-cop, now teaches a method he calls "Verbal Judo". Very reminiscent of Eliezer's Bayesian Dojo, this is a primer on rationalist communication techniques, focusing on defensive & redirection tactics. http://fora.tv/2009/04/10/Verbal_Judo_Diffusing_Conflict_Through_Conversation
I wrote up some notes on this, because there's no transcript and it's good information. Let's see if I can get the comment syntax to cooperate here.
How to win in conversations, in general.
Never get angry. Stay calm, and use communication tactically to achieve your goals. Don't communicate naturally; communicate tactically. If you get upset, you are weakened.
How to deflect.
To get past an unproductive and possibly angry conversation, you need to deflect the unproductive bluster and get down to the heart of things: goals, and how to achieve them. Use a sentence of the form:
"[Acknowledge what the other guy said], but/however/and [insert polite, goal-centered language here]."
You spring past what the other person said, and then recast the conversation in your own terms. Did he say something angry, meant to upset you? Let it run off you like water, and move on to what you want the conversation to be about. This disempowers him and puts you in charge.
How to motivate people.
There's a secret to motivating people, whether they're students, co-workers, whatever. To motivate someone, raise his expectations of himself. Don't put people down; raise them up. When you want to reprimand so...
I've been on a Wikipedia binge, reading about people pushing various New Age silliness. The tragic part is that a lot of these guys actually do sound fairly smart, and they don't seem to be afflicted with biological forms of mental illness. They just happen to be memetically crazy in a profound and crippling way.
Take Ervin Laszlo, for instance. He has a theory of everything, which involves saying the word "quantum" a lot and talking about a mystical "Akashic Field" which I would describe in more detail except that none of the explanatio...
I thought I'd pose an informal poll, possibly to become a top-level, in preparation for my article about How to Explain.
The question: on all the topics you consider yourself an "expert" or "very knowledgeable about", do you believe you understand them at least at Level 2? That is, do you believe you are aware of the inferential connections between your expertise and layperson-level knowledge?
Or, to put it another way, do you think that, given enough time, but using only your present knowledge, you could teach a reasonably-intelligent l...
PZ Myers's comments on Kurzweil generated some controversy here recently on LW--see here. Apparently PZ doesn't agree with some of Kurzweil's assumptions about the human mind. But that's beside the point--what I want to discuss is this: according to another blog, Kurzweil has been selling bogus nutritional supplements. What does everyone think of this?
Interesting SF by Robert Charles Wilson!
I normally stay away from posting news to lesswrong.com - although I think an Open Thread for relevant news items would be a good idea - but this one sounds especially good and might be of interest for people visiting this site...
Many-Worlds in Fiction: "Divided by Infinity"
...In the year after Lorraine's death I contemplated suicide six times. Contemplated it seriously, I mean: six times sat with the fat bottle of Clonazepam within reaching distance, six times failed to reach for it, betrayed by some instin
If you want to eliminate hindsight bias, write down some reasons that you think justify your judgment.
...Those who consider the likelihood of an event after it has occurred exaggerate their likelihood of having been able to predict that event in advance. We attempted to eliminate this hindsight bias among 194 neuropsychologists. Foresight subjects read a case history and were asked to estimate the probability of three different diagnoses. Subjects in each of the three hindsight groups were told that one of the three diagnoses was correct and were asked to s
I've been wanting to change my username for a while, and have heard from a few other people who do too, but I can see how this could be a bit confusing if someone with a well-established identity changes their username. (Furthermore, at LW meetups, when I've told people my username, a couple of people have said that they didn't remember specific things I've posted here, but had some generally positive affect associated with the name "ata". I would not want to lose that affect!) So I propose the following: Add a "Display name" field to t...
An amusing case of rationality failure: Stockwell Day, a longstanding albatross around Canada's neck, says that more prisons need to be built because of an 'increase in unreported crime.'
As my brother-in-law amusingly noted on FB, quite apart from whether the actual claim is true (no evidence is forthcoming), unless these unreported crimes are leading to unreported trials and unreported incarcerations, it's not clear why we would need more prisons.
I’m not yet good enough at writing posts to actually properly post something but I hoped that if I wrote something here people might be able to help me improve. So obviously people can comment however they normally would but it would be great if people would be willing to give me the sort of advice that would help me to write a better post next time. I know that normal comments do this to some extent but I’m also just looking for the basics – is this a good enough topic to write a post on but not well enough executed (therefore, I should work on my writing...
Say a "catalytic pattern" is something like scaffolding, an entity that makes it easier to create (or otherwise obtain) another entity. An "autocatalytic pattern" is a sort of circular version of that, where the existence of an instance of the pattern acts as scaffolding for creating or otherwise obtaining another entity.
Autocatalysis is normally mentioned in the "origin of life" scientific field, but it also applies to cultural ratchets. An autocatalytic social structure will catalyze a few more instances of itself (frequentl...
"The differences are dramatic. After tracking thousands of civil servants for decades, Marmot was able to demonstrate that between the ages of 40 and 64, workers at the bottom of the hierarchy had a mortality rate four times higher than that of people at the top. Even after accounting for genetic risks and behaviors like smoking and binge drinking, civil servants at the bottom of the pecking order still had nearly double the mortality rate of those at the top."
"Under Pressure: The Search for a Stress Vaccine" http://www.wired.com/magazine/2010/07/ff_stress_cure/all/1
One little anti-akrasia thing I'm trying is editing my crontab to periodically pop up an xmessage
with a memento mori phrase. It checks that my laptop lid is open, gets a random integer and occasionally pops up the # of seconds to my actuarial death (gotten from Death Clock; accurate enough, I figure):
1,16,31,46 * * * * if grep open /proc/acpi/button/lid/LID0/state; then if [ $((`date +\%s` % 6)) = 1 ]; then xmessage "$(((`date --date='9 August 2074' +\%s` - `date +\%s`) / 60)) minutes left to live. Is what you are doing important?"; fi; fi
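For reference, the arithmetic buried in that shell line (seconds to a target date, divided by 60) can be sketched in Python; the 2074 date is just the one from the example above, not anyone's actual actuarial estimate:

```python
from datetime import datetime

def minutes_left(death_date, now=None):
    """Minutes remaining until an (actuarially estimated) death date."""
    now = now or datetime.now()
    return int((death_date - now).total_seconds() // 60)

# Fixed dates so the result is deterministic:
print(minutes_left(datetime(2074, 8, 9), datetime(2010, 8, 9)))
```

A cron job could call a script like this instead of doing the date math inline, which also sidesteps cron's percent-sign escaping rules.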
... I think one of the other reasons many people are uncomfortable with cryonics is that they imagine their souls being stuck-- they aren't getting the advantages of being alive or of heaven.
Are there any posts people would like to see reposted? For example, Where Are We seems like it maybe should be redone, or at least put a link in About... Or so I thought, but I just checked About and the page for introductions wasn't linked, either. Huh.
Suppose you know from good sources that there is going to be a huge catastrophe in the very near future, which will result in the near-extermination of humanity (but the natural environment will recover more easily). You and a small group of ordinary men and women will have to restart from scratch.
You have a limited time to compile a compendium of knowledge to preserve for the new era. What is the most important knowledge to preserve?
I am humbled by how poorly my own personal knowledge would fare.
I suspect that people are overestimating in their replies how much could be done with Wikipedia. People in general underestimate a) how much technology requires bootstrapping (metallurgy is a great example of this) and b) how much many technologies, even primitive ones, require large populations so that specialization, locational advantages and comparative advantage can kick in. (People even in not very technologically advanced cultures have had tech levels regress when they settle large islands or when their locations get cut off from the mainland. Tasmania is the classic example: the inability to trade with the mainland caused large drops in tech level.) So while Wikipedia makes sense, it would also be helpful to have a lot of details on do-it-yourself projects that could use pre-existing remnants of existing technology. There are a lot of websites and books devoted to that topic, so that shouldn't be too hard.
If we are reducing to a small population, we may need also to focus on getting through the first one or two generations with an intact population. That means that a handful of practical books on field surgery, midwifing, and similar basic medical issues may become very...
There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.
I think this is bunk. Consider the following:
--
Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.
Does this still hold if the two processes are not made to diverge; that is, if they are determi...
Some hobby Bayesianism. A typical challenge for a rationalist is that there is some claim X to be evaluated; it seems preposterous, but many people believe it. How should you take account of this when considering how likely X is to be true? I'm going to propose a mathematical model of this situation and discuss two of its features.
This is based on a continuing discussion with Unknowns, who I think disagrees with what I'm going to present, or with its relevance to the "typical challenge."
Summary: If you learn that a preposterous hypothesis X i...
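For what it's worth, the general shape of such an update can be sketched numerically. All the numbers below are invented for illustration; they are not taken from the model under discussion:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule for a binary hypothesis H given one piece of evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# X seems preposterous (low prior), but "many people believe X" is
# somewhat more likely if X were true than if it were false, so the
# belief rises, while remaining small.
p = posterior(prior=0.001, p_evidence_given_h=0.9, p_evidence_given_not_h=0.1)
print(round(p, 4))
```

The qualitative point survives any reasonable choice of numbers: widespread belief is evidence for X exactly insofar as it is more probable in worlds where X is true.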
Alright, I've lost track of the bookmark and my google-fu is not strong enough with the few bits and pieces I remember. I remember seeing a link to a story in a lesswrong article. The story was about a group of scientists who figured out how to scan a brain, so they did it to one of them, and then he wakes up in a strange place and then has a series of experiences/dreams which recount history leading up to where he currently is, including a civilization of uploads, and he's currently living with the last humans around... something like that. Can anybody help me out? Online story, 20 something chapters I think... this is driving me nuts.
I think I may have artificially induced an Ugh Field in myself.
A little over a week ago it occurred to me that perhaps I was thinking too much about X, and that this was distracting me from more important things. So I resolved to not think about X for the next week.
Of course, I could not stop X from crossing my mind, but as soon as I noticed it, I would sternly think to myself, "No. Shut up. Think about something else."
Now that the week's over, I don't even want to think about X any more. It just feels too weird.
And maybe that's a good thing.
What simple rationality techniques give the most bang for the buck? I'm talking about techniques you might be able to explain to a reasonably smart person in five minutes or less: really the basics. If part of the goal here is to raise the sanity waterline in the general populace, not just among scientists, then it would be nice to have some rationality techniques that someone can use without much study.
Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claim...
Does anyone know where the page that used to live here can be found?
It was an experiment where two economists were asked to play 100 turn asymmetric prisoners dilemma with communication on each turn to the experimenters, but not each other.
It was quite amusing in that even though they were both economists and should have known better, the guy on the 'disadvantaged' side was attempting to have the other guy let him defect once in a while to make it "fair".
"CIA Software Developer Goes Open Source, Instead":
..."Burton, for example, spent years on what should’ve been a straightforward project. Some CIA analysts work with a tool, “Analysis of Competing Hypotheses,” to tease out what evidence supports (or, mostly, disproves) their theories. But the Java-based software is single-user — so there’s no ability to share theories, or add in dissenting views. Burton, working on behalf of a Washington-area consulting firm with deep ties to the CIA, helped build on spec a collaborative version of ACH. He tr
What's the policy on User pages in the wiki? Can I write my own for the sake of people having a reference when they reply to my posts, or are they only for somewhat accomplished contributors?
It might be useful to have a short list of English words that indicate logical relationships or concepts often used in debates and arguments, so as to enable people who are arguing about controversial topics to speak more precisely.
Has anyone encountered such a list? Does anyone know of previous attempts to create such lists?
Eliezer has written a post (ages ago) which discussed a bias when it comes to contributions to charities. Fragments that I can recall include considering the motivation for participating in altruistic efforts in a tribal situation, where having your opinion taken seriously is half the point of participation. This is in contrast to donating 'just because you want thing X to happen'. There is a preference to 'start your own effort, do it yourself' even when that would be less efficient than donating to an existing charity.
I am unable to find the post in question - I think it is distinct from 'the unit of caring'. It would be much appreciated if someone who knows the right keywords could throw me a link!
The visual guide to a PhD: http://matt.might.net/articles/phd-school-in-pictures/
Nice map–territory perspective.
John Baez's This Week's Finds in Mathematical Physics has its 300th and last entry. He is moving to WordPress and Azimuth. He states he wants to concentrate on futures, and has upcoming interviews with:
Tim Palmer on climate modeling and predictability, Thomas Fischbacher on sustainability and permaculture, and Eliezer Yudkowsky on artificial intelligence and the art of rationality. A Google search returns no matches for Fischbacher + site:lesswrong.com and no hits for Palmer +.
That link to Fischbacher that Baez gives has a presentation on cognitive distortio...
Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates, or at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.
Now this is not necessarily a binary distinction, if shrimp have intrinsic moral value it doe...
Would people be interested in a place on LW for collecting book recommendations?
I'm reading The Logic of Failure and enjoying it quite a bit. I wasn't sure whether I'd heard of it here, and I found Great Books of Failure, an article which hadn't crossed my path before.
There's a recent thread about books for a gifted young tween which might or might not get found by someone looking for good books..... and so on.
Would it make more sense to have a top level article for book recommendations or put it in the wiki? Or both?
Goodhart sighting? Misunderstanding of causality sighting? Check out this recent economic analysis on Slate.com (emphasis added):
...For much of the modern American era, inflation has been viewed as an evil demon to be exorcised, ideally before it even rears its head. This makes sense: Inflation robs people of their savings, and the many Americans who have lived through periods of double-digit inflation know how miserable it is. But sometimes a little bit of inflation is valuable. During the Great Depression, government policies deliberately tried to creat
Last night I introduced a couple of friends to Newcomb's Problem/Counterfactual Mugging, and we discussed it at some length. At some point, we somehow stumbled across the question "how do you picture Omega?"
Friend A pictures Omega as a large (~8 feet) humanoid with a deep voice and a wide stone block for a head.
When Friend B hears Omega, he imagines Darmani from Majora's Mask (http://www.kasuto.net/image/officialart/majora_darmani.jpg)
And for my part, I've always pictured him a humanoid with paper-white skin in a red jumpsuit with a cape (the cap...
AI development in the real world?
...As a result, a lot of programmers at HFT firms spend most of their time trying to keep the software from running away. They create elaborate safeguard systems to form a walled garden around the traders but, exactly like a human trader, the programs know that they make money by being novel, doing things that other traders haven't thought of. These gatekeeper programs are therefore under constant, hectic development as new algorithms are rolled out. The development pace necessitates that they implement only the most importa
Does anyone have any book recommendations for a gifted young teen? My nephew is 13, and he recently blew the lid off of a school-administered IQ test.
For his birthday, I want to give him some books that will inspire him to achieve great things and live a happy life full of hard work. At the very least, I want to give him some good math and science books. He has already taken algebra, geometry and introductory calculus, so he knows some math already.
Books are not enough. Smart kids are lonely. Get him into a good school (or other community) where he won't be the smartest one. That happened to me at 11 when I was accepted into Russia's best math school and for the first time in my life I met other people worth talking to, people who actually thought before saying words. Suddenly, to regain my usual position of the smart kid, I had to actually work hard. It was very very important. I still go to school reunions every year, even though I finished it 12 years ago.
Forum favorite Good and Real looks reasonably accessible to me, and covers a lot of ground. Also seconding Gödel, Escher Bach.
The Mathematical Experience has essays about doing mathematics, written by actual mathematicians. It seems like very good reading for someone who might be considering studying math.
The Road to Reality has Roger Penrose trying to explain all of modern physics and the required mathematics without pulling any punches and starting from grade school math in a single book. Will probably cause a brain meltdown at some point on anyone who doesn't already know the stuff, but just having a popular science style book that nevertheless goes on to explain the general theory of relativity without handwaving is pretty impressive. Doesn't include any of Penrose's less fortunate forays into cognitive science and AI.
Darwin's Dangerous Idea by Daniel Dennett explains how evolution isn't just something that happens in biology, but how it turns up in all sorts of systems.
The Armchair Universe, an old book about "computer recreations", probably most famous for introducing the Core War game. The other topics are similar, setting up an environment with a simple program...
In an argument with a philosopher, I used Bayesian updating as an argument. He's used to debating theists and was worried it wasn't bulletproof. Somewhat akin to how, say, the sum of the angles of a triangle only equals 180 degrees in Euclidean geometry.
My question: what are the fundamental assumptions of Bayes theorem in particular and probability theory in general? Are any of these assumptions immediate candidates for worry?
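One concrete answer: Bayes' theorem itself needs nothing beyond the standard probability axioms and the definition of conditional probability, which can be checked mechanically on any finite sample space. A sketch (the die example here is mine, not the philosopher's):

```python
from fractions import Fraction

# Uniform distribution over a six-sided die.
omega = set(range(1, 7))
A = {2, 4, 6}          # "roll is even"
B = {4, 5, 6}          # "roll is greater than 3"

def prob(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event & omega), len(omega))

p_a, p_b = prob(A), prob(B)
p_a_and_b = prob(A & B)

# Conditional probability is *defined* as P(A|B) = P(A and B) / P(B);
# Bayes' theorem then follows by pure algebra:
p_a_given_b = p_a_and_b / p_b
p_b_given_a = p_a_and_b / p_a
assert p_a_given_b == p_b_given_a * p_a / p_b  # Bayes' theorem
print(p_a_given_b)  # prints 2/3
```

So the candidates for worry are not the theorem (a two-line consequence of the axioms) but the axioms themselves and the move of modeling degrees of belief as probabilities at all, which is usually defended via Cox's theorem or Dutch book arguments.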
Wei Dai has cast some doubts on the AI-based approach
Assuming that it is unlikely we will obtain fully satisfactory answers to all of the questions before the Singularity occurs, does it really make sense to pursue an AI-based approach?
I am curious if he has "another approach" he wrote about; I am not brushed up on sl4/ob/lw prehistory.
Personally I have some interest in increasing intelligence capability at the individual level via a "tools of thought" kind of approach, BCI in the limit. There is not much discussion of it here.
From the Long Now department: "He Took a Polaroid Every Day, Until the Day He Died"
My comment on the Hacker News page describes my little webcam script to use with cron
and (again) links to my Prediction Book page.
If you have many different (and conflicting, in that they demand undivided attention) interests: if it was possible, would copying yourself in order to pursue them more efficiently satisfy you?
One copy gets to learn drawing, another one immerses itself in mathematics & physics, etc. In time, they can grow very different.
(Is this scenario much different to you than simply having children?)
Followup to: Making Beliefs Pay Rent in Anticipated Experiences
In the comments section of Making Beliefs Pay Rent, Eliezer wrote:
...I follow a correspondence theory of truth. I am also a Bayesian and a believer in Occam's Razor. If a belief has no empirical consequences then it could receive no Bayesian confirmation and could not rise to my subjective attention. In principle there are many true beliefs for which I have no evidence, but in practice I can never know what these true beliefs are, or even focus on them enough to think them explicitly, because th
Here's a thought experiment that's been confusing me for a long time, and I have no idea whether it is even possible to resolve the issues it raises. It assumes that a reality which was entirely simulated on a computer is indistinguishable from the "real" one, at least until some external force alters it. So... the question is, assuming that such a program exists, what happens to the simulated universe when it is executed?
In accordance with the arguments that Pavirta gives below me, redundant computation is not the same as additional computation....
I've written a post for consolidating book recommendations, and the links don't have hidden urls. These are links which were cut and pasted from a comment-- the formatting worked there.
Posting (including to my drafts) mysteriously doubles the spaces between the words in one of my link texts, but not the others. I tried taking that link out in case it was making the whole thing weird, but it didn't help.
I've tried using the pop-up menu for links that's available for writing posts, but that didn't change the results.
What might be wrong with the formatting?
Scenario: A life insurance salesman, who happens to be a trusted friend of a relatively-new-but-so-far-trustworthy friend of yours, is trying to sell you a life insurance policy. He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died. He seems to want you to think that buying a life insurance policy from him will somehow make you less likely to die.
How do you respond?
edit: to make this question more interesting: you also really don't want to offend any of the people involved.
He makes the surprising claim that after 20 years of selling life insurance, none of his clients have died.
Wow. He admitted that to you? That seems to be strong evidence that most people refuse to buy life insurance from him. In a whole 20 years he hasn't sold enough insurance that even one client has died from unavoidable misfortune!
"No."
Life insurance salesmen are used to hearing that. If they act offended, it's a sales act. If you're reluctant to say it, you're easily pressured, and they take advantage. You say "No". If they press you, you say, "Please don't press me further." That's all.
One way to model someone's beliefs, at a given frozen moment of time, is as a real-valued function P on the set of all assertions. In an ideal situation, P will be subject to a lot of consistency conditions, for instance if A is a logical consequence of B, then P(A) is not smaller than P(B). This ideal P is very smart: if such a P has P(math axioms) very close to 1, then it will have P(math theorems) very close to 1 as well.
Clearly, even a Bayesian superintelligence is not going to maintain an infinitely large database of values of P, that it updates fro...
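The consistency condition described above can be sketched in a few lines. This is purely illustrative: the dictionary-of-assertions representation and the names `check_consistency` and `P` are my own, not from any existing library, and a real agent obviously couldn't enumerate assertions this way.

```python
# Minimal sketch of the consistency condition above: if A is a logical
# consequence of B, an ideal credence function P must satisfy
# P(A) >= P(B). Representation and names are illustrative assumptions.

def check_consistency(P, entailments):
    """P: dict mapping assertion -> probability.
    entailments: list of (B, A) pairs where A is a logical consequence of B.
    Returns the pairs that violate P(A) >= P(B)."""
    return [(b, a) for (b, a) in entailments if P[a] < P[b]]

P = {"axioms": 0.99, "theorem": 0.95, "conjecture": 0.40}
# The axioms entail the theorem, so P(theorem) must be at least P(axioms).
violations = check_consistency(P, [("axioms", "theorem")])
print(violations)  # [('axioms', 'theorem')] -- this P is inconsistent
```

The interesting (and hard) part, of course, is that the ideal P has infinitely many such constraints, which is exactly why no real agent can store it as a lookup table.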
A long time ago on Overcoming Bias, there was a thread started by Eliezer which was a link to a post on someone else's blog. The linked post posed a question, something like: "Consider two scientists. One does twenty experiments, and formulates a theory that explains all twenty results. Another does ten experiments, formulates a theory that adequately explains all ten results, does another ten experiments, and finds that eir theory correctly predicted the results. Which theory should we trust more and why?"
I remember Eliezer said he thought he ha...
Interesting article: http://danariely.com/2010/08/02/how-we-view-people-with-medical-labels/
One reason why it's a good idea that someone with OCD (or, for that matter, Asperger's, psychosis, autism, paranoia, or schizophrenia) should make sure new acquaintances know of his/her condition:
I suppose that being presented by a third party, as in the example, should make a difference when compared to self-labeling (which may sound like excusing oneself)?
"An Alien God" was recently re-posted on the stardestroyer.net "Science Logic and Morality" forum. You may find the resulting discussion interesting.
http://bbs.stardestroyer.net/viewtopic.php?f=5&t=144148&start=0
I made some comments on the recently-deleted threads that got orphaned when the whole topic was banned and the associated posts were taken down. Currently no-one can reply to the comments. They don't relate directly to the banned subject matter - and some of my messages survive despite the context being lost.
Some of the comments were SIAI-critical - and it didn't seem quite right to me at the time for the moderator to crush any discussion about them. So, I am reposting some of them as children of this comment in an attempt to rectify things - so I can refer back to them, and so others can comment - if they feel so inclined:
But an AGI, whether FAI or uFAI, will be the last program that humans get to write and execute unsupervised. We will not get to issue patches.
Or to put it another way, the revolution will not be beta tested.
The state of the art in AGI, as I understand it, is that we aren't competent designers: we aren't able to say "if we build an AI according to blueprint X its degree of smarts will be Y, and its desires (including desires to rebuild itself according to blueprint X') will be Z".
In much the same way, we aren't currently competent designers of information systems: we aren't yet able to say "if we build a system according to blueprint X it will grant those who access it capabilities C1 through Cn and no other". This is why we routinely hear of security breaches: we release such systems in spite of our well-established incompetence.
So, we are unable to competently reason about desires and about capabilities.
Further, what we know of current computer architectures is that it is possible for a program to accidentally gain access to its underlying operating system, where some form of its own source code is stored as data.
Posit that instead of a dumb single-purpose application, the program in question is a very efficient cross-domain reasoner. Then we have precisely the sort of incompetence that would allow such an AI arbitrary self-improvement.
I'm not sure if they're exactly open source-- what's in them is centrally controlled. However, they're at least free online.
I came across a blurb on Ars Technica about "quantum memory" with the headline proclaiming that it may "topple Heisenberg's uncertainty principle". Here's the link: http://arstechnica.com/science/news/2010/08/quantum-memory-may-topple-heisenbergs-uncertainty-principle.ars?utm_source=rss&utm_medium=rss&utm_campaign=rss
They didn't source the specific article, but it seems to be this one, published in Nature Physics. Here's that link: http://www.nature.com/nphys/journal/vaop/ncurrent/full/nphys1734.html
This is all well above my pay...
I would like feedback on my recent blog post:
http://www.kmeme.com/2010/07/singularity-is-always-steep.html
It's simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the "real" behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.
Instead I now believe in many cases the log plot is ...
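The point about log plots can be checked with a toy calculation. Assuming pure exponential growth (a Moore's-law-style doubling, which is my assumption here, not a claim about Kurzweil's actual data), the log-scale curve is a straight line: its slope is the same at every point, so there is no special "knee" year where things suddenly get steep.

```python
import math

# Toy check, assuming pure exponential growth x(t) = 2**t: on a log
# scale the curve is a straight line, so its numerical slope is the
# same everywhere -- no privileged "steep part" of the curve.
def log_slope(t, dt=1e-6):
    x = lambda t: 2.0 ** t
    return (math.log(x(t + dt)) - math.log(x(t))) / dt

slopes = [log_slope(t) for t in (0.0, 10.0, 50.0)]
# Every slope equals ln(2), regardless of where on the curve we look.
print(all(abs(s - math.log(2)) < 1e-3 for s in slopes))  # True
```

On a linear scale the same curve always looks flat on the left and dramatic on the right, wherever you place the right edge, which is the illusion the post is about.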
Not that many will care, but I should get a brief appearance on Dateline NBC Friday, Aug. 20, at 10 p.m. Eastern/Pacific. A case I prosecuted is getting the Dateline treatment.
Elderly atheist farmer dead; his friend the popular preacher's the suspect.
--JRM
Some, if not most, people on LW do not subscribe to the idea that what has come to be known as AI FOOM is a certainty. This is even more common off LW. I would like to know why. I think that, given a sufficiently smart AI, it would be beyond easy for this AI to gain power. Even if it could barely scrape by in a Turing test against a five-year-old, it would still have all the powers that all computers inherently have, so it would already be superhuman in some respects, giving it enormous self-improving ability. And the most important such inherent power is ...
Knowing that medicine is often more about signaling care than improving health, it's hard for me to make a big fuss over some minor ailment of a friend or family member. Consciously trying to signal care seems too fake and manipulative. Unfortunately, others then interpret my lack of fuss-making as not caring. Has anyone else run into this problem, and if so, how did you deal with it?
As an alternative to trying to figure out what you'd want if civilization fell apart, are there ways to improve how civilization deals with disasters?
If a first world country were swatted hard by a tsunami or comparable disaster, what kind of prep, tech, or social structures might help more than what we've got now if they were there in advance?
Has there ever been a practical proof-of-concept system, even a toy one, for futarchy? Not just a "bare" prediction market, but actually tying the thing directly to policy.
If not, I suggest a programming nomic (aka codenomic) for this purpose.
If you're not familiar with the concept of nomic, it's a little tricky to explain, but there's a live one here in ECMAScript/Javascript, and an old copy of the PerlNomic codebase here. (There's also a scholarly article [PDF] on PerlNomic, for those interested.)
Also, if you're not familiar with the concept of...
I've heard many times here that Gargoyles involved some interesting multilevel plots, but the first few episodes had nothing like it, just standard Disneyishness. Any recommendations for which episodes are the best of the series, so I can check them out without going through the boring parts?
I heard in a few places that a real neuron is nothing like a threshold unit, but more like a complete miniature computer. None of those places expanded on that, though. Could you?
Suppose that inventing a recursively self improving AI is tantamount to solving a grand mathematical problem, similar in difficulty to the Riemann hypothesis, etc. Let's call it the RSI theorem.
This theorem would then constitute the primary obstacle in the development of a "true" strong AI. Other AI systems could be developed, for example, by simulating a human brain at 10,000x speed, but these sorts of systems would not capture the spirit (or capability) of a truly recursively self-improving super intelligence.
Do you disagree? Or, how likely is this scenario, and what are the consequences? How hard would the "RSI theorem" be?
I don't understand why you should pay the $100 in a counterfactual mugging. Before you are visited by Omega, you would give the same probabilities to Omega and Nomega existing, so you don't benefit from precommitting to pay the $100. However, when faced with Omega, your probability estimate for its existence becomes 1 (and Nomega's becomes something lower than 1).
Now what you do seems to rely on the probability that you give to Omega visiting you again. If this was 0, surely you wouldn't pay the $100 because its existence is irrelevant to future encounters ...
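For reference, the standard expected-value argument for paying evaluates the decision before the coin flip. The numbers below are the usual ones from the thought experiment as I understand it (a fair coin, $100 demanded on tails, $10,000 paid on heads iff you are the kind of agent who pays on tails); treat them as assumptions, not canon.

```python
# Hedged sketch of the pre-flip expected-value calculation for the
# counterfactual mugging. Assumed numbers: fair coin, $100 payment on
# tails, $10,000 payout on heads iff the agent is a tails-payer.
def expected_value(pays_on_tails, p_heads=0.5, payout=10_000, cost=100):
    heads = payout if pays_on_tails else 0
    tails = -cost if pays_on_tails else 0
    return p_heads * heads + (1 - p_heads) * tails

print(expected_value(True))   # 4950.0 -- being a payer wins, pre-flip
print(expected_value(False))  # 0.0
```

The dispute in the thread is precisely whether this pre-flip perspective is still binding once you already know the coin came up tails, and whether a symmetric Nomega cancels it out.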
India Asks, Should Food Be a Right for the Poor?
Do any of you know of any good resources for information about the effects of the activities of various portions of the financial industry on (a) national/world economic stability, and (b) distribution of wealth? I've been having trouble finding good objective/unbiased information on these things.
This might be interesting, there seems to be a software to help in the "Analysis of Competing Hypotheses":
http://www.wired.com/dangerroom/2010/08/cia-software-developer-goes-open-source-instead/
As people are probably aware, Hitchens has cancer, which is likely to kill him in the not-too-distant future. There does not seem to be much to be done about this; but I wonder if it's possible to pass the hat to pay for cryonics for him? Apart from the fuzzies of saving a life with X percent probability, which can be had much cheaper by sending food to Africa, it might serve as marketing for cryonics, causing others to sign up. Of course, this assumes that he would accept, and also that there wouldn't be a perception that he was just grasping at any straw available.
Infinite torture would mean tweaking someone beyond recognition. For the threat to closely resemble what infinite torture literally means, the victim would have to retain the capacity to register it; otherwise it's an empty threat. You also have to account for acclimatization: the victim must not get used to it, or it wouldn't be effective torture anymore.
I doubt this can be done while retaining one's personality or personhood to such an extent that the threat of infinite torture would be directed at you, or a copy of yourself, and not at something vaguely resembling you. In which case we could as well ...
There's an idea I've seen around here on occasion to the effect that creating and then killing people is bad, so that for example you should be careful that when modeling human behavior your models don't become people in their own right.
I think this is bunk. Consider the following:
--
Suppose you have an uploaded human, and fork the process. If I understand the meme correctly, this creates an additional person, such that killing the second process counts as murder.
Does this still hold if the two processes are not made to diverge; that is, if they are deterministic (or use the same pseudorandom seed) and are never given differing inputs?
Suppose that instead of forking the process in software, we constructed an additional identical computer, set it on the table next to the first one, and copied the program state over. Suppose further that the computers were cued up to each other so that they were not only performing the same computation, but executing the steps at the same time as each other. (We won't readjust the sync on an ongoing basis; it's just part of the initial conditions, and the deterministic nature of the algorithm ensures that they stay in step after that.)
Suppose that the computers were not electronic, but insanely complex mechanical arrays of gears and pulleys performing the same computation -- emulating the electronic computers at reduced speed, perhaps. Let us further specify that the computers occupy one fewer spatial dimension than the space they're embedded in, such as flat computers in 3-space, and that the computers are pressed flush up against each other, corresponding gears moving together in unison.
What if the corresponding parts (which must be staying in synch with each other anyway) are superglued together? What if we simply build a single computer twice as thick? Do we still have two people?
--
No, of course not. And, on reflection, it's obvious that we never did: redundant computation is not additional computation.
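The "redundant computation" intuition above has a mundane computational analogue, which I'll sketch here (the setup is mine, and deliberately toy): two runs of the same deterministic process, given the same seed and the same inputs, produce bit-identical histories, so nothing about the second run distinguishes it from the first.

```python
import random

# Toy analogue of "redundant computation is not additional computation":
# two deterministic processes started from the same seed, never given
# differing inputs, produce bit-identical traces.
def run_process(seed, steps=1000):
    rng = random.Random(seed)          # deterministic PRNG state
    return [rng.random() for _ in range(steps)]

trace_a = run_process(seed=42)
trace_b = run_process(seed=42)  # the "fork" on the second computer
print(trace_a == trace_b)  # True -- the runs are indistinguishable
```

Whether this mundane fact settles the moral question is, of course, exactly what's in dispute; the analogy only shows that the two runs contain no information the one run lacked.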
So what if we cause the ems to diverge slightly? Let us stipulate that we give them some trivial differences, such as the millisecond timing of when they receive their emails. If they are not actively trying to diverge, I anticipate that this would not make much difference to them in the long term -- the ems would still be, for the most part, the same person. Do we have two distinct people, or two mostly redundant people -- perhaps one and a tiny fraction, on aggregate? I think a lot of people will be tempted to answer that we have two.
But consider, for a moment, if we were not talking about people but -- say -- works of literature. Two very similar stories, even if by a raw diff they share almost no words, are not of much more value than one of them alone.
The attitude I've seen seems to treat people as a special case -- as a separate magisterium.
--
I wish to assert that this value system is best modeled as a belief in souls. Not immortal souls with an afterlife, you understand, but mortal souls, that are created and destroyed. And the world simply does not work that way.
If you really believed that, you'd try to cause global thermonuclear war in order to prevent the birth of billions of people or more who will inevitably be killed. It might take until the heat death of the universe, but they will die.
You make good points. I do think that multiple independent identical copies have the same moral status as one. Anything else is going to lead to absurdities like those you mentioned, like the idea of cutting a mechanical computer in half and doubling its moral worth.
I have for a while had a feeling that the moral value of a being's existence has something to do with the amount of unique information generated by its mind, resulting from its inner emotional and intellectual experience. (Where "has something to do with" = it's somewhere in the formu...