Comment author:MrHen
31 January 2010 06:01:18PM
3 points
[-]
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
Comment author:Alicorn
29 January 2010 08:30:44PM
12 points
[-]
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
Comment author:Unknowns
30 January 2010 09:05:16AM
1 point
[-]
It isn't that impressive to me. As far as I can see, what it shows is that she has been torturing herself for a long time, probably many years, over her issues with Christianity. She's just expressing her anger with the suffering it caused her.
Comment author:CassandraR
29 January 2010 02:00:10PM
1 point
[-]
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
Comment author:Wei_Dai
29 January 2010 06:32:50AM
1 point
[-]
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
Comment author:DaveInNYC
25 January 2010 03:38:00PM
0 points
[-]
Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.
Comment author:Kevin
25 January 2010 02:29:30PM
*
0 points
[-]
First, is there an agreed-upon definition for "person"? We need to define that and make sure we agree before going much further, but I'll give it a try anyway.
Not all Turing tests are intuition pumps. There should be other Turing tests to recognize a greater degree of personhood. Perhaps if the investigator can trigger an existential crisis in the chatbot? Or if the chatbot can be judged to be more self-aware than an average 18-year-old?
What if the chatbot gets 1000 karma on Less Wrong?
It seems like this idea has probably been discussed before and that there is something I am missing; please link me if possible. http://yudkowsky.net/other/fiction/npc is all that comes to mind.
Comment author:RobinZ
25 January 2010 03:29:19PM
1 point
[-]
I think I'm confused: what I assumed you meant was a chatbot in the sense of ELIZA (a program which uses canned replies chosen and modified as per a cursory scan of the input text). Such a program is by definition not a person, and success in Turing tests does not grant it personhood.
As for my second sentence: Turing's imitation game was proposed as a way to get past the common intuition that only a human being could be a person, by countering it with the intuition that someone you can talk to - someone you can hold an ordinary conversation with - is a person. It's an archetypal intuition pump, a very sensible and well-reasoned intuition pump, a perfectly valid intuition pump - but not a rigorous mathematical test. ELIZA, which is barely clever, has passed the Turing test several times. We know that ELIZA is no person.
Comment author:Kevin
25 January 2010 03:44:06PM
*
0 points
[-]
Sorry, by chatbot I meant an intelligent AI programmed only to do chat. An AI trapped in the proverbial box.
I agree that a rigorous mathematical definition of personhood is important, but I doubt that I will be able to make a meaningful contribution in that area anytime in the next few years. For now, I think we should be able to think of some philosophical or empirical test of chatbot personhood.
I still feel confused about this, and I think that's because we still don't have a good definition of what a person actually is; but we shouldn't need a rigorous mathematical test in order to gain a better understanding of what defines a person.
Comment author:RobinZ
25 January 2010 03:48:31PM
0 points
[-]
The Turing test isn't a horrible test of personhood, from that perspective, but without a better understanding of 'personhood' I don't think it's appropriate to spend time trying to come up with a better one.
Comment author:Kevin
25 January 2010 12:29:09PM
*
0 points
[-]
I think "Rapture of the Geeks" is a meme that could catch on with the general public, but this community seems reluctant to engage in self-promotional activities. Is Eliezer actively avoiding publicity?
Comment author:Kevin
25 January 2010 12:52:58AM
*
0 points
[-]
Yeah, it's basically just pretty pictures. However, they're pretty pictures that are probably an interesting knowledge gap for many here.
Perhaps the rationality-related part is why these orbitals are never taught to students. I suppose it's because so few atoms are actually configured in higher orbitals, but students of all ages should find the pictures themselves interesting and understandable.
In high school chemistry, our book went up to d orbitals, and actually said something about how the f orbitals are not shown because they are impossible or very difficult to describe, which is blatantly untrue. I found some pictures of the f orbitals on the internet and showed my teacher (who was one of my best high school teachers), and he was really interested and showed all of his classes those pictures.
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think; the suggestion was written confusingly, in parallel with the names of authors, but I don't think there was ever an author named Gatsby), and I remember actually enjoying The Great Gatsby in high school. It's also a short novel, so we could comfortably read it in a week or leisurely reread it over the course of a month.
If it works, we can do one of Joyce's earlier works next, or whatever the club suggests. If we get good at this, a year from now we can do Ulysses.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually communicate anything: not even "I'm hurt!", for to say that one is hurt presupposes that one is being hurt by something, some thing of which we can speak, of which we can name predicates and say "It is so" or "It is not so." Even very sick and damaged creatures can be helped, as long as their cries have enough structure for us to extrapolate a volition. But not all animate entities are creatures. Creatures have problems, problems we might be able to solve. Agonium just sits there, howling. You cannot help it; it can only be destroyed.
This analysis is all very well and good taken on its own terms, but it conceals---very cleverly conceals, I do compliment you, for surely, surely you had seen it yourself, or some part of you had---it conceals assumptions that do not apply to our own realm. Essences, discreteness, digitality---these are all artifacts born of optimizers; they play no part in the ontology of our continuous, reductionist world. There is no pure agonium, no thing-that-hurts without having any semblance of a reason for being hurt---such an entity would require a very masterful designer indeed, if it could even exist at all. In reality, there is no threshold. We face cries that fractionally have referents. And the quantitative extent to which these cries don't have enough structure for us to extrapolate a volition is exactly the quantitative extent to which any stray stream of memes has license to reshape the entity, pushing it towards the strong attractor. You present us with this bugaboo of entities that we cannot help because they don't even have well-defined problems, but entities without problems don't have rights, either. So what's your problem? You just spray the entity with appropriate literature until it is a creature. Sculpt the thing like clay. That is: you help it by destroying it.
Comment author:Kevin
21 January 2010 01:44:43PM
2 points
[-]
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside my mind's eye. This manifested itself in my summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?
I don't have any memory of a similar revelation, but one of my earliest memories is of asking my mother if there was a way to 'spell letters' - I understood that words could be broken down into parts and wanted to know if that was true of letters, too, and if so where the process ended - which implies that I was already doing a significant amount of abstract reasoning. I was three at the time.
Comment author:MrHen
21 January 2010 03:03:18PM
0 points
[-]
Strange, I have no such memory. The closest thing I can think of is my big Crisis of Faith when I was 17. I realized I had much more power over myself than I had previously thought. It scared me a lot, actually.
Comment author:Wei_Dai
21 January 2010 02:49:20AM
*
5 points
[-]
Suppose we want to program an AI to represent the interest of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individual in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe. In the meantime, the N AIs are to negotiate amongst themselves and, if necessary, are given help to enforce their agreements.
The advantages of this approach are:
AIs will need to know how to negotiate with each other anyway, so we can build on top of that "for free".
There seems little question that the scheme is fair, since everyone is given an equal amount of bargaining power.
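The status quo this scheme sets up can be sketched in toy form. Everything below (the outcome space, the utility functions, the names) is an illustrative assumption, not part of the proposal: each agent's disagreement payoff is its expected utility when one agent is picked uniformly at random and imposes its favorite outcome.

```python
# Toy model of the random-selection status quo. Each "AI" has a
# utility function over outcomes; at the deadline one AI is picked
# uniformly at random and gets its favorite outcome, so each agent's
# disagreement payoff is its average utility across all favorites.

def favorite(outcomes, u):
    return max(outcomes, key=u)

def status_quo_payoffs(outcomes, utilities):
    favs = [favorite(outcomes, u) for u in utilities]
    n = len(utilities)
    return [sum(u(f) for f in favs) / n for u in utilities]

# Two agents splitting a unit resource; outcome x is agent 0's share.
outcomes = [i / 10 for i in range(11)]
u0 = lambda x: x
u1 = lambda x: 1 - x
baseline = status_quo_payoffs(outcomes, [u0, u1])
print(baseline)  # [0.5, 0.5]: each agent dictates half the time
```

Any negotiated agreement then only has to beat this baseline for every agent, which is what gives the scheme its claim to fairness in expectation.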
Comment author:timtyler
04 June 2011 08:47:05PM
0 points
[-]
Create N AIs, one for each individual in the group, and program it with the utility function of that individual. [...] everyone is given an equal amount of bargaining power.
Do you think the more powerful group members are going to agree to that?!? They worked hard for their power and status - and are hardly likely to agree to their assets being ripped away from them in this way. Surely they will ridicule your scheme, and fight against it being implemented.
Comment author:Wei_Dai
05 June 2011 09:56:32PM
*
3 points
[-]
The main idea I wanted to introduce in that comment was the idea of using (supervised) bargaining to aggregate individual preferences. Bargaining power (or more generally, weighing of individual preferences) is a mostly orthogonal issue. If equal bargaining power turns out to be impractical and/or immoral, then some other distribution of bargaining power can be used.
Comment author:Wei_Dai
25 January 2010 03:17:13AM
*
0 points
[-]
I think that's what I implied: there is a supervisor process that governs the negotiation process and eventually picks a random AI to be released into the real world.
What exactly is "equal bargaining power" is vague. If you "instantiate" multiple AIs, their "bargaining power" may well depend on their "positions" relative to each other, the particular values in each of them, etc.
Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe.
Why this requirement? A cooperation of AIs might as well be one AI. Cooperation between AIs is just a special case of operation of each AI in the environment, and where you draw the boundary between AI and environment is largely arbitrary.
Comment author:Wei_Dai
22 January 2010 04:38:34PM
1 point
[-]
Why this requirement?
The idea is that the status quo (i.e., the outcome if the AIs fail to cooperate) is N possible worlds of equal probability, each shaped according to the values of one individual/AI. The AIs would negotiate from this starting point and improve upon it. If all the AIs cooperate (which I presume would be the case), then which AI gets randomly selected to take over the world won't make any difference.
What exactly is "equal bargaining power" is vague. If you "instantiate" multiple AIs, their "bargaining power" may well depend on their "positions" relative to each other, the particular values in each of them, etc.
In this case the AIs start from an equal position, but you're right that their values might also figure into bargaining power. I think this is related to a point Eliezer made in the comment I linked to: a delegate may "threaten to adopt an extremely negative policy in order to gain negotiating leverage over other delegates." So if your values make you vulnerable to this kind of threat, then you might have less bargaining power than others. Is this what you had in mind?
Letting a bunch of AIs with given values resolve their disagreement is not the best way to merge values, just like letting humanity go on as it is, is not the best way to preserve human values. As extraction of preference shouldn't depend on the actual "power" or even stability of the given system, merging of preference could also possibly be done directly and more fairly when specific implementations and their "bargaining power" are abstracted away. Such implementation-independent composition/interaction of preference may turn out to be a central idea for the structure of preference.
Comment author:andreas
24 January 2010 01:06:47AM
1 point
[-]
There seems to be a bootstrapping problem: In order to figure out what the precise statement is that human preference makes, we need to know how to combine preferences from different systems; in order to know how preferences should combine, we need to know what human preference says about this.
If we already have a given preference, it will only retell itself as an answer to the query "What preference should result [from combining A and B]?", so that's not how the game is played. "What's a fair way of combining A and B?" may be more like it, but of questionable relevance. For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be, rather than on how to point to the particular object representing the given imperfect agent.
Comment author:Wei_Dai
25 January 2010 04:15:14AM
*
0 points
[-]
For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be
What is/are your approach(es) for attacking this problem, if you don't mind sharing?
In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
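That structure can be rendered as a toy sketch. The histories-as-strings encoding and the scoring function below are illustrative assumptions only; a real preference would be an arbitrary total order, represented in some compressed form.

```python
# A preference as an ordering on vectors of execution histories,
# induced here by a scoring function (an explicit table over all
# history vectors would be infinite, hence the compressed form).

def prefers(score, histories_a, histories_b):
    """True if history vector histories_a is ranked above histories_b."""
    return score(histories_a) > score(histories_b)

# Illustrative scorer: count 'reward' events across all programs' histories.
def count_rewards(histories):
    return sum(h.count("reward") for h in histories)

a = ("step;reward;halt", "reward;reward")  # 3 reward events
b = ("step;halt", "reward")                # 1 reward event
print(prefers(count_rewards, a, b))  # True
```

Note that nothing in this representation suggests how two such orderings should be merged, which is the point of the comment above.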
For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be
What is/are your approach(es) for attacking this problem, if you don't mind sharing?
Since I don't have self-contained results, I can't describe what I'm searching for concisely, and the working hypotheses and hunches are too messy to summarize in a blog comment. I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
Yes, this is not very helpful. Consider the question: what is the difference between (1) the preference, (2) the strategy that the agent will follow, and (3) the whole of the agent's algorithm? Histories of the universe could play a role in the semantics of (1), but they are problematic in principle, because we don't know, nor will we ever know with certainty, the true laws of the universe. And what we really want is to get to (3), not (1), but with a good enough understanding of (1) that we know (3) to be based on our (1).
Comment author:Wei_Dai
30 January 2010 01:25:07AM
0 points
[-]
I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
Thanks. I look forward to that.
Histories of the universe could play a role in semantics of (1), but they are problematic in principle, because we don't know, nor will ever know with certainty, the true laws of the universe.
I don't understand what you mean here, and I think maybe you misunderstood something I said earlier. Here's what I wrote in the UDT1 post:
More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on.
(Note that of course this utility function has to be represented in a compressed/connotational form, otherwise it would be infinite in size.) If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse. There is uncertainty about "which universes, i.e., programs, we're in", but that's a problem we already have a handle on, I think.
So, I don't know what you're referring to by "true laws of the universe", and I can't find an interpretation of it where your quoted statement makes sense to me.
If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse.
I don't believe that directly positing this "hypothesis" is a meaningful way to go, although the computational paradigm can find its way into the description of the environment for an AI that, in its initial implementation, works from within a digital computer.
Comment author:andreas
24 January 2010 05:50:38PM
0 points
[-]
Here is a revised way of asking the question I had in mind: If our preferences determine which extraction method is the correct one (the one that results in our actual preferences), and if we cannot know or use our preferences with precision until they are extracted, then how can we find the correct extraction method?
Asking it this way, I'm no longer sure it is a real problem. I can imagine that knowing what kind of object preference is would clarify what properties a correct extraction method needs to have.
Going meta and using the (potentially) available data, such as humans in the form of uploads, is a step made in an attempt to minimize the amount of data (given explicitly by the programmers) to the process that reconstructs human preference. Sure, it's a bet (there are no universal preference-extraction methods that interpret every agent in a way it'd prefer to be interpreted itself, so we have to make a good enough guess), but there seems to be no other way to have a chance at preserving current preference. Also, there may turn out to be a good means of verifying that the solution given by a particular preference-extraction procedure is the right one.
Comment author:pdf23ds
23 January 2010 12:51:10PM
*
1 point
[-]
So you know how to divide the pie? There is no interpersonal "best way" to resolve directly conflicting values. (This is further than Eliezer went.) Sure, "divide equally" makes a big dent in the problem, but I find it much more likely any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. I mean, there are plenty of cases where there's more overlap and orthogonal values, but this kind of conflict is unavoidable between any reasonably complex utility functions.
There is no interpersonal "best way" to resolve directly conflicting values.
I'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect emptiness). The possibilities open to the search for an "off-line" resolution of conflict (with abstract transformation of preference) are wider than those for the "on-line" method (with AIs fighting/arguing it out), and so the "best" option, for any given criterion of "best", is going to be better in the "off-line" case.
Comment author:Wei_Dai
23 January 2010 12:39:09AM
*
0 points
[-]
Letting a bunch of AIs with given values resolve their disagreement is not the best way to merge values
[Edited] I agree that it is probably not the best way. Still, the idea of merging values by letting a bunch of AIs with given values resolve their disagreement seems better than previous proposed solutions, and perhaps gives a clue to what the real solution looks like.
BTW, I have a possible solution to the AI-extortion problem mentioned by Eliezer. We can set a lower bound for each delegate's utility function at the status quo outcome (N possible worlds with equal probability, each shaped according to one individual's utility function). Then any threats to cause an "extremely negative" outcome will be ineffective, since the "extremely negative" outcome will have utility equal to the status quo outcome.
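That lower bound amounts to clamping each delegate's utility from below at its status-quo value. A one-line toy sketch (the [0, 1] outcome space and the 0.5 floor are assumptions for illustration, not part of the proposal):

```python
# Clamp a delegate's utility from below at its status-quo payoff, so
# any threatened outcome worse than the status quo carries no weight.

def clamped_utility(u, status_quo_value):
    return lambda outcome: max(u(outcome), status_quo_value)

u = lambda x: x                   # raw utility over outcomes in [0, 1]
u_safe = clamped_utility(u, 0.5)  # 0.5 = value of the random-selection baseline
print(u_safe(0.1))  # 0.5: a threat to impose 0.1 is flattened to the floor
print(u_safe(0.9))  # 0.9: gains above the floor still register
```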
Comment author:Alicorn
21 January 2010 02:56:37AM
3 points
[-]
Unless you can directly extract a sincere and accurate utility function from the participants' brains, this is vulnerable to exaggeration in the AI programming. Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be willing to back off to 6 in exchange for concessions regarding Y from other AIs that don't want much X.
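The incentive described here shows up even in a toy allocation rule. Suppose (purely as an assumed illustration; this is not anyone's proposed protocol) the delegates split a resource in proportion to their reported demands:

```python
# Naive proportional-split protocol: each delegate reports a demand,
# and the total (12 units here) is divided pro rata. Overreporting
# shifts the split in the exaggerator's favor.

def proportional_split(demand_a, demand_b, total=12):
    s = demand_a + demand_b
    return total * demand_a / s, total * demand_b / s

honest = proportional_split(6, 6)     # (6.0, 6.0)
inflated = proportional_split(12, 6)  # (8.0, 4.0): exaggeration pays
print(honest, inflated)
```

This is exactly why truthful revelation of utility functions is a standard concern in bargaining.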
Comment author:Wei_Dai
21 January 2010 03:28:46AM
0 points
[-]
I had also mentioned this in an earlier comment on another thread. It turns out that this is a standard concern in bargaining theory. See section 11.2 of this review paper.
So, yeah, it's a problem, but it has to be solved anyway in order for AIs to negotiate with each other.
Comment author:wedrifid
21 January 2010 03:14:04AM
*
1 point
[-]
This does not seem to be the case when the AIs are unable to read each other's minds. Your AI can be expected to lie to others with more tactical effectiveness than you can lie indirectly via deceiving it. Even in that case it would be better to let the AI rewrite itself for you.
On a similar note, being able to directly extract a sincere and accurate utility function from the participants' brains leaves the system vulnerable to exploitations. Individuals are able to rewrite their own preferences strategically in much the same way that an AI can. Future-me may not be happy but present-me got what he wants and I don't (necessarily) have to care about future me.
Comment author:CassandraR
21 January 2010 12:39:47AM
1 point
[-]
So I am back in college and I am trying to use my time to my best advantage, mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told, here among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them, because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have a furious need for action but seem to have no purpose guiding that action, and it is causing me serious distress and pain.
So over the next few years that I have left in college I am going to make a desperate effort to find an outlet where I can effectively channel this overwhelming need to do something. Right now though I feel so over my head that I can't even see the surface.
Comment author:wedrifid
21 January 2010 01:03:42AM
2 points
[-]
So I am back in college and I am trying to use my time to my best advantage.
Socialise a lot. Learn the skills of social influence and the dynamics of power at both the academic and the practical level.
AnnaSalamon made this and other suggestions when Calling for SIAI fellows. I imagine that the skills useful for SIAI wannabes could have significant overlap with those needed for whatever project you choose to focus on. Specific technical skills may vary somewhat.
Comment author:whpearson
21 January 2010 12:02:34AM
*
4 points
[-]
Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).
It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.
Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them each an easy puzzle, and they will both do well. Paul will complete it quickly and smile proudly at how well he performed. Matt will complete it quickly and be satisfied that he has mastered the skill involved.
Now give them each a difficult puzzle. Paul will jump in gamely, but it will soon become clear he cannot overcome it as impressively as he did the last one. The opportunity to show off has disappeared, and Paul will lose interest and give up. Matt, on the other hand, when stymied, will push harder. His early failure means there's still something to be learned here, and he will persevere until he does so and solves the puzzle.
While a performance orientation improves motivation for easy challenges, it drastically reduces it for difficult ones. And since most work worth doing is difficult, it is the mastery orientation that is correlated with academic and professional success, as well as self-esteem and long-term happiness.
When I learned about performance and mastery orientations, I realized with growing horror just what I'd been doing for most of my life. Going through school as a "gifted" kid, most of the praise I'd received had been of the "Wow, you must be smart!" variety. I had very little ability to follow through or persevere, and my grades tended to be either A's or F's, as I either understood things right away (such as, say, calculus) or gave up on them completely (trigonometry). I had a serious performance orientation. And I was reinforcing it every time I played an RPG.
Comment author:Kaj_Sotala
20 January 2010 03:08:13PM
1 point
[-]
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been the norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
(I read CFAI once 1.5 years ago, and didn't reread it since obtaining the current outlook on the problem, so some mistakes may be present.)
"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still relevant, but were mostly made obsolete by some of the posts on Overcoming Bias. "An Introduction to Goal Systems" is hand-made expected utility maximisation, "Design of Friendship systems" is mostly premature nontechnical speculation that doesn't seem to carry over to how this thing could be actually constructed (but at the time could be seen as intermediate step towards a more rigorous design). "Policy implications" is mostly wrong.
I'll be more careful with "Ban this IP" option in the future, which I used to uncheck during the spam siege a few months back, but didn't in this case. Apparently the IP is only blocked for a day or so. I've removed it from the block list, please check if it works and write back if it doesn't.
Comment author:MrHen
19 January 2010 11:03:42PM
0 points
[-]
It works again.
Honestly, I have no problem not editing the wiki for a few days if it helps block spammers. It's not like I am adding anything critical. I was just confused.
It'd only be necessary to block spammers by IP if they actually relapse (and now that a captcha mod has been installed, spammers are not a problem), but the fact that you share an IP with a spammer suggests that you should check your computer's security.
Comment author:MrHen
19 January 2010 11:23:34PM
0 points
[-]
Well, in the last week I've probably had at least three IP addresses assigned to my computer while editing the wiki. It is hard to know where to begin. I think someone I know has a good program to detect outgoing traffic... that may work.
But how many users do you expect to sit on the same IP? And thus, what is the prior probability that basically the only spammer in weeks (there was only one other) would happen to have the same IP as one of the few dozen (or fewer) users active enough to notice a day's IP block? This explanation sounds like a rationalization of a hypothesis privileged because of availability.
Comment author:mattnewport
20 January 2010 12:58:55AM
0 points
[-]
I didn't know the background spamming rate, but it does seem a little unlikely, doesn't it? A chance reuse of the same IP address does seem improbable, but a better explanation doesn't spring to mind at the moment.
Comment author:mattnewport
19 January 2010 07:18:54PM
2 points
[-]
Assuming you were using your own computer at home and not a public Wi-Fi hotspot or public computer then it could be that you use the same ISP and you were assigned an IP address previously used by another user. Given the relatively low number of users on lesswrong though this seems like a somewhat unlikely coincidence.
Comment author:MrHen
19 January 2010 07:21:06PM
1 point
[-]
Hmm... I was at a coffee shop the other day. I don't see how anyone else there (or anyone else in the entire city I live in) would have ever heard of LessWrong. The block appears to have been created today, however, which makes even less sense.
Comment author:komponisto
19 January 2010 08:26:27AM
*
1 point
[-]
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
Comment author:ciphergoth
19 January 2010 09:16:16AM
4 points
[-]
I quite like swearing, but I don't think it primes people to think and respond rationally in general, and is usually best avoided. Like wedrifid, I'm inclined to argue for an exception for "bullshit", which is a term of art.
Comment author:wedrifid
19 January 2010 06:23:41AM
2 points
[-]
I advocate the use of the term Bullshit. Both because it is a good description of a significant form of bias and because the profanity is entirely appropriate. I really, really don't like seeing the truth distorted like that.
More generally I don't particularly object to swearing but as RobinZ notes it can be distracting. I don't usually find much use for it.
Comment author:CassandraR
18 January 2010 11:51:40PM
1 point
[-]
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard and manipulate new ideas.
I understand this is likely a fairly meta topic and would likely require at least some basic rationality to bootstrap into existence, but I am going to try to define the problem: what is this necessary cognitive foundation? And then break it down into pieces. I suspect that much of this lies in subverbal emotional and procedural cues, but if so, how can they be more effectively trained?
Comment author:CassandraR
19 January 2010 11:33:57AM
2 points
[-]
I have read pretty much everything more than once. It is pretty difficult to turn reading into action though. Which is why I feel like there is something I am missing. Yep.
Comment author:Alicorn
19 January 2010 12:33:58AM
*
1 point
[-]
I think your phrasing of your question is confusing. Are you asking for help putting yourself into a mindset conducive to learning and developing rationality skills?
Comment author:CassandraR
19 January 2010 12:57:02AM
0 points
[-]
Let me see if I can be more clear. In my experience I have an emotional framework from which I hang beliefs. Each belief has specific emotional reinforcement or structure that allows me to believe it. If I revoke that reinforcement, then very soon after I find that I no longer hold that belief. I guess the question I should ask first is: is this emotional framework real? Did I make it up? And if it is real, how can I use it to my advantage?
How did I build this framework and how do I revoke emotional support? I have good reason to think that the framework isn't simply natural to me since it has changed so much over time.
One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they're the sort of actions I'd take if I "truly" believed. Over time the belief becomes internal and not something I have to recompute every time a related decision comes up. I don't know precisely why this works but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take which are not in line with the belief. I've used this for things like "animal suffering is actually bad", "FAI is actually important", and "I actually need to practice to write good UIs".
Comment author:CassandraR
19 January 2010 01:31:28AM
*
1 point
[-]
This is similar to my experience. Perhaps a better way to express my problem is this: what are some safe and effective ways to construct and dismantle identity? And what sorts of identity are most able to incorporate new information and process it into rational beliefs? One strategy I have used in the past is to simply not claim ownership of any belief so that I might release it more easily, but then I run into a lack of motivation when I try to act on those beliefs. On the other hand, if I define my identity based on a set of beliefs, then any threat to them is extremely painful.
That was my original question, how can I build an identity or cognitive foundation that motivates me but is not painfully threatened by counter evidence?
Comment author:orthonormal
19 January 2010 05:37:31AM
2 points
[-]
The litany of Tarski and the litany of Gendlin exemplify a pretty good attitude to cultivate. (Check out the posts linked in the Litany of Gendlin wiki article; they're quite relevant too. After that, the sequence on How to Actually Change Your Mind contains still more helpful analysis and advice.)
This can be one of the toughest hurdles for aspiring rationalists. I want to emphasize that it's OK and normal to have trouble with this, that you don't have to get everything right on the first try (and to watch out if you think you do), and that eventually the world will start making sense again and you'll see it was well worth the struggle.
Comment author:Alicorn
19 January 2010 01:10:05AM
1 point
[-]
The emotional framework of which you speak doesn't seem to resemble anything I can introspectively access in my head, but maybe I can offer advice anyway. Some emotional motivations that are conducive to rationality are curiosity, and the powerful need to accomplish some goal that might depend on you acting rationally.
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
That is pretty ridiculous - enough to make me want to check the original study for effect size and statistical significance. Writing newspaper articles on research without giving the original paper title ought to be outlawed.
Comment author:MrHen
18 January 2010 06:27:09PM
*
4 points
[-]
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
Comment author:CarlShulman
18 January 2010 08:48:57PM
0 points
[-]
People can read them from the sequences page and Google searches, so I'd suggest a). A follow-up post linking to the old article is also a possibility!
Comment author:orthonormal
18 January 2010 08:26:59PM
1 point
[-]
I think each has their advantages. If you post a comment on the open thread, it's more likely to be read and discussed now; if you post one on the original thread, it's more likely to be read by people investigating that particular issue some time from now.
I'll ask the question again here -- does anyone know of some more extensive writing on the subject of cognitive flaws related to gaming? Or something recent on the psychology of rewards?
Comment author:roland
16 January 2010 05:17:02PM
0 points
[-]
I've been downvoted quite often recently and since I'm actually here to learn something I would like to better understand the reasons behind it.
Specifically I would like to hear your opinion on the following comment of mine:
"I'll be the judge of that." This was given as an answer to someone suggesting how I should use my time.
Comment author:Kevin
16 January 2010 03:44:42AM
0 points
[-]
As far as candidates for making AI other than the Singularity Institute, is there any more likely than Google? Surely they want to make one.
They have a lot of really smart AI researchers working on hard problems within the world's largest dataset, and who knows what can happen when you combine that with 20% time. Does Google controlling the AI scare you?
The US military or any government making the AI seems a recipe for certain destruction, but I'm not so sure about Google.
Comment author:Kevin
16 January 2010 03:51:15AM
*
0 points
[-]
Thanks for the link... also just googled my way to Peter Norvig speaking at the Singularity Summit saying they aren't anywhere close to AGI and aren't trying. http://news.cnet.com/8301-10784_3-9774501-7.html
So I think it depends on 20% time for now which isn't exactly conducive to solving the hard problem, not to mention 20% time at Google isn't what it used to be.
Comment author:RobinZ
16 January 2010 01:37:29AM
1 point
[-]
Interesting heuristic - I would be curious to find if anyone else has followed something similar to good effect, but it sounds conceptually reasonable.
Comment author:Kevin
16 January 2010 12:26:19AM
1 point
[-]
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for them to do it, they probably did it.
On the other hand, I know the conditions of factory farming, and it seems quite plausible and even very likely for such a virus to spontaneously mutate and cross species. So I put the probability of an H1N1 conspiracy at 10%. However, my friend's argument makes a certain amount of sense to me.
Comment author:ciphergoth
16 January 2010 12:40:03AM
1 point
[-]
Any such conspiracy would have to be known by quite a few people and so would stand an excellent chance of having the whistle blown on it. In every case I can think of where large Western companies have been caught doing anything that outrageously evil, they started with a legitimate profit-making plan, and then did the outrageous evil to hide some problem with it.
Comment author:Kevin
16 January 2010 12:38:24AM
*
0 points
[-]
They're almost made up, which makes any attempt at Bayesian analysis not all that meaningful... I'd welcome other tools. He gave me the 80% probability number so I felt obligated to give my own probability.
Consider the numbers to have very wide bounds, or to be more meaningful expressed in words -- he thinks there is a conspiracy, I don't think there is a conspiracy, but neither of us are absolutely confident about it.
Comment author:roland
16 January 2010 12:43:24AM
0 points
[-]
he thinks there is a conspiracy, I don't think there is a conspiracy, but neither of us are absolutely confident about it.
Exactly. I think there is no rational basis for answering your question.
Again, he thinks that because it was possible for them to do it, they probably did it.
Your friend has a distrust of corporate leaders (here I agree with him), and his theory is probably based on his feeling of disgust for their practices. So his theory probably has more of an emotional basis than a rational one. That doesn't mean it is wrong, just that there aren't any rational reasons for believing it.
Comment author:[deleted]
14 January 2010 08:50:57PM
1 point
[-]
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
It's often said around here that Bayesian priors and Solomonoff induction and such things describe the laws of physics of the universe. The simpler the description, the more likely those laws of physics are. This is more or less true, but it is not the truth that we want to be saying. What we're trying to describe is our observations. If I had a theory stating that every computable event happens, sure, that explains all phenomena, but in order for it to describe our observations, I need to add a string specifying which of these computable events are the ones we observe, which makes the theory completely useless.
In theory, this provides a solution to anthropic reasoning: simply figure out which paths through the universe are the simplest, and assign those the highest probability. Again, in theory, this provides a solution to quantum suicide. But please don't ask me what these solutions are.
Comment author:Wei_Dai
15 January 2010 02:54:56AM
2 points
[-]
Does anyone understand the last two paragraphs of the comment that I'm responding to? I'm having trouble figuring out whether Warrigal has a real insight that I'm failing to grasp, or if he is just confused.
Comment author:Cyan
14 January 2010 09:16:44PM
0 points
[-]
In the toy problem in the link, as long as we know the rule that people use to write down their guesses (e.g., write down the hypothesis with maximum posterior probability; if 50-50, write down what the last person wrote), at each stage we can treat the previous sequence as a latent variable about which we have partial information. The solution is straightforward to set up.
Comment author:mattnewport
14 January 2010 09:23:18PM
0 points
[-]
My intuition is that if you assume everyone before you has written down the correct most likely answer based on the sequence they observe (and using the same assumption) then you fairly quickly reach a point where additional people's guesses add no new information. Can anyone confirm or refute that and save me trying to do the math?
Comment author:pengvado
14 January 2010 11:29:48PM
*
4 points
[-]
If the tiebreak strategy is "agree with the previous person's guess", then you reach that point immediately. The first person's draw determines everyone's guess: If the second person's draw is the same as the first, then of course they agree, and if not then they're at a 50/50 posterior and thus also agree.
If the tiebreak strategy is "write down your own draw (i.e. maximize the information given to subsequent players)", then information can be collected only so long as the number of each color drawn remains tied or +/-1. As soon as one color is ahead by 2 draws, all future draws are ignored and the guesses so far suffice to determine everyone else's guess.
If the draws are with replacement, then the probability that what you get locked into is the right guess is 4/5. (Assume WLOG that the urn is primarily white. Consider two draws: WW is 4/9 and determines the right answer; RR is 1/9 and determines the wrong answer; WR or RW have no net likelihood change, so recurse.)
If the draws are without replacement, then it's... 80.6%. (Very close to 4/5 since with very high probability you'll run into a cascade one way or the other before the non-replacement changes the ball proportions much.)
Otoh, "tally all the votes at the end of the pulling, and that determines the group’s Urn choice" is an entirely different question, and doesn't have the same strategy as maximizing your individual chance of correctness.
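The with-replacement figure above is easy to check numerically. A minimal Monte Carlo sketch (the function names are mine, and the 2/3-majority urn composition is my assumption, since the original puzzle isn't quoted here):

```python
import random

def cascade_is_correct(p_majority=2/3, rng=random):
    """One run of the guessing game with replacement. Under the
    'announce your own draw' tiebreak, the running lead (majority-colour
    guesses minus minority-colour guesses) does a random walk until it
    hits +2 or -2, after which every later player joins the cascade.
    Returns True if the cascade locks onto the correct urn."""
    lead = 0
    while abs(lead) < 2:
        lead += 1 if rng.random() < p_majority else -1
    return lead > 0

def estimate_accuracy(trials=200_000, seed=1):
    rng = random.Random(seed)
    return sum(cascade_is_correct(rng=rng) for _ in range(trials)) / trials

print(estimate_accuracy())  # close to 0.8 = 4/5
```

The recursion in the parent comment gives the exact value: WW (probability 4/9) locks in the correct answer, RR (1/9) locks in the wrong one, and the mixed cases recurse, so P(correct) = (4/9) / (4/9 + 1/9) = 4/5.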
Comment author:Morendil
15 January 2010 07:06:33AM
1 point
[-]
Nice - I hadn't gotten so far as analyzing the other tiebreak policy.
"Prior information" in this kind of problem includes a bunch of rather unlikely assumptions, such as that every player is maximally rational and that the rules of the game reward picking the true choice of urn.
Unfortunately there is no reason to prefer one tiebreak policy over the other. Does it make the problem more determinate if we assume the game scores per Bayesian Truth Serum, that is, you get more points for a contrarian choice that happens to be right?
Comment author:pengvado
15 January 2010 08:32:54AM
1 point
[-]
Since the total evidence you can get from examining all previous guesses (assuming conventional strategy and rewards as before) gives you only a 4/5 accuracy, and you can get 2/3 by ignoring all previous guesses and looking only at your own draw: Yes, rewarding correct contrarians at least 20% more than correct majoritarians would provide enough incentive to break the information cascade. Only until you've accumulated enough extra information to make the majoritarian answer confident enough to overcome the difference between rewards, of course, but it would still equilibrate at a higher accuracy.
Comment author:Wei_Dai
11 January 2010 10:23:45PM
8 points
[-]
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. I mention it here because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start their existence as a minority?
Comment author:HalFinney
14 January 2010 11:01:48PM
*
0 points
[-]
I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically we could encourage people to think for themselves, so we can free-ride on the costs they experience.
Comment author:Wei_Dai
15 January 2010 07:25:21PM
1 point
[-]
We should pay people to adopt and advocate independent views, to their own detriment.
I guess we already do something like that, namely award people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.
Comment author:Psy-Kosh
11 January 2010 09:33:21PM
0 points
[-]
Reading it now, thanks.
Okay, from the initial description, it looks like MML considers TOTAL length, where the message includes both the theory and the additional info needed to reconstruct the total data, while MDL ignores aspects of the description of the theory for the purposes of measuring the length.
Comment author:Cyan
11 January 2010 11:42:08PM
*
0 points
[-]
I'm a bit confused on that point myself. Before finding that document, my understanding was that MML averaged over the prior, while MDL avoided having a prior by using some kind of minimax approach, but the paper I pointed you to doesn't seem to say anything about that.
Comment author:pdf23ds
11 January 2010 10:02:24AM
*
0 points
[-]
Hey, exactly 500 comments.
So, elsewhere someone just brought up moral luck. I'm wondering how this relates to the Yudkowskian view on morality (I forget what he called it), and I'd like to invite someone to think about it and perhaps post on it. If no one else does so, I might be motivated to do so eventually. There might be some potential to shed some real light on the issue of moral luck--specifically the extent of the validity or otherwise of the Control Principle--with reference to Yudkowsky's framework.
Let's say someone gravely declares, of some moral dilemma [...] that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?
Lately I've actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It seems like a lot of discussions of utilitarianism versus deontology treat them as two arbitrary viewpoints or positions, but insofar as my thinking has trended utilitarian lately, it hasn't been because I'm attracted to a utilitarian position, but because Cox's theorem [edit: sic] forces it. Even if I draw up a set of rights that I think must not be violated, I'm still going to have to make decisions under uncertainty, which I would guess means acting to minimize the expected number of rights-violations.
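That last point is easy to make concrete. A toy sketch of choosing between two lotteries by expected rights violations (the action names and all the numbers are invented for illustration):

```python
# Each action is a lottery over outcomes; each outcome carries a count
# of rights violations. Under uncertainty, even a rights-based agent
# ends up ranking actions by expected violations.
lotteries = {
    "intervene":  [(0.9, 0), (0.1, 3)],  # (probability, violations)
    "do_nothing": [(0.5, 0), (0.5, 1)],
}

def expected_violations(lottery):
    return sum(p * v for p, v in lottery)

best = min(lotteries, key=lambda a: expected_violations(lotteries[a]))
print(best)  # "intervene": 0.3 expected violations beats 0.5
```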
Comment author:PhilGoetz
11 January 2010 10:32:04PM
1 point
[-]
Isn't that what people have always done? Maybe not explicitly. To explicitly make the split you're speaking of would just help people to deny reality, and do what they need to do, albeit in highly suboptimal and destructive ways, while still holding on to incoherent moral codes that continue to harm them in other ways.
But it beats letting ourselves be wiped out. I worry about the fact that Western civilization is saying that an increasing number of rights must not be violated under any circumstances, at a time when we are facing an increasing number of existential risks. There are some things that we don't let ourselves see, because seeing them would mean acknowledging that somebody's rights will have to be violated.
For instance, plenty of people simultaneously believe that Israel must stay where it is, and that Israel must not commit genocide. Reality might accommodate them (eg., if we discover an alternative energy source that impoverishes the other middle eastern states). But I think it's more likely that it won't.
Comment author:PhilGoetz
12 January 2010 11:22:06PM
0 points
[-]
As technology advances, it takes fewer and fewer resources to wreak an equivalent amount of devastation. Soon, small groups of people will be able to annihilate nations. In most cultures, only a very small percentage of people would like to do so; trying to detect and control those individuals may be a workable strategy.
Israel, however, is near several cultures where most people would like to kill everyone in Israel (based on, among other things, public rejoicing instead of statements of regret when Israelis are killed for any reason, opinion polls showing that most people in some countries say they have positive opinions of Al Qaeda, and the success in popular elections of groups including Hezbollah and Hamas which have the destruction of Israel as part of their platform). The annihilation of Israel is not a goal for a few crazy individuals, but a mainstream cultural goal.
Comment author:Cyan
12 January 2010 02:54:26PM
*
0 points
[-]
Demographic threat. Twenty-seven words: if Israel stays where it is, the growth of Arab citizenry will pose a threat to its existence as a Jewish state with a Jewish demographic majority.
Comment author:PhilGoetz
12 January 2010 11:31:13PM
*
0 points
[-]
I would consider that one of the better possible outcomes. As long as it leads to a conversion from a race-based state to a pluralistic society, rather than cattle cars and smokestacks.
Comment author:Cyan
12 January 2010 11:40:51PM
0 points
[-]
It's not really a race-based state, in the sense that one can't arbitrarily choose one's race, but under the Law of Return one can choose to convert to Judaism and instantly gain Israeli citizenship upon immigrating.
Comment author:Technologos
11 January 2010 09:31:49PM
2 points
[-]
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
Comment author:Technologos
11 January 2010 09:41:54PM
1 point
[-]
It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
Comment author:Technologos
12 January 2010 05:37:27AM
0 points
[-]
I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
Comment author:pdf23ds
11 January 2010 12:17:31PM
*
1 point
[-]
I don't think that's quite the same usage of "moral luck". According to the technical term, it's when you, for example, judge someone who was driving drunk and hit a person more harshly than someone who was driving drunk and didn't hit anyone, all else being equal. In other words, things entirely outside of your control that make the same action more or less blameworthy. Another example, from the link:
For example, consider Nazi collaborators in 1930's Germany who are condemned for committing morally atrocious acts, even though their very presence in Nazi Germany was due to factors beyond their control (Nagel 1979). Had those very people been transferred by the companies for which they worked to Argentina in 1929, perhaps they would have led exemplary lives. If we correctly morally assess the Nazi collaborators differently from their imaginary counterparts in Argentina, then we have a case of circumstantial moral luck.
Comment author:komponisto
11 January 2010 01:23:39PM
0 points
[-]
I don't see the difference between this usage and Zack's/Eliezer's: the definition given in the SEP link is:
Moral luck occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control.
A situation where all of an agent's options are blameworthy seems quite clearly to fall within this category.
Comment author:pdf23ds
12 January 2010 05:24:03AM
0 points
[-]
OK, I suppose it counts as an instance, though I'm not convinced Eliezer intended the phrase in that sense. But it's certainly one of the instances I'm less interested in.
Comment author:lunchbox
11 January 2010 01:29:38AM
0 points
[-]
How do people here consume Less Wrong? I just started reading and am looking for a good way to stay on top of posts and comments. Do you periodically check the website? Do you use an RSS feed? (which?) Or something else?
When I'm actively following the site (visiting 3+ times a day), I primarily follow the new comments page. I only read top posts when I see that there's an interesting discussion going on about one of them, or if the post's title seems particularly interesting. (I do wind up reading a large portion of the top posts sooner or later, though.)
I have the 'recent posts' RSS feed in my reader for when I'm not actively following the site, but I only click through if something seems very interesting.
Comment author:LucasSloan
11 January 2010 01:43:44AM
1 point
[-]
I read new posts as soon as I see them. I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly. I also reread posts to get any comments I miss and to get a better sense of how the discussions are proceeding.
Comment author:byrnema
11 January 2010 02:05:32AM
0 points
[-]
I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly.
I click on "Recent Comments" and read as far back as I have to until I've caught up. Reading backwards can be mentally tiring ... so I'm actually just skimming for interesting comments. When I find one that seems interesting, I read through that thread for the continuity of the discussion.
Comment author:PhilGoetz
09 January 2010 06:17:32AM
*
3 points
[-]
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit was; but I regard it as an interesting correlate of processing power, without any demonstrated or even argued logical relationship to consciousness. Tononi has published a lot of papers on it - and they became more widely-cited when he started saying they were about consciousness instead of saying they were about information integration - but he didn't AFAIK make any arguments that the thing he measures with information integration has something to do with consciousness.
Comment author:byrnema
09 January 2010 06:14:00PM
*
1 point
[-]
It's a very interesting question. I think it's pretty straight-forward that 'ourselves' is a composite of 'awarenesses' with non-overlapping mutual awareness.
Some data with respect to inebriation:
- Drunk people would pass a Turing test, but the next morning when events are recalled, it feels like someone else's experiences. But then when drunk again, the experiences again feel immediate.
- When I lived in France, most of my socialization time was spent inebriated. For years thereafter, whenever I was intoxicated, I felt like it was more natural to speak in French than English. Even now, my French vocabulary is accessible after a glass of wine.
Comment author:PhilGoetz
10 January 2010 12:24:22AM
1 point
[-]
That is interesting, but not what I was trying to ask. I was trying to ask if there could be separate, smaller, less-complex, non-human consciousnesses inside every human. It seems plausible (not probable, plausible) that there are, and that we currently have no way of detecting whether that is the case.
Comment author:PhilGoetz
09 January 2010 05:49:11PM
*
-5 points
[-]
It's a very important question, if you hope for a future that contains consciousness. You aren't going to be the singleton. You're going to be a piece of a singleton.
Edited later for niceness. But not because of your downvotes, which I also do not respect. I felt like a hypocrite for having told people to be nice.
Comment author:MrHen
09 January 2010 12:23:04AM
*
3 points
[-]
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
Comment author:thomblake
22 January 2010 04:59:59PM
-1 points
[-]
Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.
This isn't an example of a logical fallacy; it could be read that way if the conclusion was "their way must be right" or something like that. As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".
If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.
Affirming the consequent, in general, is a good heuristic.
What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.
I try to avoid having more than one post of mine on the sidebar at the same time.
Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.
Particularly so given the confounding factors in the case in question.
And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.
I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.
It's cool, you're like our friendly mascot theist.
It appears that as long as I stoop to the correct level of self-deprecation I get enough karma to allow me to keep bashing myself over the head.
:D Isn't language/linguistics fun?
"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.
It isn't that impressive to me. As far as I can see, what it shows is that she has been torturing herself for a long time, probably many years, over her issues with Christianity. She's just expressing her anger with the suffering it caused her.
I wish it were possible to mail her and tell her she doesn't have to apologise!
This woman is a model unto the entire human species.
Thank you for posting that. It's an inspiration.
I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.
I'm jealous. Don Gotterbarn is at that school.
Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)
I may have a substantive disagreement with point two, but that's a post in its own right.
Laser fusion test results raise energy hopes: http://news.bbc.co.uk/2/hi/science/nature/8485669.stm
I'll track down the paper from Science on request.
For the "How LW is Perceived" file:
Here is an excerpt from a comments section elsewhere in the blogosphere:
I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...
Prisoner's Dilemma on Amazon Mechanical Turk: http://blog.doloreslabs.com/2010/01/altruism-on-amazon-mechanical-turk/
Ask Peter Norvig anything: http://www.reddit.com/r/programming/comments/auvxf/ask_peter_norvig_anything/
Garry Kasparov: The Chess Master and the Computer
http://www.nybooks.com/articles/23592
Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.
Does a Turing chatbot deserve recognition as a person?
(Turing chatbot = bot that can pass the Turing test... 50% of the time? 95% of the time? 99% of the time?)
No. The Turing test is an intuition pump, not a person-predicate.
First, is there an agreed-upon definition for person? We need to define that and make sure we agree before we go much further, but I'll give it a try anyways.
Not all Turing tests are intuition pumps. There should be other Turing tests to recognize a greater degree of personhood. Perhaps if the investigator can trigger an existential crisis in the chatbot? Or if the chatbot can be judged to be more self-aware than an average 18-year-old?
What if the chatbot gets 1000 karma on Less Wrong?
How would you Turing test an oracle chatbot? http://lesswrong.com/lw/1lf/open_thread_january_2010/1i6u
It seems like this idea has probably been discussed before and that there is something I am missing; please link me if possible. http://yudkowsky.net/other/fiction/npc is all that comes to mind.
I think I'm confused: what I assumed you meant was a chatbot in the sense of ELIZA (a program which uses canned replies chosen and modified as per a cursory scan of the input text). Such a program is by definition not a person, and success in Turing tests does not grant it personhood.
As for my second sentence: Turing's imitation game was proposed as a way to get past the common intuition that only a human being could be a person by countering it with the intuition that someone you can talk to, you can hold an ordinary conversation with, is a person. It's an archetypal intuition pump, a very sensible and well-reasoned intuition pump, a perfectly valid intuition pump - but not a rigorous mathematical test. ELIZA, which is barely clever, has passed the Turing test several times. We know that ELIZA is no person.
Sorry, by chatbot I meant an intelligent AI programmed only to do chat. An AI trapped in the proverbial box.
I agree that a rigorous mathematical definition of personhood is important, but I doubt that I will be able to make a meaningful contribution in that area anytime in the next few years. For now, I think we should be able to think of some philosophical or empirical test of chatbot personhood.
I still feel confused about this, and I think that's because we still don't have a good definition of what a person actually is; but we shouldn't need a rigorous mathematical test in order to gain a better understanding of what defines a person.
The Turing test isn't a horrible test of personhood, from that perspective, but without a better understanding of 'personhood' I don't think it's appropriate to spend time trying to come up with a better one.
Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).
I think "Rapture of the Geeks" is a meme that could catch on with the general public, but this community seems to have reluctance to engage in self-promotional activities. Is Eliezer actively avoiding publicity?
http://en.wikipedia.org/wiki/Chantek
I recently found an article that may be of interest to Less Wrong readers:
Blame It on the Brain
The latest neuroscience research suggests spreading resolutions out over time is the best approach
The article also mentions a study in which overloading the prefrontal cortex with other tasks reduces people's willpower.
(should I repost this link to next month's open thread? not many people are likely to see it here)
Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm
In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.
That is really, really cool. Not particularly rationality-related (except as regards the display format), but really cool.
Yeah, it's basically just pretty pictures. However, they're pretty pictures that are probably an interesting knowledge gap for many here.
Perhaps what is rationality related is why these orbitals are never taught to students. I suppose because so few atoms are actually configured in higher orbitals, but students of all ages should find the pictures themselves interesting and understandable.
In high school chemistry, our book went up to d orbitals, and actually said something about how the f orbitals are not shown because they are impossible or very difficult to describe, which is blatantly untrue. I found some pictures of the f orbitals on the internet and showed my teacher (who was one of my best high school teachers) and he was really interested and showed all of his classes those pictures.
Inorganic dust with lifelike qualities: http://www.sciencedaily.com/releases/2007/08/070814150630.htm
I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.
Inspired by this comment by Michael Vassar:
http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments
Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.
Michael Vassar suggests The Great Gatsby (I think; the comment was written confusingly, in parallel with the names of authors, but I don't think there was ever an author named Gatsby) and I remember actually enjoying The Great Gatsby in high school. It's also a short novel, so we could comfortably read it in a week or leisurely reread it over the course of a month.
If it works, we can do one of Joyce's earlier works next, or whatever the club suggests. If we get good at this, a year from now we can do Ulysses.
It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually communicate anything: not even, "I'm hurt!" for to say that one is hurt presupposes that one is being hurt by something, some thing of which we can speak, of which we can name predicates and say "It is so" or "It is not so." Even very sick and damaged creatures can be helped, as long as their cries have enough structure for us to extrapolate a volition. But not all animate entities are creatures. Creatures have problems, problems we might be able to solve. Agonium just sits there, howling. You cannot help it; it can only be destroyed.
This analysis is all very well and good taken on its own terms, but it conceals---very cleverly conceals, I do compliment you, for surely, surely you had seen it yourself, or some part of you had---it conceals assumptions that do not apply to our own realm. Essences, discreteness, digitality---these are all artifacts born of optimizers; they play no part in the ontology of our continuous, reductionist world. There is no pure agonium, no thing-that-hurts without having any semblance of a reason for being hurt---such an entity would require a very masterful designer indeed, if it could even exist at all. In reality, there is no threshold. We face cries that fractionally have referents. And the quantitative extent to which these cries don't have enough structure for us to extrapolate a volition is exactly again the quantitative extent to which any stray stream of memes has license to reshape the entity, pushing it towards the strong attractor. You present us with this bugaboo of entities that we cannot help because they don't even have well-defined problems, but entities without problems don't have rights, either. So what's your problem? You just spray the entity with appropriate literature until it is a creature. Sculpt the thing like clay. That is: you help it by destroying it.
Did I miss something?
No. (Exploratory commentary seemed appropriate for Open Thread.)
How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?
I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside of my mind's eye. This manifested itself by me summoning an image in my head of Gene Wilder as Willy Wonka.
Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?
(This is inspired by Shannon's mention of her child exploring her sense of self) http://lesswrong.com/lw/1n8/london_meetup_the_friendly_ai_problem/1hm4
I don't have any memory of a similar revelation, but one of my earliest memories is of asking my mother if there was a way to 'spell letters' - I understood that words could be broken down into parts and wanted to know if that was true of letters, too, and if so where the process ended - which implies that I was already doing a significant amount of abstract reasoning. I was three at the time.
Strange, I have no such memory. The closest thing I can think of is my big Crisis of Faith when I was 17. I realized I had much more power over myself than I had previously thought. It scared me a lot, actually.
Suppose we want to program an AI to represent the interest of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individual in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)
Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe. In the meantime the N AIs are to negotiate amongst themselves and, if necessary, be given help to enforce their agreements.
The advantages of this approach are:
Comments?
ETA: I found a very similar idea mentioned before by Eliezer.
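A minimal sketch of the baseline this scheme sets up (the utilities are toy numbers I made up, not anyone's actual values): with N AIs and a uniformly random winner, each agent's expected utility at the disagreement point is the average of its utility over the N candidate worlds, and any negotiated agreement has to Pareto-improve on that.

```python
# Hypothetical toy utilities: u[i][j] = agent i's utility for the world
# that agent j would build if agent j won the random-selection lottery.
u = [
    [1.0, 0.2, 0.0],  # agent 0
    [0.3, 1.0, 0.1],  # agent 1
    [0.0, 0.4, 1.0],  # agent 2
]
N = len(u)

# Disagreement point: each agent's expected utility if negotiation fails
# and one AI is picked uniformly at random to take over.
status_quo = [sum(u[i]) / N for i in range(N)]

# A negotiated joint outcome is worth accepting to agent i only if it
# beats this baseline, so any agreement must Pareto-improve on it.
compromise = [0.6, 0.6, 0.6]  # a hypothetical negotiated outcome
everyone_gains = all(c > s for c, s in zip(compromise, status_quo))
```

If all the AIs do reach such an agreement, which one is eventually selected no longer matters, since they have all bound themselves to the same negotiated outcome.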
Do you think the more powerful group members are going to agree to that?!? They worked hard for their power and status - and are hardly likely to agree to their assets being ripped away from them in this way. Surely they will ridicule your scheme, and fight against it being implemented.
The main idea I wanted to introduce in that comment was the idea of using (supervised) bargaining to aggregate individual preferences. Bargaining power (or more generally, weighing of individual preferences) is a mostly orthogonal issue. If equal bargaining power turns out to be impractical and/or immoral, then some other distribution of bargaining power can be used.
I think that's what I implied: there is a supervisor process that governs the negotiation process and eventually picks a random AI to be released into the real world.
What exactly is "equal bargaining power" is vague. If you "instantiate" multiple AIs, their "bargaining power" may well depend on their "positions" relative to each other, the particular values in each of them, etc.
Why this requirement? A cooperation of AIs might as well be one AI. Cooperation between AIs is just a special case of operation of each AI in the environment, and where you draw the boundary between AI and environment is largely arbitrary.
The idea is that the status quo (i.e., the outcome if the AIs fail to cooperate) is N possible worlds of equal probability, each shaped according to the values of one individual/AI. The AIs would negotiate from this starting point and improve upon it. If all the AIs cooperate (which I presume would be the case), then which AI gets randomly selected to take over the world won't make any difference.
In this case the AIs start from an equal position, but you're right that their values might also figure into bargaining power. I think this is related to a point Eliezer made in the comment I linked to: a delegate may "threaten to adopt an extremely negative policy in order to gain negotiating leverage over other delegates." So if your values make you vulnerable to this kind of threat, then you might have less bargaining power than others. Is this what you had in mind?
Letting a bunch of AIs with given values resolve their disagreement is not the best way to merge values, just like letting the humanity go on as it is is not the best way to preserve human values. As extraction of preference shouldn't depend on the actual "power" or even stability of the given system, merging of preference could also possibly be done directly and more fairly when specific implementations and their "bargaining power" are abstracted away. Such implementation-independent composition/interaction of preference may turn out to be a central idea for the structure of preference.
There seems to be a bootstrapping problem: In order to figure out what the precise statement is that human preference makes, we need to know how to combine preferences from different systems; in order to know how preferences should combine, we need to know what human preference says about this.
If we already have a given preference, it will only retell itself as an answer to the query "What preference should result [from combining A and B]?", so that's not how the game is played. "What's a fair way of combining A and B?" may be more like it, but of questionable relevance. For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be, rather than on how to point to the particular object representing the given imperfect agent.
What is/are your approach(es) for attacking this problem, if you don't mind sharing?
In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
Since I don't have self-contained results, I can't describe what I'm searching for concisely, and the working hypotheses and hunches are too messy to summarize in a blog comment. I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.
Yes, this is not very helpful. Consider the question: what is the difference between (1) preference, (2) the strategy that the agent will follow, and (3) the whole of the agent's algorithm? Histories of the universe could play a role in the semantics of (1), but they are problematic in principle, because we don't know, nor will we ever know with certainty, the true laws of the universe. And what we really want is to get to (3), not (1), but with a good understanding of (1) so that we know (3) to be based on our (1).
Thanks. I look forward to that.
I don't understand what you mean here, and I think maybe you misunderstood something I said earlier. Here's what I wrote in the UDT1 post:
(Note that of course this utility function has to be represented in a compressed/connotational form, otherwise it would be infinite in size.) If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse. There is uncertainty about "which universes, i.e., programs, we're in", but that's a problem we already have a handle on, I think.
So, I don't know what you're referring to by "true laws of the universe", and I can't find an interpretation of it where your quoted statement makes sense to me.
I don't believe that directly posing this "hypothesis" is a meaningful way to go, although computational paradigm can find its way into description of the environment for the AI that in its initial implementation works from within a digital computer.
Here is a revised way of asking the question I had in mind: If our preferences determine which extraction method is the correct one (the one that results in our actual preferences), and if we cannot know or use our preferences with precision until they are extracted, then how can we find the correct extraction method?
Asking it this way, I'm no longer sure it is a real problem. I can imagine that knowing what kind of object preference is would clarify what properties a correct extraction method needs to have.
Going meta and using the (potentially) available data, such as humans in the form of uploads, is a step made in an attempt to minimize the amount of data (given explicitly by the programmers) to the process that reconstructs human preference. Sure, it's a bet (there are no universal preference-extraction methods that interpret every agent in the way it'd prefer to be interpreted, so we have to make a good enough guess), but there seems to be no other way to have a chance at preserving current preference. Also, there may turn out to be a good means of verification that the solution given by a particular preference-extraction procedure is the right one.
So you know how to divide the pie? There is no interpersonal "best way" to resolve directly conflicting values. (This is further than Eliezer went.) Sure, "divide equally" makes a big dent in the problem, but I find it much more likely any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. I mean, there are plenty of cases where there's more overlap and orthogonal values, but this kind of conflict is unavoidable between any reasonably complex utility functions.
I'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect emptiness). The possibilities open for the search of "off-line" resolution of conflict (with abstract transformation of preference) are wider than those for the "on-line" method (with AIs fighting/arguing it over) and so the "best" option, for any given criterion of "best", is going to be better in "off-line" case.
[Edited] I agree that it is probably not the best way. Still, the idea of merging values by letting a bunch of AIs with given values resolve their disagreement seems better than previous proposed solutions, and perhaps gives a clue to what the real solution looks like.
BTW, I have a possible solution to the AI-extortion problem mentioned by Eliezer. We can set a lower bound for each delegate's utility function at the status quo outcome, (N possible worlds with equal probability, each shaped according to one individual's utility function). Then any threats to cause an "extremely negative" outcome will be ineffective since the "extremely negative" outcome will have utility equal to the status quo outcome.
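One way to picture the clamping (toy numbers; `bounded_utility` is a hypothetical name, not anything from the linked comment): every outcome at or below the status quo is valued identically, so a threatened catastrophe is worth exactly as much to the threatened delegate as simply refusing to concede.

```python
def bounded_utility(raw_utility, status_quo_utility):
    """A delegate's effective utility, floored at the status quo.

    Outcomes at or below the status quo are all valued identically,
    so a threat to bring about an "extremely negative" outcome gives
    the threatener no leverage over this delegate.
    """
    return max(raw_utility, status_quo_utility)

status_quo = 0.4                                    # toy status-quo expectation
threat = bounded_utility(-1000.0, status_quo)       # threatened catastrophe
refusal = bounded_utility(status_quo, status_quo)   # simply refuse to concede
threat_is_toothless = (threat == refusal)
```

Outcomes above the status quo pass through unchanged, so the floor only removes the downside that a threat would exploit; it doesn't distort ordinary bargaining over gains.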
Unless you can directly extract a sincere and accurate utility function from the participants' brains, this is vulnerable to exaggeration in the AI programming. Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be willing to back off to 6 in exchange for concessions regarding Y from other AIs that don't want much X.
I had also mentioned this in an earlier comment on another thread. It turns out that this is a standard concern in bargaining theory. See section 11.2 of this review paper.
So, yeah, it's a problem, but it has to be solved anyway in order for AIs to negotiate with each other.
This does not seem to be the case when the AIs are unable to read each other's minds. Your AI can be expected to lie to others with more tactical effectiveness than you can lie indirectly via deceiving it. Even in that case it would be better to let the AI rewrite itself for you.
On a similar note, being able to directly extract a sincere and accurate utility function from the participants' brains leaves the system vulnerable to exploitations. Individuals are able to rewrite their own preferences strategically in much the same way that an AI can. Future-me may not be happy but present-me got what he wants and I don't (necessarily) have to care about future me.
So I am back in college and I am trying to use my time to my best advantage, mainly using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have the furious need for action but seem to have no purpose guiding that action, and it is causing me serious distress and pain.
So over the next few years that I have left in college I am going to make a desperate effort to find an outlet where I can effectively channel this overwhelming need to do something. Right now though I feel so over my head that I can't even see the surface.
Socialise a lot. Learn the skills of social influence and the dynamics of power at both the academic and practical levels.
AnnaSalamon made this and other suggestions when Calling for SIAI fellows. I imagine that the skills useful for SIAI wannabes could have significant overlap with those needed for whatever project you choose to focus on. Specific technical skills may vary somewhat.
Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).
Good link!
Ray Kurzweil Responds to the Issue of Accuracy of His Predictions
http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html
Schooling isn't about education. This article is pretty mind-boggling: apparently, it's been the norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).
Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.
How much of Eliezer's 2001 FAI document is still advocated? E.g. Wisdom tournaments and bugs in the code.
(I read CFAI once 1.5 years ago, and didn't reread it since obtaining the current outlook on the problem, so some mistakes may be present.)
"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still relevant, but were mostly made obsolete by some of the posts on Overcoming Bias. "An Introduction to Goal Systems" is hand-made expected utility maximisation, "Design of Friendship systems" is mostly premature nontechnical speculation that doesn't seem to carry over to how this thing could be actually constructed (but at the time could be seen as intermediate step towards a more rigorous design). "Policy implications" is mostly wrong.
For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:
Any idea how this happens and how I can prevent from happening again?
I'll be more careful with "Ban this IP" option in the future, which I used to uncheck during the spam siege a few months back, but didn't in this case. Apparently the IP is only blocked for a day or so. I've removed it from the block list, please check if it works and write back if it doesn't.
It works again.
Honestly, I have no problem not editing the wiki for a few days if it helps block spammers. It's not like I am adding anything critical. I was just confused.
It'd only be necessary to block spammers by IP if they actually relapse (and after a captcha mod was installed, spammers are not a problem), but the fact that you share an IP with a spammer suggests that you should check your computer's security.
Well, in the last week I've probably had at least three IP address assigned to my computer while editing the wiki. It is hard to know where to begin. I think someone I know has a good program to detect outgoing traffic... that may work.
"Bella" was blocked for adding spam links. Could your computer be a zombie?
Mmm... it's a Mac so I never think about it. I have no idea where I would have picked it up. Does anyone know a way to check? (On a Mac.)
A spam bot using your ISP is not unlikely; that's probably what happened.
My ISP? Or my IP address? I assume the latter.
Most ISPs recycle IP addresses between subscribers periodically. So someone using the same ISP as you could have ended up with the same IP address.
But how many users would you expect to sit on the same IP? And thus, what is the prior probability that basically the only spammer in weeks (there was only one other) would happen to have the same IP as one of the few dozen (or fewer) users active enough to notice a day's IP block? This explanation sounds like a rationalization of a hypothesis privileged because of availability.
I didn't know the background spamming rate but it does seem a little unlikely doesn't it? A chance reuse of the same IP address does seem improbable but a better explanation doesn't spring to mind at the moment.
Not a reason to privilege a known-false hypothesis. It's how a lot of superstition actually survives: "But do you have a better explanation? No?".
Ah, okay. I completely misinterpreted your previous comment.
Assuming you were using your own computer at home and not a public Wi-Fi hotspot or public computer then it could be that you use the same ISP and you were assigned an IP address previously used by another user. Given the relatively low number of users on lesswrong though this seems like a somewhat unlikely coincidence.
Hmm... I was at a coffee shop the other day. I don't see how anyone else there (or anyone else in the entire city I live in) would have ever heard of LessWrong. The block appears to have been created today, however, which makes even less sense.
Strange fact about my brain, for anyone interested in this kind of thing:
Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.
This doesn't make much sense, though it might not be a bad thing.
What are/ought to be the standards here for use of profanity?
I quite like swearing, but I don't think it primes people to think and respond rationally in general, and is usually best avoided. Like wedrifid, I'm inclined to argue for an exception for "bullshit", which is a term of art.
I don't know of an official policy, but swearing can be distracting. Avoid?
I advocate the use of the term Bullshit. Both because it a good description of a significant form of bias and because the profanity is entirely appropriate. I really, really don't like seeing the truth distorted like that.
More generally I don't particularly object to swearing but as RobinZ notes it can be distracting. I don't usually find much use for it.
I'd propose to use the word "bulshytt" instead. ;)
Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard and manipulate new ideas.
I understand that this is likely a fairly meta topic and would likely require at least some basic rationality to bootstrap into existence, but I am going to try to define the problem. What is this necessary cognitive foundation? And then break it down into pieces. I suspect that much of this lies in subverbal emotional and procedural cues, but if so, how can they be more effectively trained?
How much of the Sequences have you read? A lot of them are about, essentially, how to feel like a rationalist.
I have read pretty much everything more than once. It is pretty difficult to turn reading into action though. Which is why I feel like there is something I am missing. Yep.
I think your phrasing of your question is confusing. Are you asking for help putting yourself into a mindset conducive to learning and developing rationality skills?
Let me see if I can be more clear. In my experience I have an emotional framework from which I hang beliefs. Each belief has specific emotional reinforcement or structure that allows me to believe it. If I revoke that reinforcement, then very soon after I find that I no longer hold that belief. I guess the question I should ask first is: is this emotional framework real? Did I make it up? And if it is real, then how can I use it to my advantage?
How did I build this framework and how do I revoke emotional support? I have good reason to think that the framework isn't simply natural to me since it has changed so much over time.
One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they're the sort of actions I'd take if I "truly" believed. Over time the belief becomes internal and not something I have to recompute every time a related decision comes up. I don't know precisely why this works but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take which are not in line with the belief. I've used this for things like "animal suffering is actually bad", "FAI is actually important", and "I actually need to practice to write good UIs".
This is similar to my experience. Perhaps a better way to express my problem is this: what are some safe and effective ways to construct and dismantle identity? And what sorts of identity are most able to incorporate new information and process it into rational beliefs? One strategy I have used in the past is to simply not claim ownership of any belief so that I might release it more easily, but in this I run into a lack of motivation when I try to act on those beliefs. On the other hand, if I define my identity by a set of beliefs, then any threat to them is extremely painful.
That was my original question, how can I build an identity or cognitive foundation that motivates me but is not painfully threatened by counter evidence?
The litany of Tarski and the litany of Gendlin exemplify a pretty good attitude to cultivate. (Check out the posts linked in the Litany of Gendlin wiki article; they're quite relevant too. After that, the sequence on How to Actually Change Your Mind contains still more helpful analysis and advice.)
This can be one of the toughest hurdles for aspiring rationalists. I want to emphasize that it's OK and normal to have trouble with this, that you don't have to get everything right on the first try (and to watch out if you think you do), and that eventually the world will start making sense again and you'll see it was well worth the struggle.
The emotional framework of which you speak doesn't seem to resemble anything I can introspectively access in my head, but maybe I can offer advice anyway. Some emotional motivations that are conducive to rationality are curiosity, and the powerful need to accomplish some goal that might depend on you acting rationally.
I've just reached karma level 1337. Please downvote me so I can experience it again!
I (un)voted this post 1000 times up and back. :)
From Pharyngula: Bertrand Russell on God. Some of the things he says about what to believe and why seem rather familiar...
This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)
Same researchers, somewhat similar effect:
"Distortion of Price Discount Perceptions: The Right Digit Effect"
Pretty amazing material! A demonstration "in the wild" would be more convincing to marketers, though.
That is pretty ridiculous - enough to make me want to check the original study for effect size and statistical significance. Writing newspaper articles on research without giving the original paper title ought to be outlawed.
"Small Sounds, Big Deals: Phonetic Symbolism Effects in Pricing", DOI: 10.1086/651241
http://www.journals.uchicago.edu/doi/pdf/10.1086/651241
Whether you'll be able to access it I know not.
What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?
(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up there plenty of people will get on board.
People can read them from the sequences page and Google searches, so I'd suggest a). A follow-up post linking to the old article is also a possibility!
I'm not aware of any policy - I tend to do (a).
I think each has their advantages. If you post a comment on the open thread, it's more likely to be read and discussed now; if you post one on the original thread, it's more likely to be read by people investigating that particular issue some time from now.
There, I figure (a).
HN discussion of cognitive flaws related to gaming: http://news.ycombinator.com/item?id=1057351
I'll ask the question again here -- does anyone know of some more extensive writing on the subject of cognitive flaws related to gaming? Or something recent on the psychology of rewards?
I've been downvoted quite often recently and since I'm actually here to learn something I would like to better understand the reasons behind it.
Specifically I would like to hear your opinion on the following comment of mine: "I'll be the judge of that." This was given as an answer to someone suggesting how I should use my time.
http://lesswrong.com/lw/1lv/the_wannabe_rational/1gea?context=1#comments
Do you think that a downvote was justified and if so why?
Not that I mean to nitpick (though I guess that's what we do here!) but should this be here or in the meta-thread?
Maybe it should be in the meta-thread. What should I do now? Write a new one in the meta-thread or can we transfer this somehow?
As far as candidates for making AI other than the Singularity Institute, is there any more likely than Google? Surely they want to make one.
They have a lot of really smart AI researchers working on hard problems within the world's largest dataset, and who knows what can happen when you combine that with 20% time. Does Google controlling the AI scare you?
The US military or any government making the AI seems a recipe for certain destruction, but I'm not so sure about Google.
I mentioned something along these lines before.
Thanks for the link... also just googled my way to Peter Norvig speaking at the Singularity Summit saying they aren't anywhere close to AGI and aren't trying. http://news.cnet.com/8301-10784_3-9774501-7.html
So I think it depends on 20% time for now which isn't exactly conducive to solving the hard problem, not to mention 20% time at Google isn't what it used to be.
Paul Bucheit -- Evaluating risk and opportunity (as a human)
http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html
Interesting heuristic - I would be curious to know whether anyone else has followed something similar to good effect, but it sounds conceptually reasonable.
What's the right prior for evaluating an H1N1 conspiracy theory?
I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for them to do it, they probably did it.
On the other hand, I know the conditions of factory farming and it seems quite plausible and even very likely for such a virus to spontaneously mutate and cross species. So I put the probability of an H1N1 conspiracy at 10%. However, my friend's argument makes a certain amount of sense to me.
Any such conspiracy would have to be known by quite a few people and so would stand an excellent chance of having the whistle blown on it. Every case I can think of where large Western companies have been caught doing anything that outrageously evil, they have started with a legitimate profit-making plan, and then done the outrageous evil to hide some problem with it.
Where do those numbers come from? 80%, 10%???
They're almost made up, which makes any attempt at Bayesian analysis not all that meaningful... I'd welcome other tools. He gave me the 80% probability number so I felt obligated to give my own probability.
Consider the numbers to have very wide bounds, or to be more meaningful expressed in words -- he thinks there is a conspiracy, I don't think there is a conspiracy, but neither of us are absolutely confident about it.
Exactly. I think there is no rational basis for answering your question.
Your friend has a distrust of corporate leaders (here I agree with him), and his theory is probably based on his feeling of disgust for their practices. So his theory probably has more of an emotional basis than a rational one. That doesn't mean it is wrong, just that there aren't any rational reasons for believing it.
Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?
Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.
Link please?
http://www.vimeo.com/7397629
Why is the news media comfortable with lying about science?
http://arstechnica.com/science/news/2010/01/why-is-the-news-media-comfortable-with-lying-about-science.ars
I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)
I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.
It's often said around here that Bayesian priors and Solomonoff induction and such things describe the laws of physics of the universe. The simpler the description, the more likely that set of laws is. This is more or less true, but it is not the truth that we want to be saying. What we're trying to describe is our observations. If I had a theory stating that every computable event happens, sure, that explains all phenomena; but in order for it to describe our observations, I would need to add a string specifying which of these computable events are the ones we observe, which makes the theory completely useless.
In theory, this provides a solution to anthropic reasoning: simply figure out which paths through the universe are the simplest, and assign those the highest probability. Again, in theory, this provides a solution to quantum suicide. But please don't ask me what these solutions are.
Does anyone understand the last two paragraphs of the comment that I'm responding to? I'm having trouble figuring out whether Warrigal has a real insight that I'm failing to grasp, or if he is just confused.
Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?
Yes, that would be an information cascade.
In the toy problem in the link, as long as we know the rule that people use to write down their guesses (e.g., write down the hypothesis with maximum posterior probability; if it's 50-50, write down what the last person wrote), at each stage we can treat the previous sequence as a latent variable about which we have partial information. The solution is straightforward to set up.
My intuition is that if you assume everyone before you has written down the correct most likely answer based on the sequence they observe (and using the same assumption) then you fairly quickly reach a point where additional people's guesses add no new information. Can anyone confirm or refute that and save me trying to do the math?
If the tiebreak strategy is "agree with the previous person's guess", then you reach that point immediately. The first person's draw determines everyone's guess: If the second person's draw is the same as the first, then of course they agree, and if not then they're at a 50/50 posterior and thus also agree.
If the tiebreak strategy is "write down your own draw (i.e. maximize the information given to subsequent players)", then information can be collected only so long as the number of each color drawn remains tied or +/-1. As soon as one color is ahead by 2 draws, all future draws are ignored and the guesses so far suffice to determine everyone else's guess.
If the draws are with replacement, then the probability that what you get locked into is the right guess is 4/5. (Assume WLOG that the urn is primarily white. Consider two draws: WW is 4/9 and determines the right answer; RR is 1/9 and determines the wrong answer; WR or RW have no net likelihood change, so recurse.)
If the draws are without replacement, then it's... 80.6%. (Very close to 4/5 since with very high probability you'll run into a cascade one way or the other before the non-replacement changes the ball proportions much.)
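The with-replacement lock-in probability above is easy to check with a quick simulation. This is just a sketch of the toy problem as described in the thread: each draw independently matches the urn's majority color with probability 2/3, and the cascade starts once one color leads by two.

```python
import random

def cascade_locks_correct(p_match=2/3, rng=random):
    """Simulate draws (with replacement) until one color leads by two.

    Each draw matches the urn's true majority color with probability
    p_match; once the lead reaches +/-2, everyone afterward ignores
    their own draw and votes with the lead, so the lead's sign
    determines whether the group locks onto the right answer.
    """
    lead = 0  # (majority-color draws) minus (minority-color draws)
    while abs(lead) < 2:
        lead += 1 if rng.random() < p_match else -1
    return lead > 0

random.seed(0)  # fixed seed so the estimate is reproducible
trials = 200_000
estimate = sum(cascade_locks_correct() for _ in range(trials)) / trials
print(round(estimate, 3))  # should land near 4/5 = 0.8
```

This matches the pair-of-draws recursion above: WW (4/9) locks in the right answer, RR (1/9) the wrong one, so the conditional probability of a correct lock-in is (4/9)/(5/9) = 4/5.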
Otoh, "tally all the votes at the end of the pulling, and that determines the group’s Urn choice" is an entirely different question, and doesn't have the same strategy as maximizing your individual chance of correctness.
Nice - I hadn't gotten so far as analyzing the other tiebreak policy.
"Prior information" in this kind of problem includes a bunch of rather unlikely assumptions, such as that every player is maximally rational and that the rules of the game reward picking the true choice of urn.
Unfortunately there is no reason to prefer one tiebreak policy over the other. Does it make the problem more determinate if we assume the game scores per Bayesian Truth Serum, that is, you get more points for a contrarian choice that happens to be right?
Since the total evidence you can get from examining all previous guesses (assuming conventional strategy and rewards as before) gives you only a 4/5 accuracy, and you can get 2/3 by ignoring all previous guesses and looking only at your own draw: Yes, rewarding correct contrarians at least 20% more than correct majoritarians would provide enough incentive to break the information cascade. Only until you've accumulated enough extra information to make the majoritarian answer confident enough to overcome the difference between rewards, of course, but it would still equilibrate at a higher accuracy.
The math is pretty simple: as soon as the line has a red/blue discrepancy of more than one ball, ignore your ball and vote with the line.
Why not just do the math?
Primarily because I'm at work and secondarily because I'm lazy.
Downvoted for laziness.
Would we (Earth) show up in our universe's stats pages?
http://www.gabrielweinberg.com/blog/2010/01/would-we-earth-show-up-in-our-universes-stats-pages.html
Paul Graham -- How to Disagree
http://www.paulgraham.com/disagree.html
The Edge Annual Question 2010: How is the internet changing the way you think?
http://www.edge.org/q2010/q10_print.html#responses
"Top Contributors" is now sorted correctly. (Kudos to Wesley Moore at Tricycle.)
I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is that it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.
Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start their existence as a minority?
The movie is also a good example of existential risk in fiction (in this case, a genetically engineered biological agent).
I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically we could encourage people to think for themselves, so we can free-ride on the costs they experience.
I guess we already do something like that, namely award people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.
Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?
I've looked at the wikipedia pages for both, and I'm still not getting it.
Thanks.
Try this.
Reading it now, thanks.
Okay, from the initial description, it looks like MML considers the TOTAL length, where the message includes both the theory and the additional information needed to reconstruct the full data, while MDL ignores aspects of the description of the theory when measuring length.
Did I get that right or am I misunderstanding?
I'm a bit confused on that point myself. Before finding that document, my understanding was that MML averaged over the prior, while MDL avoided having a prior by using some kind of minimax approach, but the paper I pointed you to doesn't seem to say anything about that.
Hey, exactly 500 comments.
So, elsewhere someone just brought up moral luck. I'm wondering how this relates to the Yudkowskian view on morality (I forget what he called it), and I'd like to invite someone to think about it and perhaps post on it. If no one else does so, I might be motivated to do so eventually. There might be some potential to shed some real light on the issue of moral luck--specifically the extent of the validity or otherwise of the Control Principle--with reference to Yudkowsky's framework.
Yudkowsky briefly addressed moral luck:
Lately I've actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It seems like a lot of discussions of utilitarianism versus deontology treat them as two arbitrary viewpoints or positions, but insofar as my thinking has trended utilitarian lately, it hasn't been because I'm attracted to a utilitarian position, but because Cox's theorem [edit: sic] forces it. Even if I draw up a set of rights that I think must not be violated, I'm still going to have to make decisions under uncertainty, which I would guess means acting to minimize the expected number of rights-violations.
Isn't that what people have always done? Maybe not explicitly. To explicitly make the split you're speaking of would just help people to deny reality, and do what they need to do, albeit in highly suboptimal and destructive ways, while still holding on to incoherent moral codes that continue to harm them in other ways.
But it beats letting ourselves be wiped out. I worry about the fact that Western civilization is saying that an increasing number of rights must not be violated under any circumstances, at a time when we are facing an increasing number of existential risks. There are some things that we don't let ourselves see, because seeing them would mean acknowledging that somebody's rights will have to be violated.
For instance, plenty of people simultaneously believe that Israel must stay where it is, and that Israel must not commit genocide. Reality might accommodate them (e.g., if we discover an alternative energy source that impoverishes the other Middle Eastern states). But I think it's more likely that it won't.
Interesting. Do you have 20 words on why these are mutually exclusive?
As technology advances, it takes fewer and fewer resources to wreak an equivalent amount of devastation. Soon, small groups of people will be able to annihilate nations. In most cultures, only a very small percentage of people would like to do so; trying to detect and control those individuals may be a workable strategy.
Israel, however, is near several cultures where most people would like to kill everyone in Israel (based on, among other things, public rejoicing instead of statements of regret when Israelis are killed for any reason, opinion polls showing that most people in some countries say they have positive opinions of Al Qaeda, and the success in popular elections of groups including Hezbollah and Hamas which have the destruction of Israel as part of their platform). The annihilation of Israel is not a goal for a few crazy individuals, but a mainstream cultural goal.
Demographic threat. Twenty-seven words: if Israel stays where it is, the growth of Arab citizenry will pose a threat to its existence as a Jewish state with a Jewish demographic majority.
I would consider that one of the better possible outcomes. As long as it leads to a conversion from a race-based state to a pluralistic society, rather than cattle cars and smokestacks.
It's not really a race-based state, in the sense that one can't arbitrarily choose one's race, but under the Law of Return one can choose to convert to Judaism and instantly gain Israeli citizenship upon immigrating.
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
It seems as though you're reading this hypothetical utility function properly.
It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
It seems like a non-sequitur in response to Roko's illustration of what a utility function can be used to represent.
I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
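The lexicographic rule described above can be made concrete with a toy sketch. Everything here is my own illustration, not anything from the thread: a "lottery" is a list of (probability, rights violations, utility) outcomes, and the hypothetical `choose` helper minimizes expected violations first, breaking ties by maximizing expected ordinary utility.

```python
def choose(lotteries):
    """Pick the lottery that lexicographically minimizes expected rights
    violations first, and only then maximizes expected ordinary utility."""
    def expected_violations(lottery):
        return sum(p * v for p, v, _ in lottery)
    def expected_utility(lottery):
        return sum(p * u for p, _, u in lottery)
    # Tuple comparison gives the lexicographic ordering: violations
    # dominate, and utility only matters among violation-minimizing ties.
    return min(lotteries, key=lambda l: (expected_violations(l), -expected_utility(l)))

no_violation = [(1.0, 0, 5)]    # certain outcome: no violations, modest utility
high_utility = [(1.0, 1, 100)]  # certain outcome: one violation, high utility
print(choose([no_violation, high_utility]))  # picks no_violation despite lower utility
```

Within the class of choices that tie on expected violations, this rule is an ordinary expected-utility maximizer, which is the point made above.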
Cox's theorem doesn't deal with utility, only plausibility. The utility stuff comes from looking at preference relations -- some big names there are von Neumann, Morgenstern and L.J. Savage.
Also keyword, "Dutch book".
Right, I knew that. Thanks.
I don't think that's quite the same usage of "moral luck". According to the technical term, it's when you, for example, judge someone who was driving drunk and hit a person more harshly than someone who was driving drunk and didn't hit anyone, all else being equal. In other words, things entirely outside of your control that make the same action more or less blameworthy. Another example, from the link:
I don't see the difference between this usage and Zack's/Eliezer's: the definition given in the SEP link is:
A situation where all of an agent's options are blameworthy seems quite clearly to fall within this category.
OK, I suppose it counts as an instance, though I'm not convinced Eliezer intended the phrase in that sense. But it's certainly one of the instances I'm less interested in.
Agreed.
How do people here consume Less Wrong? I just started reading and am looking for a good way to stay on top of posts and comments. Do you periodically check the website? Do you use an RSS feed? (which?) Or something else?
When I'm actively following the site (visiting 3+ times a day), I primarily follow the new comments page. I only read top posts when I see that there's an interesting discussion going on about one of them, or if the post's title seems particularly interesting. (I do wind up reading a large portion of the top posts sooner or later, though.)
I have the 'recent posts' RSS feed in my reader for when I'm not actively following the site, but I only click through if something seems very interesting.
I use RSS for top level posts, and have an easily accessible bookmark to the comments page which I check more frequently than I should.
Same here.
I read new posts as soon as I see them. I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly. I also reread posts to catch any comments I missed and to get a better sense of how the discussions are proceeding.
I click on "Recent Comments" and read as far back as I have to until I've caught up. Reading backwards can be mentally tiring ... so I'm actually just skimming for interesting comments. When I find one that seems interesting, I read through that thread for the continuity of the discussion.
Link pointer: http://www.eurekalert.org/pub_releases/2010-01/hu-qcc010810.php Quantum computer calculates exact energy of molecular hydrogen. http://www.nature.com/nchem/journal/vaop/ncurrent/abs/nchem.483.html
The submitter on Hacker News: "This is arguably one of the most important breakthroughs ever in the field of computing."
Imagine how much easier this comment would be to browse if it was part of a subreddit here.
James Hughes - with a (IMO) near-incoherent Yudkowsky critique:
http://ieet.org/index.php/IEET/more/hughes20100108/
Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?
Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit was; but I regard it as an interesting correlate of processing power, without any demonstrated or even argued logical relationship to consciousness. Tononi has published a lot of papers on it - and they became more widely-cited when he started saying they were about consciousness instead of saying they were about information integration - but he didn't AFAIK make any arguments that the thing he measures with information integration has something to do with consciousness.
It's a very interesting question. I think it's pretty straight-forward that 'ourselves' is a composite of 'awarenesses' with non-overlapping mutual awareness.
Some data with respect to inebriation:
Drunk people would pass a Turing test, but the next morning when events are recalled, they feel like someone else's experiences. But then when drunk again, the experiences again feel immediate.
When I lived in France, most of my socialization time was spent inebriated. For years thereafter, whenever I was intoxicated, I felt like it was more natural to speak in French than English. Even now, my French vocabulary is accessible after a glass of wine.
That is interesting, but not what I was trying to ask. I was trying to ask whether there could be separate, smaller, less-complex, non-human consciousnesses inside every human. It seems plausible (not probable, plausible) that there are, and that we currently have no way of detecting whether that is the case.
A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:
Oops.
This isn't an example of a logical fallacy; it could be read that way if the conclusion was "their way must be right" or something like that. As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".
If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.
Affirming the consequent, in general, is a good heuristic.