This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.
...Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.
But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.
I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.
It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes...
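Adams' claim that a best-of-five series squeezes out most of the luck is easy to check numerically. Below is a minimal Python sketch (the per-game win probabilities are invented for illustration) showing how a series amplifies a modest per-game edge:

```python
from math import comb

def p_win_series(p, best_of=5):
    """Probability that a player who wins each game with probability p
    wins a best-of-n series (first to (n // 2) + 1 wins)."""
    needed = best_of // 2 + 1
    # Sum over how many games the eventual loser manages to take.
    return sum(comb(needed - 1 + losses, losses) * p**needed * (1 - p)**losses
               for losses in range(needed))

for p in (0.55, 0.60, 0.75):
    print(f"per-game edge {p:.2f} -> wins best-of-5 with prob {p_win_series(p):.3f}")
```

A 75% per-game favourite takes the series about 90% of the time, while a 55% favourite only gets to about 59%: the series format amplifies large skill gaps far more than small ones.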
Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.
I imagine lots of kids play Farmville already.
An Alternative To "Recent Comments"
For those who may be having trouble keeping up with "Recent Comments" or finding the interface a bit plain, I've written a Greasemonkey script to make it easier/prettier. Here is a screenshot.
Explanation of features:
To install, first get Greasemonkey, then click here. Once that's done, use this link to get to the reader interface.
ETA: I've placed the script in the public domain. Chrome is not supported.
Not sure what the current state of this issue is, apologies if it's somehow moot.
I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.
Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.
So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.
I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.
My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.
I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?
Do people think that a discussion forum on the moderation and deletion policies would be beneficial?
I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.
As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile with the fact that the people doing the forbidding are not stupid.
Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.
It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.
Edit: typo correction - insert missing words
How about an informed consent form:
If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion is very rare here.
I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.
I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.
I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.
I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.
Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.
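For anyone who prefers simulation to prose, here is a small sketch of the phenomenon (all numbers invented): scores are skill plus noise, we select the top decile on one test, and the group's average falls on the retest even though nobody's skill changed.

```python
import random

random.seed(0)
N = 100_000
skill = [random.gauss(0, 1) for _ in range(N)]
test1 = [s + random.gauss(0, 1) for s in skill]  # observed score = skill + noise
test2 = [s + random.gauss(0, 1) for s in skill]  # same skill, fresh noise

# Select the top 10% of scorers on the first test...
cutoff = sorted(test1, reverse=True)[N // 10]
top = [i for i in range(N) if test1[i] > cutoff]

# ...and watch their average drop on the retest.
avg1 = sum(test1[i] for i in top) / len(top)
avg2 = sum(test2[i] for i in top) / len(top)
print(f"top group's average: test 1 = {avg1:.2f}, test 2 = {avg2:.2f}")
```

The retest average lands roughly halfway back toward the population mean, because only half of an extreme score was skill; the other half was luck that doesn't repeat.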
I'm interested in video game design and game design in general, and also in raising the rationality waterline. I'd like to combine these two interests: to create a rationality-focused game that is entertaining or interesting enough to become popular outside our clique, but that can also effectively teach a genuinely useful skill to players.
I imagine that it would consist of one or more problems which the player would have to be rational in some particular way to solve. The problem has to be:
Interesting: The prospect of having to tackle the problem should excite the player. Very abstract or dry problems would not work; very low-interaction problems wouldn't work either, even if cleverly presented (e.g. you could do Newcomb's problem as a game with plenty of lovely art and window dressing... but the game itself would still only be a single binary choice, which would quickly bore the player).
Dramatic in outcome: The difference between success and failure should be great. A problem in which being rational gets you 10 points but acting typically gets you 8 points would not work; the advantage of applying rationality needs to be very noticeable.
Not rigged (or not obviously so): The...
RPGs (and roguelikes) can involve a lot of optimization/powergaming; the problem is that powergaming could be called rational already. You could...
Sorry if this isn't very fleshed-out, just a possible direction.
Here's an idea I've had for a while: Make it seem, at first, like a regular RPG, but here's the kicker -- the mystical, magic potions don't actually do anything distinguishable from chance.
(For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you. If you think this would be too obvious, rot13: In the game Earthbound, bar vgrz lbh trg vf gur Pnfrl Wbarf ong, naq vgf fgngf fnl gung vg'f ernyyl cbjreshy, ohg vg pna gnxr lbh n ybat gvzr gb ernyvmr gung vg uvgf fb eneryl gb or hfryrff.)
Set it in an environment like 17th-century England where you have access to the chemicals and astronomical observations they did (but give them fake names to avoid tipping off users, e.g., metallia instead of mercury/quicksilver), and are in the presence of a lot of thinkers working off of astrological and alchemical theories. Some would suggest stupid experiments ("extract aurum from urine -- they're both yellow!") while others would have better ideas.
To advance, you have to figure out the laws governing these things (which would be isomorphic to real science) and put this knowledge to practical use. The insights that had to be made back then are far removed from the clean scientific laws we have now, so it would be tough.
It would take a lot of work to e.g. make it fun to discover how to use stars to navigate, but I'm sure it could be done.
For example, you might have some herb combination that "restores HP", but whenever you use it, you strangely lose HP that more than cancels what it gave you.
What if instead of being useless (by having an additional cancelling effect), magical potions etc. had no effect at all? If HP isn't explicitly stated, you can make the player feel like he's regaining health (e.g. by some visual cues), but in reality he'd die just as often.
The problem is that, simply put, such games generally fail on the "fun" meter.
There is a game called "The Void," which begins with the player dying and going to a limbo like place ("The Void"). The game basically consists of you learning the rules of the Void and figuring out how to survive. At first it looks like a first person shooter, but if you play it as a first person shooter you will lose. Then it sort of looks like an RPG. If you play it as an RPG you will also lose. Then you realize it's a horror game. Which is true. But knowing that doesn't actually help you to win. What you eventually have to realize is that it's a First Person Resource Management game. Like, you're playing StarCraft from first person as a worker unit. Sort of.
The world has a very limited resource (Colour) and you must harvest, invest and utilize Colour to solve all your problems. If you waste any, you will probably die, but you won't realize that for hours after you made the initial mistake.
Every NPC in the game will tell you things about how the world works, and every one of those NPCs (including your initial tutorial) is lying to you about at least one thing.
The game i...
I'm a translator between people who speak the same language, but don't communicate.
People who act mostly based on their instincts and emotions, and those who prefer to ignore or squelch those instincts and emotions[1], tend to have difficulty having meaningful conversations with each other. It's not uncommon for people from these groups to end up in relationships with each other, or at least working or socializing together.
On the spectrum between the two extremes, I am very close to the center. I have an easier time understanding the people on each side than their counterparts do, it frustrates me when they miscommunicate, and I want to help. This includes general techniques (although there are some good books on that already), explanations of words or actions which don't appear to make sense, and occasional outright translation of phrases ("When they said X, they meant what you would have called Y").
Is this problem, or this skill, something of interest to the LW community at large? In the several days I've been here it's come up on comment threads a couple times. I have some notes on the subject, and it would be useful for me to get feedback on them; I'd like to some day...
I've been on the other side of this, so I definitely understand why people react that way--now let's see if I understand it well enough to explain it.
For most people, being willing to answer a question or identify a belief is not the same thing as wanting to debate it. If you ask them to tell you one of their beliefs and then immediately try to engage them in justifying it to you, they feel baited and switched into a conflict situation, when they thought they were having a cooperative conversation. You've asked them to defend something very personal, and then are acting surprised when they get defensive.
Keep in mind also that most of the time in our culture, when one person challenges another one's beliefs, it carries the message "your beliefs are wrong." Even if you don't state that outright--and even in the probably rare cases when the other person knows you well enough to understand that isn't your intent--you're hitting all kinds of emotional buttons which make you seem like an aggressor. This is the result of how the other person is wired, but if you want to be able to have this kind of conversation, it's in your interest to work with it.
The corollary to the implied ...
[tl;dr: quest for some specific cryo data references]
I am preparing to do my own, deeper evaluation of cryonics. To that end I have been reading through many of the case reports on the Alcor and CI pages. Because of my geographic situation I am particularly interested in the practicalities of actually getting a body from Europe (Germany, in my case) over to their respective facilities. The reports are quite interesting and provide a lot of insight into the process, but what I am still looking for are the unsuccessful reports: cases in which a signed-up member was not brought in due to legal interference, next-of-kin decisions, and the like. Is anyone aware of a detailed log of those? I would also like to see how many of the signed clients are lost due to the circumstances of their death.
I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.
In the spirit of "the world is mad" and for practical use, NYT has an article titled Forget what you know about good study habits.
Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010
More information including speakers at http://summit.singinst.org.au.
Register here.
I just discovered (when looking for a comment about an Ursula Vernon essay) that the site search doesn't work for comments which are under a "continue this thread" link. This makes site search a lot less useful, and I'm wondering if that's a cause of other failed searches I've attempted here.
The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.
I participated in a survey directed at atheists some time ago, and the report has come out. They didn't mention me by name, but they referenced me in their 15th endnote, which regarded questions they said were spiritual in nature. Specifically, the question was whether we believe in the possibility of human minds existing outside of our bodies. From the way they worded it, apparently I was one of the few not-spiritual people who believed there were perfectly naturalistic mechanisms for separating consciousness from bodies.
I'm taking a grad level stat class. One of my classmates said something today that nearly made me jump up and loudly declare that he was a frequentist scumbag.
We were asked to show that a coin toss fit the criteria of some theorem that talked about mapping subsets of a sigma algebra to form a well-defined probability. Half the elements of the set were taken care of by default (the whole set S and its complement { }), but we couldn't make any claims about the probability of getting Heads or Tails from just the theorem. I was content to assume the coin wa...
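For concreteness, here is a toy sketch of the object under discussion. The sigma-algebra part is pure structure; the 0.5s are an extra modeling assumption, which is exactly the point: the theorem licenses the machinery but says nothing about the value of P(Heads).

```python
S = frozenset({"H", "T"})
sigma_algebra = {frozenset(), frozenset({"H"}), frozenset({"T"}), S}

# Any p and 1-p would do; fairness is an assumption beyond the theorem.
P = {frozenset(): 0.0, frozenset({"H"}): 0.5, frozenset({"T"}): 0.5, S: 1.0}

# Kolmogorov's axioms on a finite space: non-negativity, unit total
# mass, and additivity over disjoint events.
assert all(p >= 0 for p in P.values())
assert P[S] == 1.0
for a in sigma_algebra:
    for b in sigma_algebra:
        if not (a & b):  # disjoint events
            assert abs(P[a | b] - (P[a] + P[b])) < 1e-12
print("well-defined probability space")
```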
I just listened to Robin Hanson's pale blue dot interview. It sounds like he focuses more on motives than I do.
Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.
But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still see...
The journalistic version:
...[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest for...
I'm writing a post on systems to govern resource allocation; is anyone interested in having input into it, or just proofreading it?
This is the intro/summary:
...How do we know what we know? This is an important question; however, there is another question which in some ways is more fundamental: why did we choose to devote resources to knowing those things in the first place?
The production of knowledge is a physical process that takes resources which could be used for other things, so the problem expands to how to use resources in general. This I'll call the resou...
In "The Shallows", Nicholas Carr makes a very good argument that replacing deep reading books, with the necessarily shallower reading online or of hypertext in general, causes changes in our brains which makes deep thinking harder and less effective.
Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of your self-direction/self-discipline behaviors they are replacing.
Another problem with the Web, that wasn't discussed in "The Sh...
It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.
Can anyone suggest any blogs giving advice for serious romantic relationships? I think a lot of my problems come from a poor theory of mind for my partner, so things like the 5 love languages and material on attachment styles have been useful.
Thanks.
Relevant to our akrasia articles:
If obese individuals have time-inconsistent preferences then commitment mechanisms, such as personal gambles, should help them restrain their short-term impulses and lose weight. Correspondence with the bettors confirms that this is their primary motivation. However, it appears that the bettors in our sample are not particularly skilled at choosing effective commitment mechanisms. Despite payoffs of as high as $7350, approximately 80% of people who spend money to bet on their own behaviour end up losing their bets.
http...
This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? When asked what qualities she would bring to the competition, she said she'd "bring without any doubt my rationality", among other things.
...I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial featur
Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?
In light of the news that apparently someone or something is hacking into automated factory control systems, I would like to suggest that the apocalypse threat level be increased from Guarded (lots of curious programmers own fast computers) to Elevated (deeply nonconclusive evidence consistent with a hard takeoff actively in progress).
In light of XFrequentist's suggestion in "More Art, Less Stink," would anyone be interested in a post consisting of a summary & discussion of Cialdini's Influence?
This is a brilliant book on methods of influencing people. But it's not just Dark Arts - it also includes defense against the Dark Arts!
Idea - Existential risk fighting corporates
People of normal IQ are advised to work at our normal day jobs, the best competency that we have, and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value and there is such a diversity of skills that they cannot make a sensible corporation together.
Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might ...
I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.
Does anyone have something that could turn into a top-level post? Or even an open thread comment?
I used to be a professional games programmer and designer and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can be ultimately reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).
Is there enough interest for it to be worth creating a top level post for an open thread discussing Eliezer's Coherent Extrapolated Volition document? Or other possible ideas for AGI goal systems that aren't immediately disastrous to humanity? Or is there a top level post for this already? Or would some other forum be more appropriate?
The Onion parodies cyberpunk by describing our current reality: http://www.theonion.com/articles/man-lives-in-futuristic-scifi-world-where-all-his,17858/
An observer is given a box with a light on top, and given no information about it. At time t0, the light on the box turns on. At time tx, the light is still on.
At time tx, what information can the observer be said to have about the probability distribution of the duration of time that the light stays on? Obviously the observer has some information, but how is it best quantified?
For instance, the observer wishes to guess when the light will turn off, or find the best approximation of E(X | X > tx-t0), where X ~ duration of light being on. This is guarant...
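The answer depends entirely on the prior over durations, which the problem leaves open. A hedged sketch with two invented priors: under an exponential prior the elapsed time is uninformative (memorylessness), while under a wide prior over timescales the expected remaining time grows roughly in proportion to the time already elapsed, which is the intuition behind Gott-style "delta t" estimates.

```python
import random

random.seed(1)

def expected_remaining(sample_duration, t_elapsed, n=200_000):
    """Monte Carlo estimate of E[X - t | X > t] for a given prior over X."""
    survivors = [x for x in (sample_duration() for _ in range(n)) if x > t_elapsed]
    return sum(x - t_elapsed for x in survivors) / len(survivors)

t = 5.0
# Exponential prior: memoryless, so expected remaining time stays ~1.0
# however long the light has already been on.
print(expected_remaining(lambda: random.expovariate(1.0), t))
# Log-normal prior over the characteristic timescale: surviving to t shifts
# the posterior toward slow timescales, so expected remaining time grows with t.
print(expected_remaining(lambda: random.expovariate(1 / random.lognormvariate(0, 2)), t))
```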
Finally prompted by this, but it would be too off-topic there:
http://lesswrong.com/lw/2ot/somethings_wrong/
The ideas really started forming around the recent 'public relations' discussions.
If we want to change people's minds, we should be advertising.
I do like long drawn out debates, but most of the time they don't accomplish anything and even when they do, they're a huge use of personal resources.
There is a whole industry centered around changing people's minds effectively. They have expertise in this, and they do it way better than we do.
How about a turn-based strategy game where the object is to get deep enough into the singularity to upload yourself before a uFAI shows up and turns the universe into paper clips?
I don't think that would be very helpful. Advocating rationality (even through Harry Potter fanfiction) helps because people are better at thinking about the future and existential risks when they care about and understand rationality. But spreading singularity memes as a kind of literary genre won't do that. (With all due respect, your idea doesn't even make sense: I don't think "deep enough into the singularity" means anything with respect to what we actually talk about as the "singularity" here (successfully launching a Friendly singularity probably means the world is going to be remade in weeks or days or hours or minutes, and it probably means we're through with having to manually save the world from any remaining threats), and if a uFAI wants to turn the universe into paperclips, then you're screwed anyway, because the computer you just uploaded yourself into is part of the universe.)
Unfortunately, I don't think we can get people excited about bringing about a Friendly singular...
Nine years ago today, I was just beginning my post-graduate studies. I was running around campus trying to take care of some registration stuff when I heard that unknown parties had flown two airliners into the WTC towers. It was surreal -- at that moment, we had no idea who had done it, or why, or whether there were more planes in the air that would be used as missiles.
It was big news, and it's worth recalling this extraordinarily terrible event. But there are many more ordinary terrible events that occur every day, and kill far more people. I want to kee...
The Science of Word Recognition, by a Microsoft researcher, contains tales of reasonably well done Science gone persistently awry, to the point that the discredited version is today the most popular one.
I have recently had the experience of encountering an event of extremely low probability.
Did I just find a bug in the Matrix?
Apologies if this question seems naive but I would really appreciate your wisdom.
Is there a reasonable way of applying probability to analogue inference problems?
For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If...
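One standard answer (an illustration, not the only approach): put an explicit likelihood model on the measurement noise and apply Bayes. In the sketch below all readings are invented, and the Gaussian-kernel bandwidth is precisely the "assumption about the shape of the probability density function" the question worries about, made explicit and tunable.

```python
import math

def kde_likelihood(x, history, bandwidth=0.5):
    """Kernel density estimate of p(reading = x | substance), built from
    that substance's past readings."""
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-((x - h) / bandwidth) ** 2 / 2) / norm
               for h in history) / len(history)

history_A = [1.1, 0.9, 1.3, 1.0]  # hypothetical past readings for substance A
history_B = [2.0, 2.2, 1.8, 2.1]  # hypothetical past readings for substance B
x = 1.5                           # the new analogue value C

prior_A = prior_B = 0.5
lA = kde_likelihood(x, history_A)
lB = kde_likelihood(x, history_B)
posterior_A = prior_A * lA / (prior_A * lA + prior_B * lB)
print(f"P(substance A | reading) = {posterior_A:.3f}")
```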
Since the Open Thread is necessarily a mixed bag anyway, hopefully it's OK if I test Markdown here
test deleted
I have been following this site for almost a year now and it is fabulous, but I haven't felt an urgent need to post to the site until now. I've been working on a climate change project with a couple of others and am in desperate need of some feedback.
I know that climate change isn't a particularly popular topic on this website (but I'm not sure why, maybe I missed something, since much of the website seems to deal with existential risk. Am I really off track here?), but I thought this would be a great place to air these ideas. Our approach tries to tackl...
The gap between inventing formal logic and understanding human intelligence is as large as the gap between inventing formal grammars and understanding human language.
Friday's Wondermark comic discusses a possible philosophical paradox that's similar to those mentioned at Trust in Bayes and Exterminating life is rational.
Recently there was a discussion regarding Sex at Dawn. I recently skimmed this book at a friend's house, and realized that the central idea of the book is dependent on a group selection hypothesis. (The idea being that our noble savage bonobo-like hunter-gatherer ancestors evolved a preference for paternal uncertainty as this led to better in group cooperation.) This was never stated in the sequence of posts on the book. Can someone who has read the book confirm/deny the accuracy of my impression that the book's thesis relies on a group selection hypothesis?
Since Eliezer has talked about the truth of reductionism and the emptiness of "emergence", I thought of him when listening to Robert Laughlin on EconTalk (near the end of the podcast). Laughlin was arguing that reductionism is experimentally wrong and that everything, including the universal laws of physics, is really emergent. I'm not sure if that means "elephants all the way down" or what.
A question about modal logics.
Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logics, which deal with obligations, rules, and deontological ethics.
It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.
What I found was rather disastrous, far worse than with neat and unambiguous temporal logics. L...
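For readers unfamiliar with the area, here is a toy sketch of the standard Kripke-style semantics for deontic logic (worlds, accessibility relation, and valuation all invented): O(p) holds at a world when p is true at every deontically ideal alternative to it. Systems of roughly this shape are also the ones afflicted by the well-known paradoxes (Ross's paradox, the gentle-murder paradox), which may be part of the disaster alluded to above.

```python
# Worlds, a deontic accessibility relation (which worlds count as "ideal"
# alternatives), and the atomic facts true at each world.
worlds = {"w0", "w1", "w2"}
ideal = {"w0": {"w1", "w2"}, "w1": {"w1"}, "w2": {"w2"}}
facts = {"w0": set(), "w1": {"taxes_paid"}, "w2": {"taxes_paid"}}

def obligatory(p, w):
    """O(p): p holds at every deontically ideal alternative to w."""
    return all(p in facts[v] for v in ideal[w])

def permitted(p, w):
    """P(p): p holds at some deontically ideal alternative to w."""
    return any(p in facts[v] for v in ideal[w])

# Obligatory at w0 even though not actually true there: "ought" and "is" split.
print(obligatory("taxes_paid", "w0"), "taxes_paid" in facts["w0"])
```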
Someone made a page that automatically collects high karma comments. Could someone point me at it please?
The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.
Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)
This time, you get to see Landsburg:
(Sorry, XiXiDu, I'll reply to you on his blog if my posting priv...
Is the Open Thread now deprecated in favour of the Discussion section? If so, I suggest an Open Thread over there for questions not worked out enough for a Discussion post. (I have some.)
NYT magazine covers engineers & terrorism: http://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t.html
How diverse is Less Wrong? I am under the impression that we disproportionately consist of 20-35 year old white males, more disproportionately on some axes than on others.
We obviously over-represent atheists, but there are very good reasons for that. Likewise, we are probably over-educated compared to the populations we are drawn from. I venture that we have a fairly weak age bias, and that can be accounted for by generational dispositions toward internet use.
However, if we are predominately white males, why are we? Should that concern us? There's nothing...
This sounds like the same question as why there are so few top-notch women in STEM fields, why there are so few women listed in Human Accomplishment's indices*, why so few non-whites or non-Asians score 5 on AP Physics, why...
In other words, here be dragons.
* just Lady Murasaki, if you were curious. It would be very amusing to read a review of The Tale of Genji by Eliezer or a LWer. My own reaction by the end was horror.
Konkvistador:
After I read this it struck me that you may value a much smaller space of diversity than I do, and that you probably value very particular kinds of diversity (race, gender, some types of culture) much more, or even perhaps to the exclusion of others (non-neurotypical, ideological, and especially values).
There is a fascinating question that I've asked many times in many different venues, and never received anything approaching a coherent answer. Namely, among all the possible criteria for categorizing people, which particular ones are supposed to have moral, political, and ideological relevance? In the Western world nowadays, there exists a near-consensus that when it comes to certain ways of categorizing humans, we should be concerned if significant inequality and lack of political and other representation is correlated with these categories, we should condemn discrimination on the basis of them, and we should value diversity as measured by them. But what exact principle determines which categories should be assigned such value, and which not?
I am sure that a complete and accurate answer to this question would open a floodgate of insight about the modern society....
With your first example, I think you're on to an important politically incorrect truth, namely that the existence of diverse worldviews requires a certain degree of separation, and "diversity" in the sense of every place and institution containing a representative mix of people can exist only if a uniform worldview is imposed on all of them.
Let me illustrate using a mundane and non-ideological example. I once read a story about a neighborhood populated mostly by blue-collar folks with a strong do-it-yourself ethos, many of whom liked to work on their cars in their driveways. At some point, however, the real estate trends led to an increasing number of white collar yuppie types moving in from a nearby fancier neighborhood, for whom this was a ghastly and disreputable sight. Eventually, they managed to pass a local ordinance banning mechanical work in front yards, to the great chagrin of the older residents.
Therefore, when these two sorts of people lived in separate places, there was on the whole a diversity of worldview with regards to this particular issue, but when they got mixed together, this led to a conflict situation that could only end up with one or another view being imposed on everyone. And since people's worldviews manifest themselves in all kinds of ways that necessarily create conflict in case of differences, this clearly has implications that give the present notion of "diversity" at least a slight Orwellian whiff.
Wow! I just lost 50 points of karma in 15 minutes. I haven't made any top level posts, so it didn't happen there. I wonder where? I guess I already know why.
Have there been any articles on what's wrong with the Turing test as a measure of personhood? (even in its least convenient form)
In short the problems I see are: False positives, false negatives, ignoring available information about the actual agent, and not reliably testing all the things that make personhood valuable.
Question about Solomonoff induction: does anyone have anything good to say about how to associate programs with basic events/propositions/possible worlds?
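One common formalization, sketched with everything invented for illustration: a program denotes the observation sequence it generates, a "possible world" is such a sequence, and a proposition (a predicate on observations) gets prior weight equal to the summed 2^-length of the programs whose output satisfies it. A real construction would use a universal machine rather than the four hard-coded "programs" below.

```python
from fractions import Fraction

# Toy stand-in for a universal machine: (description length, generator of
# the first n bits of the program's output).
programs = [
    (2, lambda n: [0] * n),                          # "all zeros"
    (2, lambda n: [1] * n),                          # "all ones"
    (3, lambda n: [i % 2 for i in range(n)]),        # "alternate"
    (5, lambda n: ([1, 0, 0, 1, 1] + [0] * n)[:n]),  # an arbitrary pattern
]

def prior(event, n=8):
    """Weight of an event (a predicate on length-n prefixes): total 2^-length
    of programs whose output satisfies it."""
    return sum(Fraction(1, 2**k) for k, prog in programs if event(prog(n)))

# The proposition "the first three observed bits are 1, 0, 0".
print(prior(lambda w: w[:3] == [1, 0, 0]))  # 1/32: only the length-5 program fits
```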
Looks like an interesting course from MIT:
Reflective Practice: An Approach for Expanding Your Learning Frontiers
Is anyone familiar with the approach, or with the professor?
I am working on a new approach to creating knowledge management systems. An idea that I backed into as part of this work is the context principle.
Traditionally, the context principle states that a philosopher should always ask for a word's meaning in terms of the context in which it is being used, not in isolation.
I've redefined this to make it more general: Context creates meaning and in its absence there is no meaning.
And I've added the corollary: Domains can only be connected if they have contexts in common. Common contexts provide shared meani...
Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research pdfs.
Nonetheless, the question about which they are puzzled regarding humans does raise some interesting questions regarding AIs, whether they be of the F persuasion or whether they are practicing uFs. The questions...
Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.
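The extraction of honest estimates is the hard part, but the mechanical aggregation step is simple enough to sketch. One common scheme (an illustration, not necessarily what the comment has in mind) is a weighted average of log-odds:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def pool(estimates, weights=None):
    """Combine several probability estimates of one proposition by
    averaging their log-odds, optionally weighted by trust."""
    weights = weights or [1.0] * len(estimates)
    avg = sum(w * logit(p) for w, p in zip(weights, estimates)) / sum(weights)
    return 1 / (1 + math.exp(-avg))  # back through the logistic function

# Three forecasters; the second is trusted twice as much as the others.
print(pool([0.7, 0.9, 0.6], weights=[1, 2, 1]))  # ~0.80
```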
Is there a rough idea of how the development of AI will be achieved, i.e. something like the whole brain emulation roadmap? Although we can imagine a silver-bullet-style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady development of sophistication and improved results, but few leaps. It is as if the problem itself is a large collection of individual challenges whose solution requires masses of training data and techniques that do not generalise well.
That ...
Shangri-La dieters: So I just recently started reading through the archives of Seth Roberts' blog, and it looks like there's tons of benefits of getting 3 or so tablespoons of flax seed oil a day (cognitive performance, gum health, heart health, etc.). That said, it also seems to reduce appetite/weight, neither of which I want. I haven't read through Seth's directory of related posts yet, but does anyone have any advice? I guess I'd be willing to set alarms for myself so that I remembered to eat, but it just sounds really unpleasant and unwieldy.
NYT article on good study habits: http://www.nytimes.com/2010/09/07/health/views/07mind.html?_r=1
I don't have time to look into the sources but I am very interested in knowing the best way to learn.
I have a basic understanding of Markov Chains but I'm curious as to how they're used in artificial intelligence. My main two guesses are:
1.) They are used to make decisions (e.g. Markov decision processes) - By factoring an action component into the Markov Chain, you can use Markov Chains to make decisions in situations where a decision won't have a definite outcome but will instead adjust the probability of outcomes.
2.) They are used to evaluate the world (e.g. Markov Chain Monte Carlo) - As the way the world develops at a high level can seem probabilistic...
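Guess 1 is essentially right for decision-making. As a concrete illustration of a Markov decision process being solved, here is a minimal value-iteration sketch; the states, actions, transition probabilities, and rewards are all invented.

```python
# transitions[state][action] = [(probability, next_state, reward), ...]
transitions = {
    "idle": {"work": [(0.9, "paid", 1.0), (0.1, "idle", 0.0)],
             "rest": [(1.0, "idle", 0.1)]},
    "paid": {"work": [(1.0, "paid", 1.0)],
             "rest": [(1.0, "idle", 0.5)]},
}
gamma = 0.9  # discount factor on future reward

# Value iteration: repeatedly back up expected discounted reward.
V = {s: 0.0 for s in transitions}
for _ in range(200):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transitions[s].values())
         for s in transitions}

# The greedy policy with respect to the converged values.
policy = {s: max(transitions[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in transitions}
print(V, policy)
```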
Recently remembered this old Language Log post on the song of the Zebra Finch; thought it might be relevant here. Whether or not the idea applies to human languages, I think it's an interesting demonstration of what sort of surprising things evolution can work with. A highly constrained song is indirectly encoded by a much simpler bias in learning.
An Introduction to Probability and Inductive Logic by Ian Hacking
Have any of you read this book?
I have been invited to join a reading group based around it for the coming academic year and would like the opinions of this group as to whether it's worth it.
I may join in just for the section on Bayes. I might even finally discover the correct pronunciation of "Bayesian". ("Bay-zian" or "Bye-zian"?)
Here's a link to the book: http://www.amazon.co.uk/Introduction-Probability-Inductive-Logic/dp/0521775019/ref=sr_1_2?ie=UTF8&s=boo...
I'd like to discuss, with anyone who is interested, the ideas of the Metaphysics of Quality by Robert Pirsig, laid out in Lila: An Inquiry into Morals.
There are many aspects of MOQ that might make a rationalist cringe, like moral realism and giving evolution a path and purpose. But there are many interesting concepts which I heard of for the first time when I read MOQ. The fourfold division of inorganic, biological, social and intellectual static patterns of quality is quite intriguing. Many things that the transhumanist community talks about actually interact a...
Does anyone else ever browse through comments, spot one and think "why is the post upvoted to 1?" and then realise that the vote was from you? I seem to do that a lot. (In nearly every case I leave the votes stand.)
Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.
Consider two beliefs:
1. The next millennium will be the most critical in mankind's existence, because in most of the Everett branches arising out of today mankind will either go extinct or start spreading through the stars.
2. Eliezer's work on friendly AI makes him the most significant determinant of our fate in (1).
Let 10^N represent the average across our future Everett branches of the...
Omega comes up to you and tells you that if you believe in science it will make your life 1000 utilons better. He then goes on to tell you that if you believe in god, it will make your afterlife 1 million utilons better. And finally, if you believe in both science and god, you won't get accepted into the afterlife so you'll only get the 1000 utilons.
If it were me, I would tell Omega that he's not my real dad and go on believing in science and not believing in god.
Am I being irrational?
EDIT: if omega is an infinitely all-knowing oracle, the answer may be...
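Setting aside the real objection that beliefs aren't directly choosable, the expected-utility arithmetic here is short. A sketch in which the credence that Omega's afterlife exists at all is an invented parameter:

```python
P_AFTERLIFE = 0.01  # invented: your credence that the afterlife exists

eu_science = 1000                     # guaranteed
eu_god = P_AFTERLIFE * 1_000_000      # pays off only if the afterlife exists
eu_both = 1000                        # afterlife forfeited, science payoff kept

best = max([("science", eu_science), ("god", eu_god), ("both", eu_both)],
           key=lambda option: option[1])
print(best)  # "god" beats "science" exactly when P_AFTERLIFE > 0.001
```

So whether the refusal is irrational turns entirely on how much credence Omega's say-so buys: below a credence of 0.001 in the afterlife, sticking with science maximizes expected utilons.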
That's absolutely true. I've worked for two US National Labs, and both were monocultures. At my first job, the only woman in my group (20 or so) was the administrative assistant. At my second, the numbers were better, but at both, there were literally no non-whites in my immediate area. The inability to hire non-citizens contributes to the problem---I worked for Microsoft as well, and all the non-whites were foreign citizens---but it's not as if there aren't any women in the US!
It is a nearly intractable problem, and I think I understand it fairly well, but I would very much like to hear the opinion of LWers. My employers have always been very eager to hire women and minorities, but the numbers coming out of computer science programs are abysmal. At Less Wrong, a B.S. or M.S. in a specific field is not a barrier to entry, so our numbers should be slightly better. On the other hand, I have no idea how to go about improving them.
The Tale of Genji has gone on my list of books to read. Thanks!
Yes, but we are even more extreme in some respects; many CS/philosophy/neurology/etc. majors reject the Strong AI Thesis (I've asked), while it is practically one of our dogmas.
I realize that I was a bit of a tease there. It's somewhat off topic, but I'll include (some of) the hasty comments I wrote down immediately upon finishing:
The prevalence of poems & puns is qui...