An interesting natural experiment happened in the Pacific Theater of WWII. American and Canadian forces attacked an island that had been secretly abandoned by the Japanese weeks prior. Their unopposed landing resulted in dozens of casualties from friendly fire and dozens of men lost in the jungle. Presumably, a similar rate of attrition occurred in every other landing, on top of the casualties inflicted by the deliberate efforts of enemy troops.
It seems like the rate of friendly-fire casualties might be lower when fighting a real enemy. (Super-crude toy model: soldiers fire randomly at whoever they see. If no one is on the island apart from the attackers, then every hit is a friendly-fire case. If most of the people on the island are the ones you're trying to attack, then they're going to sustain most of those casualties.)
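The super-crude toy model above can be run as a quick Monte Carlo sketch. All the numbers here are made up for illustration; the only point is how the friendly-fire fraction falls as defenders are added to the island.

```python
import random

def friendly_fire_fraction(attackers, defenders, shots=10_000, seed=0):
    """Toy model: each shot, fired by an attacker, hits a uniformly
    random other person on the island. Returns the fraction of hits
    that land on fellow attackers."""
    rng = random.Random(seed)
    friendly = 0
    for _ in range(shots):
        # Target is anyone on the island except the shooter.
        others = attackers - 1 + defenders
        if rng.randrange(others) < attackers - 1:
            friendly += 1
    return friendly / shots

# Empty island: every hit is friendly fire.
print(friendly_fire_fraction(1000, 0))     # 1.0
# Defenders outnumber attackers 3:1: most hits land on the enemy.
print(friendly_fire_fraction(1000, 3000))  # roughly 0.25
```

Under this (deliberately silly) uniform-targeting assumption, the friendly-fire fraction is just (attackers − 1) / (attackers − 1 + defenders), which the simulation approximates.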
If the landing had been peaceful & uneventful, perhaps we wouldn't have heard about it. So there might be a selection effect.
I'm radically cutting any information source out of my life as soon as I get the feeling that I never use the information or don't get some measure of enjoyment from it. This has reduced the time I spend catching up from multiple hours a day to less than an hour. My mind feels much quieter, in a good way. I still get a "noisy" sensation in my mind ("I just had a thought but have already forgotten it"), but it feels contentless ("there is something on my mind"), and the sensation weakens every day. Replacing the time spent reading useless drivel with actual books and Wikipedia feels much more satisfying.
I fear that this might lead to my perspective narrowing, but I counteract that by keeping a couple of information-dense blogs in my feed, still meeting with people, and having Wikipedia to seek out new avenues of information. And of course LessWrong.
Tim Ferriss talks about this in The 4-Hour Workweek; he calls it the "information diet". Since I read it, I've pretty much stopped listening to all news.
This is a (the?) standard challenge to the idea of adopting an information diet for personal gain, and it's presented lucidly.
Another implication: the threat posed by a news-reading public (who are itching to be frenzied) is a powerful incentive for prominent (and usually powerful) individuals to act in accord with public sentiment. Perversely, if the threat is effective, then the actual threat mechanism may appear useless (because it is never used).
This isn't always good, because the public can be wrong, but there seem to be morally mundane cases.
An example: If you live in California, should you read a story about a corrupt and powerful mayor in a small town in Iowa? It really does seem like the "media frenzy" is a primary vector for handling this type of situation, which may otherwise continue because the actors directly involved don't have enough power.
This also justifies the seeming capriciousness of the news cycle: Why this particular outrage at this particular time? Why not this other, slightly more deserving, outrage? Because this is a coordination game, and the exact focal point isn't as important as the fact that we all agree to coordinate.
A single reading of, say, an economics textbook will make anyone I'd want to be able to vote more informed than the same amount of time spent on news.
For context, there are about eight econ textbooks in my line of sight at this very moment. I've even read some of them. The kind of knowledge you get from consuming such a textbook is certainly useful, but for practical purposes it's highly contingent on what kind of world you're living in. The textbook probably won't tell you that, but an equivalent amount of news almost certainly would.
Philosopher Richard Chapell gives a positive review of Superintelligence.
An interesting point made by Brandon in the comments (the following quote combines two different comments):
...I think there's a pretty straightforward argument for taking this kind of discussion seriously, on general grounds independent of one's particular assessment of the possibility of AI itself. The issues discussed by Bostrom tend to be limit-case versions of issues that arise in forming institutions, especially ones that serve a wide range of purposes. Most of the things Bostrom discusses, on both the risk and the prevention side, have lower-level, less efficient analogues in institution-building.
A lot of the problems -- perverse instantiation and principal agent problems, for instance -- are standard issues in law and constitutional theory, and a lot of constitutional theory is concerned with addressing them. In checks and balances, for instance, we are 'stunting' and 'tripwiring' different institutions to make them work less efficiently in matters where we foresee serious risks. Enumeration of powers is an attempt to control a government by direct specification, and political theories going ba...
Many of you are probably familiar with the Alpha Course, which uses the evangelistic technique of identifying common philosophical questions people might have about their life ("what's the point of it all?", "how can I be truly happy?", etc.) and answering them with something about finding the everlasting love of Jesus Christ.
It occurs to me that many aspiring rationalists probably have an analogous set of questions turning around in their heads before they find a like-minded group. For example: "I notice that a lot of people make silly mistakes when thinking about things; how can I stop myself from making these same mistakes?"
Hypothetically, if we (as in the broader rationalist community) were to construct an effective campaign to capture people in this state, what would it look like?
Others have already explained what it stands for and provided a link; it may be worth adding that the author of that blog is also known on LW as Yvain, who once upon a time was one of the best and most prolific LW contributors; his particularly highly rated posts include one on the notion of "disease", one about metacontrarianism, one clarifying what it means when a model says something is almost certainly true, one on efficient charity, one introducing prospect theory, one about buying houses, one about Schelling fences, one about the worst argument in the world. (He's still one of the best but participates rather little.) He's also the guy who does the annual Less Wrong survey.
Has anyone on LessWrong noticed this new Elon Musk interview yet? Even through the intermediation of the reporter, he seems to convey the gist of the concepts of existential risk, the Fermi paradox, the Great Filter, and the simulation argument.
Does anyone know of any studies that show that people tend to regard their enemies as innately evil?
I've seen it claimed a lot here but haven't been able to find a source beyond Eliezer's post.
What are some online (or offline but generally accessible) clusters that would appeal or be valuable to a typical lesswrong reader, but that have little obvious intersection with lesswrong memespace?
What does it mean if there aren't any? Does a cluster just expand to its natural limits? I wonder if the space of general contemporaneous approaches to "thinking about thinking" ultimately maps down to just a few personality types.
Some clusters that seem related but not much discussed on LW:
The "aspiring Mentat" cluster, which includes the entire mnemonics subculture, various brain-training groups, the mental math subculture, and some parts of the magic tricks / mentalism subculture and professional gambling subculture. Some weirder parts are the lucid dreaming groups, the hypnosis groups, and the tulpamancy groups. Slightly overlapping subcultures are those around various games, e.g. chess and speed-solving of Rubik's cubes. For an example, see the book Mind Performance Hacks, or the Mentat Wiki. This overlaps with some very obscure Russian inventions, such as the TRIZ system of innovation, the theory of "psychonetics", and the Trachtenberg system of speed mathematics. There are also some overlaps with the conlang subculture, such as Ithkuil and Lojban.
The "aspiring Ubermenschen" cluster. Some names that come to mind as prototypical: Tim Ferriss, Jason Shen, Sebastian Marshall. This is a part of the larger productivity culture, which includes e.g. Cal Newport, the GTD people, etc. They tend to monetize their writings, for obvious reasons. There's a spectrum here from the saner gr...
Is there any set of issues this argument will not work with? From Leaving LW
It’s amazing how quickly you spot the flaws in a community once you stop thinking of yourself as a part of it. The ridiculous emphasis on cryonics and fear of death which the community inherited from Eliezer. The fact that only about 10% of the community is veg*n, when veg*nism is pretty much the best litmus test I know for whether someone actually follows arguments where they lead.
("veg*n" = vegetarian/vegan)
The writer self-identifies as an animal rights activist. Hence, "veg*nism is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while cryonics is a cult. If you are closer to the LW core, you can conveniently reverse it with no loss: "cryonics is pretty much the best litmus test I know for whether someone actually follows arguments where they lead," while veg*nism is a cult. Or insert your own pet issue: existential risk, feminism, monarchism, effective altruism, Objectivism, communism, pretty much any -ism. Whichever one you believe in most is the best test for whether someone seriously follows arguments to their logical conclusion...
Damn. Ralph Whelan, a former cryonicist and Alcor employee in the early 1990s, died in his sleep the other day at age 46, and his parents plan to bury him conventionally.
Apparently he wore his Alcor bracelet, but he let his funding lapse.
That sucks. I knew him slightly back then, and I hadn't talked to him for years.
Any recommendations for introductory overviews of cognitive models of categorisation (e.g. prototype theory, exemplar theory, etc.)?
I'm trying to develop a high-level view of how people go wrong when reasoning about groups. I understand this well enough from the positions of statistical inference and categorical logic. What I'm looking for is convenient literature on theories of how human brains put objects into categories.
I have had the loose intuition for a while that I don't form habits in the sense that other people describe habits; doing something daily or more doesn't reduce the cognitive load in doing it, even after maintaining the pattern for >10 months with minor deviations (this has been true of my Soylent Orange diet). Additionally, even when I have a pattern of behavior that has kept up consistently for >1 year, less than a week of skipping it is enough to destroy all my inertia for that "habit" (this was my experience with Anki).
Two questions: Does this seem like a genuine significant discrepancy from baseline, and has anyone else experienced something like it?
This post and the ensuing discussion led me to construct the following hypothetical scenario.
In the port there are three old ships which are magically exactly the same. One is owned by Mr. Grumpy, one is owned by Mr. Happy, and one is owned by Mr. Doc. The three ships are about to go on (yet another) transatlantic voyage and the owners are considering whether to send for a refit instead.
Mr. Grumpy is a worrywart and the question of his ship's seaworthiness has been at the forefront of his thoughts for a while. His imagination drew him awful pictures of his shi...
The effective altruist survey was announced here a while ago and many participated. When announced, it was expected to produce results in September, or October if more time was needed. It's now October. Does anyone with ties to the survey know when the results will be published?
How do you (EDIT: that is, you personally) pronounce AIXI? I find myself reading it with (pseudo-)Chinese phonetics as Aye-She.
I was thinking about making a new blog, maybe using an anagram of my name for the blog title. Here are the possibilities:
Burial Vim -- has a nice dark flavor, but how many people actually know the meaning of "vim"? I had never heard it before.
Via Librum -- has a nice Latin sound, but it's probably grammatically incorrect. Could someone please check this for me?
I Rival Bum -- uhm... I guess I'll skip this one...
I'm having trans issues and would like to talk to a trans person who has some experience coming out. Send me a PM if you can talk. Thanks.
How are the Hong Kong protesters able to overcome their collective action problems? The marginal value of one extra protester in terms of changing what's going to happen via China has to be close to zero, yet each protester faces serious risk of death or suffering long term negative consequences because they have to expect that China is carefully keeping track of who is participating. Is this a case of irrationality giving the protesters an advantage, or are there private gains for the protesters?
Robin Hanson's claims aside, some people want to make the world a better place. If someone is always cynical, they will often be wrong about things like this (though, to be fair, they'd probably do well on average).
Has anyone written a post on arguing by what I'd call Socratic Judo?
In the Socratic method, you question every assertion somebody makes. It's a very obnoxious form of argument, but if somebody doesn't disengage it can ruthlessly uncover their inconsistencies and unstated assumptions.
Socratic Judo, by contrast, lays out a set of premises that you know the interlocutor DOES agree with, in a way and tone they agree with, then attempts to show that these premises lead to something you want them to believe. Now, instead of the argument being centered on the iss...
This sounds like presenting an argument for a thing from shared premises - the most ordinary form of trying to convince someone.
I'm looking for feedback on my blog drafts & posts. I'm not writing for a specifically rationalist audience, but I'd appreciate intelligent feedback on accuracy, additional ideas to possibly include, as well as feedback on how I communicate.
Where is a good place to get such feedback? LessWrong has a lot of the right sort of people, but posting lots of draft posts to the open thread may not be popular.
My blog is Habitua; it's on self-improvement, attempting to be as evidence-based as practicable.
Alien-wise, most of the probability-mass not in the "Great Filter" theory is in the "they're all hiding" theory, right? Are there any other big events in the outcome space?
I intuitively feel like the "they're all hiding" theories are weaker and more speculative than the Great Filter theories, perhaps because including agency as a "black box" within a theory is bad, as a rule of thumb.
But, if most of the proposed candidates for the GF look weak, how do the "they're all hiding" candidates stack up? What is there, besides the Planetarium Hypothesis and Simulationism? Are there any that don't require a strong Singleton?
Duke University question:
I am applying for a job at Duke University, in the library. This job interests me greatly because it is exactly the sort of position I have been training myself for. It is a position that I know I am qualified for and that I know I could make a worthwhile impact in. My chief concern is lack of networking opportunities.
I do not and have not attended Duke and have no networking contacts in Duke (my closest contact is a graduate of Chapel Hill). Since I also do not live in North Carolina at the moment, I know these two things (distanc...
We still have plenty of space for people to attend the END DEATH Cryonics Convention in Laughlin, Nevada, next month. And Mr. Don Laughlin, the owner of the Riverside Resort, has worked with the Venturists to make the convention very affordable, compared with the similar event Alcor holds every few years:
Is there any discussion on the uses of friendliness theory outside of AI?
My first thought was that it could be useful in governance: in politics, corporations, and companies.
I heard about DAOs (decentralized autonomous organizations), which are weak AIs that can piggyback off of human general intelligence if designed correctly, and thought that friendliness theory would be useful for them too, especially because they share a lot of the same problems that good old-fashioned AGI has.
Online editing jobs:
Does anyone have any good resources for finding work online as an editor? I'm not sure what resources, organizations, or platforms are available. I figured, with the self-movers at LW, someone would have gone hunting around and found some useful resources before.
EDIT: Because the question came up from Lumifer, here is my experience so far in editing, as outlined in a reply to their questions:
I have worked as an editor for a civil rights museum finding aid, a series of creative writing theses, a newspaper, and a biochem research project....
Here's a prediction about the future, which I will make because I am going to help build it. People are going to automatically construct world-knowledge databases about things like people, events, companies, and so on by hooking up NLP systems to large text corpora like Google Books and newspapers, and extracting/inferring information about the entities directly from the text. This will take the place of manually curated knowledge bases like Freebase.
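The pipeline being predicted can be caricatured in a few lines: scan raw sentences for patterns and accumulate facts keyed by entity. A real system would use a full NLP stack rather than regexes, and the corpus, patterns, and fact schema here are all made up for illustration.

```python
import re
from collections import defaultdict

# Hypothetical two-sentence "corpus" standing in for Google Books etc.
corpus = [
    "Ada Lovelace was born in 1815.",
    "Ada Lovelace worked with Charles Babbage.",
]

# Toy extraction patterns; a real system would infer relations, not match templates.
knowledge = defaultdict(set)
for sentence in corpus:
    m = re.match(r"(.+?) was born in (\d{4})\.", sentence)
    if m:
        knowledge[m.group(1)].add(("born", m.group(2)))
    m = re.match(r"(.+?) worked with (.+?)\.", sentence)
    if m:
        knowledge[m.group(1)].add(("associate", m.group(2)))

print({k: sorted(v) for k, v in knowledge.items()})
# {'Ada Lovelace': [('associate', 'Charles Babbage'), ('born', '1815')]}
```

The hard parts the sketch hides are exactly where the prediction lives: entity disambiguation, conflicting sources, and relations that are implied rather than stated.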
More anecdata:
and some not-so-reasonable ones to see how it copes a little further out of the box:
I recently came by some cash. What would be a worthwhile way to spend/invest ~3000 USD? I'm especially interested in unorthodox advice.
I am capable of letting the money sit for an extended period of time (4+ years).
No EA suggestions please, I have a separate budget for that.
While this is by no means an unconventional suggestion, I would consider putting it in an index fund. The fees are very low and barring societal collapse, your money will grow in the long-term without you having to do much of anything about it.
At a more meta level, the boring, conventional choice is generally the best one unless you have a compelling reason to believe otherwise.
Would you (or anyone else) have good suggestions for index funds for those living and earning in the UK/Europe? Thanks!
Don't buy bitcoin miners. I know a lot about this industry. It is basically impossible at this point in time to buy off-the-shelf miners and outperform simply buying bitcoins. It is certainly impossible without sweet deals from the manufacturers that you only get by buying in bulk, at much larger than $3k. Cloud hashing is an order of magnitude worse.
Any recommendations for some books or online resources on management?
I recently became the team leader of a small (5-person) group of software developers. I haven't had management experience before, so I want to learn something about it. But I suspect that most of the literature in this sphere is bullshit, not based on good evidence. I am interested to know what information on management LW users have found useful.
This might sound unusually specific, but here goes.
When attending teaching seminars, I unusually often encounter Russian authors, and I notice that the publication dates lie before the fall of the Soviet Union. As I am currently learning Russian and suspect that there are plenty of high-quality didactic materials yet to be translated, I ask if someone knows whether and how I could dig these documents up.
Alternatively, point me to a comprehensive translation of the materials. A more specific question I'd like to have answered, in addition to discovering something ...
Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
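One common answer: when you have labeled data, the conditional probability can be estimated as a smoothed empirical frequency. The sketch below is a minimal illustration of that idea, not a full answer to the question; the disease/test data and the `estimate_likelihood` helper are hypothetical, and Laplace smoothing is just one of several options for sparse data.

```python
from collections import Counter

def estimate_likelihood(observations, a_label, x_label, alpha=1.0):
    """Estimate P(A = a_label | X = x_label) as a smoothed empirical
    frequency from (a, x) pairs. alpha is a Laplace pseudo-count that
    keeps sparse estimates away from 0 and 1."""
    joint = Counter(observations)
    x_total = sum(c for (a, x), c in joint.items() if x == x_label)
    a_values = {a for (a, _) in joint}
    return (joint[(a_label, x_label)] + alpha) / (x_total + alpha * len(a_values))

# Hypothetical data: (condition, test result) pairs from 20 patients.
data = [("sick", "pos")] * 8 + [("well", "pos")] * 2 + \
       [("sick", "neg")] * 1 + [("well", "neg")] * 9
print(estimate_likelihood(data, "sick", "pos"))  # (8+1)/(10+2) = 0.75
```

For real-life cases where no such data exists, the estimate has to come from a model, expert elicitation, or reference-class statistics instead, which is where the hard part of the question lies.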
Also, we talk about world-models a lot here, but what exactly IS a world-model?
I had this meme roaming around my mind ever since I was a child that a dripping faucet is a major waste of water (not sure where exactly I got it from), so I decided to Fermi estimate how much water it actually wastes. (The answer is left as an exercise to the reader.)
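For readers who want to check their own answer: the estimate is a one-liner once you pick rough inputs. Both numbers below (drip rate and drop volume) are assumptions of mine, not measurements, so treat the result as order-of-magnitude only.

```python
# Fermi estimate of a dripping faucet; all inputs are rough assumptions.
drops_per_second = 1            # a steady drip
ml_per_drop = 0.05              # a water drop is roughly 0.05 mL
seconds_per_year = 60 * 60 * 24 * 365

liters_per_year = drops_per_second * ml_per_drop * seconds_per_year / 1000
print(f"{liters_per_year:.0f} L/year")  # on the order of 1,500 L/year
```

Whether ~1,500 L/year counts as a "major waste" depends on the comparison: it is a few days' worth of typical household water use.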
I am looking for a website that presents bite-size psychological insights. Does anyone know such a thing?
I found the site http://www.psych2go.net/ in the past few days and I find the idea very appealing, since it is a very fast and efficient way to learn or refresh knowledge of psychological facts. Unfortunately, that website itself doesn't seem all that good since most of its feed is concerned with dating tips and other noise rather than actual psychological insights. Do you know something that is like it, but better and more serious?
In all the substantial programming projects I've undertaken, what I think of the language itself has never been a consideration.
One of these projects needed to run (client-side) in any web browser, so (at that time) it had to be written in Java.
Another project had to run as a library embedded in software developed by other people and also standalone at the command line. I wrote it in C++ (after an ill-considered first attempt to write it in Perl), mainly because it was a language I knew and performance was an essential requirement, ruling out Java (at that time).
My current employment is developing a tool for biologists to use; they all use Matlab, so it's written in Matlab, a language for which I even have a file somewhere called "Reasons I hate Matlab".
If I want to write an app to run on OSX or iOS, the choices are limited to what Apple supports, which as far as I know is Objective C, C++, or (very recently) Swift.
For quick pieces of text processing I use Perl, because that happens to be the language I know that's most suited to doing that. I'm sure Python would do just as well, but knowing Perl, I don't need Python, and I don't care about the Perl/Python wars.
A curious ...
Why aren't there any serious proposals to ban space colonization?
That is, a successful attempt to establish a colony will most likely create a society that blames Earth for its misery, and a "self-sufficient" colony probably requires nuclear technology (Zubrin's plan states this explicitly). The colonists will have both the motive and the means to nuke Earth for good. Colonization greatly increases extinction risk, contrary to what space advocates say.
If the reason is something like "that is a far-future problem", why does the same reasoning not apply to things like nanotechnology (there are organizations that want to ban it right now)?
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.