Meetup : Chicago Meetup At State and Elm, August 27th
Discussion article for the meetup : Chicago Meetup At State and Elm, August 27th
We'll be meeting at the cafe on the first floor of the State and Elm Barnes & Noble, 1130 N State St, at 3:30pm CST on Saturday, August 27th. The tentative topic of discussion is "recent posts on LessWrong."
Topic Search Poll Results and Short Reports
At the end of June, I asked Less Wrong to vote on "What topic[s] would be best for an investigation and brief post?" in order to direct a search for topics to examine here. My thanks to everyone who participated (especially since the comments hint that the poll format was not well-liked). The most-wanted topics follow, and the complete list can be found on Google Docs; maps and graphs related to the poll are also available on All Our Ideas. A topic's score in the results below is an "estimated [percent] chance that it will win against a randomly chosen idea."
- Systems theory -- 71.6
- Leadership -- 70.7
- Linguistics (general) -- 70.7
- Finance -- 67.0
- Bayesian approach to business -- 60.7
- Lisp (Programming language) -- 59.7
- Anthropology (general) -- 59.4
- Sociology (general) -- 59.2
- Political Science (general) -- 58.5
- Historiography (the methods of history) -- 58.3
- Logistics -- 56.8
- Sociology of Political Organizations -- 56.0
- Military Theory -- 52.1
- Diplomacy -- 51.1
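To make the scoring above concrete: All Our Ideas uses a more sophisticated Bayesian model, but a minimal sketch of the "chance that it will win against a randomly chosen idea" statistic is just each topic's average win rate against each distinct opponent. The ballots below are invented purely for illustration:

```python
from collections import defaultdict

def score_topics(votes):
    """Estimate each topic's chance of beating a randomly chosen rival.

    votes: list of (winner, loser) pairs from pairwise comparisons.
    Returns {topic: percent}: the average observed win rate against
    each distinct opponent, weighting rivals equally to mimic
    "versus a randomly chosen idea." (A crude stand-in for the
    Bayesian estimate All Our Ideas actually computes.)
    """
    wins = defaultdict(lambda: defaultdict(int))
    seen = defaultdict(lambda: defaultdict(int))
    topics = set()
    for winner, loser in votes:
        topics.update((winner, loser))
        wins[winner][loser] += 1
        seen[winner][loser] += 1
        seen[loser][winner] += 1
    scores = {}
    for t in topics:
        rivals = [r for r in topics if r != t and seen[t][r] > 0]
        if not rivals:
            scores[t] = 50.0  # no data: even odds
            continue
        rates = [wins[t][r] / seen[t][r] for r in rivals]
        scores[t] = 100.0 * sum(rates) / len(rates)
    return scores

# Hypothetical ballots:
votes = [("Systems theory", "Finance"),
         ("Systems theory", "Leadership"),
         ("Leadership", "Finance"),
         ("Finance", "Leadership")]
print(score_topics(votes))
```

With these made-up votes, Systems theory scores 100 (it beat both rivals every time it was compared), while Leadership and Finance each score 25.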
Systems theory, in first place, is a topic I found while rummaging through online sources, including Wikipedia, for items to add to the poll; it's described there as the "study of systems in general, with the goal of elucidating principles that can be applied to all types of systems in all fields of research. [....] In this context the word systems is used to refer specifically to self-regulating systems, i.e. that are self-correcting through feedback." Leadership seems to fall into both the social and "being effective" categories of interest, but has been only lightly touched on in previous discussion here despite a lot of ink spilled on the topic elsewhere -- the top Google results for "leadership" on this site are currently Calcsam's post on community roles and a book review of the Arbinger Institute's Leadership and Self-Deception. "To Lead, You Must Stand Up" also comes to mind.
How to Use It
The spreadsheet includes columns for "Currently Investigated By" and "Writeup URLs" -- feel free to add your name or writeup links. If you already know a thing or two about one of the above topics, share your knowledge in a comment below or in a discussion post, as appropriate, similar to the earlier "What can you teach us?" If you want to survey what currently exists on a topic, grab a few books, investigate, and then let us know what you found. When a full post rather than just a comment is appropriate, I recommend the tag "topic_search". As mentioned previously, even an investigation that ends in a comment here saying a topic isn't useful for LW is still itself useful for the search.
Please vote -- What topic would be best for an investigation and brief post?
Followup to: Systematic Search for Useful Ideas
I've set up a pairwise poll for this question, and additional suggestions are welcome. My original proposal was to examine topics that haven't already been covered here; instead, I'd like to ask people to consider the existing level of discussion on a topic when evaluating what would be "best."
ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.
New York Times on Arguments and Evolution [link]
I saw this in the Facebook "what's popular" box, so it's apparently being heavily read and forwarded. There's nothing earth-shattering for long-time LessWrong readers, but it's a bit interesting and not too bad a condensation of the topic:
Now some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. Certitude works, however sharply it may depart from the truth. -- Cohen, Patricia "Reason Seen More as Weapon Than Path to Truth"
A glance at the comments [at the Times], however, seems to indicate that most people are misinterpreting this, and at least one person has said flatly that it's the reason his political opponents don't agree with him.
ETA: Oops, I forgot the most important thing. The article is at http://www.nytimes.com/2011/06/15/arts/people-argue-just-to-win-scholars-assert.html
Proposal: Systematic Search for Useful Ideas
LessWrong is a font of good ideas, but the topics and interests usually expressed and explored here tend to cluster around a few areas. High-value topics for the community may therefore still be waiting in other fields, which can be explored systematically rather than left to random encounter. Additionally, there seems to be interest here in examining a wider variety of topics. To do this, I suggest creating a community list of areas to look into (besides the usual AI, Cog Sci, Comp Sci, Econ, Math, Philosophy, Psych, Statistics, etc.) and then reading a bit on the basics of each field. In addition to potentially uncovering useful ideas per se, this might also offer the opportunity to populate the textbooks resource list and engage in not-random acts of scholarship.
Everyone Split Up, There’s a Lot of Ideosphere to Cover
A rough sketch of how I think the project will work follows. I’ll be proceeding with this and tackling at least one or two subjects as long as at least a few other people are interested in working on it too.
Step 1, Community Evaluation: Using All Our Ideas or a similar tool, generate a list of fields to investigate.
Step 2, Sign-Up: People have the best sense of what they already know and their abilities, so at this point anyone that wants to can pick a subject that’s best for them to look into.
Step 3, Study: I imagine this will mostly involve self-directed reading of a handful of texts, watching some online videos, and maybe calling up one or two people -- in other words, nothing too dramatic. If a vein of something interesting is found, it’s probably better that it’s “marked” for further follow-up rather than further examined alone.
Step 4, Post: Some of these investigations will not reveal anything -- that’s actually a good thing (explained below); for these, a short “Looked into it, nothing here” sort of comment should suffice. Subjects with bigger findings should get bigger, more detailed comments/posts.
Evaluation of Proposal
As a first step, I’ll use a variation of the Heilmeier questions, an (admittedly idiosyncratic) mix of the original version and gregv’s enhanced version.

What are you trying to do? Articulate your objectives using absolutely no jargon.
Produce comments or posts providing very brief overviews of fields of knowledge not previously discussed here, with notes pertaining to Less Wrong topics and interests.

Who cares? How many people will benefit?
This post is partially an attempt to determine that, but there seems to be at least some interest in more variety on the site (see above). Additionally, the posts should be a good general resource for anyone who stumbles across them, and might even make good content for search purposes.

Why hasn't someone already solved this problem? What makes you think what stopped them won't stop you?
The idea is roughly book club meets Wikipedia, but with an emphasis on creating a small evaluative body of knowledge rather than a massive descriptive encyclopedia, and with a LessWrong twist. The sharper focus should make the results more useful to go through than just hitting “random page” in yon encyclopedia.

How much have projects like this cost (time equivalent)?
Some have the ability to take on “whole fields of knowledge in mere weeks,” but that’s not typical. Investigating a subject here is roughly comparable in complexity to taking an introductory class or two, which people without any previous training normally accomplish over a period of about three to four months at a pace which is not especially strenuous, and with fairly light monetary costs beyond tuition/fees (which aren't applicable here).

What are the midterm and final "exams" to check for success?
For each individual investigation, a good “midterm” check would be for the person looking into a field to have a list of resources or texts they’re working from. The final “exam” is a post indicating whether anything useful or interesting was found, and if so, what.

If y [this community search] fails to solve x [uncover useful knowledge in fields previously under-examined on LessWrong], what would that teach you that you (hopefully) didn't know at the beginning?
Quite possibly, a failure here would be good news: it would indicate that the mix of topics on LessWrong is approximately right, and things can continue as they are. In that case, we’d end up seeing a bunch of short “nothing interesting here” comments, and could rest more or less assured that further investigation in even more minute detail is unnecessary. This is conditional on not-terrible scholarship and a reasonably good priority list from step 1.
Schneier talks about The Dishonest Minority [Link]
Evolution. Morality. Strategy. Security/Cryptography. This hits so many topics of interest, I can't imagine it not being discussed here. Bruce Schneier blogs about his book-in-progress, The Dishonest Minority:
Humans evolved along this path. The basic mechanism can be modeled simply. It is in our collective group interest for everyone to cooperate. It is in any given individual's short-term self interest not to cooperate: to defect, in game theory terms. But if everyone defects, society falls apart. To ensure widespread cooperation and minimal defection, we collectively implement a variety of societal security systems.
The above somewhat reminds me of Robin Hanson's Homo Hypocritus writings, although the two ideas are not the same. Schneier says the book is basically a first draft at this point and might still change quite a bit. Some of the comments focus on whether "dishonest" is actually the best term for defecting from social norms.
Scott Sumner on Utility vs Happiness [Link]
A distinction that some people grok right away and others may not realize exists:
Imagine a country called “Lanmindia,” where much of the population has seen its legs blown off in horrible accidents. Does that sound like a pretty miserable place? Happiness research suggests not. The claim is that there is a sort of natural “set-point” for happiness, and that after winning a lottery one is happy for a short time, and then you revert right back to your natural happiness level. I find that plausible. They also claim that if someone loses a limb, then they are unhappy for a short period and then revert back to normal. I find that implausible, but if the evidence says it is the case then I guess I need to accept that.
My claim is that although Lanmindia is just as happy as America, it has much lower utility. Let’s define ’utility’ as ”that which people maximize.” People very much don’t want to have their legs blown off, and hence emigrate from Lanmindia in droves. People behave as if they care about utility, not happiness.
-Scott Sumner, "Nonsense on stilts: Part 1. What if utility and happiness are unrelated?" TheMoneyIllusion
This is also in part a reply to Hanson's "Lift Up Your Eyes" on Overcoming Bias. Some people on LessWrong are careful to distinguish between ordinal utility, cardinal utility, and fuzzies, and others aren't quite so much. The sentence above on accepting evidence, and the postscript saying he is not serious about one part of the post, might also make interesting conversation -- part two is advice to move next door to a child molester for cheaper housing if you don't have a kid, and part three is about the Fed taking advantage of banks.
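The "set-point" claim in Sumner's quote can be sketched as a simple mean-reverting process (all numbers here are invented for illustration): a shock moves happiness away from baseline, and each period closes part of the gap, so measured happiness recovers even though the underlying condition -- and hence utility, on Sumner's "that which people maximize" definition -- stays changed:

```python
def happiness_path(baseline, shock, reversion=0.5, steps=8):
    """Toy hedonic set-point model.

    Happiness jumps by `shock`, then each period closes `reversion`
    of the remaining gap back toward `baseline`. Illustrative only:
    real hedonic-adaptation findings are messier than this.
    """
    h = baseline + shock
    path = [h]
    for _ in range(steps):
        h = baseline + (1 - reversion) * (h - baseline)
        path.append(h)
    return path

path = happiness_path(baseline=5.0, shock=-3.0)
# The initial drop (5.0 -> 2.0) almost fully decays: after eight
# periods the remaining gap is (0.5 ** 8) * 3, about 0.01.
```

Surveys taken at the end of this path would show Lanmindia about as "happy" as anywhere else, while the persistent loss is exactly what the emigration behavior in the quote reveals a preference against.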
The Long Now
It has surprised me that there's been very little discussion of The Long Now here on Less Wrong, as there are many similarities between the groups, although their approaches and philosophies are quite different. At a minimum, I believe a general awareness might be beneficial. I'll use the initials LW and LN below. My perspective on LN is simply that of someone who's kept an eye on their website from time to time and read a few of their articles, so I'll admit up front that my knowledge is a bit shallow (a reason, in fact, that I bring the topic up for discussion).
Similarities
Most critically, long-term thinking is a cornerstone of both LW and LN thought: explicit as the goal of LN, and implicit here on LW whenever we talk about existential risk or decades-away-or-longer technology. It's not clear whether there's overlap between the commenters at LW and the membership of LN, but there are definitely a large number of people "between" the two groups -- statements by Peter Thiel and Ray Kurzweil have been recent topics on the LN blog, and Hillis, who founded LN, has been involved in AI and philosophy of mind. LN has Long Bets, which stands to PredictionBook roughly as InTrade stands to Foresight Exchange. LN apparently had a presence at some of SIAI's past Singularity Summits.
Differences
Signaling: LN embraces signaling like there's no tomorrow (ha!) -- their flagship project, after all, is a monumental clock designed to last thousands of years, the goal of which is to "lend itself to good storytelling and myth" about long-term thought. Their membership cards are stainless steel. Some of the projects LN pursues seem to have been chosen mostly because they sound awesome, and even those that weren't are done with some flair, IMHO. In contrast, the view among LW posts seems to be that signaling is in many cases a necessary evil, in some cases just an evolutionary leftover, and that reducing signaling is a potential source of efficiency gains. There may be something to be learned here -- we already know FAI would be an easier sell if we described it as a project to create robots that are Presidents of the United States by day, crime-fighters by night, and cat-people by late-night.
Structure: While LW is a project of SIAI, they're not the same thing, so by extension the comparison between LN and LW is just a bit apples-to-kumquats. It'd be a lot easier to compare LW to an LN discussion board, if one existed.
The Future: Here on LW, we want our nuclear-powered flying cars, dammit! Bad future scenarios discussed on LW tend to be irrevocably and undeniably bad -- the world is turned into Tang or paperclips and no life exists anymore, for example. LN seems more concerned with recovery from, rather than prevention of, "collapse of civilization" scenarios; many of the projects both undertaken and linked to by LN focus on preserving knowledge in such a scenario. Given the overlap between the LW community and cryonics, SENS, etc., the median LW poster's mental relationship with the future seems more personal and less abstract.
Politics: The predominant thinking on LW seems to be a (very slightly left-leaning) technolibertarianism, although since the site is open to anyone who wanders in from the Internet, there's a lot of variation (if either SIAI or FHI has an especially strong political stance per se, I've not noticed it). There's also a general skepticism here regarding the soundness of most political thought and of many political processes. LN seems further left on average and more comfortable with politics in general (although calling it a political organization would be a bit of a stretch). In keeping with this, LW seems to place more emphasis on individual decision-making and improvement than LN does.
Thoughts?
[LINK] Creationism = High Carb? Or, The Devil Does Atkins
Based on the community's continuing interests in diet and religion, I'd like to point out this blog post by Michael Eades, coauthor of Protein Power, in which he suggests that biblical literalism pushes one toward a low-fat approach to nutrition over a low-carb philosophy, essentially by throwing out a bunch of evidence on the matter:
Why, you might ask, is this scientist so obdurate in the face of all the evidence that’s out there? Perhaps because much of the evidence isn’t in accord with his religious beliefs. I try never to mention a person’s religious faith, but when it impacts his scientific thinking it at least needs to be made known. Unless he’s changed his thinking recently, Dr. Eckel apparently is one of the few academic scientists who are literal interpreters of the bible. I assume this because Dr. Eckel serves on the technical advisory board of the Institution for Creation Research, an organization that believes that not only is the earth only a few thousand years old , but that the entire universe in only a few thousand years old. And they believe that man was basically hand formed by God on the sixth day of creation. And Dr. Eckel’s own writings on the subject appear to confirm his beliefs
[.....]
Of all the evidence that exists, I think the evolutionary/natural selection data and the anthropological data are the most compelling because they provide the largest amount of evidence over the longest time. To Dr. Eckel, however, these data aren’t applicable because in his worldview prehistoric man didn’t exist and therefore wasn’t available to be molded by the forces of natural selection. I haven’t a clue as to what he thinks the fossil remains of early humans really were or where they came from. Perhaps he believes – as I once had it explained to me by a religious fundamentalist – these fossilized remains of dinosaurs, extinct ancient birds and mammals and prehistoric man were carefully buried by the devil to snare the unwary and the unbeliever. If this is the case, I guess I’ll have to consider myself snared.
In Dr. Eckel’s view, man was created post agriculturally. In fact, in his view, there was never an pre-agricultural era, so how could man have failed to adapt to agriculture?
While there's a clear persuasive agenda here, and I won't present a full analysis of the situation, Eades also discusses the biasing use of language earlier in the article. In particular, beware applause lights and confirmation bias when evaluating it.
Rationality Power Tools
Summary: Rationalists should win; however, it could be a really long time before a technological singularity or uploading provides powerful technology to help rationalists achieve their goals. It's possible today to create assistant software that directs human effort and provides "hints" for clearer thinking. We should catalog such software where it exists and create it where it doesn't.
The Problem
We may be waiting a while for a Friendly AI or similar “world-changing” technology to appear. While technology continues to improve, the process of creating a Friendly AI seems extremely tricky, and there’s no solid ETA on the project. Uploading is still years to decades away. In the meantime, we aspiring rationalists still have to get on with our lives.
Rationality is hard. Merely knowing about a bias is often not enough to overcome it. Even in cases where the steps to act rationally are known, the algorithm required may be more than can be done manually, or may require information which itself is not immediately at hand. However, a lot of things that are difficult become easier when you have the right tools. Could there be tools that supplement the effort involved in making a good decision? I suspect that this is the case, and will give several examples of programs that the community could work to create -- computer software to help you win. Because a lot of software is specifically created to address problems as they come up, it would also be worthwhile to maintain an index of already available software with special usefulness and applicability to Less Wrong readers.
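As a concrete, hypothetical example of such a tool: a few lines of code can do a Bayesian update that people notoriously botch by hand. The numbers below are from the classic medical-testing exercise often used to demonstrate base-rate neglect:

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' theorem.

    A tiny "power tool": the arithmetic is trivial for a computer,
    but humans reliably neglect the prior when estimating this in
    their heads.
    """
    p_evidence = (true_positive_rate * prior
                  + false_positive_rate * (1 - prior))
    return true_positive_rate * prior / p_evidence

# Classic exercise: 1% base rate, 80% sensitivity, 9.6% false positives.
# Many people intuit ~70-80%; the actual posterior is under 8%.
print(posterior(0.01, 0.80, 0.096))
```

This is of course the simplest possible case; the point is that even here, having the algorithm embodied in a tool beats relying on intuition, and richer versions (odds-ratio chains, likelihood-ratio logs for ongoing disagreements, calibration trackers) follow the same pattern.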