The header for this page says "You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet.". It's inaccurate because Discussion doesn't include the posts which were started in Main.
Stuart Russell contributes a response to the Edge.org article from earlier this month.
...Of Myths And Moonshine
"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."
So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."
Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.
None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing a
I've been trying to decide whether or not to pursue an opportunity to spread rationalist memes to an audience that wouldn't ordinarily be exposed to them. I happen to be friends with the CEO and editor of an online magazine/community blog that caters to queer women, and I'm reasonably confident that with the right pitch I could convince them to let me do a column dedicated to rationality as it relates to the specific interests of queer women. I think there might be value in tailoring rationality material for specific demographics.
The issue is that, in order to make it relevant to the website and the demographic, I would need to talk about politics while trying to teach rationality, which seems highly risky. As one might imagine from the demographic, the website and associated community is heavily influenced by social justice memes, many of which I wholeheartedly endorse and many others of which I'm highly critical. The strategy I've been formulating to avoid getting everybody mindkilled is to talk about the ways biases contribute to sexism and homophobia, and then also talk about how those same biases can manifest in feminist/social justice ideas, while emphasising to death how i...
There's a good strategy against publishing something stupid: peer review before publication.
Something that's missing from a lot of social justice talk is quoting cognitive science papers. Talking about actual experiments and what the audience can learn from them could make people care more about empiricism.
Suddenly, I know the relative sizes of the planets!
HT Andrew Gelman.
ETA: Pluto isn't in the picture, but it would be a coriander seed, half the diameter of Mercury. For the Sun, imagine a spherical elephant.
I've long been convinced that donating all the income I can is the morally right thing to do. However, so far this has only taken the form of reduced consumption to save for donations down the road. Now that I have a level of savings I feel comfortable with and expect to start making more money next year, I no longer feel I have any excuse; I aim to start donating by the end of this year.
I’m increasingly convinced that existential risk reduction carries the largest expected value; however, I don’t feel like I have a good sense of where my donations would have the greatest impact. From what I have read, I am leaning towards movement building as the best instrumental goal, but I am far from sure. I’ll also mention that at this point I’m a bit skeptical that human ethics can be solved and then programmed into an FAI, but I also may be misunderstanding MIRI’s approach. I would hope that by increasing the focus on the existential risks of AI in elite/academic circles, more researchers could eventually begin pursuing a variety of possibilities for reducing AI risk.
At this point, I am primarily considering donating to FHI, CSER, MIRI or FLI, since they are ER focused. However, I am open to alternatives. What are others’ thoughts? Thanks a lot for the advice.
This week's writing lesson: If your motivation for writing is almost entirely internal, then you should write what you enjoy writing, not what you think you should write.
(I lost a few days' worth of productivity getting that one knocked into my skull, though hopefully I'm back up to snuff.)
A song about self-awareness:
Yielding to Temptation by Mark Mandel, to the tune of Bin There, Dun That by Cat Faber
Something called me from the bookcase
and I answered quick and dumb
And I guess I'd still be reading there
if rescue hadn't come.
Well, I must have jumped six inches
and I answered "Coming, dear!"
Now the sf's in the basement
and it doesn't call so clear.
Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.
...I was filling up the ice cube tray
last night at half past ten
When I heard a voice entreating
"Won't you dance with me again?"
It's the caramel fudge ripple,
sweet as love and thick as sin.
I'm not dumb, I'm not expAndable,
and I'm not digging in!
Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the calories* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.
As I stroll around the dealers' room
I'm only there to look.
No, I d
Development aid is really hard.
A project that works well in one place or for a little while may not scale. Focus on administrative costs may make charities less competent.
Nonetheless, some useful help does happen, it's just important to not chase after the Big Ideas.
One of the charities mentioned in the article, Deworm the World, is actually a GiveWell top charity, due to "the strong evidence for deworming having lasting impact on childhood development". The article, on the other hand, claims that the evidence is weak, citing three studies in the British Medical Journal, which GiveWell doesn't appear to mention in their review of the effectiveness of deworming.
GiveWell's review of deworming
Might be worth looking into more.
What does your inner Quirrellmort tell you?
Has your internal model of the most competent person you can imagine ever given you an insight you wouldn't have thought of with more traditional methods?
Do you have more than one such useful sub-personality?
Does your main mode of thinking bring anything to the table that your useful mental models of others don't? If so, what?
I have been playing the card game Hanabi one hell of a lot recently, and I strongly recommend it to the LW community.
Hanabi is an abstract, cooperative game with limited information. And it's practically a tutorial in rational thinking in a group. Extrapolating unstated facts from other players' belief states is essential: "X did something that doesn't make sense given what I know; what is it that X knows but I don't, under which that action makes sense?" So, for that matter, is a consequentialist view of communication: "If I tell X the fact...
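For concreteness, here is a minimal Python sketch of the hypothesis elimination involved. The card pool, the pile state, and the convention that teammates only point at immediately playable cards are all invented for illustration; this is not the full Hanabi rule set:

```python
# A toy sketch of "what must X know for X's action to make sense?"
# Assumed convention (hypothetical): a teammate only points at a card
# when that card is immediately playable on the current fireworks piles.

from itertools import product

COLORS = ["red", "blue", "green", "white", "yellow"]
RANKS = [1, 2, 3, 4, 5]

# What I can deduce so far about my pointed-at card (before the hint,
# it could be anything):
candidates = {(c, r) for c, r in product(COLORS, RANKS)}

# Current top card of each firework pile (None = pile not started):
piles = {"red": 2, "blue": None, "green": 1, "white": 4, "yellow": None}

def playable(card):
    color, rank = card
    top = piles[color]
    return rank == 1 if top is None else rank == top + 1

# My teammate pointed at this card. Under the assumed convention, their
# action only makes sense if the card is playable, so eliminate the rest.
candidates = {card for card in candidates if playable(card)}

print(sorted(candidates))
# [('blue', 1), ('green', 2), ('red', 3), ('white', 5), ('yellow', 1)]
```

The same filtering step runs in reverse when deciding what to tell a teammate: you pick the hint whose consequences, under the shared convention, narrow their candidate set the most.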
From a comment on SSC:
Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split-second judgments in such communities effectively have been met with a crushing wall of failure and arrogance. The suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low-probability, high-cost risk on an emotional level has been met with outright derision.
Does anyone close to CFAR know the specifics?
As someone who has taken the NIMS/ICS 100 course (online through FEMA), and gone to my local fire station and taken their equivalent of NIMS/ICS 100/200/70 -- I was not very impressed.
I can clearly see that there are valuable things in NIMS/ICS, and I can even believe that the movement which gave rise to the whole thing had valuable, interesting, and novel insights. But you're not going to get much of that by taking the course. It's got about one important concept -- which basically boils down to "it's good for different agencies to cooperate effectively, and here's one structure under which that empirically seems to happen well, therefore let's all use it" -- and the rest is a lot of details and terminology which are critically important to people actually working in said agencies, and mostly irrelevant otherwise.
EDIT: Boromir's big thing seems to be that HRO is about risk analysis, updating based on evidence, and dealing with low probabilities as mentioned in the excerpt. I can tell you that the basic ICS course covers exactly none of that. So I wonder what 'training course at the local volunteer fire department' he thinks we should all take. (I admit I have not taken the FEMA-official ICS 200 and 70 classes, which are online. But given the style of the 100 class, I cannot imagine them being dense with the kind of knowledge he thinks we should be gaining from them.)
An idea I've been toying with in my head, and discussed slightly at LW London yesterday: a sort of Snopes for "has person X professed opinion Y?"
Has Scott Alexander endorsed GamerGate? Did Eric Raymond say that hackers tend to be libertarian (or neoconservative, depending who you ask)? Did Eliezer say the singularity was too close to bother getting a degree?
I'll put further thoughts in replies to this comment.
I'd be wary of making a thing like that. Even ignoring the EU's bizarre "Right to be forgotten" law, people should be allowed to change their opinion, and such a website would incentivise consistency only. Not truth; consistency.
Are you sure that's what you want?
Mm, good point.
One of the things which inspired this idea was this thread: "okay, yes, it seems that Eliezer might well have said something like that, back in 2001". Eliezer already doesn't get to be forgotten. But if people are attacking him for things he said back in 2001, it seems like an improvement if we make it obvious that he said them back in 2001.
But for other people, I can see how this could be a bad thing to have. I'd like to be able to write "they said this in 2001, but in 2010 they said the opposite" and have people accept "okay, they changed their mind", but that doesn't seem entirely realistic.
I've updated from "probably good idea, unsure how valuable" to "possibly good idea, high variance".
There are also people who would believe that Eliezer is a neoreactionary if they were told it, but would also believe that Eliezer is not a neoreactionary if they were told that.
I guess I'm hoping that if this question comes up on a public forum, most people won't really know or care about Eliezer. The narrative in my head is along the lines of: someone says Eliezer is NRx, and someone else looks it up and says, no, Eliezer is not NRx, it says so right here. Then if the first person wants to convince anyone, their arguments become complicated and boring and nobody reads them.
This may be a naive question, which has a simple answer, but I haven't seen it. Please enlighten me.
I'm not clear on why an AI should have a utility function at all.
The computer I'm typing this on doesn't. It simply has input-output behavior. When I hit certain keys it reacts in certain, very complex ways, but it doesn't decide. It optimizes, but only when I specifically tell it to do so, and only on the parameters that I give it.
We tend to think of world-shaping GAI as an agent with its own goals, which it seeks to implement. Why can't it be more like a...
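One way to make the tool/agent distinction the question is gesturing at concrete (a toy sketch, not anyone's actual proposal): a tool maps inputs to outputs with no preferences over world states, while an agent ranks outcomes by a utility function and picks whichever available action scores highest.

```python
# Toy contrast between a "tool" and an "agent" (illustrative only).

# A tool: fixed input-output behavior, no goals of its own.
def spellchecker(word: str) -> str:
    corrections = {"teh": "the", "recieve": "receive"}
    return corrections.get(word, word)

# An agent: given a utility function over outcomes, it *chooses* the
# action whose predicted outcome scores highest.
def choose_action(actions, predict_outcome, utility):
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Hypothetical usage: an agent that "wants" the room at 22 degrees C.
actions = ["heater_on", "heater_off", "open_window"]
predict = {"heater_on": 24, "heater_off": 18, "open_window": 10}.get
utility = lambda temp_c: -abs(temp_c - 22)

print(choose_action(actions, predict, utility))  # heater_on
```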
This is a disturbing talk from Schmidhuber (who worked with Hutter and one of the founders of DeepMind at the Swiss AI lab).
I say disturbing because of the last minute, where he basically says we should be thankful for being the stepping stone to the next step in an evolution towards a world run by AIs.
This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by the more intelligent AI. And here it is coming from a pretty well-known AI researcher...
https://www.youtube.com/watch?v=KQ35zNlyG-o
Today I read a post by Bryan Caplan aimed toward effective altruists:
Question: How hard would it be to set up a cost-effective charity to help sponsor the global poor for immigration to Argentina? Responses from GiveWell, the broader Effective Altruism community, and Argentina experts are especially welcome.
For context, Argentina essentially allows immigration by anybody who can get an employer to sponsor them.
It's a common framing, and so I don't intend to pick on you, but I think the key issue isn't levels of trust, but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions with high trustworthiness to thrive in an otherwise low-trust/trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and through gradual growth and expansion, lead to a high-trust/trustworthiness society over time. It is not possible for people and institutions with high trust to thrive in an otherwise low-trust/trustworthiness society, as they will be taken advantage of.
You can't bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.
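A toy expected-value calculation (all payoffs and probabilities invented) of why that asymmetry holds: in a low-trustworthiness population, an agent who trusts indiscriminately bleeds value on every cheated exchange, while an agent who is trustworthy but selective can decline bad deals and still capture the gains from trade with fellow promise-keepers.

```python
# Toy model (numbers invented): each exchange can be honored or cheated.
# Payoffs per exchange: both honor +2, honor-while-cheated -3, no deal 0.

p_partner_honors = 0.2  # assumed low-trustworthiness society

# High-trust agent: deals with everyone, always honors.
ev_trusting = p_partner_honors * 2 + (1 - p_partner_honors) * (-3)

# High-trustworthiness agent: always honors, but uses reputation to deal
# only with known promise-keepers, declining everyone else (payoff 0).
ev_trustworthy = 2

print(f"trusting agent:    {ev_trusting:+.2f} per exchange")   # -2.00
print(f"trustworthy agent: {ev_trustworthy:+.2f} per exchange") # +2.00
```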
Anyone want to comment on a pilot episode of a podcast "Rationalists in Tech"? Please PM or email me. I'll ask for your feedback and suggestions for improvement on a 30-minute audio interview with a leading technologist from the LW community. This will allow me to plan an even better series of further interviews with senior professionals, consultants, founders, and executives in technology, mostly in software.
Discussion topics will include the relevance of CfAR-style techniques to the career and daily work of a tech professional; tips on
Many Interacting Worlds: Boffo or Bunk?
From my blogfeed: http://theness.com/neurologicablog/index.php/the-many-interacting-worlds-hypothesis/ , which links to http://www.nature.com/news/a-quantum-world-arising-from-many-ordinary-ones-1.16213 , which links to http://journals.aps.org/prx/abstract/10.1103/PhysRevX.4.041013 .
Does anyone with a better understanding of Schrödinger's equation(s) than I have know whether any of the above is worth paying attention to?
It's an appealing and easy enough hack that I'll plug my recent LessWrong discussion post Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission. Especially now that Black Friday week has started on Amazon.
TLDR: Requesting articles/papers/books that feature detailed/explicit "how-to" sections for bio-feedback/visualization/mental training for improving performance (mostly mental, but perhaps cognitive as well)
Years ago I saw an interview with Michael Phelps' (the Olympic swimmer) coach, in which he claims that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities; Phelps' ability to focus and visualize success is what set him apart.
I also saw a program about free divers (staying underwater for minutes) who slow ...
We're considering Meetup.com for the Tel Aviv LW group. (Also, the question was asked here.) It costs money, but we'd pay if it's worthwhile. I note that there are only 5 LessWrong groups at Meetup of which 2-3 are active. I'll appreciate feedback on the usefulness of Meetup.
Nice blog post about AI and existential risks by my friend and occasional LW poster. He was inspired by the disappointingly bad debate on Edge.org. Feel free to share if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.
"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."
I'm looking for an old post. Something about an extinct species of primate that may once have been nearly as smart as humans, but evolved over time to be much dumber, apparently because the energy costs of intelligence were maladaptive in its environment.
Can anyone point me in the right direction?
This site drains my energy. Too many topics seem interesting on the surface but are really just depressing and not actionable, with the big example being a bad singularity.
I have also found in my life that general, useful advice is rare. Most advice here seems either too vague or too specific to the poster. I did find at least one helpful book (by Scott Adams) and a couple of good posts, but think other sources could help at less cost. There are many smart people here, but if you look you can find something much more useful: smart people who have already achieved the particular goals you seek.
Bye.
Markus Ramikin's Semimonthly Dumb Question time. Since we seem to have both experts on physics and on editing wikipedia:
What do you think of the quality of the current Wikipedia article on heat death? Is it a fair treatment?
I keep seeing intelligent people talk about this concept like it's obviously useful and relevant, and to my layman mind it is, but the article sounds a little like it's basically bunk now, with the opening summary ending this way:
...it has been recognized by a respected authority on thermodynamics, Max Planck, that the phrase 'entropy
We shouldn't select our fitness gurus for whether they're of our tribe, we should select our fitness gurus for the effectiveness and truth of what they teach.
On that basis, do you have any reasons beyond "it's nerdy!" for recommending this website over any number of other ones, many of which are very good? If it's the gimmicky motivational approaches, I think LessWrong has that down pat - loads of us play HabitRPG and I'm pretty sure Beeminder's founders were some of our own.
Edit: For some reason my links ate themselves and the text between them so I took them out.
It seems that, in order to accomplish anything, one needs some combination of conscientiousness, charisma, and/or money*. It seems that each of the three can strengthen the others:
I am considering deleting all of my comments on Less Wrong (or, for comments I can't delete because they've been replied to, editing them to replace their text with a full stop and retracting them) and then deleting my account. Is there an easier way of doing that than by hand?
(In case you're wondering, that's because thanks to Randall Munroe the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.)
There's more to life than one's employment-- some mobs also go after their target's relatives.
Also, a fairly high proportion of people get highly distracted and upset by violent threats even if the likelihood of physical attacks has been low so far.
If there is a future Great Filter, it seems likely it would be one of two things:
1) a science experiment that destroys the world even though there was no reason to think that it would.
2) something analogous to nuclear weapons except easily constructable by an individual using easily obtainable materials, so that as soon as people have the knowledge, any random person can inflict immense destruction.
Are there any strategies that would guard against these possibilities?
I will donate N dollars to an x-risk organization within the next month. I tried to check what the effective altruism site recommended, but it required an email address. What organization should I donate to?
(N is predefined, and donating to the organization must not take longer than a standard online purchase.)
This is really worrying. Hubris and irrational geopolitical competition may create existential risks sooner than expected. http://motherboard.vice.com/read/how-the-pentagons-skynet-would-automate-war
Weird fictional theoretical scenario. Comments solicited.
In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).
We meet an alien race, who are way more powerful than us and they show their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them "We thought you were rational. WHY?..."
They reply " We follow a version of your m...
The whole scenario depends on a reification fallacy. You don't negotiate with, or engage in prediction theory games with, impersonal forces (and calling capitalism a force of nature seems a stretch to me).
Evolution is powerful, but that doesn't make it an intelligence, certainly not a superintelligence. We're not defecting against evolution, evolution just doesn't/can't play PD in the first place. But I'm also not sure how important the PD game is to this scenario, as opposed to the aliens just crushing us directly.
And as long as we're personifying evolution, an argument could be made that the triumph of human civilization would still be a win for evolution's "values", like survival and unlimited reproduction.
We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors.
I don't understand how this rule leads to the described behavior. As written, it suggests that the aliens would like to be crushed by their superiors...?
In business, almost all executive decisions (headcount and budget allocation, which unproven products to push ahead with aggressively, translating forecasts for macroeconomic risks into business-specific policies, who to promote to other executive level positions, etc.) are made with substantial uncertainty. Or to put it another way, any executive-level decision-maker would be paralyzed without strong priors. This is especially true in fast-changing or competitive markets, where the only way to collect more evidence without direct risk is to let your compe...
I thought this article about coaching in pickup techniques kind of misses the point:
I Took A Class on How to Pick Up Women—But I Learned More About Male Anxiety
http://www.alternet.org/culture/i-took-class-how-pick-women-i-learned-more-about-male-anxiety
I posted in response:
...For some reason we have this notion that the young man's "sexual debut," as the scientific literature about human sexuality calls it, happens as an organic developmental stage in the late teens, with a median age of around 17. If a 17 year old boy picked at random can probabl
I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating
Seeing sex as less "magical" could help reduce the tension around trying to get it.
(By the way, the whole article seems to me like: "Look, some people have less social skills -- let's make fun of them! Oh, they are trying to overcome their weakness -- wow, that's even funnier!" The elephant in the room is that in our culture it is taboo to express empathy towards men and boys.)
PUA coaches endorse the patriarchal view of women's weaknesses and vulnerabilities, and they teach men how to exploit these for sex by adopting the strategies of old-school cads.
I think most pickup coaches would object to this point of view, and it might make some of them quite unhappy. PUAs teach strategies that they believe will increase your attractiveness to the opposite sex. But it's silly to see attraction as a "weakness" or "vulnerability". Many people (women included, of course) want to feel attracted in the first place, especially to someone with other good qualities - they just don't get to make that choice most of the time! That's the one sense in which 'reduced agency' could be said to be relevant - but it doesn't negate the fact that agency really is quite heavily involved in any kind of pickup.
Studying computers, I have run into Turing's name occasionally. When I actually looked up the papers he wrote that seeded the concepts that carry his name, they made for a very refreshing read. To me they stand the test of time well. I knew that Turing committed suicide, and that it had to do with him being a homosexual. Now I have learned of suggestions that official institutions might have had a helping hand in that, and that there will be no official apology.
Turing was quite young and what he produced was pretty good stuff. I would have been really excited to read wha...
The Wikipedia article on the Ferguson crisis says,
"the population is only one-third white and about two-thirds black"
and then says,
"Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites"
which only appears anomalous if you ignore the base rate of finding a black driver vs. a white one. (Edited to add: other factors, like how many people in each group own/drive cars, may be relevant.)
There are many valid reasons to worry about racial tensions in that town (e.g. 48/53 police members are white), but the arrest rate is not one of them.
Statistics don't work the way you think they do. The number is already controlled for the base rate: "twice as likely to arrest during traffic stops" is a rate per stop, conditional on being stopped, not a raw count of arrests.
If you come to that conclusion, the thing you should do as a rationalist is "notice confusion". Then you would check the source and would see:
While black residents accounted for 67 percent of Ferguson’s population, black drivers accounted for more than 86 percent of the traffic stops made last year by the Ferguson Police Department, according to a report produced by the office of Missouri Attorney General Chris Koster.
If you want to learn the relevant statistical literacy skills to understand what the sentence "Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites" usually means, the relevant subject is regression analysis.
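To spell out the arithmetic with invented numbers (these are not the report's figures): "twice as likely to arrest during traffic stops" compares arrest rates conditional on being stopped, so the racial composition of the population, or even of the stops, cancels out of the comparison.

```python
# Hypothetical counts, invented for illustration (not from the report).
stops_black, arrests_black = 4000, 400
stops_white, arrests_white = 600, 30

rate_black = arrests_black / stops_black  # P(arrest | stopped, black) = 0.10
rate_white = arrests_white / stops_white  # P(arrest | stopped, white) = 0.05

# The ratio of per-stop arrest rates is what "twice as likely" reports;
# it is unchanged no matter how many drivers of each group are on the road.
print(f"black drivers: {rate_black:.1%} of stops end in arrest")
print(f"white drivers: {rate_white:.1%} of stops end in arrest")
print(f"ratio: {rate_black / rate_white:.1f}x")
```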
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.