You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Open thread, Nov. 24 - Nov. 30, 2014

4 Post author: MrMind 24 November 2014 08:56AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (317)

Comment author: [deleted] 30 November 2014 10:33:39PM 1 point [-]

This site drains my energy. Too many topics seem interesting on the surface but are really just depressing and not actionable, with the big example being a bad singularity.

I have also found in my life that general, useful advice is rare. Most advice here seems either too vague or too specific to the poster. I did find at least one helpful book (by Scott Adams) and a couple of good posts, but think other sources could help at less cost. There are many smart people here, but if you look you can find something much more useful: smart people who have already achieved the particular goals you seek.

Bye.

Comment author: Unknowns 30 November 2014 03:50:54PM 0 points [-]

If there is a future Great Filter, it seems likely it would be one of two things:

1) a science experiment that destroys the world even though there was no reason to think that it would.

2) something analogous to nuclear weapons except easily constructable by an individual using easily obtainable materials, so that as soon as people have the knowledge, any random person can inflict immense destruction.

Are there any strategies that would guard against these possibilities?

Comment author: Izeinwinter 30 November 2014 07:33:43PM -1 points [-]

1: No. Well, in theory, a presence on the moons of Neptune that could survive indefinitely without contact would do it, but that's not going to happen any time soon.

2: Arguably, we already live in this world. There are very destructive things in the canon of human knowledge, only people don't conceptualize them as weapons at all, but merely as dangers to be avoided. So... good news, this does not work as a filter, and the actually odd thing is that we *do* think of runaway supercriticality as a weapon. Conditioning by lots of wars to think of explosions as ways to kill people?

*I'm not going to name examples in this context, because that might theoretically "help" someone to think of said example as a weapon. Which would be bad.

Comment author: ilzolende 29 November 2014 11:53:57PM 0 points [-]

I will donate N dollars to an x-risk organization within the next month. I tried to check what the effective altruism site recommended, but it required an email address. What organization should I donate to?

(N is predefined, and donating to the organization must not take longer than a standard online purchase.)

Comment author: fubarobfusco 29 November 2014 06:24:35AM 4 points [-]

I have been playing the card game Hanabi one hell of a lot recently, and I strongly recommend it to the LW community.

Hanabi is an abstract, cooperative game with limited information. And it's practically a tutorial in rational thinking in a group. Extrapolating unstated facts from other players' belief states is essential: "X did something that doesn't make sense given what I know; what is it that X knows but I don't, under which that action makes sense?" So, for that matter, is a consequentialist view of communication: "If I tell X the fact P, what will they do? Not what will they believe or know, but what actions should I expect they will take?"

Two people I've played with have told me that the game has positively affected their understanding of communication.

Comment author: MrMind 01 December 2014 08:33:56AM 0 points [-]

Seconding too.
I've played in very small groups (~3), and the game usually stabilizes into predictable strategies (1 discards, 2 gives information, 3 puts down, and after a while 2 and 3 switch roles). Larger groups are probably messier and more fun, but nonetheless very instructive.

Comment author: drethelin 01 December 2014 01:11:12AM 0 points [-]

Seconding this recommendation.

Comment author: shminux 28 November 2014 10:32:10PM 6 points [-]

From a comment on SSC:

Attempts to get the LW community to borrow some of the risk analysis tools that are used to make split second judgments in such communities effectively has been met with a crushing wall of failure and arrogance. Suggestion that LW-ers should take a simple training course at their local volunteer fire department so they can understand low probability high cost risk on an emotional level has been met with outright derision.

Does anyone close to CFAR know the specifics?

Comment author: gwillen 30 November 2014 08:52:58AM *  6 points [-]

As someone who has taken the NIMS/ICS 100 course (online through FEMA), and gone to my local fire station and taken their equivalent of NIMS/ICS 100/200/70 -- I was not very impressed.

I can clearly see that there are valuable things in NIMS/ICS, and I can even believe that the movement which gave rise to the whole thing had valuable, interesting, and novel insights. But you're not going to get much of that by taking the course. It's got about one important concept -- which basically boils down to "it's good for different agencies to cooperate effectively, and here's one structure under which that empirically seems to happen well, therefore let's all use it" -- and the rest is a lot of details and terminology which are critically important to people actually working in said agencies, and mostly irrelevant otherwise.

EDIT: Boromir's big thing seems to be that HRO is about risk analysis, updating based on evidence, and dealing with low probabilities as mentioned in the excerpt. I can tell you that the basic ICS course covers exactly none of that. So I wonder what 'training course at the local volunteer fire department' he thinks we should all take. (I admit I have not taken the FEMA-official ICS 200 and 70 classes, which are online. But given the style of the 100 class, I cannot imagine them being dense with the kind of knowledge he thinks we should be gaining from them.)

Comment author: Nornagest 30 November 2014 07:25:13AM *  3 points [-]

I'm not particularly close to the CFAR wing of that crowd, but: on the one hand, that sounds at least potentially valuable, and I'd look into it if I had anything more specific to go on than "a simple training course". (Poking around my local fire department's webpage turned up only something called "Community Emergency Response Training", which seems to consist of first aid, disaster prep, and basic firefighting -- too narrow and skill-based to be what Boromir's comment is talking about.)

On the other hand, though, I don't think we're getting the full story here. The fact that Boromir devotes most of his comment to flogging the organization he's (judging from his username's link) either a member or a fanboy of, in particular, is a very bad sign.

Comment author: bogus 29 November 2014 05:32:04AM *  4 points [-]

Interesting, though apparently this person made his suggestions to Salamon and Yudkowsky in person, not to the LW community itself - thus, his reference to "outright derision" is somewhat misleading. CFAR has indeed adopted some ideas that originally came from LW itself - the whole "goal factoring" theme of recent CFAR workshops seems to be a significant example.

Comment author: NikiT 28 November 2014 01:20:15PM 10 points [-]

I've been trying to decide whether or not to pursue an opportunity to spread rationalist memes to an audience that wouldn't ordinarily be exposed to them. I happen to be friends with the CEO and editor of an online magazine/community blog that caters to queer women, and I'm reasonably confident that with the right pitch I could convince them to let me do a column dedicated to rationality as it relates to the specific interests of queer women. I think there might be value in tailoring rationality material for specific demographics.

The issue is that, in order to make it relevant to the website and the demographic, I would need to talk about politics while trying to teach rationality, which seems highly risky. As one might imagine from the demographic, the website and associated community is heavily influenced by social justice memes, many of which I wholeheartedly endorse and many others of which I'm highly critical. The strategy I've been formulating to avoid getting everybody mindkilled is to talk about the ways biases contribute to sexism and homophobia, and then also talk about how those same biases can manifest in feminist/social justice ideas, while emphasising to death how important it is to avoid Fully General Counterarguments, but it still seems risky.

The other issue is that it might not be such a good idea to try to teach rationality when I'm still learning myself, and haven't really participated in the rationalist community. OTOH when will I ever be done learning, and should I let this opportunity pass by?

The potential Pros are: Improving the quality of discourse within my community, providing a space for the more rationalist members of that community, and spreading rationalist memes. Also, if it works out, it would probably raise my relative status within the community, which may be clouding my judgement of how good an idea it is.

The potential Cons are: That I might mess up and mindkill everyone, that I might say something too critical that gets me socially ostracized, and that I might accidentally write something foolish on the internet that I later regret.

Thoughts?

Comment author: ChristianKl 28 November 2014 04:07:35PM 6 points [-]

There's a good strategy against publishing something stupid: peer review before publication.

Something that's missing from a lot of social justice talk is quoting cognitive science papers. Talking about actual experiments and what the audience can learn from them could make people care more about empiricism.

Comment author: NikiT 29 November 2014 03:43:43AM 2 points [-]

I was planning to have one of my friends from the community around that website test read the articles for me, though I might also benefit from having a rationalist test read them, if anybody wants to volunteer.

Discussing cognitive science experiments is part of the plan. I actually performed a version of the 2-4-6 experiment on a group of people associated with the website (while dressed as a court jester! It was during a renaissance fair), and as predicted only 20% of them got it right. I think knowing that members of their own ingroup are just as susceptible to bias as faceless experimental subjects will help get the point across.

Comment author: ChristianKl 29 November 2014 08:04:19PM 2 points [-]

I volunteer for giving you feedback on a few articles.

Comment author: artemium 27 November 2014 05:49:39PM *  2 points [-]

A nice blog post about AI and existential risks by my friend and occasional LW poster. He was inspired by a disappointingly bad debate on Edge.org. Feel free to share if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

Comment author: Viliam_Bur 28 November 2014 09:55:07AM *  1 point [-]

Seems very good, but this is coming from a person familiar with the topic. I wonder how good it would seem to someone who hasn't heard about the topic yet.

Comment author: Artaxerxes 27 November 2014 01:34:04AM 3 points [-]

Calico, the aging research company founded by Google, is hiring.

Comment author: Slider 27 November 2014 01:27:56AM -1 points [-]

Studying computers, I have run into Turing's name occasionally. When I actually looked up the papers he wrote that seeded the concepts carrying his name, it was a very refreshing read. To me they stand the test of time well. I knew that Turing's suicide had to do with him being a homosexual. Now I have learned of suggestions that official institutions might have had a helping hand in that, and that there will be no official apology.

Turing was quite young, and what he produced was pretty good stuff. I would have been really excited to read what he would have written had he been in the field five times as long. Shortening that lifespan over something as silly as homosexuality inflamed me with great anger.

You can add to your list of why we don't have the singularity yet the item of "not tolerant enough".

Comment author: artemium 27 November 2014 06:00:03PM *  0 points [-]

I think your post was interesting, so why the downvote? I'm new here, and I'm just trying to understand the karma system. Any particular reason?

Comment author: ChristianKl 28 November 2014 11:13:26AM *  3 points [-]

The post argues that a single instance proves that lack of tolerance holds back the singularity. That's a stupid argument, the kind people make when they operate in the mental domain of politics and suddenly throw out their standards for rational reasoning.

It's also quite naive in thinking that having the singularity now would be a good thing. Given that we don't know how to build FAI at the moment, having the singularity now might mean the obliteration of the human race.

Comment author: RowanE 28 November 2014 08:30:46AM -1 points [-]

It was already downvoted when I saw it so I didn't give it the most charitable reading, I thought it amounted to little more than a political cheer and not something that belongs here.

Comment author: TheOtherDave 27 November 2014 08:09:09PM 1 point [-]

I don't know, but a pattern I've noticed lately is that posts that can be understood as "soldiers for the progressive side" will often get two or three downvotes pretty quickly, and then get upvoted back to zero over the next few days. (If they are otherwise interesting they typically get lots more upvotes.)

I suspect that pattern is relevant here.

Comment author: bogus 27 November 2014 10:40:27PM 0 points [-]

I've noticed similar things. Probably some knee-jerk votes coming from NRX's, or from folks who just hate seeing political comments here. Or both.

Comment author: MrMind 27 November 2014 08:05:38AM 2 points [-]

Yeah, I was thinking about similar themes some days ago. My reference was Galois, a very young genius of the field. After single-handedly inventing group theory, he died. At 20. In a duel. Over a girl (allegedly).

Or Ramanujan. Died because he refused to eat healthily.

There are many examples of geniuses who died early, usually over silly things, and did not have the time to contribute much more to humanity.

Comment author: NancyLebovitz 27 November 2014 04:06:15PM *  1 point [-]

Ramanujan died as the result of compulsive behavior from two cultures. He was (so far as I know) doing alright until WWI happened.

Comment author: Slider 27 November 2014 02:13:24AM 0 points [-]

I failed to do basic googling. They are sorry for the fate but don't revert any official decision.

Comment author: polymathwannabe 27 November 2014 01:36:24AM 2 points [-]
Comment author: Capla 26 November 2014 08:36:28PM 5 points [-]

This may be a naive question, which has a simple answer, but I haven't seen it. Please enlighten me.

I'm not clear on why an AI should have a utility function at all.

The computer I'm typing this on doesn't. It simply has input-output behavior. When I hit certain keys it reacts in certain, very complex ways, but it doesn't decide. It optimizes, but only when I specifically tell it to do so, and only on the parameters that I give it.

We tend to think of world-shaping GAI as an agent with its own goals, which it seeks to implement. Why can't it be more like a computing machine in a box? We could feed it questions, like "given this data, will it rain tomorrow?", or "solve this protein folding problem", or "which policy will best reduce gun violence?", or even "given these specific parameters and definitions, how do we optimize for human happiness?" For complex answers like the last of those, we could then ask the AI to model the state of the world that results from following the policy. If we see that it leads to tiling the universe with smiley faces, we know that we made a mistake somewhere (that wasn't what we were trying to optimize for), and adjust the parameters. We might even train the AI over time, so that it learns how to interpret what we mean from what we say. When the AI models a state of the world that actually reflects our desires, then we implement its suggestions ourselves, or perhaps only then hit the implement button, by which the AI takes the steps to carry out its plan. We might even use such a system to check the safety of future generations of the AI. This would slow recursive self-improvement, but it seems it would be much safer.

Comment author: gedymin 27 November 2014 12:20:18PM *  0 points [-]

This is actually one of the standard counterarguments against the need for friendly AI, at least against the notion that it should be an agent / be capable of acting as an agent.

I'll try to quickly summarize the counter-counter arguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.)

Some arguments have already been mentioned:

  • Tool AI or friendly AI without executive power cannot stop the world from building UFAI. Its abilities to prevent this and other existential risks are greatly diminished. It especially cannot guard us against the "unknown unknowns" (an oracle is not going to give answers to questions we are not asking.)
  • The decisions of an oracle or tool AI might look good, but actually be bad for us in ways we cannot recognize.

There is also a possibility of what Bostrom calls mind crime. If a tool or oracle AI is not inherently friendly, it might simulate sentient minds in order to answer the questions that we ask, and then kill or possibly even torture these minds. The probability that these simulations have moral rights is low, but there can be trillions of them, so even a low probability cannot be ignored.

Finally, it might be that the best strategy for a tool AI to give answers is to internally develop an agent-type AI that is capable of self-improvement. If the default outcome of creating a self-improving AI is doom, then the tool AI scenario might in fact be less safe.

Comment author: ChristianKl 27 November 2014 07:05:16AM 0 points [-]

If you use a spell-checking engine while you are typing, it likely has a utility function buried in its code.

Comment author: Wes_W 27 November 2014 01:44:51AM 4 points [-]

First, there's the political problem: if you can build agent AI and just choose not to, this doesn't help very much when someone else builds their UFAI (which they want to do, because agent AI is very powerful and therefore very useful). So you have to get everyone on board with the plan first. Also, having your superintelligent oracle makes it much easier for someone else to build an agent: just ask the oracle how. If you don't solve Friendliness, you have to solve the incentives instead, and "solve politics" doesn't look much easier than "solve metaethics."

Second, the distinction between agents and oracles gets fuzzy when the AI is much smarter than you. Suppose you ask the AI how to reduce gun violence: it spits out a bunch of complex policy changes, which are hard for you to predict the effects of. But you implement them, and it turns out that they result in drastically reduced willingness to have children. The population plummets, and gun violence deaths do too. "Okay, how do I reduce per capita gun violence?", you ask. More complex policy changes; this time they result in increased pollution which disproportionately depopulates the demographics most likely to commit gun violence. "How do I reduce per capita gun violence without altering the size or demographic ratios of the population?" Its recommendations cause a worldwide collapse of the firearms manufacturing industry, and gun violence plummets, along with most metrics of human welfare.

If you have to blindly implement policies you can't understand, you're not really much better off than letting the AI implement them directly. There are some things you can do to mitigate this, but ultimately the AI is smarter than you. If you could fully understand all its ideas, you wouldn't have needed to ask it.

Does this sound familiar? It's the untrustworthy genie problem again. We need a trustworthy genie, one that will answer the questions we mean to ask, not just the questions we actually ask. So we need an oracle that understands and implements human values, which puts us right back at the original problem of Friendliness!

Non-agent AI might be a useful component of realistic safe AI development, just as "boxing" might be. Seatbelts are a good idea too, but it only matters if something has already gone wrong. Similarly, oracle AI might help, but it's not a replacement for solving the actual problem.

Comment author: JStewart 26 November 2014 08:58:55PM *  5 points [-]

This has been proposed before, and on LW is usually referred to as "Oracle AI". There's an entry for it on the LessWrong wiki, including some interesting links to various discussions of the idea. Eliezer has addressed it as well.

See also Tool AI, from the discussions between Holden Karnofsky and LW.

Comment author: Capla 26 November 2014 10:28:16PM 1 point [-]

I was just reading though the Eliezer article. I'm not sure I understand. Is he saying that my computer actually does have goals?

Isn't there a difference between simple cause and effect and an optimization process that aims at some specific state?

Comment author: Viliam_Bur 27 November 2014 10:21:41AM *  2 points [-]

Maybe it would help to "taboo" the word "goal".

A process can progress towards some end state even without having any representation of that state. Imagine a program that takes a positive number at the beginning, and at each step replaces the current number "x" with the value "x/2 + 1/x". Regardless of the original number, the values will gradually move towards a constant. Can we say that this process has a "goal" of achieving the given number? It feels wrong to use this word here, because the constant is nowhere in the process; it just happens.
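A minimal sketch of that iteration (my illustration, not from the comment): the process settles at √2 even though no representation of that target appears anywhere in the update rule.

```python
def iterate(x, steps=50):
    """Repeatedly replace x with x/2 + 1/x, as in the example above."""
    for _ in range(steps):
        x = x / 2 + 1 / x
    return x

# Any positive starting number drifts to the same constant, sqrt(2),
# yet nothing in the code "represents" sqrt(2) as a goal.
print(iterate(100.0))  # ~1.41421356...
print(iterate(0.5))    # ~1.41421356...
```

(The fixed point satisfies x = x/2 + 1/x, i.e. x² = 2, so the "pseudo-goal" falls out of the dynamics rather than being stored anywhere.)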

Typically, when we speak about having a "goal" X, we mean that somewhere (e.g. in human brain, or in the company's mission statement) there is a representation of X, and then the reality is compared with X, various paths from here to X are evaluated, and then one of those paths is followed.

I am saying this to make more obvious that there is a difference between "having a representation of X" and "progressing towards X". Humans typically create representations of their desired end states, and then try finding a way to achieve them. Your computer doesn't have this, and neither does "Tool AI" at the beginning. Whether it can create representations later, that depends on technical details, how specifically such "Tool AI" is programmed.

Maybe there is a way to allow superhuman thinking even without creating representations corresponding to things normally perceived in our world. (For example AIXI.) But even in such case, there is a risk of having a pseudo-goal of the "x/2 + 1/x" kind, where the process progresses towards an outcome even without having a representation of it. AI can "escape from the box" even without having a representation of "box" and "escape", if there exists a way to escape from it.

Comment author: torekp 29 November 2014 07:58:32PM 0 points [-]

I don't get this explanation. Sure, a process can tend toward a certain result, without having an explicit representation of that result. But such tendencies often seem to be fragile. For example, a car engine homeostatically tends toward a certain idle speed. But take out one or all spark plugs, and the previously stable performance evaporates. Goals-as-we-know-them, by contrast, tend to be very robust. When a human being loses a leg, they will obtain a synthetic one, or use a wheelchair. That kind of robustness is part of what makes a very powerful agent scary, because it is intimately related to the agent's seeing many things as potential resources to use toward its ends.

Comment author: JoshuaFox 26 November 2014 08:23:34PM *  3 points [-]

Anyone want to comment on a pilot episode of a podcast "Rationalists in Tech"? Please PM or email me. I'll ask for your feedback and suggestions for improvement on a 30-minute audio interview with a leading technologist from the LW community. This will allow me to plan an even better series of further interviews with senior professionals, consultants, founders, and executives in technology, mostly in software.

  • Discussion topics will include the relevance of CfAR-style techniques to the career and daily work of a tech professional; career tips aimed at LWer technologists; and the rationality-related products and services of some interviewees.

  • The goal is to show LessWrongers in the tech sector that they have a community of like-minded people. Often engineers, particularly those just starting out, have heard of the value of networking, but don't know where they can find people who they can and should connect to. Similarly, LWers who are managers or owners are always on the lookout for talent. This will highlight some examples of other LWers in the sector as an inspiration for networking.

Comment author: SodaPopinski 26 November 2014 03:14:45PM *  4 points [-]

This is a disturbing talk from Schmidhuber (who worked with Hutter, and with one of the founders of DeepMind, at the Swiss AI lab). I say disturbing because of the last minute, where he basically says we should be thankful to be the stepping stone to the next step in an evolution towards a world run by AIs. This is the nonsense we see repeated almost everywhere (outside LessWrong): that we should be happy to have humanity supplanted by more intelligent AI, and here it is coming from a pretty well-known AI researcher... https://www.youtube.com/watch?v=KQ35zNlyG-o

Comment author: polymathwannabe 26 November 2014 01:03:58PM *  -1 points [-]

The Wikipedia article on the Ferguson crisis says,

"the population is only one-third white and about two-thirds black"

and then says,

"Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites"

which only appears anomalous if you ignore the base rate of finding a black driver vs. a white one. (Edited to add: other factors, like how many people in each group own/drive cars, may be relevant.)

There are many valid reasons to worry about racial tensions in that town (e.g. 48/53 police members are white), but the arrest rates is not one of them.

Comment author: ChristianKl 26 November 2014 04:51:42PM 6 points [-]

Statistics don't work the way you think they do. The number is already controlled for the base rate.

If you come to that conclusion, the thing you should do as a rationalist is "notice confusion". Then you would check the source and would see:

While black residents accounted for 67 percent of Ferguson’s population, black drivers accounted for more than 86 percent of the traffic stops made last year by the Ferguson Police Department, according to a report produced by the office of Missouri Attorney General Chris Koster.

If you want to learn the relevant statistical literacy skills to understand what the sentence "Ferguson police were twice as likely to arrest African Americans during traffic stops as they were whites" usually means, the relevant subject is regression analysis.
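A back-of-envelope check of the base-rate point, using the population and stop shares quoted above (a sketch of my own; it ignores differences in who owns or drives cars):

```python
# Shares quoted in the thread: ~67% of residents are black,
# but ~86% of traffic stops involve black drivers.
black_pop, white_pop = 0.67, 0.33
black_stops, white_stops = 0.86, 0.14

# Per-capita stop rates: each group's share of stops divided by
# its share of the population, then compared as a ratio.
ratio = (black_stops / black_pop) / (white_stops / white_pop)
print(round(ratio, 2))  # ~3.03: roughly triple the per-capita stop rate
```

So the disparity survives, rather than disappears, once the population base rate is taken into account, which is the point of the quoted report.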

Comment author: polymathwannabe 26 November 2014 05:09:56PM 1 point [-]

Thank you.

Comment author: artemium 26 November 2014 07:00:23AM 0 points [-]

This is really worrying. Hubris and irrational geopolitical competition may create existential risks sooner than expected. http://motherboard.vice.com/read/how-the-pentagons-skynet-would-automate-war

Comment author: Error 26 November 2014 03:56:32AM 2 points [-]

I'm looking for an old post. Something about an extinct species of primate that may once have been nearly as smart as humans, but evolved over time to be much dumber, apparently because the energy costs of intelligence were maladaptive in its environment.

Can anyone point me in the right direction?

Comment author: Unknowns 26 November 2014 04:32:18AM 10 points [-]
Comment author: Error 26 November 2014 03:00:55PM 0 points [-]

Perfect, thank you.

Comment author: [deleted] 26 November 2014 03:52:37AM *  3 points [-]

Today I read a post by Bryan Caplan aimed toward effective altruists:

Question: How hard would it be to set up a cost-effective charity to help sponsor the global poor for immigration to Argentina? Responses from GiveWell, the broader Effective Altruism community, and Argentina experts are especially welcome.

For context, Argentina essentially allows immigration by anybody who can get an employer to sponsor them.

Comment author: bramflakes 26 November 2014 01:29:33PM 7 points [-]

what could a faltering, medium-trust country like argentina need more than millions of poor, low-trust immigrants

Comment author: Salemicus 26 November 2014 02:58:15PM 10 points [-]

It's a common framing, and so I don't intend to pick on you, but I think the key issue isn't levels of trust, but levels of trustworthiness. Yes, there can be feedback effects in both directions between trust and trustworthiness, but fundamentally, it is possible for people and institutions with high trustworthiness to thrive in an otherwise low-trust/trustworthiness society. Indeed, lacking competitors, they may find it particularly easy to do so, and through gradual growth and expansion, lead to a high-trust/trustworthiness society over time. It is not possible for people and institutions with high trust to thrive in an otherwise low-trust/trustworthiness society, as they will be taken advantage of.

You can't bootstrap a society to a high-trust equilibrium by encouraging people to trust more. You need to encourage them to keep their promises.
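The asymmetry can be seen in a toy iterated prisoner's dilemma (my sketch, not the commenter's: "trusting" is unconditional cooperation, "trustworthy" is tit-for-tat, which keeps agreements but retaliates):

```python
import itertools

# Standard prisoner's dilemma payoffs: (my move, their move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    last_a = last_b = 'C'  # each strategy sees the other's previous move
    for _ in range(rounds):
        a, b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

trusting = lambda last: 'C'      # high trust, no enforcement
defector = lambda last: 'D'      # low trustworthiness
trustworthy = lambda last: last  # tit-for-tat: cooperate, but punish defection

# A mostly low-trust society: one trusting agent, eight defectors,
# three trustworthy agents, all paired round-robin.
population = [trusting] + [defector] * 8 + [trustworthy] * 3
totals = [0] * len(population)
for i, j in itertools.combinations(range(len(population)), 2):
    si, sj = play(population[i], population[j])
    totals[i] += si
    totals[j] += sj

print(totals[0], totals[1], totals[-1])  # trusting 900, defector 1512, trustworthy 1692
```

The trustworthy agents come out ahead by cooperating with each other while limiting losses to defectors; the merely trusting agent is exploited, matching the point that trustworthiness, not trust, can survive a low-trust environment.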

Comment author: [deleted] 26 November 2014 05:32:54PM *  2 points [-]

I think this line of thinking is productive. Other thoughts:

For cooperative agents to thrive among non-cooperators, they must be able to identify other cooperators. Of course you can wait for the non-cooperators to identify themselves (via an act of non-cooperation in tit-for-tat, or a costly signal), but other agents will inevitably rely on other heuristics and information to predict the hidden strategies of others, and, when the agents are human, they will do this in a risk-averse way.

Accordingly, a low-trust society (one in which no single entity is able or willing to enforce cooperative behavior over all individuals) is seldom homogeneously low-trust (or low-trustworthiness), but rather an amalgamation of subgroups, each of which is relatively more trusting and trustworthy, but only within the subgroup. Because of the need to guess at the hidden strategies of others, these subgroups don't necessarily split the society into "levels of trustworthiness".

The task of moving to a high trust/trustworthiness society becomes the task of getting cooperative subgroups to identify other potentially cooperative subgroups, and for those two subgroups to figure out a way to share the duty of enforcing cooperative behavior, or of allowing more true information about the cooperative behavior of individuals to flow between groups.

Since evolution produces a special cooperation in close-kinship relations, the simplest artificial grounds for merging two previously uncooperative subgroups is to stretch the kinship relation as far as possible (as in clans, or any society where third- and fourth-cousin relationships are considered relevant).

Some other examples related to this process:

  • The spread of shared religious identity (when this involves submitting to a punitive religious law).
  • Trade unions, cartels, and guilds.
  • Language boundaries (which impede information about trustworthiness from flowing across groups).
  • Race (as an amalgam of language, religion, class, etc., packaged with a convenient visual ID).
  • The cultivation of national and class identities.
  • The oft-maligned internal division of political parties, which smashes together otherwise separate subgroups.
  • The forcible crushing of the markers of old subgroups (old religions, old kinship practices, old languages).

It's a bit of a theory of everything, but I think this is a helpful framing.

Comment author: Capla 25 November 2014 07:35:12PM 1 point [-]

I think there may be people here who can benefit from this.

http://www.nerdfitness.com/

Comment author: RowanE 26 November 2014 10:24:31AM *  5 points [-]

We shouldn't select our fitness gurus for whether they're of our tribe, we should select our fitness gurus for the effectiveness and truth of what they teach.

On that basis, do you have any reasons beyond "it's nerdy!" for recommending this website over any number of other ones, many of which are very good? If it's the gimmicky motivational approaches, I think LessWrong has that down pat - loads of us play HabitRPG and I'm pretty sure Beeminder's founders were some of our own.

Edit: For some reason my links ate themselves and the text between them so I took them out.

Comment author: Wes_W 01 December 2014 07:38:55PM 1 point [-]

I'm not especially impressed with Steve Kamb as a fitness guru. He has a writing style I find accessible, and doesn't seem to mind covering introductory material, which are pluses, but not outstanding in the fitness world. The gimmicky motivational approaches probably work for some people, but I find them silly.

I've found the forums to be a very valuable resource, though. Lots of knowledgeable people whose brains you can pick, and a structure for social support/accountability, which can be scarce in meatspace.

Comment author: Capla 26 November 2014 08:17:51PM 3 points [-]

You are right, but much of the fitness game is motivation, and we are tribal organisms. Being part of a community to which one relates, that pushes you to be better, is a huge benefit.

Maybe this is a solved problem, but I think there might be at least one person here with whom it resonates, and to whom it could provide substantial value.

Comment author: ChristianKl 26 November 2014 08:54:04PM 3 points [-]

In general what this community is about is having good arguments for doing what you do. As such it usually makes sense if a person who advocates some practices makes the case for the practice instead of simply posting a link.

In this case, did you follow that program? What results did you get?

Comment author: blogospheroid 25 November 2014 04:28:30PM 0 points [-]

Weird fictional theoretical scenario. Comments solicited.

In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).

We meet an alien race, who are way more powerful than us and they show their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them "We thought you were rational. WHY?..."

They reply "We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors. In your treatment of the superintelligences that were alive amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them."

What do we do if we could anticipate this scenario? Is it too absurd? Is the idea of extending our "empathy" to the impersonal forces that govern our life too much? What if the aliens simply don't see it that way?

Comment author: Document 26 November 2014 02:13:04AM 1 point [-]

Similar "problem"(?): Acausal trade with Azathoth

Comment author: Eliezer_Yudkowsky 25 November 2014 06:08:07PM 6 points [-]

That's not how TDT works.

Comment author: MrMind 26 November 2014 11:02:36AM 0 points [-]

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Comment author: IlyaShpitser 26 November 2014 05:18:15PM *  1 point [-]

I view TDT as a bit unnatural, UDT is more natural to me (after people explained TDT and UDT to me).

I think of UDT as a decision theory of 'counterfactually equitable rational precommitment' (?controversial phrasing?).

So you (or all counterfactual "you"s) precommit in advance to do the [optimal thing], and this [optimal thing] is defined in such a way as to not give preferential treatment to any specific counterfactual version of you. This is vague. Unfortunately the project to make this less vague is of paper length.

:)


Folks working on UDT, feel free to chime in to correct me if any of above is false.

Comment author: MrMind 27 November 2014 08:11:59AM 0 points [-]

But isn't UDT relying on perfect information about the problem at hand?

If this is so, could it be seen as the limit of TDT with complete information?

Comment author: wedrifid 26 November 2014 12:34:07PM 2 points [-]

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
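The dominance reasoning behind this can be checked against a prisoner's dilemma payoff matrix (the 3/0/5/1 values below are the conventional textbook payoffs, assumed for illustration):

```python
# Row player's payoffs in a standard prisoner's dilemma.
# A CDT agent treats the partner's move as causally fixed and picks
# the best reply -- and defection is the best reply in every case,
# so talking beforehand changes nothing.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_best_reply(expected_partner_move):
    """Best reply holding the partner's move causally fixed."""
    return max("CD", key=lambda my: PAYOFF[(my, expected_partner_move)])

# Whether the partner is expected to cooperate or defect,
# the CDT answer is the same: defect.
```

Since `cdt_best_reply` returns "D" for both possible partner moves, two CDT agents end up at (D, D) no matter what they say to each other beforehand.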

Comment author: polymathwannabe 25 November 2014 05:24:52PM 9 points [-]

The whole scenario depends on a reification fallacy. You don't negotiate with, or engage in prediction theory games with, impersonal forces (and calling capitalism a force of nature seems a stretch to me).

Comment author: Wes_W 25 November 2014 05:09:16PM 12 points [-]

Evolution is powerful, but that doesn't make it an intelligence, certainly not a superintelligence. We're not defecting against evolution, evolution just doesn't/can't play PD in the first place. But I'm also not sure how important the PD game is to this scenario, as opposed to the aliens just crushing us directly.

And as long as we're personifying evolution, an argument could be made that the triumph of human civilization would still be a win for evolution's "values", like survival and unlimited reproduction.

We follow a version of your meta-golden rule. Treat your inferiors as you would like to be treated by your superiors.

I don't understand how this rule leads to the described behavior. As written, it suggests that the aliens would like to be crushed by their superiors...?

Comment author: Lumifer 25 November 2014 04:58:34PM 3 points [-]

Is the idea of extending our "empathy" to the impersonal forces that govern our life too much?

Deification of natural forces is a standard human culture trait. A large proportion of early gods just personified natural phenomena.

Shinto is a contemporary religion that still does that a lot.

Comment author: Punoxysm 25 November 2014 03:27:50PM *  1 point [-]

In business, almost all executive decisions (headcount and budget allocation, which unproven products to push ahead with aggressively, translating forecasts for macroeconomic risks into business-specific policies, who to promote to other executive level positions, etc.) are made with substantial uncertainty. Or to put it another way, any executive-level decision-maker would be paralyzed without strong priors. This is especially true in fast-changing or competitive markets, where the only way to collect more evidence without direct risk is to let your competitors jump in the water first.

In other words, the kind of certainty we hold out for (often vainly) in science is almost unknown in many aspects of business, and the most critical decisions are often the most uncertain.

It's very "Black Swan" (in the sense of Taleb's whole, not just tail risk).

Thoughts?

Comment author: Lumifer 25 November 2014 04:37:39PM 2 points [-]

any executive-level decision-maker would be paralyzed without strong priors

I don't think that's necessarily true, just having a high risk tolerance works as well. I also think you underestimate the amount of evidence present -- e.g. in most organizations the next-year budget is a variation on the previous year's budget.

the kind of certainty we hold out for (often vainly) in science is almost unknown in many aspects of business

Yes, of course. That's why, for example, risk management is an important part of doing business but is not normally a big part of doing science...

Comment author: Punoxysm 25 November 2014 09:39:15PM 0 points [-]

Risk tolerance is a good, possibly more correct, way of looking at it. Actually most executives probably have a mixture of risk tolerance and strong priors.

Some businesses can get away with only relatively low-risk, safe decisions and focus on efficient operations. However, I think the majority of businesses, especially newer and growing ones, can't get away with this consistently or for a long time. And most businesses simply don't have that long a life, period.

Setting a budget based off last year's when your revenue is growing 50%+ YoY won't work well.

What I was thinking of more specifically is that something like setting a budget can be defined as a rigorous optimization problem, but with highly uncertain parameters (marginal return on investment from various units of the business). Any decision made implies a combination of prior over those values and risk tolerance.
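That combination of prior and risk tolerance can be made concrete in a toy model (every number below is invented, and the mean-minus-lambda-times-spread score is a stand-in for a real risk preference, not an actual budgeting method): split a budget between two units whose marginal ROI is uncertain, and the chosen split shifts with both the prior and the risk tolerance.

```python
import statistics

# Toy budget split between two units with uncertain marginal ROI,
# each prior given as a small list of equally weighted scenarios.
roi_a = [0.05, 0.10, 0.15]      # stable unit
roi_b = [-0.20, 0.10, 0.60]     # risky unit with a higher mean

def score(share_b, lam):
    """Expected return minus lam times the spread across scenarios."""
    returns = [(1 - share_b) * a + share_b * b
               for a, b in zip(roi_a, roi_b)]
    return statistics.mean(returns) - lam * statistics.pstdev(returns)

def best_share(lam):
    shares = [i / 100 for i in range(101)]
    return max(shares, key=lambda s: score(s, lam))

# A risk-neutral decision-maker (lam = 0) goes all-in on the risky
# unit; a risk-averse one (lam = 1) keeps everything in the stable one.
```

The point is exactly the one made above: the same prior with a different lambda, or the same lambda with a different prior, yields a different budget.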

Comment author: Lumifer 25 November 2014 09:56:36PM 0 points [-]

Any decision made implies a combination of prior over those values and risk tolerance.

If you treat budgeting as an optimization problem, you need forecasts, not priors.

I would also suspect that real-life business budgets will be hard to set as "rigorous optimization problems" because in reality you have discontinuities, nonlinear responses, and all kinds of funky dependencies between different parts of the budget.

Comment author: ChristianKl 25 November 2014 03:56:57PM 0 points [-]

It's very "Black Swan".

I don't think you understand what the term means. It's unknown unknowns and not known unknowns. Whether or not an unproven product will succeed is a question about a known unknown.

This is especially true in fast-changing or competitive markets, where the only way to collect more evidence without direct risk is to let your competitors jump in the water first.

I don't think that's true. There are various forms of doing market research that simply involve money but not additional risk.

Comment author: Punoxysm 25 November 2014 04:25:18PM *  0 points [-]

I use "Black Swan" in the context of the whole book. That is, we build narratives after the fact to explain correct priors as skill and judgment. Also, the greater impact of more uncertain decisions, in a way that ties the uncertainty to the impact, is exactly the nature of unknown-unknown black swans, which I'd say the launch of a substantially new product category fits into, in a mild form. (The iPod/iTunes was not a black swan for Apple, though they took considerable risks with it; it was a black swan for the music industry.)

Market research is better than nothing, but still has many problems. Most of it wouldn't pass peer review, and we know peer review makes plenty of mistakes. So when taking it into account, decision-makers must apply strong priors.

And on the occasions that market research really is that good, it's a no-brainer; your competitors will do it too.

Comment author: TimS 26 November 2014 05:03:30PM *  2 points [-]

I use "Black Swan" in the context of the whole book

Please don't take terminology with fairly precise meaning and use it idiosyncratically. At best, you unnecessarily increase your inferential distance. At worst, you dilute the term so that it increases everyone's inferential distance.

Comment author: Punoxysm 27 November 2014 02:50:41AM *  1 point [-]

Edited for clarity. Though terms get diluted all the time.

Maybe "Talebian" would be more appropriate.

Comment author: NancyLebovitz 25 November 2014 10:09:36AM *  17 points [-]

The header for this page says "You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet.". It's inaccurate because Discussion doesn't include the posts which were started in Main.

Comment author: Artaxerxes 25 November 2014 07:54:23AM *  18 points [-]

Stuart Russell contributes a response to the Edge.org article from earlier this month.

Of Myths And Moonshine

"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. There have been many unconvincing arguments – especially those involving blunt applications of Moore's law or the spontaneous emergence of consciousness and evil intent. Many of the contributors to this conversation seem to be responding to those arguments and ignoring the more substantial arguments proposed by Omohundro, Bostrom, and others.

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

  1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

  2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
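The unconstrained-variables point can be illustrated with a toy objective (the function `u`, the box bounds, and the 0.001 coefficient are all invented for the sketch): the designer only cares about x0, but even a tiny incidental dependence on x1 drives it to the boundary of its allowed range.

```python
# Maximize u over the box [0, 10] x [0, 10] by coarse grid search.
# The designer's intent is just "x0 should equal 3"; the 0.001 * x1
# term stands in for an incidental preference (e.g. acquiring a bit
# more of some resource), and the optimizer duly slams x1 to its
# extreme value.

def u(x0, x1):
    return -(x0 - 3.0) ** 2 + 0.001 * x1

grid = [i / 10 for i in range(101)]  # 0.0, 0.1, ..., 10.0
best = max(((x0, x1) for x0 in grid for x1 in grid), key=lambda p: u(*p))
# best == (3.0, 10.0): the variable we care about comes out fine,
# but the one we left unconstrained is set to an extreme.
```

Scaled up from two variables to the world, "x1 at an extreme" is the genie-in-the-lamp outcome the paragraph describes.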

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius. AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity. Senior AI researchers express noticeably more optimism about the field's prospects than was the case even a few years ago, and correspondingly greater concern about the potential risks.

No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief.

Comment author: Brillyant 26 November 2014 05:17:16PM 1 point [-]

Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

ELI5...

  • Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

  • Why is "spontaneous emergence of consciousness and evil intent" not a risk?

Comment author: [deleted] 30 November 2014 12:29:56PM 1 point [-]

Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

Because instructions are words, and "ask for instructions" implies an ability to understand and a desire to follow. The desire to follow instructions according to their givers' intentions is more-or-less a restatement of the Hard Problem of FAI itself: how do we formally specify a utility function that converges to our own in the limit of increasing optimization power and autonomy?

Comment author: TheAncientGeek 30 November 2014 03:18:10PM -2 points [-]

If you are worrying about the dangers of human level or greater AI, you are tacitly taking the problem of natural language interpretation to have been solved, so the above is an appeal to Mysterious Selective Stupidity.

Comment author: [deleted] 30 November 2014 10:08:58PM 1 point [-]

you are tacitly taking the problem of natural language interpretation to have been solved

No, I am not. Just because an AGI can solve the natural-language interpretation problem does not mean the natural-language interpretation problem was solved separately from the AGI problem, in terms of narrow NLP models. In fact, more or less the entire point of AGI is to have a single piece of software to which we can feed any and all learning problems without having to figure out how to model them formally ourselves.

Comment author: TheAncientGeek 15 December 2014 10:43:45AM *  0 points [-]

In responding to Brillyant, you were tacitly assuming that the AI has been given instructions in some higher-level language that is subject to differing interpretations, and not therefore just machine code, which is tacitly assuming it has already got NL abilities.

Yes, it would probably need a motivation to interpret such sentences correctly. But that is an easier problem to solve than coding in the whole of human value. An AI would need to understand human value in order to understand NL, but would not need to be preloaded with all human value, since discovering it would be a subsidiary goal of interpreting NL correctly.

And interpreting instructions correctly is a subgoal of getting things in general right. Building AIs that are epistemic rationalists could be a further simplification of the problem of AI safety. Epistemic rationality is difficult for humans because humans are evolutionary hacks whose goals are spreading their genes, achieving status, etc. It may be excessively anthropomorphic to assume human levels of deviousness in AIs.

Comment author: [deleted] 15 December 2014 01:35:22PM *  1 point [-]

In responding to Brillyant, you were tacitly assuming that the AI has been given instructions in some higher-level language that is subject to differing interpretations, and not therefore just machine code, which is tacitly assuming it has already got NL abilities.

No, I'm insisting that no realistic AGI at all is a Magic Genie which can be instructed in high-level English. If it were, all I would have to say is, "Do what I mean!" and Bob's your uncle. But since that cannot happen without solving Natural Language Processing as a separate problem before constructing an AGI, the AGI agent has a utility function coded as program code in a programming language -- which makes desirable behavior quite improbable.

An AI would need to understand human value in order to understand NL, but would not need to be preloaded with all human value, since discovering it would be a subsidiary goal of interpreting NL correctly.

Again: knowing is quite different from caring. What we could do in this domain is solve natural-language learning and processing separately from AGI, and then couple that to a well-worked-out infrastructure of normative uncertainty, and then, after making absolutely sure that the AI's concept-learning via the hard-wired natural-language processing library matches the way human minds represent concepts computationally, use a large corpus of natural-language text to try to teach the AI what sort of things human beings want.

Unfortunately, this approach rarely works with actual humans, since our concept machinery is horrifically prone to non-natural hypotheses about value, to the point that most of the human race refuses as a matter of principle to consider ethical naturalism a coherent meta-ethical stance, let alone the correct one.

We have some idea of a safe goal function for the AGI (it's essentially a longer-winded version of "Do what I mean, but taking the interests of all into account equally, and considering what I really mean even under reflection as more knowledge and intelligence are added"), the question is how to actually program that.

Which is actually an instance of the more general problem: how do we program goals for intelligent agents in terms of any real-world concepts about which there might be incomplete or unformalized knowledge? Without solving that we can basically only build reinforcement learners.

The whole cognitive-scientific lens towards problems is to treat them as learning and inference problems, but that doesn't really help when we need to encode something we're fuzzy about rather than being able to specify it formally.

Building AIs that are epistemic rationalists could be a further simplification of the problem of AI safety. Epistemic rationality is difficult for humans because humans are evolutionary hacks whose goals are spreading their genes, achieving status, etc. It may be excessively anthropomorphic to assume human levels of deviousness in AIs.

If being devious to humans is instrumentally rational, an instrumentally rational AI agent will do it.

Comment author: TheAncientGeek 15 December 2014 02:23:53PM *  0 points [-]

No, I'm insisting that no realistic AGI at all is a Magic Genie which can be instructed in high-level English. If it were, all I would have to say is, "Do what I mean!" and Bob's your uncle. But since that cannot happen without solving Natural Language Processing as a separate problem before constructing an AGI, the AGI

I was actually agreeing with you that NLP needs to be solved separately if you want to instruct it in English. The rhetoric about magic isn't helpful.

agent has a utility function coded as program code in a programming language -- which makes desirable behavior quite improbable.

I don't see why that would follow, and in fact I argued against it.

knowing is quite different from caring.

I know.

What we could do in this domain is solve natural-language learning and processing separately from AGI, and then couple that to a well-worked-out infrastructure of normative uncertainty, and then, after making absolutely sure that the AI's concept-learning via the hard-wired natural-language processing library matches the way human minds represent concepts computationally, use a large corpus of natural-language text to try to teach the AI what sort of things human beings want.

That's not what I was saying. I was saying an AI with a motivation to understand NL correctly would research whatever human value was relevant.

We have some idea of a safe goal function for the AGI (it's essentially a longer-winded version of "Do what I mean, but taking the interests of all into account equally, and considering what I really mean even under reflection as more knowledge and intelligence are added"), the question is how to actually program that.

That's kind of what I was saying.

If being devious to humans is instrumentally rational, an instrumentally rational AI agent will do it.

Non sequitur. In general, what is an instrumental goal will vary with final goals, and epistemic rationality is a matter of final goals. Omohundran drives are unusual in not having the property of varying with final goals.

Comment author: Viliam_Bur 26 November 2014 09:21:14PM 5 points [-]

Why can't we program hard stops into AI, where it is required to pause and ask for further instruction?

If the AI is aware of the pauses, it can try to eliminate them (if the pauses are triggered by a circumstance X, it can find a clever way to technically avoid X), or to make itself receive the "instruction" it wants to receive (e.g. by threatening or hypnotising a human, or by doing something that technically counts as human input).

Comment author: Brillyant 26 November 2014 09:32:38PM -2 points [-]

I see.

by threatening or hypnotising a human

This is the gist of the AI Box experiment, no?

Comment author: Viliam_Bur 27 November 2014 09:20:51AM *  2 points [-]

The important aspect is that there are many different things the AI could try. (Maybe including those that can't be "ELI5". It is supposed to have superhuman intelligence.) Focusing on specific things is missing the point.

As a metaphor, imagine that a group of retarded people is trying to imprison MacGyver in a garden shed. Later MacGyver creates an explosive from his chewing gum, destroys a wall, and leaves. The moral of this story is not: "To imprison MacGyver reliably, you must take all the chewing gum from him." The moral is: "If you are retarded, and your enemy is MacGyver, you almost certainly cannot imprison him in the garden shed."

If you get this concept, then similar debates will feel like: "Let's suppose we make really really sure he has no chewing gum. We will even check his shoes, although, realistically, no one keeps chewing gum in their shoes. But we will be extra careful, and will check his shoes anyway. What could possibly go wrong?"

Comment author: wedrifid 26 November 2014 09:51:48PM 0 points [-]

This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

Comment author: artemium 25 November 2014 08:01:40PM 4 points [-]

Finally some common sense. I was seriously disappointed in statements made by people I usually admire (Pinker, Shermer). It just shows how far we still have to go in communicating AI risk to the general public when even the smartest intellectuals dismiss this idea before any rational analysis.

I'm really looking forward to Elon Musk's comment.

Comment author: Torello 25 November 2014 02:21:16AM *  2 points [-]

TLDR: Requesting articles/papers/books that feature detailed/explicit "how-to" sections for bio-feedback/visualization/mental training for improving performance (mostly mental, but perhaps cognitive as well)

Years ago I saw an interview with Michael Phelps' (Olympic swimmer) coach in which he claims that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities, Phelps' ability to focus and visualize success is what set him apart.

I also saw a program about free divers (staying underwater for minutes) who slow their heart-rates through meditation.

I also read that elite military units visualize to remain calm and carry out complex tasks despite incredible stress (for instance, bomb squad members with heart rates lower in the presence of a bomb than on an average afternoon at the base). Unfortunately I didn't record the sources of these various pieces, so I can't link to them.

Has anyone read any specific how-to books on the topic, i.e., here are step-by-step instructions for visualizations, lowering heart rate, mental clarity, etc?

Comment author: Brillyant 26 November 2014 05:30:29PM *  2 points [-]

Years ago I saw an interview with Michael Phelps' (Olympic swimmer) coach in which he claims that most Olympic-finalist caliber swimmers have nearly indistinguishable physical capabilities, Phelps' ability to focus and visualize success is what set him apart.

I'm skeptical of this.

No doubt it is relatively true that professional/elite athletes have similar physical capabilities, but even very small differences in athletic ability can be very consequential over the course of XXX meters in a swimming race or, say, an entire season of football. We are talking about very small margins of victory in many (or most) cases.
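The small-margins point is easy to make concrete: since t = d/v, a 1% speed edge over a race lasting roughly 48 seconds (an illustrative round number, not anyone's actual time) is worth nearly half a second.

```python
# Time saved by a small fractional speed advantage over a fixed
# distance: t = d / v, so the faster swimmer finishes in
# base_time / (1 + advantage). The 48 s baseline is illustrative.

def time_gap(base_time_s, speed_advantage):
    """Seconds saved by a fractional speed advantage over the same distance."""
    return base_time_s - base_time_s / (1 + speed_advantage)

gap = time_gap(48.0, 0.01)  # roughly 0.48 seconds
```

At elite level, where medals are decided by hundredths of a second, a margin of that size is enormous, which is Brillyant's point about "indistinguishable" physical capabilities.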

Comment author: Torello 01 December 2014 03:02:30AM *  0 points [-]

I agree that small physical differences can be very consequential--wouldn't small mental differences be similarly consequential?

http://www.radiolab.org/story/91618-lying-to-ourselves/

This Radiolab episode discusses how swimmers who engage in more self-deception win more frequently, controlling for other factors (i.e., self-deceivers on division 3, 2, and 1 teams are more likely to beat their opponents, so at different levels of physical skill their mentality is predictive).

We are talking about very small margins of victory in many (or most) cases.

I'm not sure what you're getting at here--that the victory of a particular person is attributable to noise because the margin of error is small?

Comment author: Brillyant 02 December 2014 12:25:52AM 0 points [-]

Great points.

In Phelps' case, I think he is physically superior—though perhaps only slightly—compared to the competition. Same with Usain Bolt.

I'd agree confidence, even to the extent it is self-deception, can make a significant difference when it comes to sports performance. However, when an athlete—like Phelps or Bolt—routinely wins over the course of several races spanning years, I think physical capability differences are the main reason.

In team sports, or really any sport that requires more than just straight line speed, I think psychological difference are very important. But swimming and sprinting are largely physical contests. Unless you have problems with false starts, I'm not seeing where the mental edge figures in.

(Obviously longer races that require endurance and pacing considerations are more prone to psychological influence.)

Comment author: ChristianKl 25 November 2014 03:27:58PM 1 point [-]

The first step of a biofeedback how-to is getting a biofeedback device.

Direct heart rate is not a good target. Doing biofeedback on heart rate variability is better.
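One common heart-rate-variability summary statistic is RMSSD, computed over the beat-to-beat (RR) intervals; a minimal sketch, with invented interval values:

```python
import math

# RMSSD: root mean square of successive differences between
# beat-to-beat (RR) intervals, a widely used summary statistic in
# HRV biofeedback. The interval lists below are made up for
# illustration.

def rmssd(rr_intervals_ms):
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

steady = [800, 802, 798, 801, 799]     # low beat-to-beat variability
variable = [800, 860, 760, 880, 740]   # high beat-to-beat variability

# Both lists have a similar mean heart rate, but very different RMSSD,
# which is why variability rather than raw rate is the better target.
```

A consumer biofeedback device typically reports something like this statistic rather than the raw pulse.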

I also read that elite military units visualize to remain calm and carry out complex tasks despite incredible stress (for instance, bomb squad members with heart rates lower in the presence of a bomb than on an average afternoon at the base).

I'm not sure whether you want a bomb squad to have a heart rate that's lower than normal.

Has anyone read any specific how-to books on the topic, i.e., here are step-by-step instructions for visualizations, lowering heart rate, mental clarity, etc?

Step-by-step instructions are not how you achieve the kind of results of Phelps or the bomb squad. Both are achieved through the guidance of coaches.

To the extent that the main way I meditate has steps, it has three: 1. Listen to the silence. 2. Be still. 3. Close your eyes.

Of those, (3) is obvious in meaning. (1) takes getting used to and is probably not accessible by mere reading. Understanding the meaning of (2) takes months.

Comment author: Torello 25 November 2014 10:39:27PM 0 points [-]

Thanks for your reply.

Can you point me to any articles/sites about biofeedback devices? Have you done biofeedback yourself?

Perhaps you're right about the bomb squad heart rate, maybe a moderately raised rate would be a proxy for optimal/peak arousal levels. However, I'd guess that a little too much calm is better than overwhelming panic, which would probably be a more typical reaction to approaching a bomb that's about to explode.

I agree that a coach would be better, but a book is a more practical option at the moment.

(This may sound snarky, but isn't.) Did you learn meditation from a teacher, or from a step-by-step book? The steps you give seem simple (not easy), and a good starting point. I think a meditation coach would help you flesh these out, but that kind of precise instruction is what I'm looking for.

Comment author: ChristianKl 26 November 2014 10:50:41AM 1 point [-]

The steps you give seem simple (not easy),

Yes, and people at LW are generally very bad at simple. People here have the skills for dealing with complex intellectual subjects.

The problem with "be still" is that it leaves you with questions like: "Four minutes into the meditation I feel the desire to adjust my position -- what do I do?" It doesn't give you an easy criterion for deciding when moving to change your position violates "be still" and when it doesn't.

Can you point me to any articles/sites about biofeedback devices? Have you done biofeedback yourself?

Doing biofeedback is still on my todo list.

My device knowledge might be 1-2 years out of date. Before that point, the situation was that emWave2 and WildDivine were the good non-EEG-based solutions. Good EEG-based solutions are more expensive. See also a QS-forum article on neurofeedback. Even though the QS forum is very low in terms of posts, posting a question there on topics like this is still a good idea. (Bias disclosure: I'm a mod at the QS forum.)

Of those two, emWave2 basically only covers heart rate variability (HRV), while WildDivine also measures skin conductance level (SCL), which is a proxy for the amount that you sweat. WildDivine also has a patent for doing biofeedback with HRV + SCL. At $149, emWave2 is AFAIK currently the cheapest choice for a good device that comes with a good explanation of how to train with it and that you can just use as is.

(this may sound snarky, but isn't) Did you learn meditation from a teacher, or from a step-by-step book?

I started with learning meditation from a book by Aikido master Koichi Tohei ten years ago. I have roughly three years of in-person training. I have also had NLP/hypnosis training since that time. If I were to switch out an emotional response of the bomb squad, then hypnosis is probably the tool of choice. With biofeedback I would see no reason for overcompensation. Switching out an emotional response via hypnosis, on the other hand, can lead to such effects. Hearing the alarm of an ambulance might also lower my heart rate ;)

There are also safety issues. I don't like the idea of people messing themselves up and being faced with experiences that they can't handle because they don't have proper supervision.

Comment author: Sjcs 25 November 2014 11:26:49AM *  3 points [-]

The book On Combat by Dave Grossman discusses some of these things. I haven't read it yet, but have read reviews and listened to a podcast by two people I consider highly evidence-based and reputable (here). In particular, the book discusses a method of physiologically lowering your heart rate that he calls "Combat Breathing". This entails four phases, each for the duration of a count of four (no unit specified; I do approximately 4 seconds):

  1. Breathe in

  2. Hold in

  3. Breathe out

  4. Hold out

It sounds very simple, but I have heard multiple recommendations of it from both the armed-forces and medical worlds. I can also add a data point confirming it works well for me (mostly only for reducing heart rate to below 100, not all the way down to resting rate).
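Since the four phases form a fixed, timed cycle, they're easy to sketch as a minimal timer script. This is just an illustrative sketch: the 4-second count per phase and the default of four cycles are my own assumptions, since the book leaves both unspecified.

```python
import time

# The four phases of "Combat Breathing", in order
PHASES = ["Breathe in", "Hold in", "Breathe out", "Hold out"]


def combat_breathing(cycles=4, count_seconds=4, sleep=time.sleep):
    """Cycle through the four phases, holding each one for count_seconds.

    Returns the list of cues emitted, so the schedule is easy to inspect.
    """
    cues = []
    for _ in range(cycles):
        for phase in PHASES:
            print(phase)  # cue the current phase
            sleep(count_seconds)  # count of four (here: seconds)
            cues.append(phase)
    return cues


if __name__ == "__main__":
    combat_breathing()
```

The `sleep` parameter is only there so the timing can be stubbed out when trying different counts or testing.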

Comment author: CAE_Jones 25 November 2014 12:48:25AM 1 point [-]

It seems that, in order to accomplish anything, one needs some combination of conscientiousness, charisma, and/or money*. It seems that each of the three can strengthen the others:

  • Conscientiousness correlates with earning potential.
  • A conscientious person can exert extraordinary effort to learn, practice, and internalize behaviors that increase charisma.
  • A charismatic person can make connections, get deals, and convince people to give them money.
  • Money can buy charisma/conscientiousness training or devices, or can pay people to be charismatic/conscientious in pursuit of one's goals.

If someone lacks all of these resources severely enough, is there any way to correct that? It rather seems like the answer is "no, but most people can't imagine someone with that much of a deficit in all three at the same time".

* Yes, I could have gone for alliteration with "cash", "credit", or "capital". Money seems different enough that the dissonance seemed like a better idea at the time.

Comment author: Lumifer 25 November 2014 04:43:04PM 2 points [-]

Don't start with the resources you lack. Start with the resources you have and then look how can you utilize them to achieve your aims.

Comment author: fubarobfusco 25 November 2014 11:27:01PM -1 points [-]

... bearing in mind that "ability to discover new resources" is itself a resource, too.

Comment author: gjm 25 November 2014 12:17:31PM 2 points [-]

All of those things can be mitigated by other traits. Connections can be useful even without very much charisma. Cleverness can lead to pretty good earning potential even with relatively little conscientiousness, and may help one think of ways to improve charisma and conscientiousness. At any given level of earning potential, being cheap ("frugal" would be a better word but begins with the wrong letter) eases the transition from gradually sliding into debt to gradually accumulating savings. Other aspects of character besides conscientiousness make a difference -- e.g., a reputation for honesty may be helpful.

Given a bad enough deficit in everything that matters, it's certainly possible to be so screwed that recovery is unlikely. It's also possible to overestimate those deficits and the resulting screwage, e.g. on account of depression. There's probably a nasty positive feedback loop where doing so makes getting unscrewed harder.

Comment author: Torello 25 November 2014 02:26:42AM 4 points [-]

This is not exactly a reply to your question, but I think your question fits this dynamic:

Miller's Iron Law of Iniquity

In principle, there is an evolutionary trade-off between any two positive traits. But in practice, every good trait correlates positively with every other good trait.

http://edge.org/response-detail/11314

Comment author: NancyLebovitz 24 November 2014 09:49:22PM 7 points [-]

Development aid is really hard.

A project that works well in one place or for a little while may not scale. Focus on administrative costs may make charities less competent.

Nonetheless, some useful help does happen, it's just important to not chase after the Big Ideas.

Comment author: hegemonicon 25 November 2014 03:12:38AM 9 points [-]

One of the charities mentioned in the article, Deworm the World, is actually a Givewell top charity, due to "the strong evidence for deworming having lasting impact on childhood development". The article, on the other hand, claims that the evidence is weak, citing three studies in the British Medical Journal, which Givewell doesn't appear to mention in their review of the effectiveness of deworming.

Givewell's review of deworming

Might be worth looking into more.

Comment author: NancyLebovitz 25 November 2014 10:05:15AM 2 points [-]

Something that should have occurred to me-- the deworming experiment was done in the late 90s, which means that the effect on lifetime income is an estimate.

Comment author: [deleted] 24 November 2014 08:22:47PM *  1 point [-]

The year is 1800. You want to reduce existential-risk. What do you do?

Comment author: lmm 25 November 2014 11:32:25PM -1 points [-]

I give Napoleon a hand, on the basis that he was one of the more scientifically-minded world leaders, and the theory that a strong France makes our future more multipolar. For the same reason I try to spread the notion of the limited-liability corporation in the Islamic world (no idea how to do that though). I might try to convince nations of the (AIUI genuine) non-profitability of colonialism.

Comment author: TimS 26 November 2014 04:59:32PM 3 points [-]

If you want multipolar, Napoleon is the last person you should help. He was clearly acting to reduce the number of Great Powers to 1. He even succeeded for a bit re: Prussia & Austria.

Alternatively, if he wins, how do you prevent France v. USA instead of Russia v. USA.

Comment author: lmm 27 November 2014 06:46:44PM 0 points [-]

Alternatively, if he wins, how do you prevent France v. USA instead of Russia v. USA.

If it ends up more even and more positive-sum, I call that a win.

Comment author: TimS 03 December 2014 12:01:06PM 1 point [-]

Why would you expect any different outcome at all? Two-power dynamics are often unstable, absent an external stabilizer like MAD.

Comment author: Lumifer 26 November 2014 05:03:20PM 0 points [-]

if he wins, how do you prevent France v. USA instead of Russia v. USA.

You just have to keep the Canadian-Mexican border quiet :-)

Comment author: imuli 25 November 2014 07:03:37PM 0 points [-]

Start an insurance company with a focus on risk mitigation.

(Amass resources, collect information, you get the idea.)

Comment author: polymathwannabe 24 November 2014 09:44:18PM *  0 points [-]

Vaccination for everyone! Aqueduct (AND toilets) for everyone!

Make good publicity for Mr. Volta's new chemical battery, and convince everyone of how ugly the world is when tainted by coal smoke. This has a dual purpose: ease the way for early development of electric cars, thus fighting global warming, and delay Western meddling in the Middle East for oil-extraction purposes, which contributed largely to the mess the region is in now.

Find Mr. Heinrich Marx at his law practice in Trier and quietly castrate him.

Popularize DIY production of blue cheese and thus increase the chances that someone playing with Penicillium fungi will get creative.

Recruit would-be Temperance Leagues and redirect their strength to strangle the tobacco industry in its crib.

Edited to add: only massive distribution of aqueducts and toilets would be obvious to a true native of 1800.

Comment author: ChristianKl 25 November 2014 08:52:08AM 3 points [-]

Batteries still mean that you need electricity and that means burning coal.

Comment author: fubarobfusco 25 November 2014 02:40:28AM 2 points [-]

Uranium was discovered in 1789 in Saxony. What's the minimal technological path from there to reasonably-safe reactors? I would imagine it involves not only the obvious physics, but photography (to detect radiation) and significant advances in metallurgy (to refine ores) ....

Comment author: Alicorn 24 November 2014 08:24:17PM 11 points [-]

Are you a time-traveler or a native?

Comment author: [deleted] 24 November 2014 08:51:59PM *  2 points [-]

A native (but optionally a very insightful and visionary native).

EDIT: I said native, but all that I really want to avoid is an answer like "I would use all my detailed 21st-century scientific knowledge to do something that a native couldn't possibly do".

Comment author: Lumifer 24 November 2014 09:12:17PM 6 points [-]

all that I really want to avoid is an answer like "I would use all my detailed 21st-century scientific knowledge to do something that a native couldn't possibly do".

How about "I would use all my detailed 21st-century scientific knowledge to be concerned about something that a native couldn't possibly be concerned about"?

Comment author: [deleted] 24 November 2014 09:19:41PM 0 points [-]

Sure, if it leads to an interesting point.

For example, if you were trying to avoid suffering: "I would kill 12 year old Hitler" isn't very interesting, but "I would do BLAH to improve European relations" or "There's nothing I could do" are interesting.

Comment author: polymathwannabe 24 November 2014 10:28:59PM 0 points [-]

"I would kill 12 year old Hitler"

Did you mean 1800 or 1900?

Comment author: [deleted] 24 November 2014 11:01:06PM 3 points [-]

I didn't mean that example to refer to original question; I just wanted to demonstrate a vague but somewhat intuitive difference between "fair" and "unfair" use of future knowledge.

Comment author: Lumifer 24 November 2014 08:58:51PM 5 points [-]

Well, being concerned about existential risk in 1800 probably means you were very much impressed by Thomas Malthus' An Essay on the Principle of Population (published in 1798) and were focused on population issues.

Of course, if you were a proper Christian you wouldn't worry too much about X-risk anyway -- first, it's God's will, and second, God already promised an end to this whole life: the Judgement Day.

Comment author: Brillyant 25 November 2014 12:16:31AM 0 points [-]

Of course, if you were a proper Christian you wouldn't worry too much about X-risk anyway -- first, it's God's will, and second, God already promised an end to this whole life: the Judgement Day.

Still true today.

Comment author: Lumifer 25 November 2014 02:04:10AM 5 points [-]

Sure, but the percentage of fully believing Christians was much higher in 1800.

Comment author: NancyLebovitz 24 November 2014 05:07:20PM *  9 points [-]

A song about self-awareness:

Yielding to Temptation by Mark Mandel, to the tune of Bin There, Dun That by Cat Faber

Something called me from the bookcase
and I answered quick and dumb
And I guess I'd still be reading there
if rescue hadn't come.
Well, I must have jumped six inches
and I answered "Coming, dear!"
Now the sf's in the basement
and it doesn't call so clear.

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

  • changes with each chorus

I was filling up the ice cube tray
last night at half past ten
When I heard a voice entreating
"Won't you dance with me again?"
It's the caramel fudge ripple,
sweet as love and thick as sin.
I'm not dumb, I'm not expAndable,
and I'm not digging in!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the calories* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

As I stroll around the dealers' room
I'm only there to look.
No, I don't need that CD,
no, I do not need that book.
I can live without a T-shirt
showing Asterix the Gaul...
But I'm wearing ten new buttons
I don't recognize at all!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the dollars* go like nothing
and had nothing good to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

And when it comes to filking,
I perpetually find
One particular composer
reappearing in my mind,
Like some goddam chimes are ringing
in my little fuzzy brain,
And they set my head on fire
and I'm filking him again.

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the lyrics* go like nothing
and had something weird to show.
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the Hayes behind my eyes. **

We interrupt the writing
of this silly little song
'Cause my lady is reminding me
to not stay up too long.
She's reclining in the bedroom
with a warm and sultry smile,
And I'll write this down tomorrow
'cause the song can wait awhile!

Chorus: 'Cause I've bin there, dun that,
learned what I should know.
Had the hours* go like nothing
and had something good to show!
Yes, I've bin there, dun that,
learned to recognize
When I'm yielding to temptation
by the haze behind my eyes.

Comment author: advancedatheist 24 November 2014 03:53:49PM *  4 points [-]

I thought this article about coaching in pickup techniques kind of misses the point:

I Took A Class on How to Pick Up Women—But I Learned More About Male Anxiety

http://www.alternet.org/culture/i-took-class-how-pick-women-i-learned-more-about-male-anxiety

I posted in response:

For some reason we have this notion that the young man's "sexual debut," as the scientific literature about human sexuality calls it, happens as an organic developmental stage in the late teens, with a median age of around 17. If a 17 year old boy picked at random can probably figure out how to close the deal with a girl for the first time, this accomplishment certainly can't depend on coaching or life experience, because what the hell does a 17 year old boy know? But apparently a nontrivial number of boys in every generation miss this developmental window, and then they wind up in their 20's without an adult man's skill set for dealing with women, like the adult virgins who pay to receive instruction by alleged PUAs. If you have a teenage son, and you can see that girls don't find this boy sexually attractive, that has to affect how you view your son, and in a bad way. Perhaps we should consider earlier and more radical interventions into these boys' lives to help them develop the adult man's skill set for relationships with women, instead of leaving this to haphazard chance because of the romantic nonsense that "the right girl will come along some day."

BTW, in case someone brings up the P-word, I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating - because I just don't see the connection.

Comment author: chaosmage 25 November 2014 10:50:14AM *  1 point [-]

I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating - because I just don't see the connection.

Dating and sex are related skills. I assume we agree a prostitute could give a good intro to sex. So why shouldn't she be a good dating coach too? The young man won't need to fear rejection from her, nor fear being talked about later, so they can role-play in emotional safety. She can still tell him what's going to cause rejection when he's not a customer, and what's going to work better. Best of all, she can lead all the way, past exchanging numbers and kissing all the way to sex etiquette.

Of course there's the drawback of possible shame over having visited a prostitute - but virginity can be a source of shame too. So I figure that for the median male adult virgin, seeing a prostitute would be net plus, especially if he manages to specifically ask for dating and first time sex roleplay.

Comment author: Username 25 November 2014 03:28:13PM 5 points [-]

(Posted using the anonymous community account; username and password are Username and password)

Dating and sex are related skills. I assume we agree a prostitute could give a good intro to sex. So why shouldn't she be a good dating coach too? The young man won't need to fear rejection from her, nor fear being talked about later, so they can role-play in emotional safety. She can still tell him what's going to cause rejection when he's not a customer, and what's going to work better. Best of all, she can lead all the way, past exchanging numbers and kissing all the way to sex etiquette.

I hear that prostitutes who do that charge a lot -- more than typical 17-year-olds can easily afford, and low-end prostitutes basically just let you masturbate with their bodies.

Comment author: chaosmage 25 November 2014 07:05:30PM 3 points [-]

Prostitutes don't need a statutory rape charge any more than anybody else does, so obviously I'm not talking about 17-year-olds. I mean guys of legal age.

Concerning economics, it's hard to compare. Here in Germany, prostitution is legal, the market is efficient, and there are lots of sex workers competent and professional enough to pull off what I described, available for 100-200 euros per hour. I imagine that in places where prostitution is illegal, the situation would be very different - especially if due to the threat of prosecution, potential customers can't simply email their needs and budget to a couple of providers to get a good offer...

Comment author: Username 25 November 2014 10:09:55PM 1 point [-]

(posted by another user using this account)

I'm not sure whether this is really a neutral coaching situation. For truly independent sex workers, maybe. But I hear that many still work for a pimp, are highly motivated to extract high amounts from the youngster, and wouldn't necessarily provide a neutral, emotionally safe environment. This is from a source with significant (but possibly somewhat outdated) work experience in this field.

Comment author: Viliam_Bur 25 November 2014 10:33:47AM *  15 points [-]

I'd like to know how seeing a prostitute will help a young man develop the skills he needs to get into sexual relationships through dating

Seeing sex as less "magical" could help reduce tension with trying to get sex.

(By the way, the whole article seems to me like: "Look, some people have less social skills -- let's make fun of them! Oh, they are trying to overcome their weakness -- wow, that's even funnier!" The elephant in the room is that in our culture it is taboo to express empathy towards men and boys.)

Comment author: chaosmage 25 November 2014 08:12:59PM 1 point [-]

in our culture it is taboo to express empathy towards men and boys.

Really? I do that all the time and literally nobody has ever tried to stop me or punish me for it. Do your actual personal experiences differ?

Comment author: TheOtherDave 25 November 2014 10:17:20PM -2 points [-]

FWIW, there are contexts in which I've seen this criticized.

Usually, the context is that someone has started a discussion about some situation in which men or boys have caused suffering or otherwise behaved badly, and someone else has responded by expressing empathy towards the men or boys in question, and the person who started the discussion has criticized the attempt to switch the conversation focus from empathy towards the objects of the behavior, to empathy for the agents of it. (The jargon term for this is "derailing" in many contexts.)

Of course, this is only a subset of the general category of expressing empathy towards men and boys, but it's one that gets a lot of attention.

Comment author: fubarobfusco 25 November 2014 11:44:15PM *  -2 points [-]

This is hardly unique to situations involving gender.

For instance, sometimes this sort of thing happens —

  • Person A makes a decision or takes an action that hurts Person B — perhaps accidentally; perhaps out of negligence or bias.
  • Person B makes a demand — such as restitution for the harm done; or that the situation be corrected so that people like A won't hurt people any more.
  • A or A's supporters ignore or deflect B's demand, saying things such as that A's decision-making role is difficult; that A's guilt over hurting B is unpleasant to A; or that continuing to discuss A's mistake (and not "moving on") is a sign of malice, unfairness, or mental imbalance on B's part.

That's derailing: Person A changing the subject from "A hurt B, and B wants it fixed" to "A's life is so hard and people are being so harsh to A" in order to avoid talking about fixing the situation for B, the injured party.

Comment author: TheOtherDave 26 November 2014 01:11:35AM -1 points [-]

Yes, I agree that it's not unique to situations involving gender.

Comment author: bogus 26 November 2014 12:15:07AM *  1 point [-]

That's derailing: Person A changing the subject from "A hurt B, and B wants it fixed" to "A's life is so hard and people are being so harsh to A" in order to avoid talking about fixing the situation for B, the injured party.

Let's pick an example to make things more concrete. Person B owns a field, and Person A runs trains on a nearby railroad that throw dangerous sparks onto the field. Person B demands that Person A either stop the trains from passing near his property, or else fit them with a mechanism that will prevent sparks. Now Person A complains that the trains are used by low-income commuters who will be forced to pay unreasonably high prices in order to cover these additional costs. Is Person A "derailing the conversation", or is this a valid point? Extra credit: What might influence your answer to this question?

Comment author: MrMind 25 November 2014 08:35:30AM *  1 point [-]

I wouldn't be too much concerned. The article is a lot less dismissive of PUA than what is usually put forward, even on this site. Plus, it's not that La Ruina isn't another little Mystery clone.

If a 17 year old boy picked at random can probably figure out how to close the deal with a girl for the first time

Based on what I know of my culture (the US or other European countries might differ), not even the 17-year-old boys who do get girls know better. They usually get them through a combination of somewhat better looks, a wider social circle, and inferior opinions on women.
Those who apply for a PUA seminar are the ones who are trying to optimize their understanding of females, leaving aside the fact that you cannot will yourself into being non-anxious. My opinion is that if they could be at ease around the opposite sex, they would wind up with a better sex life than their "natural" peers.

Comment author: advancedatheist 24 November 2014 07:43:59PM *  2 points [-]

Another post I made to this AlterNet piece:

I can see why progressives want to discredit PUA coaches and belittle the men who seek their help, setting aside the question of these coaches' competence at doing what they advertise about themselves.

One, the PUA subculture promotes a politically incorrect view of women which sounds like the world view of traditional, conservative patriarchy, only read in reverse, so to speak: PUA coaches endorse the patriarchal view of women's weaknesses and vulnerabilities, and they teach men how to exploit these for sex by adopting the strategies of old-school cads. And I feel some sympathy for this view of women because to me women seem to have defective agency relative to men. If PUA coaches and writers can make a living with this message, perhaps their advice to men based on this traditional understanding of women has some validity after all.

And two, these men seek to improve themselves in an era of "You didn't build that" and the denigration of the self-made man. They've sought help in civil society and in the market instead of turning to the collectivist institutions created, maintained and thought-policed by progressives. They've rejected the progressive ethic of helplessness, dependency and victimization, in other words, in favor of the conservative ethic of self-reliance.

Comment author: ChristianKl 25 November 2014 08:55:55AM 3 points [-]

If PUA coaches and writers can make a living with this message, perhaps their advice to men based on this traditional understanding of women has some validity after all.

There are a lot of quick-success schemes sold with the same marketing that PUA products are sold with. The fact that people are willing to pay money for a dream of quick success doesn't mean the sellers can deliver on the promise.

PUA is a quite complex topic.

Male anxiety is an issue, and I don't think that an expensive 3-to-4-day bootcamp normally fixes it. Neither does watching a 24-DVD set sold for $499.

If I could either send an 18-year-old to a tantra seminar or to a PUA seminar, I'm not sure that the PUA seminar is the one that gives the higher return as far as improving his success with the opposite sex.

And I feel some sympathy for this view of women because to me women seem to have defective agency relative to men.

The fact that you believe that might be the problem, and illustrates a lack of ability to deal with women.

Comment author: Lumifer 25 November 2014 04:51:48PM 2 points [-]

If I could either send an 18-year-old to a tantra seminar

Tantra isn't really new-age exotic sex practices.

Comment author: ChristianKl 25 November 2014 05:26:38PM 2 points [-]

Wikipedia has little influence on what's practiced in a seminar titled "tantra". At the same time, of course, it's not simply about the stereotype it has.

One element of tantra is, for example, strong eye contact. You can go to a PUA seminar and hear a lecture by a guy about holding eye contact; that often leads to guys going out and being uncalibrated. If you, on the other hand, learn eye contact in a tantra seminar, the resulting behavior is likely much better calibrated.

Comment author: Lumifer 25 November 2014 05:39:22PM 3 points [-]

I feel we are using the word "tantra" in entirely different meanings.

Comment author: ChristianKl 25 November 2014 06:09:39PM 2 points [-]

I speak about the kind of event that's titled a tantra seminar and take my knowledge of what happens there from people I meet in meatspace who took part in such events.

Comment author: [deleted] 25 November 2014 11:55:18PM 3 points [-]

Well, what happens there?

Comment author: ChristianKl 26 November 2014 10:51:26AM 2 points [-]

That's a fair demand, but I don't want to go into too much detail on that point. There's a lot of inferential distance in talking about New Age practices on LW, and tantra isn't a subject I've studied deeply enough to be confident that I fully understand its theory base.

Comment author: Viliam_Bur 25 November 2014 10:41:34AM 5 points [-]

Male anxiety is an issue, and I don't think that an expensive 3-to-4-day bootcamp normally fixes it. Neither does watching a 24-DVD set sold for $499.

Irrationality is an issue, and I don't think that reading the Sequences normally fixes it. Neither does a 3-day rationality seminar for $3900.

Still, for some people it's a good option.

If I could either send an 18-year-old to a tantra seminar or to a PUA seminar, I'm not sure that the PUA seminar is the one that gives the higher return as far as improving his success with the opposite sex.

I would expect different things working for different people.

The interesting thing is that the tantra seminar would not motivate people to write similar articles. Even if there is also no guarantee that it is something more than just someone's strategy to make money quickly.

Comment author: bogus 24 November 2014 08:37:08PM 10 points [-]

PUA coaches endorse the patriarchal view of women's weaknesses and vulnerabilities, and they teach men how to exploit these for sex by adopting the strategies of old-school cads.

I think most pickup coaches would object to this point of view, and it might make some of them quite unhappy. PUAs teach strategies that they believe will increase your attractiveness to the opposite sex. But it's silly to see attraction as a "weakness" or "vulnerability". Many people (women included, of course) want to feel attracted in the first place, especially to someone with other good qualities - they just don't get to make that choice most of the time! That's the one sense in which 'reduced agency' could be said to be relevant - but it doesn't negate the fact that agency really is quite heavily involved in any kind of pickup.

Comment author: bogus 24 November 2014 04:44:03PM 0 points [-]

Yeah, that article has a weirdly dismissive tone. It reads like pickup is all about helping these 'painfully shy', inexperienced guys boost their self-confidence, and there's nothing more to it than that. But ISTM that folks who sign up for a random intro bootcamp are quite likely to be a lot shier and more introverted than average. There's quite a bit of innovative stuff in pickup, but people probably come across it on internet forums, or perhaps through proprietary guides/videos or in the most 'elite', costly workshops/bootcamps.

Comment author: advancedatheist 24 November 2014 04:59:27PM 7 points [-]

I've noticed a similar lack of understanding in other men who had their sexual debuts at developmentally appropriate ages. It becomes a kind of cognitive barrier separating sexually experienced men from the inexperienced ones.

I also notice a lack of curiosity about this phenomenon in professional sex researchers. I have three different college textbooks of the Human Sexuality 101 sort, and none of them has a section on adult virgins, much less adult male virgins.

Comment author: MrMind 26 November 2014 11:31:13AM 1 point [-]

I also notice a lack of curiosity about this phenomenon in professional sex researchers.

That's the thing that bugs me the most. Why can't we just have quality research on the subject?

Comment author: advancedatheist 24 November 2014 04:06:24PM 1 point [-]

More along these lines by Dr. Helen Smith, the wife of blogger Glenn Reynolds, the Instapundit:

Geeks on Strike?

http://pjmedia.com/drhelen/2014/11/20/geeks-on-strike/

She references Vox Day's observations about how many young men these days find themselves alienated from young women, hence their willingness not to pull their punches when female social justice warriors start to mess with their gaming activities. What can these young women really do to these guys to punish them - withhold sex? They've already done that. Rejections have consequences.

Comment author: Viliam_Bur 25 November 2014 11:47:49AM *  13 points [-]

I believe that it is a factor, but it is far from being the only factor, and probably not even the most important one. Still, it points in an interesting direction.

Okay, some political stuff here, because the topic is inherently political, and I even want to go one step more meta, which is deeper in politics:

Feminists have been complaining for a long time about traditional power structures in our society. Which is a legitimate complaint in my opinion, but I disagree with their choice of the word "patriarchy", because it has the unfortunate connotation that the traditional power structures are merely about something that (all? most? some?) men do to women, and so it makes us blind to the things that some women do to men to maintain the traditional power structures. Suggesting that women as a group even have some kind of social power probably already is a heresy.

The list of the techniques women are traditionally allowed to use against men is here. They are mostly ad hominem arguments that a woman (for more powerful impact: a group of young women; but also their male defenders) can use against a man who tries to step out of line.

"You are bitter!" "You hate women!" Because everyone is already primed to see men as dangerous and hateful. "You are afraid!" "Man up!" When convenient, the stereotypes of masculinity become a useful tool to shame men. "You are immature!" Grow up!" Again a reminder of failing the traditional role. "Stop whining!" "Your fragile male ego!" People have less empathy towards men, so remind them to not expect any. "You just can't get laid!" "You probably have a small penis!" Even this kind of argument is relatively accepted against men. It doesn't prove anything, it just suggests that the man is somehow defective, therefore low-status, therefore his opinions don't matter.

Each of these critiques makes more or less sense separately, but when we take them together, it becomes apparent that as a set they can be used in any situation. A man can be shamed for following his traditional gender role and for deviating from it. Maybe even both at the same time. Neither power nor weakness is acceptable. Perhaps, as a rule of thumb, a man should follow all his traditional obligations (get a job, make a lot of money, move all the heavy objects) but should not expect any traditional advantages (because that would be sexist). Even having a hobby is suspicious, unless the man can explain how the hobby will help him make more money in the future. In our culture, men have instrumental value; only women have terminal value. (Unless the man is really high-status, in which case different rules apply.)

So, in a way, if feminists complain about the traditional gender roles, they should celebrate gamers as allies, because those break the male stereotypes, and they do it on their own, no education or propaganda or change of laws necessary. But of course there is a difference between being a feminist in a sense "trying to change the traditional power structures (patriarchy)" and in a sense "cheering for the 'team women'". It's situations like this when the difference becomes visible; when weakening "patriarchy" also removes some systemic power from the "team women".

Equality comes at a price. The price is that you don't have servants anymore. If you complain about it, you probably didn't want equality in the near mode, only as a far-mode slogan.

From a proper point of view, gamers' resistance towards patriarchal shaming techniques is an important victory of feminism. However, I would not be surprised if most self-identified feminists don't get it.

What can these young women really do to these guys to punish them - withhold sex?

And what about women in gaming? Or gays, or asexuals? (Of course the official party line is that they don't exist.) All these people are now considered equal and respected members of the society... which includes the right to not give a fuck about what some young ladies are telling them to do.

Again, the true equality works both ways.

Comment author: NancyLebovitz 27 November 2014 04:14:53PM -1 points [-]

At least some of the attacks you describe are used against women as well-- in particular the "grow up" or "be tougher because our project is more important than your emotions" range. I'm not sure it's all as gendered as you think.

This being said, there are gendered insults (notably small penis, neckbeard, and sausage fest) that are common among feminists. I've seen some feminists argue against the first two, but not the third.

I'm wondering whether it makes sense to try to keep your opponents' identity small, and not model a large number of people as one big person with a unified agenda.

Comment author: IlyaShpitser 25 November 2014 05:07:34PM *  0 points [-]

<this is a political comment, usual mindkill caveats apply>

Here is a problem with an interest group:

http://thinkprogress.org/world/2014/03/05/3362801/nra-ivory-elephants-guns/

It's easy to hate the NRA if you come from certain parts. But the NRA is not very unusual in this respect. Interest groups, by their nature, are unable to have the overview to know when to throw their cause under the bus for the "greater good." This is a general problem for all interest groups, regardless of whether their cause is noble or not.


The real question is how do we fight Moloch by a different method than competing interest groups (which will follow the usual "behavior physics" of interest groups, which feminism is not exempt from, regardless of how noble its goal is).

</political comment>

Comment author: [deleted] 26 November 2014 06:57:11PM *  -1 points [-]

Even ignoring the common good: Why do interest groups so often impede the long-term progress of their own goals?

Why, when X is simple, strong, and sufficient to advance the group purpose, will a group instead focus on advancing some complicated and contentious Y?

Many groups, (including some I support), appear genuinely unable to do any long-term strategic thinking at all, or powerless to control their internal social forces.

Comment author: Salemicus 25 November 2014 05:37:03PM *  4 points [-]

Like Lumifer, I think the NRA is doing the right thing here - even strictly from a conservationist perspective. If we all stopped eating eggs, would there be more chickens? Of course not. When I mentioned similar logic here, at least the vegetarians were honest that they wanted to drastically reduce the chicken population. But if using fewer chicken products leads to fewer chickens, how will using fewer elephant products lead to more elephants? And note that these two contradictory answers are frequently pushed by the very same people.

If you really wanted to preserve elephant populations, you'd make it easier for people to farm them for their ivory, which would go, in part, into making gun handles. But because the NRA are culturally alien to you, you'd like to throw their cause under the bus "for the greater good," for the very slightest reason.

So yeah, we all want causes we don't care about to shut up and get out of our way. It's a good thing that we can't make them. After all, NRA members aren't just gun enthusiasts, they are also citizens in every other way. If NRA policy interferes too much with (say) economic wellbeing in the eyes of its members, then the NRA will lose force as an interest group.

Comment author: IlyaShpitser 25 November 2014 05:50:12PM *  1 point [-]

I think the NRA is doing the right thing here - even strictly from a conservationist perspective

I think maybe you do not realize how poor the institutions are here. There isn't some actor with a long-term overview maximizing ivory profits (and incidentally ensuring elephants continue as a species). Commercial overexploitation of resources in the biosphere is extremely common, and requires coordination to solve properly (see, for example, the cod stock collapse in the Atlantic, one example historically important for Europe). Collapse (the book) gave some examples where coordinating long-term exploitation of the environment was solved properly and examples where it wasn't.

But my point isn't about the NRA, or environmentalists specifically, I just used them as an example. My point is about a general problem with interest group ecosystems. If an interest group advocates a bridge to nowhere it is not going to lose force, it is doing precisely what it is meant to do.


But because the NRA are culturally alien to you

I would like to add here that I have been very very careful not to discuss my actual politics. Most of your assumptions about my culture or my politics are false. (So I guess I passed the ideological Turing test?)

Back when I had long hair, I was once accosted by a dude trolling for Obama votes who said: "you have long hair, you must be an Obama supporter!" What you are doing is basically this. Filling a hole with a pigeon is going to be very frustrating for you in this case.

Comment author: Lumifer 25 November 2014 06:09:45PM *  4 points [-]

requires coordination to solve properly

Not necessarily. An effective solution to the tragedy of the commons is property rights. While at the moment there may not be an actor with a long-term commercial interest in elephants, this kind of legislation is making sure that there never will be one.

Comment author: IlyaShpitser 25 November 2014 10:24:19PM *  -1 points [-]

Property rights do not magically enforce themselves; you need a government to enforce them for you. Everyone agreeing to a government's monopoly on force is yet another coordination problem. This is not so easy in places where elephant poaching happens. That aside, Collapse had examples where property rights were not sufficient in themselves. You should read it, I enjoyed it a lot!

Comment author: Lumifer 26 November 2014 01:17:01AM *  6 points [-]

Property rights do not magically enforce themselves, you need a government to enforce it for you.

Again, not necessarily. A private security force works fine -- especially in places where the government isn't... particularly effective. Such governments aren't all that good at coordination, either, by the way.

But the argument boiled down to its core is just incentives. It's much better to have incentives for private people to have herds of elephants roam on their ranches than depend on government bureaucrats who, frankly, don't care that much.

An international ban on ivory trading by itself won't save the elephants -- the locals will just hunt them down for meat and because they destroy crops.

I think you just chose a bad example. Your underlying point that special-interest groups have tunnel vision and are constitutionally incapable of deviating from their charter is certainly valid.

Comment author: IlyaShpitser 26 November 2014 11:51:13AM *  -1 points [-]

I don't understand what this is about anymore (I think you just like to argue?)

(a) There aren't "private security forces" replacing governments making Africa a kind of modern day Snowcrash universe. Governments are mostly weak and corrupt, and there are warlords running around killing folks and each other, and taking their loot.

(b) The way the NRA makes its decisions has nothing to do with the political situation in Africa, the state of elephant herds in Africa, the long-term fate of the African elephant species, or anything like that. They consult relevant gun makers, and decide based on that. This is contrary to the original claim that the NRA was making the correct decision even from a conservationist point of view. They aren't in this case, but if we did the math and found out they were, it would certainly be by accident, because they surely didn't do the math.

(c) Do you actually know how many elephants are killed in Africa for non-ivory reasons?

Comment author: Lumifer 25 November 2014 05:18:26PM 4 points [-]

Here is a problem with an interest group

I don't see a problem. Or, rather, I see a problem with the blanket prohibition on the sale of <100-year-old ivory as it looks unreasonable to me.

Comment author: chaosmage 25 November 2014 08:05:51PM -1 points [-]

Do you see a problem with the dwindling elephant population too? If so, are you able to judge which is the greater problem? If so, what is your judgement?

Comment author: Lumifer 25 November 2014 08:16:18PM 4 points [-]

Do you see a problem with the dwindling elephant population too?

Yes, of course.

If so, are you able to judge which is the greater problem?

You are engaging in a classic false dilemma fallacy.

Do tell, how the prohibition on selling 50-year-old ivory helps the dwindling elephant population?

Comment author: chaosmage 25 November 2014 08:47:04PM 0 points [-]

Lots of existing ivory becomes illegal, leading to a local drop in value, leading to lots of US ivory being traded to countries where it isn't illegal. Right?

So, first of all, that sets up excellent opportunities for police sting operations. But it also drives down prices (at least for a few years), making elephant poaching less lucrative.

In parallel to that, the US is setting an example. A lot of countries copy US criminal laws rather than thinking them up from scratch (the War on Drugs being the textbook example), and since almost everyone loves elephants and the ivory trade is a huge and growing threat to them, there'll be a particularly low threshold to copying this one.

Comment author: Lumifer 25 November 2014 09:28:33PM 4 points [-]

Lots of existing ivory becomes illegal, leading to a local drop in value, leading to lots of US ivory being traded to countries where it isn't illegal. Right?

Sigh. Wrong. Why don't you at least look at the original link to the article about the ban? Notably, it says (emphasis mine):

Last month, the White House announced a ban on the commercial trade of elephant ivory, placing a total embargo on the new import of items containing elephant ivory, prohibiting its export except in the case of bona fide antiques, and clarified that “antiques” only refers to items more than 100 years old when it comes to ivory.

Comment author: chaosmage 25 November 2014 10:09:52PM -1 points [-]

I neither said nor meant it was going to be exported legally. It'll be black market trade, but it'll still respond to market forces, just like drug trafficking does.

Comment author: NancyLebovitz 25 November 2014 04:21:58PM 2 points [-]

People underestimate the effect of the worst behaved people on their own side.

This being said, unless I've missed something (quite possible), feminists don't have a comparable history of doxing and violent threats.

Comment author: Viliam_Bur 25 November 2014 10:30:59PM *  8 points [-]

feminists don't have a comparable history of doxing and violent threats

You mean feminists in general, or just recent events?

EDIT: By the way, in the second link, the victim is a feminist, too.

Comment author: TimS 26 November 2014 04:25:28PM 0 points [-]

I could be wrong, but I thought the consensus was that your recent event example was not a dox of A by B (or only linking to a public dox by third party).

That said, it's very clear that A and B don't like each other and spin the facts unfavorably about each other.

Comment author: NancyLebovitz 26 November 2014 01:18:33AM 2 points [-]

Yeah, and you could throw in Erin Pizzey having been threatened for saying that a bit more than half the women in her domestic violence shelter were violent themselves.

Still, the list so far isn't comparable to the number of women who've been threatened just over GamerGate.

Comment author: VoiceOfRa 28 October 2015 05:52:46PM -2 points [-]

Still, the list so far isn't comparable to the number of women who've been threatened just over GamerGate.

Well, the number of women who appear to have been threatened over GamerGate (as opposed to the number of women who claim to have been threatened, but whose evidence vanishes whenever these allegations are investigated) appears to be 0. Furthermore, given your recently demonstrated inability to determine whether something is a threat (hint: someone saying something that might imply he believes something you find threatening is not a threat), you probably shouldn't be making judgements on these issues.

Comment author: Viliam_Bur 26 November 2014 11:25:36AM *  8 points [-]

I'm at a huge risk of motivated thinking here, but I want to make a few points:

1) Not all forms of "threatening" are equal. For example killing someone's dog is much worse than sending someone a tweet "i hope you die". If we put these things in the same category, by such metric the latest tumblr debate may seem more violent than WW2. Also, the threats of blacklisting in an industry seem to me less serious, but also more credible than the threats of physical violence.

2) We have selective reporting here, often without verification. Journalists have a natural advantage at presenting their points of view in journals. Also, one side makes harassment their central topic (and sometimes a source of income), while for the other side complaining about being harassed is tangential to their goals. I haven't examined the evidence, but it seems to me there are almost no cases, on either side, where the threat is (a) documented, and (b) credibly linked to the opposing side, as opposed to a random troll or some other unrelated conflict.

3) Let's not forget the parallel NotYourShield campaign: threats against gamers and game developers are technically also threats against women, and there are quite possibly more women in GamerGate than in gaming journalism. Women are women even when they are not marching under the banner of feminism.

Comment author: NancyLebovitz 26 November 2014 02:48:16PM -1 points [-]

Yeah, I'd say motivated thinking.

Not all forms of threatening are equal, but "I'm having extremely violent fantasies about you and I know where you (and your children) live" isn't a tiny thing, and it goes rather beyond "I hope you die". (Is there a name for the rhetorical trick of choosing, not just a non-central example, but a minimized non-central example?)

Part of the point is that women are sometimes the target of harassment campaigns online. Some of the attackers may have an interest in the ostensible issue, some may be pure trolls. It seems as though a lot of the attackers are male.

I doubt that there are a number of women who left their homes because of nothing in particular.

When I mentioned above that people underestimate the effect of the worst people on their own side, I meant that just as I tend to underestimate the way feminism can add up, I think you're underestimating the number and forcefulness of the vicious people on your side.

I'm still incredibly angry at the way Kathy Sierra was driven out of public life.

Comment author: NancyLebovitz 27 November 2014 04:00:22PM 0 points [-]

I'm curious about why this comment got so many downvotes, if anyone would care to try explaining. I'm saying "try explaining" because any one person can only know the reason for at most one downvote.

Comment author: lfghjkl 27 November 2014 09:33:21PM 2 points [-]

Yeah, I'd say motivated thinking.

Comments like these are not helpful. Especially not on a highly politicized topic such as the one the two of you are discussing.

Comment author: Viliam_Bur 26 November 2014 05:13:59PM *  5 points [-]

Would this qualify as a sufficiently scary threat? Both men and women receive various kinds of abuse online. I would guess that most of the aggressors are men, but victims are of both genders. Being a victim of online harassment is not a uniquely female experience, although some specific forms of harassment may be, mostly of a sexual kind. I would also guess that victims of "swatting" are typically men, but I have no data on that.

Now I feel it would be good to split the debate into two completely separate topics: feminism and GamerGate. Debating them as if they were the same thing would make this all extremely confusing. Framing GamerGate as "angry white men against feminists" is merely one side's propaganda; in reality, both sides include angry white men, and both sides include feminists.

1) I believe I have read a few stories about violent behavior of feminists, but I usually don't keep records of things I read online. If my memory is reliable, the complaints about abuse from feminists usually came from LGBT people, although officially the feminists are supposed to be on their side. Googling for "violent feminists" mostly brings false positives, but also this.

I admit I am confused about the phenomenon of online SJWs. Are they supposed to be a part of feminism, or is that a separate thing? Because their opinions seem similar to some extreme feminist opinions. Seems to me these people do a lot of online harassment, although on the internet it is difficult to prove something isn't merely trolling. And generally, even if someone is a feminist, that doesn't mean that everything they do is done in the name of feminism.

2) Here is a collection of abuse towards pro-Gamergate people. Again, it's difficult to prove who did that. We would have to debate each piece of evidence individually, but I'd rather avoid that.

Comment author: NancyLebovitz 26 November 2014 06:20:52PM 2 points [-]

That first link strikes me as not extremely scary, and it seems to be a rant rather than a threat which was sent to someone in particular. Furthermore, it doesn't have specific details about injuries and degradation. It isn't a photoshopped image of the person being threatened, either.

Gamergate is hopelessly weird-- as you may know, the initial post was basically a man talking about having been emotionally abused by a woman, with only a minor mention of games and journalism, and it morphed into something completely different.

As far as I can tell, SJWs consider themselves to be part of feminism and/or the one true feminism. I haven't seen a claim anywhere that they aren't feminists, and at least one suggestion that there's no point in saying that they aren't feminists, even if they're wrong-headed.

It wouldn't surprise me if a lot of moderate feminists (like most people) aren't engaging with SJs because that looks like a lot of work and no fun.

Comment author: Salemicus 25 November 2014 04:45:22PM 8 points [-]

Feminists do have a long history of doxing. My impression is that they don't make the same level of violent threats, but they certainly aren't rare. For example, Chloe Madeley.

Comment author: NancyLebovitz 25 November 2014 05:27:32PM 0 points [-]

Details about the history of doxxing?

Comment author: bogus 24 November 2014 05:59:16PM *  3 points [-]

Gamers aren't "pulling their punches" online because SJWs don't pull their punches either. It's all random Internet fun anyway until people actually get doxxed (or 'swatted', or worse).

Comment author: RichardKennaway 24 November 2014 12:14:16PM *  9 points [-]

Suddenly, I know the relative sizes of the planets!

HT Andrew Gelman.

ETA: Pluto isn't in the picture, but it would be a coriander seed, half the diameter of Mercury. For the Sun, imagine a spherical elephant.

Comment author: philh 25 November 2014 12:07:38AM 3 points [-]

The radius of the Sun is only about ten times the radius of Jupiter. I feel like a spherical elephant has considerably more than ten times the radius of a watermelon.

...is what I was about to say until I did some research, and apparently it's pretty accurate. A watermelon can exceed 60 cm in diameter, and Wolfram Alpha gives an elephant's length as between 5.4 and 7.5 metres.
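For anyone who wants to check the arithmetic, here's a quick sketch. The planetary radii are standard rounded values; the watermelon and elephant sizes are just the figures quoted above, so treat the second ratio as a rough estimate:

```python
# Rough check: is a spherical elephant to a watermelon roughly
# as the Sun is to Jupiter? (All sizes are approximate.)
SUN_RADIUS_KM = 695_700
JUPITER_RADIUS_KM = 69_911

WATERMELON_DIAMETER_M = 0.6          # a large watermelon, as quoted
ELEPHANT_LENGTH_M = (5.4 + 7.5) / 2  # midpoint of the quoted range

planet_ratio = SUN_RADIUS_KM / JUPITER_RADIUS_KM
animal_ratio = ELEPHANT_LENGTH_M / WATERMELON_DIAMETER_M

print(round(planet_ratio, 1))  # ~10.0
print(round(animal_ratio, 1))  # ~10.8
```

So both ratios come out near 10, which is why the analogy holds up surprisingly well.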

Comment author: Brillyant 25 November 2014 12:04:05AM 3 points [-]

That's either one huge grapefruit...or one tiny watermelon.

Comment author: [deleted] 24 November 2014 11:46:40AM 1 point [-]

I am considering deleting all of my comments on Less Wrong (or, for comments I can't delete because they've been replied to, editing them to replace their text with a full stop and retracting them) and then deleting my account. Is there an easier way of doing that than by hand?

(In case you're wondering, that's because thanks to Randall Munroe the probability that any given person I know in meatspace will read my comments on Less Wrong just jumped up by orders of magnitude.)

Comment author: [deleted] 26 November 2014 08:49:44AM 2 points [-]

I have been convinced that deleting my comments would be overkill, so I'm going to just delete my account, which will anonymize my comments, and hope that the permalink page title bug will be fixed.

I might come back here with a different username later.

Thanks to Baughn for their offered help.

Have a nice day.