
Comment author: artemium 13 July 2017 01:27:16PM 1 point [-]

This is great. Thanks for posting it. I will try to follow this example and see if I can find some people who would be willing to do the same. Do you know of any new remote group that is recruiting members?

Comment author: artemium 01 June 2017 06:16:55AM *  3 points [-]

This is a good idea that should definitely be tested. I completely agree with Duncan that modern society, and especially our community, is intrinsically allergic to authoritarian structures, despite strong historical evidence that this kind of organisation can be quite effective.

I would consider joining myself, but given my location that isn't an option.

I do think that in order to build a successful organisation based on authority, the key factors are the personal qualities and charisma of the leader; the rules play a smaller part.

As long as the project is based on voluntary participation, I don't see why anyone should find it controversial. I wish you all the best.

In response to Playing offense
Comment author: Elo 30 November 2015 08:10:49PM 1 point [-]

Can we fix the font of this? Thanks.

In response to comment by Elo on Playing offense
Comment author: artemium 30 November 2015 11:13:19PM 1 point [-]

fixed.

Playing offense

-4 artemium 30 November 2015 01:59PM

There is a pattern I have noticed that appears whenever some interesting new idea or technology gains media traction above a certain threshold. Those considered opinion-makers (journalists, intellectuals) more often than not write about the new movement/idea/technology in a tone somewhere between cautious and negative. As this happens, those who adopted the new idea or technology become wary of promoting it and, fearing a loss of status, retreat from their public advocacy of it.


I wonder whether, in some circumstances, the right move for those getting the negative attention is not to defend themselves but to go on the offense. One of the most interesting offensive tactics might be to reverse the framing of the subject and put the burden of argument on the critics, in a way that requires them to seriously reconsider their position:

  • Those who are critical of the idea are actually doing something wrong, and they are unable to see the magnitude of their mistake.
  • The fact that they have not adopted the idea/product carries a big cost they are not aware of; in reality, they are the ones making the weird choice.
  • They have already adopted the idea/position but don't notice it, because it is not framed in a context they understand or find comfortable.

In all of these cases the critic is usually stuck in a status competition that prevents them from analysing the situation objectively. Additionally, they feel safety in numbers, since plenty of people are criticising the idea in the same way.

So let's start with Facebook.

When Facebook was expanding rapidly and was predicted to dominate the social media market (2008-2010), it became one of the most talked-about subjects in the public sphere. The usual attitude towards Facebook from the 'intellectual' press, and from almost everyone who considered himself an independent thinker, was negative. Facebook was this massive corporate behemoth assimilating people into its creepy virtual world full of pokes, likes, and Farmvilles. It didn't help that its CEO was the guy who had "I'm CEO, bitch" written on his business card and walked into business meetings in flip-flops.

I remember endless articles with titles like "Facebook is killing real-life friendships", "Facebook is a creepy corporate product that wants to diminish your identity", "Why I am not on Facebook", "I left Facebook and it was the best decision ever!" At that time, saying "Actually, I am not on Facebook" was a sure way to gain status, as someone who refuses to become another FB drone. And those who were on Facebook always felt the need to apologize for their decision: "Yeah, Facebook sucks, I'm only there to stay in touch with my high school pals."

The climax was reached with "The Social Network", a movie which presented Mark Zuckerberg and the founding of Facebook as some kind of Batman-villain origin story (and one grossly inaccurate to anyone who actually knows the facts).

But then Farhad Manjoo published an article that asked a simple question: "Why are you not on Facebook?" In it he reversed the framing of the story and presented Facebook as the new normal, with being a holdout the weird thing that should draw strange looks. His message was something like: "Actually, it is you people who are not on Facebook who have to explain yourselves. Facebook won; it is a convenient tool for communication, almost everyone is there, and you should get off your high horse and join the civilized world."

The site has crossed a threshold—it is now so widely trafficked that it's fast becoming a routine aid to social interaction, like e-mail and antiperspirant.


In other words, if you were not on Facebook in 2010, you were not a brave intellectual maverick standing up against an evil empire. You were like a cranky old man from the 1950s who refuses to own a telephone because he is scared of the demon-machine, making it very inconvenient for his relatives to contact him.

There is probably a lot wrong with Manjoo's approach; some of it would fall under the 'dark arts' arsenal. And to be fair, a lot of the criticism of Facebook has a point, especially after the Snowden affair. But I really like Manjoo's subversive thinking on this issue and the way he pierced the suffocating smugness with a brazen narrative reversal.

I wonder whether this tactic might be useful for other ideas that are slowly entering the public space and similarly getting nasty looks from the "intellectual elite".

Let's look at transhumanism and its media portrayal.

It is important to note that there is a difference between the regular "SF technology is cool" attitude and transhumanism. Everyone loves imagining a future world with cool gadgets and flying cars. However, once you start messing with the human genome or with cybernetic implants, things get creepy for a lot of people. When you talk about laser pistols, you get heroic rebels fighting stormtroopers with their blasters. When you talk about teleporters and warp drives, you get a brave Starfleet captain exploring the galaxy. But when you talk about cybernetic implants you get the Borg, when you talk about genetic enhancement you get Gattaca, and when you talk about immortality you get Voldemort. For the average viewer, technology is good as long as it doesn't change what is perceived as the 'normal' state of the human being.

You can have Star Wars movies where families watch massive starships destroy entire planets with billions of people, but you are not supposed to ask why Han Solo is so old in the new episode. They solved faster-than-light travel and invented planet-killing lasers, but got stuck on longevity research, or at least on good anti-aging cosmetics? (Yes, I know it would be expensive to make Harrison Ford look younger, but you get my point.)

Basically, the mainstream view of the future is "old-fashioned humans + jetpacks", and you had better not touch the "old-fashioned" adjective or you will get pattern-matched into a creepy dude trying to create a utopia, which, as we learned from movies and literature, always makes you the bad guy.

But then, in the real world, you have a group of smart people who seriously argue that changing human biology with various weird technologies would actually be a good thing, that we should work on reliable ways to increase longevity, intelligence, and other abilities, and that we should remove any regulations standing in the way. And in response you have a much larger group of intellectuals, journalists, politicians, and other 'deep thinkers' who are repulsed by this idea and will immediately bludgeon anyone who argues that we should improve our natural state. (I am purposely not mentioning those who question the feasibility of transhumanist ideas, such as whether genetic enhancement is even possible, as that is not relevant here.)

From the political right and religious groups you will instantly hear the standard chants of "people playing God" and "destroying the fabric of society"; from the political left you will hear shouting about "rich Silicon Valley libertarians trying to recreate feudalism through cognitive inequality and eugenics"; and even from the political center you will get someone like Francis Fukuyama calling transhumanism "the world's most dangerous idea", one that might destroy liberal society. Finally, you have an entire class of people called "bioethicists" whose job description seems to be "bashing transhumanism".

At best, transhumanists are presented as "well-intentioned geeks who are unaware of the bad consequences of their ideas"; at worst, they get labeled "rich, entitled Silicon Valley nerds with a bizarre and dangerous pseudo-religious cult", or they are dismissed from serious conversation altogether and turned into a punchline on "The Big Bang Theory".

So when transhumanists receive this kind of criticism, they naturally try to soften their arguments and backtrack on their positions. After all, many people who are optimistic about the future and sometimes talk positively about human enhancement don't label themselves transhumanists. But should they do that? What if they "owned" their label and went on the offensive? What would narrative reversal look like in that case?

Well, they could make an exact copy of Manjoo's Facebook move and throw "Why are you not a transhumanist?" at their critics. But they could also use the third tactic from the list above and confidently say: "You are a transhumanist too. Actually, the majority of people are transhumanists; they are just not aware of it."

It might sound crazy at first; after all, the majority of people usually find transhumanist ideas weird and uncanny when they are presented in the usual form. "Designer babies? Nanotech robots inside my body? Memory chips in the brain? Mind uploading??? That is crazy talk!!"

But let's stop for a moment and try to understand the basis of transhumanism, to reduce it to its core idea. One of the best articles ever written about transhumanism is Eliezer Yudkowsky's short essay "Transhumanism as Simplified Humanism". The beauty of this essay is in its simplicity: there are no complicated moral theories or deep analyses of various disruptive technologies, just a common-sense argument about the fundamental values humans should strive for and how to achieve them.
 
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. 

And it is hard to argue with that; I can't imagine a normal person disputing the statement. But then it gets more interesting:

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. 

And at this point we reach the crux of the issue. It is not the values that are the problem, but our familiarity with the tools we use to protect those values.

So we can finally define the difference between a transhumanist and a non-transhumanist. A transhumanist is a person who believes that science and technology should be used to make humans happier, smarter, and able to live as long as possible. A non-transhumanist is usually a person who believes the same, except that the technology used in the process should not be too strange compared to the technology he is already used to.

Using this definition, the pool of people who fall into the transhumanist group, whether they admit it or not, grows significantly.

But actually, we can do better than that. How many people would honestly define themselves as non-transhumanists even under this definition?

Imagine that you have a lovely 5-year-old daughter who is struck by a horrible medical condition. The doctors tell you there is no cure and that your daughter will suffer severe pain for several months before she finally dies in agony.

While you are contemplating this horrible twist of fate, another doctor approaches you and says:

"Well, there is potential cure. We just finished the series of successful trials that cured this condition by using revolutionary medical nanobots. They are injected in the patient bloodstream and then they are able to physically destroy the cancer. If we have your approval we can start the therapy next week.  

Oh... before you answer, there are some serious side effects we should mention. For reasons we don't completely understand, the nanobots significantly increase the host's IQ and are able to rejuvenate old cells, so in addition to curing your daughter's disease they will also make her super-smart and allow her to live several hundred years. Now, many people have ethical issues with that, because it will give her an unfair advantage over her peers, and once she is older it might create feelings of guilt about being superior to everyone else, and might make her socially awkward because of it. I know this is a difficult decision. So what will it be: a horrible death in pain after a few months, or a long, fulfilling life with the occasional bout of existential angst about being superhuman, and some unease at being unfairly celebrated for winning all those Nobel prizes and solving world hunger?"

Do you honestly know a single person who would choose to let their child die in pain rather than accept the nanobot therapy? I admit this example is super-contrived, but it still represents the general idea of transhumanism clearly. The point is that EVERYONE is a transhumanist, and will quickly drop the intellectual posturing when push comes to shove, when they or their loved ones face the dark spectre of death and suffering.

And don't forget: go back just a few generations and, compared to the people then, we are the transhuman beings from a utopian future. Just a few centuries ago the average human life expectancy was only half of what it is now, and on almost any objective measure of well-being, humans of the past lived horrible, miserable lives compared to the life we now take for granted. Anyone willing to argue against transhumanism should, in principle, also be against all the advances that made us healthier, more intelligent, and wealthier than our ancestors, achieved through technologies that, from our ancestors' point of view, looked crazier than many SF transhumanist technologies look from ours.

So the next time someone starts criticising transhumanist beliefs, don't defend yourself by retreating from your position and trying to avoid looking weird. Ask them to prove they aren't a transhumanist themselves, by presenting the transhumanist idea in its basic form, as stated in the Yudkowsky essay. Ask them why using technology to improve the human condition should be considered a bad thing, and let them try to define at exactly which point it becomes one.

Playing offense might also work in other domains. 

Effective altruism

Criticism: You are using your nerdy math in a field that should be guided by passion and strong moral convictions.

Response: Actually, it is you who should explain why you are not an effective altruist. EA has a proven track record of using the most effective tools to improve charity outcomes at a level that surpasses traditional charities. How do you explain your decision not to apply this kind of systematic analysis to your own work, when it would mean better outcomes and more lives saved by your charity?


Smartphones

Criticism: You are using your smartphone only as a status symbol. Smartphones are unnecessary.

Response: On the contrary, I am using it as a useful tool that helps me in my everyday activities. Why are you not using a smartphone when everyone else has recognised their obvious value? Are you aware of the opportunity costs of not using one, like being unable to use Google Maps and Google Translate when you travel?


We on LW and in the larger rationalist community are used to taking a defensive posture when faced with broad public scrutiny. In many cases that is the correct approach, as we should avoid unnecessary hubris. But we should recognise the circumstances in which we are arguing from a position of strength, where we can play a more offensive game and defeat bad arguments that might still damage our cause if not met with a strong response.

Comment author: OrphanWilde 23 November 2015 02:47:56PM 12 points [-]

What terrorists want is irrelevant. "Don't play into enemy hands" is irrelevant. The entire discussion is irrelevant.

The correct response to enemy action is the response that furthers your own ends. It doesn't matter what effect this has on your enemy, good, neutral, or bad; only your long-term ends matter.

"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this." A particularly relevant quote from Musashi, used by Eliezer on at least one occasion in the sequences.

Avoiding doing what the enemy wants is mere parrying. Stop mere parrying, and cut.

Comment author: artemium 29 November 2015 06:25:21AM *  0 points [-]

We would first have to agree on what "cutting the enemy" actually means. I think the liberal response would be keeping our society inclusive, secular, and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered successful cuts against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.

Comment author: artemium 01 June 2015 10:14:55PM *  0 points [-]

I don't think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness, and the structure of thought. In those circumstances, retributive punishment would seem totally useless, as they could simply change the properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing, though, as America seems quite obsessed with retribution. I refuse to believe any advanced society with mind-uploading technology would be so petty as to use it in such a horrible way. At that point, I expect they would treat bad behaviour as a software bug.

Comment author: artemium 15 April 2015 07:06:32AM *  0 points [-]

One possibility is to implement a design that makes the agent strongly sensitive to negative utility when it invests more time and resources in unnecessary actions after it has, with high enough probability, achieved its original goal.

In the paperclip example: wasting time and resources building more paperclips, or building more sensors/cameras to analyze the result, should create enough negative utility for the agent relative to the alternative actions (such as simply stopping).
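
A minimal sketch of what such a utility function might look like, purely as an illustration of the idea; the function name, the 0.99 threshold, and the penalty rate are my own assumed stand-ins, not part of any established proposal:

    # Illustrative sketch (all names/constants assumed): once the agent's
    # estimated probability that the goal is met crosses a threshold, any
    # further resource expenditure costs more utility than it could gain.
    def utility(paperclips_made, goal, p_goal_met, extra_resources_spent,
                threshold=0.99, penalty_rate=10.0):
        base = min(paperclips_made, goal)  # no credit for exceeding the target
        if p_goal_met >= threshold:
            # Post-goal actions (more paperclips, more sensors) only lose utility.
            return base - penalty_rate * extra_resources_spent
        return base

    print(utility(1000, goal=1000, p_goal_met=0.995, extra_resources_spent=0))  # 1000.0
    print(utility(1001, goal=1000, p_goal_met=0.995, extra_resources_spent=5))  # 950.0

The design choice here is that the penalty scales with resources spent after probable goal achievement, so "double-checking" behaviours like building extra cameras are dominated by halting.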

Comment author: artemium 08 April 2015 07:05:27AM *  1 point [-]

Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment, and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about the practical aspects:

  • Is there any realistic scenario in which we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural, and economic factors?

  • Would it be better if, instead of trying to convert billions of people to vegetarianism/veganism, we invested more in synthetic meat research and other ways to make meat-eating independent of animals?

  • How highly should we prioritize animal welfare in comparison to other EA issues like world poverty and existential risks?

  • How does the EA community view meat-eaters in general? Is there a strong bias against them? Is this a big issue inside the movement?

Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture, and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.

I do feel kind of bad about it, though, and maybe I'm not trying hard enough. If you have good suggestions for common-sense changes towards a less animal-dependent diet, that would be helpful.

Comment author: artemium 07 April 2015 11:41:14AM *  14 points [-]

Interesting talk at the Boao Forum: Elon Musk, Bill Gates, and Robin Li (Baidu CEO). They talk about superintelligence at around the 17:00 mark.

https://www.youtube.com/watch?v=NG0ZjUfOBUs&feature=youtu.be&t=17m

  • Elon is critical of Andrew Ng's remark that 'we should worry about AI like we should worry about Mars overpopulation' ("I know something about Mars" LOL)

  • Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

  • Later, Robin Li mentions the China Brain project, which appears to be a Chinese government AGI project (does anyone know anything about it? Sounds interesting... hopefully it won't end like Japan's 'fifth-generation computer' project in the 80s)

Comment author: Viliam_Bur 24 March 2015 04:10:47PM *  4 points [-]

I am more concerned about the lack of specific algorithms in the book. If I remember correctly, there is no pseudocode anywhere. It is only metaphorically that the whole book is about human thinking algorithms, etc. But using the word "algorithm" in the title feels like a false promise.

EDIT: Okay, the hive mind has spoken, and I accept the "algorithms". Thanks to everyone who voted!

Comment author: artemium 31 March 2015 06:53:25AM 0 points [-]

I never thought of that, but that's a great question. We have a similar problem in Croatian, as AI translates to 'Umjetna Inteligencija' (UI). I think we could also use the suggested title "From Algorithms to Zombies" once someone decides to make a Croatian/Serbian/Bosnian translation.
