In response to Playing offense
Comment author: Elo 30 November 2015 08:10:49PM 1 point [-]

can we fix the font of this? Thnx

In response to comment by Elo on Playing offense
Comment author: artemium 30 November 2015 11:13:19PM 1 point [-]

fixed.

Comment author: OrphanWilde 23 November 2015 02:47:56PM 12 points [-]

What terrorists want is irrelevant. "Don't play into enemy hands" is irrelevant. The entire discussion is irrelevant.

The correct response to enemy action is the response that furthers your own ends. It doesn't matter what effect this has on your enemy, good, neutral, or ill; your long-term ends matter.

"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this." A particularly relevant quote from Musashi, used by Eliezer on at least one occasion in the sequences.

Avoiding doing what the enemy wants is mere parrying. Stop mere parrying, and cut.

Comment author: artemium 29 November 2015 06:25:21AM *  0 points [-]

We would first have to agree on what "cutting the enemy" would actually mean. I think the liberal response would be keeping our society inclusive, secular and multicultural at all costs. If that is the case, then avoiding certain failure modes, like becoming an intolerant militaristic society or starting unnecessary wars, could be considered successful cuts against potentially worse world-states.

Now, that is the liberal perspective; there are alternatives, of course.

Comment author: artemium 01 June 2015 10:14:55PM *  0 points [-]

I don't think we should worry about this specific scenario. Any society advanced enough to develop mind-uploading technology would have an excellent understanding of the brain, consciousness and the structure of thought. In those circumstances retributive punishment would seem totally useless, as they could simply change the properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing though, as America seems to be quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind-uploading technology would be so petty as to use it in such a horrible way. At that point I expect they would treat bad behaviour as a software bug.

Comment author: artemium 15 April 2015 07:06:32AM *  0 points [-]

One possibility is to implement a design that makes the agent strongly sensitive to negative utility when it invests more time and resources on unnecessary actions after it has, with high enough probability, achieved its original goal.

In the paperclip example: wasting time and resources building more paperclips, or building more sensors/cameras to verify the result, should create enough negative utility for the agent compared to alternative actions.
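The penalty scheme described above can be sketched as a toy utility function. This is only an illustration of the shape of the incentive, not a proposal from the comment: the threshold, penalty constant, and function names below are all hypothetical.

```python
# Toy model: once the agent's estimated probability of goal completion
# passes a threshold, every further unit of effort incurs a penalty large
# enough to outweigh the marginal gain. All constants are hypothetical.

GOAL_CONFIDENCE_THRESHOLD = 0.99  # "high enough probability" of success
PENALTY_PER_UNIT = 10.0           # cost of each extra unit of effort
MARGINAL_GAIN_PER_UNIT = 1.0      # value of one more paperclip/sensor

def utility(p_goal_achieved: float, extra_effort: float) -> float:
    """Utility of spending `extra_effort` units beyond the original plan,
    given the agent's current confidence that the goal is achieved."""
    gain = MARGINAL_GAIN_PER_UNIT * extra_effort
    if p_goal_achieved >= GOAL_CONFIDENCE_THRESHOLD:
        # Past the threshold, further effort is net-negative by design.
        return gain - PENALTY_PER_UNIT * extra_effort
    return gain

def should_continue(p_goal_achieved: float) -> bool:
    # Continue only while one more unit of effort has positive utility.
    return utility(p_goal_achieved, 1.0) > 0.0
```

Under this scheme the agent keeps working while the goal is uncertain (`should_continue(0.5)` is true) but stops once confidence crosses the threshold (`should_continue(0.995)` is false), since any extra action then costs more than it gains.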

Comment author: artemium 08 April 2015 07:05:27AM *  1 point [-]

Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about the practical aspects.

  • Is there any realistic scenario where we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural and economic factors?

  • Would it be better if, instead of trying to convert billions of people to become vegetarians/vegans, we invested more in synthetic-meat research and other ways to make meat eating non-dependent on animals?

  • How highly should we prioritize animal welfare in comparison to other EA issues like world poverty and existential risk?

  • How does the EA community view meat-eaters in general? Is there a strong bias against them? Is this a big issue inside the movement?

Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.

I do feel kind of bad though, and maybe I'm not trying hard enough. If you have good suggestions for how I can make some common-sense changes towards a less animal-dependent diet, that would be helpful.

Comment author: artemium 07 April 2015 11:41:14AM *  14 points [-]

Interesting talk at the BOAO forum: Elon Musk, Bill Gates and Robin Li (Baidu CEO). They talk about superintelligence at around the 17:00 mark.

https://www.youtube.com/watch?v=NG0ZjUfOBUs&feature=youtu.be&t=17m

  • Elon is critical of Andrew Ng's remark that 'we should worry about AI like we should worry about Mars overpopulation' ("I know something about Mars" LOL)

  • Bill Gates mentioned Nick Bostrom and his book 'Superintelligence'. He seems to have read the book. Cool.

  • Later, Robin Li mentions the China Brain project, which appears to be a Chinese government AGI project (does anyone know anything about it? Sounds interesting... hopefully it won't end like Japan's 'fifth-generation computing' project in the 80s)

Comment author: Viliam_Bur 24 March 2015 04:10:47PM *  4 points [-]

I am more concerned about the lack of specific algorithms in the book. If I remember correctly, there is no pseudocode anywhere. It's only in a metaphorical sense that the whole book is about human thinking algorithms, etc. But using the word "algorithm" in the title feels like a false promise.

EDIT: Okay, the hive mind has spoken, and I accept the "algorithms". Thanks to everyone who voted!

Comment author: artemium 31 March 2015 06:53:25AM 0 points [-]

I never thought of that, but that's a great question. We have a similar problem in Croatian, as AI would be translated 'Umjetna Inteligencija' (UI). I think we can also use the suggested title "From Algorithms to Zombies" once someone decides to make a Croatian/Serbian/Bosnian translation.

Comment author: Epictetus 24 March 2015 08:36:10AM 2 points [-]

I've spent the last few months following a new diet/exercise plan. I notice that my past failures came down to using food as a way to regulate my mood and deal with stress. Exercise mollifies this to a great extent; however, I find that I regularly experience temporary spurts of depression lasting a few hours, and in those times I find it difficult to maintain discipline. Is there a good way to guard against this sort of thing?

Comment author: artemium 31 March 2015 06:44:30AM 0 points [-]

One thing from my experience that might help is to remove any food from your surroundings that could tempt you. I myself have only fruit, milk and cereal in my kitchen and basically nothing else. While I could easily go to the supermarket or order food, the fact that I would need to take some additional action is enough for me to avoid doing it. You can use laziness to your advantage.

Comment author: artemium 31 March 2015 06:23:37AM 1 point [-]

One of the reasons is that a lot of LW members are really involved in FAI issues and strongly believe that if we manage to build a "good" AI, most earthly problems will be solved in a very short time. Bostrom said something to the effect that we can postpone solving complicated philosophical issues until after we have solved the AI ethics issue.

Comment author: tailcalled 30 March 2015 10:04:37PM 2 points [-]

I think the fundamental point I'm trying to make is that Eliezer merely demonstrated that humans are too insecure to box an AI and that this problem can be solved by not giving the AI a chance to hack the humans.

Comment author: artemium 31 March 2015 06:06:16AM 0 points [-]

Agreed. AI boxing is a horrible idea for testing AI safety. Putting the AI in some kind of virtual sandbox where you can watch its behavior is a much better option, as long as you can make sure the AGI won't be able to become aware that it is boxed in.
