Comment author: tailcalled 30 March 2015 07:41:03PM 6 points [-]

Assuming the simulators are good, that would imply that people who experience lives not worth living are not actually people (since otherwise it would be evil to simulate them) but instead shallow 'AIs'. Paradoxically, if that argument is true, there is nothing good about being good.

Or something along those lines.

Comment author: artemium 31 March 2015 06:02:02AM 1 point [-]

Hmm, I still think there is an incentive to behave well. Good, cooperative behavior is generally more useful than being untrustworthy and cruel to other entities. There might be some exceptions, though (for instance, simulators who want conflict for entertainment or other purposes).

Comment author: kingmaker 30 March 2015 07:31:10PM *  8 points [-]

This co-opts Bostrom's simulation argument, but a possible solution to the Fermi paradox is that we are all AIs in the box, and the simulators have produced billions of humans in order to find the friendliest human to release from the box. Moral of the story: be good and become a god.

Comment author: artemium 31 March 2015 05:57:09AM *  0 points [-]

I had exactly the same idea!

It is possible that only a few people are actually 'players' (have consciousness) and the others are NPC-like p-zombies. In that case, I can say I'm one of the players, since I'm sure that I have consciousness, but there is no way I can prove it to anyone else ;-) .

One of the positive aspects of this kind of thought experiment is that it usually gives people additional reasons for good behavior, because in most cases it is highly likely that the simulators are conscious creatures who will reward those who behave ethically.

Comment author: RicardoFonseca 28 January 2015 01:23:59AM 12 points [-]

It's funny: when trying to think how an AI could exploit the text interface, I came up with the idea of writing multiple lines of ASCII text in rapid succession to produce an "ASCII video" for the human to watch. Unless the AI could only write a new line every second or something like that...
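
The "ASCII video" trick is easy to sketch. A minimal example (Python, purely my choice of language; the frame rate and the ANSI clear-screen escape codes are assumptions about the terminal, not anything the thread specifies):

```python
import sys
import time

def ascii_video(frames, fps=10):
    """Redraw the terminal fast enough and static text reads as animation."""
    for frame in frames:
        sys.stdout.write("\033[2J\033[H")  # ANSI: erase display, cursor to home
        sys.stdout.write(frame)
        sys.stdout.flush()
        time.sleep(1.0 / fps)

# Toy payload: a dot sliding across a 10-character line.
frames = [" " * i + "o" + " " * (9 - i) for i in range(10)]
ascii_video(frames, fps=5)
```

At a high enough fps this turns a line-oriented text channel into a crude moving image, which is exactly the kind of unanticipated use of the interface the comment is pointing at.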

This made me wonder whether the AI could also exploit other side channels, such as the sound the machine makes when specific data patterns are copied into RAM or when certain sequences of computations run on the CPU. Sure, the machine (including the monitor) could be placed in a soundproof room, but then I start thinking about energy-dissipation patterns and how those could be used to interfere with the environment, and consequently with the human...

Anyway, I think that when considering the means of communication available to the AI, one should be thorough and account for everything that is part of, or connected to, the physical system running it.

Comment author: artemium 30 January 2015 08:50:15PM 0 points [-]

Exactly. There are also a great number of possibilities that even the smartest person could not imagine, but a powerful superintelligence could.

Comment author: passive_fist 27 January 2015 11:18:35PM 9 points [-]

I stopped reading after the first few insults about excrement... I'm not sure where you were trying to get with that. If that was part of some strategy I'm not sure how you think that would have worked.

Comment author: artemium 30 January 2015 08:42:14PM 0 points [-]

I stopped reading after the first few insults about excrement... I'm not sure where you were trying to get with that. If that was part of some strategy I'm not sure how you think that would have worked.

Agreed. Hopefully I'm not the only one who thinks the AGI's game in this example was quite disappointing. In any case, I was never convinced that AI boxing is a good idea, as it would be impossible for any human to correctly assess the intentions of a superintelligence based on this kind of test.

Comment author: artemium 26 December 2014 01:59:54PM 0 points [-]

There is an additional benefit to taking breaks during computer work: it reduces strain on your eyes. Staring at a computer screen for too long reduces your blink rate and may cause eye problems in the future.

A lot of people who work in programming (myself included) have dry eye condition.

There are good Chrome apps that can help with this, and most of them let you customize the breaks to fit your schedule.

Comment author: ChristianKl 16 December 2014 11:16:14PM 4 points [-]

The article says:

There are many possible sources, biological or non-biological, such as interaction of water and rock.

There's also matter exchange from earth to mars that could have brought life that originated on earth to mars.

Comment author: artemium 17 December 2014 07:19:27AM 0 points [-]

Yeah, I know that there are other filters behind us, but I found it a funny coincidence that while I was in the middle of a Facebook discussion about the Great Filter, someone shared this article by Bostrom.

But I hope that our Mars probes will discover nothing.  It would be good news if we find Mars to be completely sterile.  Dead rocks and lifeless sands would lift my spirit.

Comment author: artemium 17 December 2014 06:59:56AM 4 points [-]

OK, I have one meta-level, super-stupid question. Would it be possible to improve some aspects of the LessWrong website, like making it more readable on mobile devices? Every time I read LW on the tram on my way to work, I go insane trying to hit the tiny links on the site. Since I work in web development/UI design, I would volunteer to work on this. In general, I think the LW website is a bit outdated in both design and functionality, but I presume that is not considered a priority. Better readability on mobile screens, however, would be a positive contribution to its purpose.

Comment author: artemium 16 December 2014 10:53:22PM *  0 points [-]

Horrible news!!! Organic molecules have just been found on Mars. It appears that the Great Filter is ahead of us.

Comment author: Slider 27 November 2014 01:27:56AM -1 points [-]

Studying computers, I have run into Turing's name occasionally. When I actually looked up the papers he wrote that seeded the concepts carrying his name, they made for a very refreshing read. To me they stand the test of time well. I knew that Turing's suicide had to do with him being a homosexual. Now I have learned of suggestions that official institutions might have had a helping hand in it, and that there will be no official apology.

Turing was quite young, and what he produced was pretty good stuff. I would have been really excited to read what he would have written had he been in the field five times as long. That his life was cut short over something as silly as homosexuality filled me with great anger.

You can add "not tolerant enough" to your list of reasons why we don't have the singularity yet.

Comment author: artemium 27 November 2014 06:00:03PM *  0 points [-]

I thought your post was interesting, so why the downvote? I'm new here and just trying to understand the karma system. Any particular reason?

Comment author: artemium 27 November 2014 05:49:39PM *  2 points [-]

A nice blog post about AI and existential risk by my friend and occasional LW poster. He was inspired by the disappointingly bad debate on Edge.org. Feel free to share it if you like it. I think it is quite a good introduction to Bostrom's and MIRI's arguments.

"The problem is harder than it looks, we don’t know how to solve it, and if we don’t solve it we will go extinct."

http://nthlook.wordpress.com/2014/11/26/why-fear-ai/

View more: Prev | Next