Thank you for the material, arundelo, Manfred, and TheOtherDave. Still getting the hang of the forum, so I hope this reaches everyone.
My original question came because I was worried about brain uploading. If I digitized my mind tomorrow, would I still love my wife? Would I still want to write novels? And would these things hold true as time passed?
Let’s say I went for full-body emulation. My entire body is now a simulation. Resources and computing power are not an issue. There are also no limits to what I can do within the virtual world. I have complete freedom to change whatever I want, including myself. If I wanted to look like a bodybuilder, I could, and if I wanted to make pain taste like licorice, I could do that too.
So I’m hanging out in my virtual apartment when I feel the need to go to the bathroom. Force of habit is so strong that I’m sitting down before I realize: “This is ridiculous. I’m a simulation, I don’t need to poop!” And because I can change anything, I make it so I no longer need to poop, ever. After some thought, I make it so I don’t need to eat or sleep either. I can now devote ALL my time to reading comic books, watching Seinfeld reruns, and being with my wife (who was also digitized).
After a while I decide I don’t like comic books as much as I like Seinfeld. Since I’m all about efficiency, I edit out (or outgrow) my need to read comic books. Suddenly I couldn’t care less about them, which leaves me more time for Seinfeld.
Eventually I decide I don’t love my wife as much as I love Seinfeld. I spend the next billion years watching and re-watching the adventures of Jerry, George, and Elaine, blessed be their names.
Then I decide that I enjoy Seinfeld because it makes my brain feel good. I change it so I feel that way ALL THE TIME. I attain perfect peace and non-desire. I find nirvana and effectively cease to exist.
All of the basic AI drives assume that the mind in question has at least ONE goal. It will preserve itself in order to achieve that goal, optimize and grow in order to achieve that goal, even think up new ways to achieve that goal. It may even preserve its own values to continue achieving the goal... but it will ALWAYS have that one goal.
Here are my new questions:
Is it possible for intelligence to exist without goals? Can a mind stand for nothing in particular?
Given the complete freedom to change, would a mind inevitably reduce itself to a single goal? Optimize itself into a corner, as it were?
If such a mind had a finite goal (like watching Seinfeld a trillion times), what would happen if it achieved total fulfillment of that goal? Would it self-terminate, or would it not care to do even that?
How much consciousness do you need to enjoy something?
If it’s true that a pure mind will inevitably cease to be an active force in the universe, it implies a few things:
A. That an uploaded version of me should not be given complete freedom lest he become a virtual lotus-eater.
B. That the alternative would be to upload myself to an android body sufficiently like my old body that I retain my old personality.
C. That AIs, whether synthetic or formerly human, should not be given complete freedom because their values and goals would change to match the system they inhabit. If I were uploaded to a car, I might find myself preferring gasoline and spare parts to love and human kindness.
What are you trying to achieve with these questions?
If you just think the questions are entertaining to think about, you might find reading the Sequences worthwhile, as they and the discussions around them explore many of these questions at some length.
If you're trying to explore something more targeted, I've failed to decipher what it might be.
It may even preserve its own values to continue achieving the goal
This is backwards. An intelligent system selects goals that, if achieved, optimize the world according to its values.
...I change it so I feel
A few notes about the site mechanics
A few notes about the community
If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address.
* Normal_Anomaly
* Randaly
* shokwave
* Barry Cotter
A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.
A list of some posts that are pretty awesome
I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:
More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.
Welcome to Less Wrong, and we look forward to hearing from you throughout the site!
Note from orthonormal: MBlume and other contributors wrote the original version of this welcome post, and I've edited it a fair bit. If there's anything I should add or update on this post (especially broken links), please send me a private message—I may not notice a comment on the post. Finally, once this gets past 500 comments, anyone is welcome to copy and edit this intro to start the next welcome thread.