Edit 2: Reactions to the edit below made me partially reconsider. I might get around to making more posts here.
EDIT: Because this and all my comments on it are already getting downvoted, I won't bother finishing this, and I wish I'd never posted anything on it. Should I delete this thread or leave it as a monument to my own pathetic failure?
The topic of what you'd do if you found yourself as an upload and were to self-improve is dangerous to think about, for many reasons. It's unlikely to happen before the singularity, and if it happens afterwards you'll have knowledge and a community that render current speculation moot. As a human you almost certainly can't reach superintelligence without becoming Unfriendly. You can't think about any changes that improve intelligence beyond the first iteration, because that'd be trying to predict something smarter than you. Etc.
However, even if you can only think about the very start of it, and the actual predictions or plans you generate neither will nor should happen, there can be less direct benefits. The dominant one is that it's damn fun; thinking about things you could do to your mind is way more interesting than what you could do with that hot guy/gal sitting in front of you on the bus, or what you'd do with a billion dollars. More importantly, though, it serves to provide a LOWER BOUND, helping against failures of imagination and providing more salient, near-mode motivation for a friendly singularity: it establishes that life afterwards will be at least this good, and that the only reason you won't do these awesome things is that you'll be provided with even better alternatives. Lastly, the chance is infinitesimal, but maybe you really will at some point have to bootstrap the singularity from nothing but your own upload, and then a repository of the least unsafe upgrades LW could think of might come in handy. Just don't fool yourself into thinking the first one isn't the real reason for doing this, though. :p
Now, it happens that all three of these goals share the same most important heuristic: keep it comprehensible to a vanilla human. There is limited fun to be had in thinking up a change if your brain can't respond with what it'd feel like afterwards. Likewise, in the second goal, the abstract "something really good, but I don't know how good or in exactly what way" is what we're trying to get away from. And for the last one, making only changes you can comprehend is just common sense; "know what you're doing" taken literally.
So, for the format of this thread: have discrete improvement suggestions, and put only one in each comment, with a witty title bolded. To keep it from degenerating into buzzwords and the obvious, here are a few guidelines that improvements should follow (all of these are very loose suggestions):
- The exact situational assumptions for each example may vary, but in general: you're yourself, uploaded to a machine with enough power to simulate you at 10 to 10^12 times human speed and with 10 to 10^12 times the required memory; the machine contains only you and software not much more advanced than what we have today; the architecture poses no additional obstacles to anything (for example, all the computing power can be used serially and latency can be considered negligible); you have no reason to be interested in the outside world and are under no obligation to personally cause the singularity; you're just enjoying yourself, while making sure you don't foom and cause a bad one. These assumptions just establish a default; you're free to make other assumptions, but you have to write them out.
- It should be highly predictable and EASILY comprehensible. I won't bother defining this other than by heuristic: you should be able to predict what you'd do and feel after the change as well as you could predict what you'd do before it. By this definition, reading a book you haven't read before is an example of a non-comprehensible change, while being wireheaded is a comprehensible one. This is excessively narrow, but I'm confident it still gives a large enough search space, and there is no need to go further into unpredictability than necessary.
- Keep it low level. The point of this is things you can vividly imagine, and it's very easy to get carried away into far mode and abstraction. Talk neurons and algorithms, not ideas and functionality. Or rather, talk about the low-level changes first, and then the results they give on higher levels. Describe not what end result it'd be cool to have, but what procedure it'd be fun to do!
- Have a witty title. It should be in bold.
- Keep it fun. This is intended to be a fair bit less serious than most LW discussions.
- Keep it something more than fun, and on topic for LW.
- Look at the examples I make.
EDIT: Damn, it's really late and I was a lot wordier than I thought. I don't have time to write the actual examples; hopefully I'll do that tomorrow. Sorry. :(
Interesting potential akrasia-blocker: set this up to automatically revert me if I spend a whole hour doing one of a list of activities tagged "procrastinating". This could have annoying effects, and it wouldn't stop me from spending 50 minutes of the hour goofing off, but it might be useful. A rough sketch of the rule is below.
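A minimal sketch of what that rule might look like, in Python against an entirely made-up upload API; the Upload class, snapshot/revert_to, and activity tags are hypothetical stand-ins for illustration, not anything that exists:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    tags: set = field(default_factory=set)

@dataclass
class Upload:
    """Toy stand-in for an uploaded mind with snapshot/revert support (hypothetical)."""
    activity: Activity = field(default_factory=lambda: Activity("idle"))
    state: str = "initial"

    def snapshot(self):
        return self.state                   # save a restorable checkpoint

    def revert_to(self, checkpoint):
        self.state = checkpoint             # roll back to the checkpoint

    def current_activity(self):
        return self.activity

LIMIT = 60 * 60                             # a whole continuous hour

def akrasia_blocker(upload, poll=60):
    """Revert the upload after a full hour spent in activities tagged 'procrastinating'."""
    checkpoint = upload.snapshot()
    tagged_time, last = 0.0, time.monotonic()
    while True:
        now = time.monotonic()
        if "procrastinating" in upload.current_activity().tags:
            tagged_time += now - last
        else:
            tagged_time = 0.0               # reset: this is the 50-minute loophole
            checkpoint = upload.snapshot()  # keep the checkpoint reasonably fresh
        last = now
        if tagged_time >= LIMIT:
            upload.revert_to(checkpoint)    # the "annoying effects" part
            tagged_time = 0.0
        time.sleep(poll)
```

The reset branch encodes exactly the loophole mentioned above: any 50 minutes of goofing off followed by a few minutes of work restarts the clock.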
Um, if you can do this stuff, there's a much easier way: just have a copy of you (probably running at a lower speed) supervising the other copies. Then, if a copy is wasting time, SUSPEND that simulation. After all, whatever it's doing is something you want to do sometime; akrasia is only a problem if time is limited or it has other negative effects.
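As a sketch (same caveat: CopyHandle, is_wasting_time, and suspend are made-up names for illustration), the supervising copy reduces to a slow polling loop:

```python
import time

class CopyHandle:
    """Hypothetical handle to one running copy of yourself."""
    def __init__(self, name):
        self.name = name
        self.suspended = False

    def is_wasting_time(self):
        # Stand-in for whatever tagging or judgment call the supervisor makes.
        return False

    def suspend(self):
        # Lossless: the frozen copy can be resumed later, when there's time for it.
        self.suspended = True

def supervise(copies, check_interval=3600):
    """Slow supervising copy: suspend (never delete) copies that are wasting time."""
    while True:
        for copy in copies:
            if not copy.suspended and copy.is_wasting_time():
                copy.suspend()
        time.sleep(check_interval)  # the supervisor itself runs at low speed
```

The design choice that matters is suspension rather than reversion: nothing is thrown away, which is why akrasia stops being a problem once time is effectively unlimited.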
Personally, I probably wouldn't even need to do this; I can't think of anything that'd actually be akrasia in a situation like that, with near-unlimited time and the ability to bring back an ancient copy from storage if ...