I've failed Uberman twice myself. You have pretty much an optimal plan, except for the naptation.
"Cut your naps down to 6 as quickly as you can without it hurting too much".
From my own knowledge, which may or may not be trustworthy, naptation doesn't need to be ended prematurely - the whole point is to get a huge number of naps into a short timeframe in order to learn to get REM in a 24-minute interval (dreaming is a sign of this). Getting a few more naps will just decrease your REM deprivation. The way I would do it is: get 12 naps a day until you find your...
Another thing that happened when I tried this was that no alarm could faze me. Every alarm I tried, including one that required typing my computer password, I would figure out how to turn off in my sleep. I'm sure I could have kept escalating all the way to alarms requiring solutions to NP-complete problems before that stopped working, but I gave up soon afterward. I pretty much woke up exclusively from other people physically waking me. I even answered the phone while asleep once; no idea what I said.
I discovered this issue for myself by reading a similar article and going through the same process, but with my third thought being "does that guy [the Prime Minister in this story] really believe this thing that I believe [in this case, pro-choice]?" I think he's bad because he broke the rules, then I forgive him because he's on my side, then for one reason or another I start to wonder whether he really is on my side...and notice that I'm trying to decide whether or not to blame him for breaking the rules. (I think this is because I myself use irony...
I've doubted his process from the start - I remember reading a third person's comment pointing out that he had forgotten to add iron, and his subsequent reply that this mistake was the cause of his feeling bad. I know nothing about nutrition (except that it's not a very good science, if it's a science at all), yet iron is obvious even to me. Missing it shows that he didn't really do much double-checking, much less cross-referencing or careful deliberation over the ingredient list.
I'm really hopeful about Soylent - I'd even jump in and risk poisoning to test it...
Is it useful to increase reading speed, if the training takes only a minimal amount of time (going from the basic level to some rudimentary trained level)? I've always been under the impression that speed increases in reading are paid for with a decrease in comprehension - which is what we actually care about. Or is this only true at the upper speed levels?
What was the name of that rule where you commit yourself to not getting offended?
I've always practiced it, though not always as perfectly as I've wanted (when I do slip up, it's never during an argument; my stoicism muscle is fully alert at those points). An annoying aspect of it is when other people get offended - my emotions are my own problem, so why won't they deal with theirs? Do I have to babysit their thought process? You can't force someone to become a stoic, but you can probably convince them that their reaction is hurting them and show them that it's in their own interest to ignore offense. To that end, I'm thankful for this post; upvoted.
I agree - you can get over some slip-ups, depending on how easy the thing you're attempting is relative to your motivation.
As you said, it's a chain - the more you succeed, the easier it gets. Every failure, on the other hand, makes it harder. Depending on the difficulty of what you're attempting, a hard reset is sensible: it both saves time on an already doomed attempt and makes the next one easier (due to the deterrent effect).
I disagree. This entire thread is so obviously a joke, one could only take it as evidence if they've already decided what they want to believe and are just looking for arguments.
It does show that EY is a popular figure around here, since nobody goes around starting Chuck Norris threads about random people, but that's hardly evidence for a cult. Hell, in the case of Norris himself, it's the opposite.
If you want to get up early, and oversleep once, chances are, you'll keep your schedule for a few days, then oversleep again, ad infinitum. Better to mark that first oversleep as a big failure, take a break for a few days, and restart the attempt.
Small failures always becoming huge ones also helps as a deterrent - if you know that the single cookie that bends your diet will end with you eating the whole jar and canceling the diet altogether, you will be much more likely to avoid even small deviations like the plague next time.
This was my argument when I first encountered the problem in the Sequences. I didn't post it here because I haven't yet figured out what this post is about (I need to sit down and concentrate on the notation and the author's message, and I haven't done that yet), but my first thought when I read Eliezer claiming that it's a hard problem was that as the number of potential victims increases, the chance of the claim actually being true decreases (until it reaches a hard limit, which equals the chance of the claimant having a machine that can produce infinit...
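A rough way to put that intuition in numbers (my own framing, and the decay rate is my own illustrative assumption, not anything from the post): if N is the claimed number of victims and P(N) is my credence that the claim is true, then

```latex
\mathbb{E}[\text{victims}] \;=\; N \cdot P(N)
\;\le\; \frac{c}{N^{\epsilon}} \;\longrightarrow\; 0
\quad \text{as } N \to \infty,
\qquad \text{assuming } P(N) \le \frac{c}{N^{1+\epsilon}} .
```

Under that assumed decay, ever-bigger threats carry ever-smaller expected weight; the "hard limit" is the floor where this stops applying.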
The point is that a superhero can't take preemptive action. The author can invent a situation where a raid is possible, but for the most part Superman must destroy the nuke after it has been launched - preemptively destroying the launch pad instead would look like an act of aggression from the hero. And going and killing the general before he orders the strike is absolutely out of the question. This is fine for a superhero, but most of us can't stop nukes in flight.
A dictatorship is different because aggression from the villain is everywhere anyway - and ...
I can definitely agree with 5, and to some extent with 3. With 4, it didn't seem to me when I read this months ago that the Superhappies would be willing to wait; it works as a part of 3 (get a competent committee together to discuss after stasis has bought time), but not by itself.
I found it interesting on my first reading that the Superhappies are modeled as a desirable future state, though I never formulated a comprehensive explanation for why Eliezer might have chosen to do that. Probably to avoid overdoing the Lovecraft. It definitely softens the blo...
-Hanlon's razor - I always start from the assumption that people seek the happiness of others once their own basic needs are met, then go from there. Helps me avoid the "rich people/fanatics/foreigners/etc are trying to kill us all [because they're purely evil and nonhuman]" conspiracies.
-"What would happen if I apply x a huge amount of times?" - taking things to the absurd level help expose the trend and is one of my favourite heuristics. Yes, it ignores the middle of the function, but more often than not, the value at x->infinity is all that matters. And when it isn't, the middle tends to be obvious anyway.
When you mentioned compartmentalization, I thought of compartmentalization of beliefs and the failure to decompartmentalize - which I consider a rationalistic sin, not a virtue.
Maybe rename this to something about remembering the end goal, or something about abstraction levels, or keeping the potential application in mind; for example "the virtue of determinism"?
Doesn't this machine have a set of ways to generate negative utility (it might feel unpleasant when using up resources, for example, as a way to prevent a scenario where the goal of 32 paperclips becomes impossible)? With fewer and fewer ways to generate utility as the diminishing returns pile on, the machine will either have to terminate itself (to avoid a life of suffering) or seek to counter the negative generators (if suicide = massive utility penalty). A toy sketch of this tension is below.
If there's only one way to generate utility and no way to lose it, however, that's going to lead to the behavior of an addicted wirehead.
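Here's the sketch - all numbers, penalty terms, and the utility shape are my own made-up assumptions, just to make the two escape routes (terminate vs. attack the negative generators) concrete:

```python
# Toy model (all numbers and penalty terms are made-up assumptions): an agent whose
# only positive-utility source is reaching 32 paperclips, with diminishing returns,
# plus a small negative-utility term for every unit of resources it burns.

import math

GOAL = 32
SUICIDE_PENALTY = 100.0   # assumption: terminating itself carries a massive utility penalty
RESOURCE_COST = 0.05      # assumption: "discomfort" per unit of resources spent


def paperclip_utility(clips: int) -> float:
    """Diminishing returns: each clip toward the goal is worth less than the last."""
    clips = min(clips, GOAL)
    return math.log1p(clips) / math.log1p(GOAL)  # scaled so the full goal is worth exactly 1.0


def total_utility(clips: int, resources_spent: float, terminated: bool) -> float:
    utility = paperclip_utility(clips) - RESOURCE_COST * resources_spent
    if terminated:
        utility -= SUICIDE_PENALTY
    return utility


if __name__ == "__main__":
    # Once the goal is reached, continued existence only adds the negative term...
    print(total_utility(32, resources_spent=10, terminated=False))   # 0.5
    print(total_utility(32, resources_spent=100, terminated=False))  # -4.0: a "life of suffering"
    # ...and with a big suicide penalty, terminating is even worse, so the remaining
    # move is to counter the negative generator (drive the resource-cost impact down).
    print(total_utility(32, resources_spent=100, terminated=True))   # -104.0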
At night F.Lux is usually great - except when you're waking up, or doing polyphasic (where you treat night and day as the same thing). I discovered the program a week after I started Uberman, and shortly after installing it I started having trouble staying up during the early morning hours between 3am and 7am, where previously I had no issue at all. Now that I'm no longer doing polyphasic, it's awesome - I never get blinded by my monitor, etc. I only wish I could make it use the daylight setting if I turn the PC on at night, so it helps me wake up. As it stands, I get two hours of "you should be in bed" lighting before it finally gives up on sending me for a nap.
From rereading the article, which I swear I stumbled upon recently, I took away that I shouldn't take too long to decide after I've written my list, lest I spend the extra time conjuring extra points and rationalizations to match my bias.
As for the meat of the post, I don't think it applies as much here, given the importance of the decision. I could go out and gather more information, but I believe I have enough, and now it's just a matter of weighing all the factors - for which purpose, I think, some agonizing and bias removal is worth the pain.
Hopefully I can ...
I have an important choice to make in a few months (about what type of education to pursue). I have changed my mind once already, and after hearing a presentation where the presenter clearly favored my old choice, I'm about to revert my decision - in fact, introspection tells me that my decision already changed at some point during the presentation. As for my original change of mind, I may also have been influenced by the friend who gave me the idea.
All of this worries me, and I've started making a list of everything I know as far as pros/cons go ...
For the preference ranking, I guess I can mathematically express it by saying that any priority change leads to me doing stuff that would be utility+ at the time but is utility- or utility-neutral now (and since I could be spending that time generating utility+ instead, even neutral is bad). For example, if I could change my utility function to eating babies, and babies were plentiful, this option would result in a huge source of utility+ after the change. Which doesn't change the fact that it also means I'd eat a ton of babies, which makes the option a huge s...
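Roughly what I mean in symbols (my own loose notation): let U be my current utility function and U' the changed one; the changed me picks actions to maximize U', but I score the proposal with U:

```latex
a^{*} \in \arg\max_{a} U'(a),
\qquad U'(a^{*}) \gg 0,
\qquad U(a^{*}) \ll 0 .
```

In the baby-eating example, U'(a*) is the huge post-change payoff, while U(a*) is current-me's verdict on a ton of eaten babies - which is why the option ranks so low now despite how good it would feel afterward.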
I attach negative utility to getting my utility function changed - I wouldn't change myself to maximize paperclips. I also attach negative utility to getting my memory modified - I don't like the normal decay that is happening even now, but far worse would be getting a large swath of my memory wiped. I also dislike being fed false information, but that is by far the least negative of the three, provided no negative consequences arise from the false belief. Hence, I'd prefer being fed false information to having my memory modified to being made to stop cari...
Don't we have to do it (lying to people) because we value other people being happy? I'd rather trick them (or rather, let the AI do so without my knowledge) than have them spend a lot of time angsting about not being able to help anyone because everyone was already helped. (If there are people who can use your help, though, I'm not about to wirehead you.)
Do you mean to distinguish this from believing that you have flown a spaceship?
Yes. Thinking about simulating achievement got me confused about it. I can imagine intense pleasure or pain. I can't imagine...
No, this is more about deleting a tiny discomfort - say, the knowledge that all of it is an illusion. I attach a big value to my memory and especially disagree with sweeping changes to it, but I'll rely on the pill, and thereby the AI, to decide what shouldn't be deleted (because deleting it would interfere with the fulfillment of my terminal values) and what can be deleted (because it brings negative utility that isn't necessary).
Intellectually, I wouldn't care whether I'm the only drugged brain in a world where everyone is flying real spaceship...
Can't I simulate everything I care about? And if I can, why would I care about what is going on outside the simulation, any more than I care now about a hypothetical asteroid on which the "true" purpose of the universe is written? Hell, if I can delete from my memory the fact that my utility function is being deceived, I'd gladly do so - yes, it would bring some negative utility in the moment, but that would be more than offset by the gains, especially stretched over a huge amount of time.
Now that I think about it...if, without an awesomen...
I started doing the same thing a few days ago, in an attempt to get back my habit of waking early (polyphasic experimenting got my sleep schedule out of whack). Something I do differently is write in the same box twice - once before I go to bed, as a commitment to waking up early, and once after I get up. This solved my problem of getting up, making up some reason to postpone the habit-formation process (or even cancel it and start anew later), and going back to bed. My symbols are a bit more complex, so that I can mark a failure on top of th...
I remember that when I went through all of the Sequences a year ago, I was curious about the retina issue that Eliezer keeps referring to, but a cursory search didn't return anything useful. I poked around a bit more just now, and found a few short articles on the topic. Could someone point me to more in-depth information regarding the inverted retina?
As for pep talks, I dislike them because they rely on the "I have this image of you" approach. The motivator is trying to get you to think they think you're great - if you don't agree, you will want to live up to the expectation regardless, since the alternative is disappointment, and disappointment hurts. For me, this gets me thinking about ways to win, which brings me back to my thoughts about not being very good, and thus the cycle is reinforced. I might try harder, but I won't feel good about it, and I'll quickly feel paralyzed once it becomes appar...
I share similar behaviors, though with key differences, and you've just alerted me that I should be careful with my own failure mode. It's gotten to the point where I don't want to try improving particular skillsets around my parents. I've already shown them that I'm bad at those skills and that I'm not interested; trying to improve through my usual all-or-nothing approach would feel very awkward - a 180-degree personality turn.
I find a hundredfold cost decrease quite unlikely, but then, I'm not at all familiar with the costs involved or their potential to be reduced. If the idea of cryonics were accepted widely enough to be an acceptable alternative to more expensive treatment, though, freezing old people with negative utility to society until a potential technological Singularity - or merely the advent of mind uploading - would not be far off, and that would be efficient even without cheap cryonics.
Didn't predictions for the Singularity follow a similar trend? Older people predicting 30-40 years until the event, and younger predictors being more pessimistic because they're likely to still be alive even if it happens in 60 years?