Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it's easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
Can you elaborate on this? Specifically, what did you learn about simulation shutdown risk, what do you mean by FAI team, and what does one have to do with the other?
Six months ago I thought CFAR was probably a bad idea. Now I think it's worth the investment, and in the last two months I've been positively surprised in three major ways by the effects of work CFAR has already done.
I just updated in favor of CFAR being a misleading acronym. It took me a while to work out that it means the Center For Applied Rationality, not this. That may become less significant once Google actually knows about it.
Meta-Note: This is great! We should make this into a monthly or bi-monthly recurring thread like "Rationality Quotes" or "What are you working on?".
Back to the topic: I overestimated the efficacy of my anti-depressant and now believe that it was mainly placebo.
I read Shalizi's post on the difficulties of central planning, where he noted that even using something as simple as linear optimization to organize things becomes impossible if you need to do it on the scale of a national economy. This made me significantly reduce my belief in the proposition that something like CEV would be anywhere near computationally tractable, at least in the form that it's usually discussed.
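To make the tractability point concrete, here is a toy timing sketch (my own illustration, not Shalizi's example; the sizes and random data are arbitrary). Solve time for dense linear programs climbs steeply with the number of variables, and a national economy has millions of distinct goods; extrapolating from a few hundred variables is the whole point of the exercise.

    import time
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    for n in (100, 200, 400, 800):
        # Random dense LP: maximize a random linear objective
        # (linprog minimizes, so negate) subject to A x <= b, x >= 0.
        c = -rng.random(n)
        A = rng.random((n, n))
        b = A.sum(axis=1)  # x = all-ones is feasible by construction
        t0 = time.perf_counter()
        linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
        print(f"n={n}: {time.perf_counter() - t0:.2f}s")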
That made me consider something like Goertzel & Pitt's human-assisted CBV approach, where much of the necessary computation gets outsourced to humans, as an approach that's more likely to work. Of course, their approach pretty much requires a slow takeoff in order to work, and I consider a hard takeoff pretty likely. Logically I should then have updated more towards expecting that we'll end up losing our complexity of value during the Singularity, but I didn't, possibly because I was already assigning that a very high probability anyway and I can't perceive my intuitive probability estimates with enough precision for the difference to register. However, I did update considerably towards thinking that Goertzel's ideas on Friendliness have more merit than I'd previously presumed, and that people should be looking in a direction like the one Goertzel & Pitt propose.
I changed my mind about my own effectiveness at relationships, and downgraded my confidence in being actually in control of my brain. I've upgraded my estimation of the extent to which I am like a typical female.
Specifically, what I've learned is that in dealing with things that are, to a large extent, affected by my unconscious, it is helpful to treat my conscious and unconscious minds as separate agents, only one of which I am in control of. In doing this, I noticed that the proportion of my decision-making affected by unconscious "blips" was higher than I thought, and furthermore that my unconscious reacts to stimuli in a way predicted by PUA-style Game to a far greater extent than I believed (despite my being a very atypical female).
Concrete predictions which have changed as a result of this experience: I've increased my confidence in being able to deal with future relationship problems. If future problems do arise, I plan to use trusted sources on LTR-Game (to decipher my unconscious) as well as conscious reasoning. I've also massively decreased my confidence that polyamory is a worthwhile relationship model for me to attempt at any point (while my conscious thinks it's...
This experience has definitely been a positive for me, because I now have a more accurate model of my own behaviour, which allows me to solve problems more successfully. (Solving this particular problem has caused relationship satisfaction to shoot up from an admittedly quite low slump to a real high point for both me and my OH.)
I'll just share the main specific technique I learned from the experience, in case it might also work for you. When I treat my conscious and unconscious as separate agents, I accept that I cannot control my unconscious thinking (so just trying really hard not to do it won't help much), but I can /model/ how my unconscious reacts to different stimuli. Draw a diagram with "Me", "Hamster" (what I call my unconscious) and "World" (including, specifically relevant here, the behaviour of my OH), and use arrows to represent what can affect what: there is a two-way arrow between Me and World, a one-way arrow from World to Hamster, and a one-way arrow from Hamster to Me. After I drew that diagram it became pretty bloody obvious that I needed to affect the World in ways that cause positive reactions in Hamster (for which I need accurate models of both World and Hamster), and most ideally, kickstart a Me -> World -> Hamster -> Me positive feedback loop.
Roughly once every three months, I expect to read a dashed-off blog comment that makes half a maybe-plausible technical argument against cryonics but never gets turned into a blog post setting the argument out in detail. Every day I don't read one, my confidence goes up; reading such a comment returns my confidence to where it was about three months earlier.
I've continued to research iodine's effect on IQ in adults & children, and the more null studies I manage to find the more pessimistic I get. I think I'm down to 5% from 50% (when I had only Fitzgerald 2012 as a null). The meta-analysis reports, as expected, a very small estimated effect size.
I met a relativist postmodern-type that also understood evolutionary psychology and science in general.
Young adult male here.
I've come to the conclusion that I'm nowhere near as attractive or good with girls as I thought I was.
I got my first girlfriend pretty much by accident last year. It was so incredibly amazing that I decided that romantic success was something I needed to become very good at. I spent quite a while reading about it, and thinking about how to be attractive and successful with women. I broke up with my girlfriend as I moved to a small town for two months at the beginning of this year, during which time I practiced approaching girls and flirting with them.
Then I moved to college, and the first attractive, smart girl I saw, I went up to her and got her as a girlfriend pretty much immediately. I thought that I must have been very good and attractive to have gotten such a gorgeous girlfriend so quickly. She broke up with me after a month or two. She immediately moved through two or three boyfriends over the space of a month or two. Meanwhile, I've been looking for a new girlfriend, but haven't had any success.
So I thought I was attractive and good with girls, and then it turned out that I just had a wild stroke of luck. So it goes.
I'm suspicious that I was simply arrogant about how good I was, and if I had thought more dispassionately, I wouldn't have been so wrong in my assessment of my own attractiveness.
Might I suggest that you may be looking at this all wrong: women are more attracted to your confidence than your looks. I suspect that your physical attractiveness is just fine, but being dumped by this smart and beautiful woman hurt your self-confidence, and that made you seem less attractive to other women afterwards.
The sort of guy who thinks a girl broke up with him because of his unattractiveness is very unattractive to most women, whereas the sort of guy who thinks "it's her loss, I was out of her league anyways" is highly attractive. If you get (or learn to fake) more self-confidence, I predict that your success will return. Ironically, being arrogant about how good you are is both necessary and almost sufficient to actually be good.
Austrian Economics.
I was fairly convinced too, so I am now very worried about how many other, more blatantly silly things I believe and may come to believe in the future. I've definitely been at least a bit more wary than usual since realising this particular mistake.
I initially didn't really want to make this post, but I recognised that this was for reasons perhaps relating to status (I was embarrassed to admit I believed something comparatively less trivial and more obviously rubbish than others in this thread). It was pretty easy to get over once I thought about it, though, and also because this kind of thing is exactly what the thread was looking for.
I just changed my mind in this direction:
Due to updates about simulation shutdown risk and the difficulty of FAI philosophy (I think it's easier than I used to believe, though still very hard), I think an FAI team is a better idea than I thought four months ago.
... and slightly upgraded my expectation of human non-extinction.
Damn it is easy to let other people do (some of) your thinking for you.
Up until a month or so ago, I was convinced I'd landed my dream job. If I had a soul, it would be crushed now.
Which is not to say that it's awful, not by any means. I've just gained a new perspective on the value of best practices in software development.
Seconded, more explanation is needed.
My experience with the best software practices is the following:
When a deadline is near, all best software practices are thrown out of the window. Later in the project, a deadline is always near.
While the spirit of the best software practices is ignored, it is still possible to follow their letter religiously, and be proud of it. This frequently leads to promotion.
I've changed my mind on whether Cosma Shalizi believes that P=NP. I thought he did, upon reading "Whether there are any such problems, that is whether P=NP, is not known, but it sure seems like it." at his blog, only to discover after emailing him that he had made a typo. I've also learned not to bet with people who have such PredictionBook stats, and especially not for as much as $100.00.
I have lowered my estimate of how hard it is to write rhyming, rhythmically satisfying poetry (with no regard to the literary quality of the product). It has become my hobby on my walk to work. Read some Lewis Carroll and try to think along the beat pattern he uses: just garble words to yourself, filling in the occasional blank that sounds good, so long as you get the rhythm right. Do that for a while and you can start piecing phrases into that mold with some work and iteration. It's much more fun to just feel the beats than to count them.
I changed my mind from having no particular opinion on SI to considering SI complete cranks, and from thinking that AIs may be dangerous to a much lower estimate of the potential danger. (I have the post history on my Dmytry account to prove the change of mind.)
I've changed my mind on the persuasiveness of a specific argument. I used to hold a high degree of confidence in the line of reasoning that "since nobody can agree on just about anything about god, it is likely that god doesn't exist". But then, in an unrelated conversation, someone pointed out that it would be foolish to say that "since nobody can agree on the shape of the earth, the earth has no shape." I must have been question-begging!
After reading the comments on my post on Selfish Reasons to Have More Kids, I think it's somewhat less likely that it's substantially correct. I think this might be largely a social effect rather than an evidence effect, though.
I wouldn't say I changed my mind, but I substantially increased my p-estimate that the following recipe could produce something very close to intelligence:
1) a vast unlabeled data set (think 1e8 hours of video and audio/speech data plus the text of every newspaper article and novel ever written)
2) a simple unsupervised learning rule (e.g. the restricted Boltzmann machine rule; see the sketch below)
3) a huge computer network capable of applying many iterations of the rule to the data set.
I previously believed that such an approach would fail because it would be very difficult to "debug" the resulting networks. Now I think that might just not matter.
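For concreteness, here is a minimal sketch of the rule mentioned in (2): a binary restricted Boltzmann machine trained with one-step contrastive divergence. The layer sizes, learning rate, and toy data are all illustrative choices of mine; the recipe above would stack many such layers and feed them the 1e8-hour data set instead.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class RBM:
        """Binary restricted Boltzmann machine trained with CD-1."""
        def __init__(self, n_visible, n_hidden, lr=0.1):
            self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
            self.b_v = np.zeros(n_visible)  # visible biases
            self.b_h = np.zeros(n_hidden)   # hidden biases
            self.lr = lr

        def cd1_step(self, v0):
            # Positive phase: sample hidden units given the data.
            p_h0 = sigmoid(v0 @ self.W + self.b_h)
            h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
            # Negative phase: one Gibbs step to a "fantasy" visible vector.
            p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
            v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
            p_h1 = sigmoid(v1 @ self.W + self.b_h)
            # Move toward the data statistics, away from the model's.
            n = v0.shape[0]
            self.W += self.lr * (v0.T @ p_h0 - v1.T @ p_h1) / n
            self.b_v += self.lr * (v0 - v1).mean(axis=0)
            self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

    # Toy usage: learn 100 random binary "images".
    data = (rng.random((100, 64)) < 0.3).astype(float)
    rbm = RBM(n_visible=64, n_hidden=16)
    for epoch in range(50):
        rbm.cd1_step(data)

The point of the recipe is that nothing in this rule is problem-specific: it only ever compares statistics of the data with statistics of the model's own samples.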
I expected that my intuitive preference for any number of dust specks over torture would be easy to formalize without stretching it too far. Does not seem like it.
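For what it's worth, here is one way of seeing why the formalization stretches (my framing of the standard difficulty, not necessarily the snag this commenter hit):

    % Assign each dust speck disutility \epsilon > 0 and torture disutility T > 0.
    % Additive aggregation forces a crossover:
    U(N~\text{specks}) = -N\epsilon, \qquad U(\text{torture}) = -T,
    \qquad N > T/\epsilon \implies U(N~\text{specks}) < U(\text{torture}).
    % Preferring any number of specks to torture therefore requires
    % \epsilon = 0 (specks count for nothing), a bounded aggregate with
    % \lim_{N \to \infty} U(N~\text{specks}) > -T, or a lexicographic
    % ordering, which no real-valued utility function can represent.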
On the other hand, given the preference for realism over instrumentalism on this forum, I'm still waiting for an argument for this preference that would convince an instrumentalist.
Two minutes ago I changed my mind about the source of the problem I was having: it wasn't an off-by-one error in my indexing, but rather a failure to supply multiple work spaces for the case of multiple convolutions in the same PDF. D'oh! I must confess, though, that I wasn't convinced by anyone's argument as such, so perhaps it doesn't properly count for this thread.
I was wrong about Neanderthals and us. I was sure that they were much more alien to us than it now appears they were. Now we see that some of them even grandfathered some of us.
I was being politically correct, I guess. Australian Aborigines and Europeans are almost too-distant cousins for that kind of correctness.
Admitting to being wrong isn't easy, but it's something we want to encourage.
So ... were you convinced by someone's arguments lately? Did you realize a heated disagreement was actually a misunderstanding? Here's the place to talk about it!