All of _ozymandias's Comments + Replies

2[anonymous]
How can you ... uhm ... be both?

I don't believe that that quite applies to my situation. I'm not predicting whether I'll choose right now to break up with my girlfriend (99.999% certainty I won't); I'm predicting whether at some point in the next year one of the future Ozymandiases, subtly different from me, will find zirself in a state in which zie wants to break up with zir girlfriend. I have already made up my mind to not break up; I'm predicting how likely I am to change my mind.

2Luke_A_Somers
Are you not certain of your future self's gender, or are you using Dr Dan Streetmentioner's grammar for time travelers?

I hope that the cynicism I reject in my own self-examination of my membership in my own church of rational physics engineering leads me to reject cynicism when trying to understand other people's churches. There ARE reasons people believe things and they are by no means all stupid reasons.

We're definitely in agreement there. And even the ones that are stupid may be psychologically reassuring or otherwise "make sense" even if they are completely irrational. While signalling arguments are important, I think it's unrealistic to consider them to the exclusion of other arguments.

I was thinking roughly Matrix 2 level backlash: a significant group of "ruined FOREVER" fans, but the movie does not become a byword for terribleness now and forever like Episode 1. Possibly this could be measured by the number of negative YMMV tropes on its TVTropes page?

Fan backlash is remarkably difficult to operationalize.

0MugaSofer
You sure Matrix 2 isn't "a byword for terribleness now and forever like Episode 1"? I wasn't really around for either of them, but the reaction people have seems about the same for both. EDIT: Although I may be confusing backlash against the Matrix sequels for backlash against "Matrix 2". Was that lower?
2Nornagest
Don't think that'd work; TV Tropes isn't very representative of fandom as a whole, and in any case popular works will attract more negative tropes than obscure ones simply as a function of having more eyes on the page and more fingers on keyboards. On the other hand, if the page gets locked for bickering, that's probably a good (if binary) indicator of backlash. If you asked me to come up with a more general metric of fannish approval, I might look at ratios of fanworks to mainstream sales; that's pretty hard in itself, though, since different fandoms congregate in different places. You'll find a lot more Naruto fanart on DeviantArt than Sherlock Holmes.

Sorry. I apparently suck at the Internet. :)

noseriouslywhatabouttehmenz.wordpress.com

0Raemon
Does the blank post signify anything?

No death or rape threats. I have yet to come up with a theory about why (beyond "crazy random happenstance" and "I'm so nice no one wants to rape and murder me"); suggestions appreciated.

3MixedNuts
I feel tempted to send you some extremely silly and colorful threats just so you can check off that milestone. ("I will pay Pinky from Pinky and the Brain to invent a time-travel machine to genetically modify your great-great-grandparents so that you end up with a lethal allergy to Cornish pasties, and then I will mail you a Cornish pasty!")
2FiftyTwo
The sort of people who make rape threats on feminist websites wouldn't rape, or don't believe it is possible to rape, someone with a masculine-sounding screen name.

Thanks! LW actually helped me crystallize that a lot of the stuff social-justice-types talk about is not some special case of human evil, but the natural consequence of various cognitive biases (that, in this case, serves to disadvantage certain types of people).

Dammit, could someone clean the fanboy off the ceiling? The goop is getting in my hair. :)

It is true, I forgot to account for the effects of a GOP presidency on OWS. However, I still think there's a high chance of an OWS fadeaway, for a few reasons. First, the liberal hippies (generally the backbone of social justice movements) have started to nitpick OWS in earnest: this could be a sign either that OWS is getting more successful (and the crab-in-a-bucket mentality is taking over) or that it's losing their support, but given that the mainstream media seems to have decided OWS is yesterday's news, I think it might be the latter. Second, as the econo... (read more)

To a certain degree, different brands of feminism could function as different parties (certainly in academic feminism they do). A Christina-Hoff-Sommers-esque conservative feminist is unlikely to agree much with a Dworkinite radical feminist. For instance, "rape is a subset of violence with no particularly gendered component" and "rape is the natural outgrowth of a culture in which women's subordination to men is eroticized" are two substantially different positions (both of which I disagree with).*

Admittedly, the average person is not... (read more)

-1MugaSofer
What if an idea is highly competitive but factually wrong? Or even actively harmful?
7HughRistik
You are quite correct. There are large disagreements and fissures within feminism. These disagreements might not be obvious to, or cared about by, non-feminists (similar to how many feminists don't recognize the differences within MRAs and PUAs). See out-group homogeneity bias.

As you also observe correctly, there are some common premises (and biases) even within these different groups of feminists. Although there are widely varying feminist opinions on porn, trans people, race issues, etc., there unfortunately seems to be a lot of homogeneity in how feminists view men's issues. For example, the notion that "men are privileged over women" is very common, and I wish that there was more debate within feminism about whether that is an acceptable generalization, and what it means.

The acceptance of these concepts is merely a case of the availability heuristic. Women's oppression (and men's privilege) is more cognitively available to feminist women, so their theories often fail to account for oppression toward men and female privileges. This bias is not completely universal across feminist factions, but it's very broad. I hope that if examples of male suffering, female perpetration, and female advantages were more cognitively available to feminists, then some of them would eventually update their theories into a form of feminism that is more inclusive.

I think you've been taking a step in that direction with your blogging, with your posts on undiagnosed brain injury in the military, on how sexual violence, domestic violence, and abuse are much less gendered than the traditional feminist portrayal according to new survey data, and on the underreporting and cover-up of sexual violence toward men in African conflict zones. I couldn't agree more.

The difference in my reaction when reading this post before and after I found my something to protect is rather remarkable. Before, it was well-written and interesting, but fundamentally distinct from my experience-- rather like listening to people talk about theoretical physics. Now, when I read it, my feeling of determination is literally physical. It's quite odd.

Has anyone else had a similar experience?

7Shmi
Feel free to share what it is that you found to protect.

I'm already polyamorous, so there is in fact a certainty of a polyamorous relationship situation at some point in 2012. :)

0Nick_Roy
Ah, I should have taken that possibility into account. Thank you.

My girlfriend knows and is highly amused at my pessimism.

My logic is that I have never actually had a relationship that went much beyond the six-month mark, and while there are all kinds of factors that mean that this one is different and will stand the test of time, all of my other relationships also had all kinds of factors that meant this one is different and will stand the test of time.

The prediction is only 60%, however, since I might have actually gotten better at relationships since the last go-round. And because my girlfriend is really fucking awesome. :)

0Tripitaka
You may be interested in this: http://lesswrong.com/lw/jx/we_change_our_minds_less_often_than_we_think
0FiftyTwo
Can you get her prediction? Then possibly revise the prediction in light of new information from an informed party.

Romney will be the Republican presidential nominee: 80%.

Obama will win reelection: 90% with a non-Romney presidential nominee, 50% against Romney.

The Occupy Wall Street protests will fade away over the next year so much that I no longer hear much about them, even in my little liberal hippie news bubble: 75%

There will be massive fanboy backlash against The Hobbit: 80%. Despite this, The Hobbit will be a pretty good movie (above 75% on Rotten Tomatoes): 70%

John Carter will be a pretty good movie (above 75% on Rotten Tomatoes): 85%. Whether or not it is a good ... (read more)

0MBlume
When you make a series of predictions A, B, and C, are the probabilities you give for B and C conditional on A coming out in such a way that B and C make sense?
2NancyLebovitz
My opinion is that a lot of the OWS folks are conferring and planning during the winter, and will continue to protest but will be doing something other than occupying public or semi-public spaces. I don't know how to frame this as a testable prediction.
0taw
Intrade says:

* Romney: 78.8% chance of 2012 Republican nomination.
* Romney: 38.5% chance of 2012 presidency. (And 38.5 / 78.8 = 48.8%, for what it's worth.)
* Obama: 51.4% chance of 2012 presidency.

So in these you are in agreement with everybody else. I predict you're wrong on Hobbit backlash, but I don't even see how to define "backlash". Are we talking Matrix 2 backlash or Episode 1 backlash?
0Prismattic
Not too far off my own estimate, but "= 42% chance of a Republican president in 2013" seems overconfident.

Counterprediction: OWS comes roaring back in some form | GOP presidency: 85%. Assuming only, say, a 20% chance of OWS maintaining itself in some form under a Democrat, that still gives (0.85 × 0.42 + 0.2 × 0.58) ≈ 0.47 of continued OWS activity. Adjusting for the likelihood of overconfidence at some intermediate step, I'll say: chance of OWS fading away: 50%.
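The combination step here is just the law of total probability; a minimal sketch in Python, using the conditional figures quoted in the thread (note the weighted sum works out to roughly 0.47):

```python
# Law of total probability:
#   P(OWS continues) = P(continues | GOP) * P(GOP)
#                    + P(continues | Dem) * P(Dem)
# Figures are the ones quoted in the thread.
p_gop = 0.42
p_continues_given_gop = 0.85
p_continues_given_dem = 0.20

p_continues = (p_continues_given_gop * p_gop
               + p_continues_given_dem * (1 - p_gop))
print(round(p_continues, 3))  # 0.473
```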

I will get my first death or rape threat this year: 80%. My reaction to the death or rape threat will be elation that I've finally made it in feminist blogging: 95%. Even if it isn't, I will totally say it is in order to seem cooler.

You haven't gotten one yet?

I once had a totally non-political blog with less than 1000 views per month, and I still got a few.

2Nick_Roy
So, with a 60% chance of girlfriend breakup and a 90% chance of new partner acquisition, does this mean a 36% chance of a polyamorous, open, "cheating" or otherwise non-monogamous relationship situation for you at some point over the next year? Edited to add: actually somewhat higher than 36%, since multiple new partners are possible along with a girlfriend breakup.
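The 36% figure is the product of the two stated estimates: a 40% chance the current relationship survives times a 90% chance of at least one new partner. A quick check (treating the two events as independent, as the comment implicitly does):

```python
# Lower bound on the non-monogamy estimate: the current relationship
# survives (1 - 0.60) AND at least one new partner is acquired (0.90).
# Independence of the two events is an assumption.
p_breakup = 0.60
p_new_partner = 0.90

p_nonmonogamous_lower = (1 - p_breakup) * p_new_partner
print(round(p_nonmonogamous_lower, 2))  # 0.36
```

As the edit notes, this is only a lower bound: a breakup followed by multiple new partners also produces a non-monogamous situation.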
9falenas108
I sincerely hope your girlfriend does not read this site, or at least doesn't know your username.

Thank you for the link to the Chalmers article: it was quite interesting and I think I now have a much firmer grasp on why exactly there would be an intelligence explosion.

The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn't need any code that tries to mimic human thought. As far as I can tell, all it really needs (and really this might be putting more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build by using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, or something less so like a

... (read more)
1Zetetic
Yeah, I'd say that's a fair approximation. The AI needs a way to compress lots of input data into a hierarchy of functional categories. It needs a way to recognize a cluster of information as, say, a hammer. It also needs to recognize similarities between a hammer and a stick or a crowbar or even a chair leg, in order to queue up various policies for using that hammer (if you've read Hofstadter, think of analogies). Very roughly, the utility function guides what it "wants" done, and the statistical inference guides how it does it (how it figures out what actions will accomplish its goals). That seems to be more or less what we need for a machine to do quite a bit.

If you're just looking to build any AGI, the hard part of those two seems to be getting a nice, working method for extracting statistical features from its environment in real time. The (significantly) harder of the two for a Friendly AI is getting the utility function right.

Before I ask these questions, I'd like to say that my computer knowledge is limited to "if it's not working, turn it off and turn it on again" and the math I intuitively grasp is at roughly a middle-school level, except for statistics, which I'm pretty talented at. So, uh... don't assume I know anything, okay? :)

How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons... (read more)

9[anonymous]
What prevents you from making a meat-based AI?
5Zetetic
A couple of things come to mind, but I've only been studying the surrounding material for around eight months, so I can't guarantee a wholly accurate overview of this. Also, even if accurate, I can't guarantee that you'll take to my explanation.

Anyway, the first thing is that brain-form computing probably isn't a necessary or likely approach to artificial general intelligence (AGI) unless the first AGI is an upload. There doesn't seem to be good reason to build an AGI in a manner similar to a human brain, and in fact, doing so seems like a terrible idea. The issues with opacity of the code would be nightmarish (I can't just look at a massive network of trained neural networks and point to the problem when the code doesn't do what I thought it would).

The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn't need any code that tries to mimic human thought. As far as I can tell, all it really needs (and really this might be putting more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build by using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, or something less so like a Geiger counter or some kind of direct feed from thousands of sources at once). Also, a utility function that encodes certain input patterns with certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out useful/important features in its environment that it can exploit. Researchers in the areas of machine learning and reinforcement learning are working on all of this sort of stuff; it's fairly mainstream.

As far as computing power goes, the computing power of the human brain is definitely measurable, so we can do a pretty straightforward analysis of how much more is possible. As far as raw computing power, I think we're
3[anonymous]
As far as we know, it easily could require an insanely high amount of computing power. The thing is, there are things out there that have as much computing power as human brains: human brains themselves. So if we ever become capable of building computers out of the same sort of stuff that human brains are built out of (namely, really tiny machines that use chemicals and stuff), we'll certainly be able to create computers with the same amount of raw power as the human brain.

How hard will it be to create intelligent software to run on these machines? Well, creating intelligent beings is hard enough that humans haven't managed to do it in a few decades of trying, but easy enough that evolution has done it in three billion years. I don't think we know much else about how hard it is.

Well, "bootstrapping" is the idea of AI "pulling itself up by its own bootstraps", or, in this case, "making itself more intelligent using its own intelligence". The idea is that every time the AI makes itself more intelligent, it will be able to use its newfound intelligence to find even more ways to make itself more intelligent. Is it possible that the AI will eventually "hit a wall", and stop finding ways to improve itself? In a word, yes.

There's no easy way. If it knows the purpose of each of its parts, then it might be able to look at a part, and come up with a new part that does the same thing better. Maybe it could look at the reasoning that went into designing itself, and think to itself something like, "What they thought here was adequate, but the system would work better if they had known this fact." Then it could change the design, and so change itself.
310lukeprog

Consciousness isn't the point. A machine need not be conscious, or "alive", or "sentient," or have "real understanding" to destroy the world. The point is efficient cross-domain optimization. It seems bizarre to think that meat is the only substrate capable of efficient cross-domain optimization. Computers already surpass our abilities in many narrow domains; why not technology design or general reasoning, too?

Neurons work differently than computers only at certain levels of organization, which is true for every two systems yo... (read more)

1TimS
The highlighted portion of your sentence is not obvious. What exactly do you mean by "work differently"?

There's a thought experiment (that you've probably heard before) about replacing your neurons, one by one, with circuits that behave identically to each replaced neuron. The point of the hypo is to ask when, if ever, you draw the line and say that it isn't you anymore. Justifying any particular answer is hard (since it is axiomatically true that the circuit reacts the way that the neuron would). I'm not sure that circuit-neuron replacement is possible, but I certainly couldn't begin to justify (in physics terms) why I think that. That is, the counter-argument to my position is that neurons are physical things and thus should obey the laws of physics. If the neuron was built once (and it was, since it exists in your brain), what law of physics says that it is impossible to build a duplicate? I'm no physicist, but I don't know that it is feasible (or understand the science well enough to have an intelligent answer). That said, it is clearly feasible with biological parts (again, neurons actually exist).

By hypothesis, the AI is running a deterministic process to make decisions. Let's say that the module responsible for deciding Newcomb problems is originally coded to two-box. Further, some other part of the AI decides that this isn't the best choice for achieving AI goals. So, the Newcomb module is changed so that it decides to one-box. Presumably, doing this type of improvement repeatedly will make the AI better and better at achieving its goals. Especially if the self-improvement checker can itself be improved somehow. It's not obvious to me that this leads to super intelligence (i.e. Straumli-perversion-level intelligence, if you've read [EDIT] A Fire Upon the Deep), even with massively faster thinking. But that's what the community seems to mean by "recursive self-improvement."

Very few people know what career they want when they're seventeen. Of those people, a significant proportion end up either doing a different job or being displeased with their choice.

This is what I did; it may or may not work for you. Go to a college with a wide variety of class choices and highlight everything in the course book that looks interesting and that you have the prereqs for. Narrow it down to four or five classes by eliminating courses that occur in the same time block as another course you're more interested in, courses with dull or unintelligent te... (read more)

5John_Maxwell
Related idea: look through the course catalog for the course prerequisite chains that are the longest (they will probably be for math, chemistry, and physics). Take the 1st course in each of the longest chains early on in your college career so you'll know right away if one of the long-chain majors is for you (as opposed to a few years later, when it will be too late to make the switch).

I think many people will assume that "literature thread" also means "book thread," since "literature" is often used to mean "book, with connotations of being worthwhile/classic/making you a better person/whatever."

Perhaps "media" would work? Although that almost presents the opposite problem...

I'd suggest that high-cost ideas are generally high-benefit, or at least high-apparent-benefit (see: love-bombing in cults), in order to incentivize people to believe them.

I definitely think it's important to recognize that almost all group beliefs are both signalling and something that people actually believe and that has effects on their life. The PhD's role as a signal of membership in the Physicist Conspiracy doesn't conflict with the PhD's role of learning interesting things about physics; in fact, they're complementary. (However, it's certainly possi... (read more)

0mwengler
I think the Physicist Conspiracy in which I am a member with my PhD and all does NOT require a PhD to join. Freeman Dyson for example is clearly accepted in the club despite never bothering to get a degree beyond B.A. I hope that the cynicism I reject in my own self-examination of my membership in my own church of rational physics engineering leads me to reject cynicism when trying to understand other people's churches. There ARE reasons people believe things and they are by no means all stupid reasons.

Interesting article!

I presume that "I realized this goal was irrational and switched to a different goal that would better achieve my values" would also be a victory for instrumental rationality...

Ah, thank you. I misunderstood. :) I've had a few problems with people being confused about why my blog uses so much feminist dogma if it's a men's rights blog, so I'm hyper-sensitive about being mistaken for a non-feminist.

Thank you very much, Miley! I tend to view feminism and men's rights as being inherently complementary: in general, if we make women more free of oppressive gender roles, we will tend to make men more free of oppressive gender roles, and vice versa. However, in the great football game of feminists and men's rights advocates, I'm pretty much on Team Feminism, which is why I get so upset when it's clearly doing things wrong.

Also, my pronoun is zie, please. :)

1MileyCyrus
What I meant is that you actually demand results from your team, instead of giving them a free pass just because they have a certain label.

To a certain degree one could test instrumental rationality indirectly. Perhaps have them set a goal they haven't made much progress on (dieting? writing a novel? reducing existential risk?) and see if instrumental rationality training leads to more progress on the goal. Or give people happiness tests before and a year after completing the training (i.e. when enough time has passed that the hedonic treadmill has had time to work). Admittedly, these indirect methods are incredibly prone to confounding variables, but if averaged over a large enough sample size the trend should be clear.

3NancyLebovitz
Something to think about if you have a goal of losing weight. How do you decide whether a goal makes sense?

I think the most important thing about a rationality training service is operationalizing what is meant by rationality.

What exact services would the rationality training service provide? Would students have beliefs that match reality better? Be less prone to cognitive biases? Tend to make decisions that promote greater utility (for themselves or others)? How would you test this? Martial arts dojos tend to (putting it crudely) make their students better at hitting things than they were before; that's a lot easier to objectively measure than making students... (read more)

1ksvanhorn
Instrumental rationality is the focus we have in mind -- doing the things that most enhance your personal utility. Avoiding cognitive biases and having beliefs that match reality better are means to better instrumental rationality, but not the end. Some of the things that I think would fall under instrumental rationality would be better decisions (the ones important enough to merit some analyzing), determining what habits would be good to acquire or discard, and overcoming akrasia. I think we would have to start highly focused on one of these areas and a specific target market, and branch out over time. As to how to test benefit of the training... I've put that on my list of questions to consider. I don't know the answer right now. But anything that has an observable effect of some sort will be measurable in some fashion.

I think the distinction is not between logical and illogical ideas, but between high-cost and low-cost ideas.

Illogical ideas are generally high-cost, for the reasons outlined in the OP, unless you live in a society in which everyone accepts the high-cost idea (for instance, Creationism in the American South). Cryonics is a high-cost idea: it may be right, but it is also deeply weird and unlikely to find acceptance among non-transhumanists. PhD physicists have high-cost ideas because of the time and effort required to understand them. Even jargon might coun... (read more)

0mwengler
My thinking is that the discussion of high-cost ideas being dopey and primarily for signalling membership in a group is only partially correct, only a part of the story. In the case of physics, engineering, the more applied parts of math and computer science, and probably many forms of understanding of management, politics, and "social engineering," these high-cost ideas have high benefit in terms of what you can manage to do. Also, I would imagine the causation does go both ways, what with these being naturalistic systems. Nature has never been shy about exploiting valuable causalities just to keep the story simple, it seems to me.

In general, I think a lot of the signalling arguments tend to overstate things, staring so excitedly at the secondary effects of group cohesion and definition and missing the intrinsic value that many of these signals have. If spending 7 years getting a PhD in physics (I enjoyed myself, I wasn't in a rush, that's my story and I'm sticking to it) is signalling my membership in a group I very much want to be in, it has also created in me a bunch of very valuable capabilities in terms of mastering the physical world around me and mastering the intellectual (social, political) world around me in certain narrow ways. I guess I feel as though the REASON I want to be in this group is because the people in this group can do stuff I want to be able to do. That is, I'm impressed by their wizards and want to learn some of their magick. See what I mean?

Religious jargon of signalling and membership seems one way when you are talking about something that you think is BS but an entirely different way when talking about something that you "believe in." But it is the same human stuff. It's a tool that we benefit from using every bit as much as do the people in other groups. Indeed, if we are to "win", we'd better be benefitting from it more than they are.

I'd like a separate Less Wrong readthrough because I don't have a Reddit account and don't want to acquire one for the sole purpose of the readthrough (because then I'll comment on Reddit, and I have quite enough time-wasting things to do on the Internet already :) ).

Where are you? I'm in Fort Lauderdale and the Tampa area. If we're near each other maybe we could arrange one of those meetup thingies...

0khafra
I just got back to Saint Petersburg from a trip to San Francisco that included a meetup at Tortuga, and that was nifty, so I'll throw my hat into the ring.
2daenerys
Hi ozy! I am really happy to see you on here! I enjoy your blog. This map shows that as of last week-ish there were at least four Floridians on LW. Unfortunately, their identity is unknown, and you guys seem to be spread out. But if you post a meetup, you can see who responds. Good luck!
0windmil
That could be cool if we ever got around to it. I'm usually in either Daytona Beach or Gainesville, not that it's too big of a state to drive across... at least width-wise.

I'm another classic brilliant-at-age-ten kid. The biggest problem I experienced related to being considered smart rather young was that a lot of my sense of self-worth got tied up in being the smartest kid in the room. This is suboptimal-- not only does it lead to the not asking stupid questions issue, but it also means that as soon as I was in a situation in which I wasn't smart about something, I felt like I had no worth as a human being whatsoever. (Possible confounding variable: I had depression.)

The closest thing to a solution I've found is to try to ... (read more)

0Solvent
I would have thought that would be quite a bad idea, as it rewards you for attempting to do something, as opposed to succeeding. Kaj talked about this here.

Hi everyone! I'm Ozy.

I'm twenty years old, queer, poly, crazy, white, Floridian, an atheist, a utilitarian, and a giant geek. I'm double-majoring in sociology and psychology; my other interests range from classical languages (although I am far from fluent) to guitar (although I suck at it) to Neil Gaiman (I... can't think of a self-deprecating thing to say about my interest in Neil Gaiman). I use zie/zir pronouns, because I identify outside the gender binary; I realize they're clumsy, but English's lack of a good gender-neutral pronoun is not my fault. :) ... (read more)

6MBlume
Hi Ozy, it's really good to see you here, I enjoy the blog a lot. I remember reading one of your first social justice 101 posts, finding it peppered with LW links, and thinking "holy crap, somebody's using LW as a resource to get important background information out of the way while talking about something-really-important-that-isn't-itself-rationality -- this is awesome and totally what LW should be for", so that made me happy =)
-1MixedNuts
turns into a raving fanboy, squees, explodes
0CronoDAS
Hi Ozy! (It's Doug.) Glad to see you decided to stop lurking and join in!
2NancyLebovitz
Hi, Ozy! I've enjoyed your writing at No Seriously What About Teh Menz; so it's good to see you here.
5MileyCyrus
Her blog is good. Instead of blindly cheering for a side in the feminism vs men's-rights football game, Ozymandias actually tries to understand the problem and recommend workable solutions.
0HughRistik
Hi Ozy!
0windmil
The only LWer that I've noticed was from Florida! (Of course, people don't too frequently pepper their posts with particulars of their placement.)