Comment author: HughRistik 02 January 2012 05:14:50AM 2 points [-]

But is feminism unusually biased for the level of its status?

I'm not sure.

If feminism weren't occupying that position of status, some other ideology would be, and I wouldn't expect this other ideology to be less biased.

An alternative is that feminism would share space with other gender political ideologies in liberal political dialogue. Just like both liberalism and conservatism share status among different parts of the population, feminism would share status with other gender political movements.

Unfortunately, in white middle/upper class, educated liberal gender politics, feminism is the single party in a one-party system. I would like to see more forms of gender politics that are progressive, so there can be competition in the gender politics space.

Comment author: _ozymandias 02 January 2012 05:54:38AM 6 points [-]

To a certain degree, different brands of feminism could function as different parties (certainly in academic feminism they do). A Christina-Hoff-Sommers-esque conservative feminist is unlikely to agree much with a Dworkinite radical feminist. For instance, "rape is a subset of violence with no particularly gendered component" and "rape is the natural outgrowth of a culture in which women's subordination to men is eroticized" are two substantially different positions (both of which I disagree with).*

Admittedly, the average person is not particularly clear on the distinct branches of feminism; hell, there is still a widespread belief that radical feminist means "a feminist who's really extreme" as opposed to a distinct framework of theories and political beliefs. And even among the different groups of feminists there are usually some common premises (gender being at least partially a social construct, men being privileged over women, etc.).

That said, I too would like more variation in the gender politics space; some groups (most notably, men) are distinctly underserved by the current gender discourse, and more competition in the marketplace of ideas can only be a good thing. :)

*I am somewhat cheating here by picking an issue on which there is a lot of disagreement among different branches of feminism, as opposed to (say) the gender gap, in which the primary disagreement is between feminists who do and do not suck at math.

Comment author: _ozymandias 02 January 2012 05:28:27AM 4 points [-]

The difference in my reaction when reading this post before and after I found my something to protect is rather remarkable. Before, it was well-written and interesting, but fundamentally distinct from my experience-- rather like listening to people talk about theoretical physics. Now, when I read it, my feeling of determination is literally physical. It's quite odd.

Has anyone else had a similar experience?

Comment author: Nick_Roy 02 January 2012 04:04:03AM *  0 points [-]

So, with a 60% chance of girlfriend breakup (and hence a 40% chance of staying together) and a 90% chance of new partner acquisition, does this mean a 36% chance of a polyamorous, open, "cheating" or otherwise non-monogamous relationship situation for you at some point over the next year?

Edited to add: actually somewhat higher than 36%, since multiple new partners are possible along with a girlfriend breakup.
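The arithmetic, assuming (simplistically) that the two predictions are independent:

```python
# Naive chance of overlap between the girlfriend and a new partner,
# computed from the two stated predictions, assuming independence.
p_breakup = 0.60       # "I will break up with my girlfriend": 60%
p_new_partner = 0.90   # "I will acquire a new partner": 90%

# Keep the girlfriend AND gain a new partner:
p_overlap = (1 - p_breakup) * p_new_partner
assert round(p_overlap, 2) == 0.36
```

As the edit notes, 36% is a lower bound: a new partner can arrive before a breakup, so the true overlap probability is higher.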

Comment author: _ozymandias 02 January 2012 04:55:10AM 2 points [-]

I'm already polyamorous, so there is in fact a certainty of a polyamorous relationship situation at some point in 2012. :)

Comment author: falenas108 02 January 2012 02:24:22AM 7 points [-]

I will break up with my girlfriend at some point over the next year: 60%.

I sincerely hope your girlfriend does not read this site, or at least doesn't know your username.

Comment author: _ozymandias 02 January 2012 03:04:48AM *  14 points [-]

My girlfriend knows and is highly amused at my pessimism.

My logic is that I have never actually had a relationship that went much beyond the six-month mark, and while there are all kinds of factors that mean that this one is different and will stand the test of time, all of my other relationships also had all kinds of factors that meant this one is different and will stand the test of time.

The prediction is only 60%, however, since I might have actually gotten better at relationships since the last go-round. And because my girlfriend is really fucking awesome. :)

Comment author: _ozymandias 01 January 2012 06:38:19PM *  9 points [-]

Romney will be the Republican presidential nominee: 80%.

Obama will win reelection: 90% with a non-Romney presidential nominee, 50% against Romney.

The Occupy Wall Street protests will fade away over the next year so much that I no longer hear much about them, even in my little liberal hippie news bubble: 75%.

There will be massive fanboy backlash against The Hobbit: 80%. Despite this, The Hobbit will be a pretty good movie (above 75% on Rotten Tomatoes): 70%.

John Carter will be a pretty good movie (above 75% on Rotten Tomatoes): 85%. Whether or not it is a good movie, I will love it: 95%.

I will get my first death or rape threat this year: 80%. My reaction to the death or rape threat will be elation that I've finally made it in feminist blogging: 95%. Even if it isn't, I will totally say it is in order to seem cooler: 99%.

My comod and I will complete the NSWATM spinoff book this year: 75%. It will be published as an ebook: 80%. It will not make the transition to dead-tree book this year: 90%. It will make the transition to dead-tree book eventually: 60%.

I will break up with my girlfriend at some point over the next year: 60%.

I will acquire a new partner at some point over the next year: 90%.

Comment author: lukeprog 30 December 2011 02:14:22AM *  22 points [-]

Consciousness isn't the point. A machine need not be conscious, or "alive", or "sentient," or have "real understanding" to destroy the world. The point is efficient cross-domain optimization. It seems bizarre to think that meat is the only substrate capable of efficient cross-domain optimization. Computers already surpass our abilities in many narrow domains; why not technology design or general reasoning, too?

Neurons work differently than computers only at certain levels of organization, which is true for every two systems you might compare. You can write a computer program that functionally reproduces what happens when neurons fire, as long as you include enough of the details of what neurons do when they fire. But I doubt that replicating neural computation is the easiest way to build a machine with a human-level capacity for efficient cross-domain optimization.
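As a toy illustration of that point (not a claim about how an AI would actually be built), a minimal leaky integrate-and-fire model reproduces a neuron's basic fire-and-reset behavior in a few lines; all parameter values here are illustrative:

```python
# Toy leaky integrate-and-fire neuron: a functional (not biophysically
# complete) reproduction of a neuron's basic fire-and-reset behavior.
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # membrane potentials (mV)
tau, dt = 10.0, 1.0    # membrane time constant and timestep (ms)
drive = 20.0           # constant input drive (mV)

v = v_rest
spikes = []            # timesteps at which the model neuron "fires"
for t in range(100):
    v += (-(v - v_rest) + drive) / tau * dt  # leak toward rest, plus input
    if v >= v_thresh:  # threshold crossed: spike and reset
        spikes.append(t)
        v = v_reset
```

With these numbers the model settles into firing at a regular interval; the point is only that "what neurons do when they fire" is ordinary computation, which says nothing about whether replicating it is the *easiest* route to cross-domain optimization.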

How does it know what bits to change to make itself more intelligent?

There is an entire field called "metaheuristics" devoted to this, though nothing in it amounts to improving general abilities at efficient cross-domain optimization. I won't say more about this at the moment because I'm writing some articles about it, but Chalmers' article analyzes the logical structure of intelligence explosion in some detail.

Finally, why is SIAI the best place for artificial intelligence? What exactly is it doing differently than other places trying to develop AI?

The emphasis on Friendliness is the key thing that distinguishes SIAI and FHI from other AI-interested organizations, and is really the whole point. To develop full-blown AI without Friendliness is to develop world-destroying unfriendly AI.

Comment author: _ozymandias 30 December 2011 05:25:46AM 2 points [-]

Thank you for the link to the Chalmers article: it was quite interesting and I think I now have a much firmer grasp on why exactly there would be an intelligence explosion.

Comment author: Zetetic 30 December 2011 03:04:25AM 4 points [-]

A couple of things come to mind, but I've only been studying the surrounding material for around eight months so I can't guarantee a wholly accurate overview of this. Also, even if accurate, I can't guarantee that you'll take to my explanation.

Anyway, the first thing is that brain-form computing probably isn't a necessary or likely approach to artificial general intelligence (AGI) unless the first AGI is an upload. There doesn't seem to be a good reason to build an AGI in a manner similar to a human brain, and in fact doing so seems like a terrible idea. The issues with opacity of the code would be nightmarish (I can't just look at a massive network of trained neural networks and point to the problem when the code doesn't do what I thought it would).

The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn't need any code that tries to mimic human thought. As far as I can tell, all it really needs (and even this might be putting on more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, to something less so, like a geiger counter or some kind of direct feed from thousands of sources at once).

Also, a utility function that encodes certain input patterns with certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out useful/important features in its environment that it can exploit. Researchers in the areas of machine learning and reinforcement learning are working on all of this sort of stuff; it's fairly mainstream.

As far as computing power goes - the computing power of the human brain is definitely measurable, so we can do a pretty straightforward analysis of how much more is possible. In terms of raw computing power, I think we're actually getting quite close to the level of the human brain, but I can't seem to find a nice source for this. There are also interesting "neuromorphic" technologies geared to stepping up massively parallel processing (many things being processed at once) and scaling down hardware size by a pretty nice factor (I can't recall if it was 10 or 100), such as the SyNAPSE project. In addition, with things like cloud/distributed computing, I don't think that getting enough computing power together is likely to be much of an issue.

Bootstrapping is a metaphor referring to the ability of a process to proceed on its own. So a bootstrapping AI is one that is able to self-improve along a stable gradient until it reaches superintelligence. As far as "how does it know what bits to change", I'm going to interpret that as "How does it know how to improve itself". That's tough :) . We have to program it to improve automatically by using the utility function as a guide. In limited domains, this is easy and has already been done. It's called reinforcement learning. The machine reads off its environment after taking an action and updates its "policy" (the function it uses to pick its actions) after getting feedback (positive, negative, or no utility).
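For the curious, that single-domain loop - act, get feedback, update the policy - can be sketched as tabular Q-learning. The two-state toy environment and all parameter names below are invented for illustration:

```python
import random

# Tabular Q-learning sketch of the reinforcement-learning loop
# described above: act, observe feedback, update the "policy".

def step(state, action):
    """Toy environment: action 1 in state 0 earns reward 1."""
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    next_state = (state + action) % 2
    return next_state, reward

Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}  # the policy table
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
state = 0
for _ in range(1000):
    # Mostly pick the best-known action; occasionally explore.
    if random.random() < epsilon:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Update the policy from the feedback it just received.
    best_next = max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

# The learned policy prefers the rewarded action in state 0.
assert Q[(0, 1)] > Q[(0, 0)]
```

The hard part, as the next paragraph says, is doing this across domains and over the agent's own code while keeping the utility function intact - nothing in this sketch addresses that.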

The tricky part is having a machine that can self-improve not just by reinforcement in a single domain, but in general, both by learning and by adjusting its own code to be more efficient, all while keeping its utility function intact - so it doesn't start behaving dangerously.

As far as SIAI goes, I would say that Friendliness is the driving factor - not just because they're concerned about Friendliness, but because (as far as I know) they're the first group to be seriously concerned with it, and one of the only groups (the other two being headed by Nick Bostrom and having ties to SIAI) working on Friendly AI.

Of course the issue is that we're concerned that developing a generally intelligent machine is probable, and if it happens to be able to self-improve to a sufficient level, it will be incredibly dangerous unless someone has put some serious, serious effort into thinking about how it could go wrong and solving all of the problems necessary to safeguard against that. If you think about it, the more powerful the AGI is, the more needs to be considered. An AGI that has access to massive computing power, can self-improve, and can get as much information (from the internet and other sources) as it wants could easily be a global threat. This is, effectively, because the utility function has to take into account everything the machine can affect in order to guarantee we avoid catastrophe. An AGI that can affect things at a global scale needs to take everyone into consideration; otherwise it might, say, drain all electricity from the Eastern seaboard (including hospitals and emergency facilities) in order to solve a math problem. It won't "know" not to do that unless it's programmed to (by properly defining its utility function to make it take those things into consideration). Otherwise it will just do everything it can to solve the math problem and pay no attention to anything else. This is why keeping the utility function intact is extremely important. Since only a few groups - SIAI, Oxford's FHI, and the Oxford Martin Programme on the Impacts of Future Technologies - seem to be working on this, and it's an incredibly difficult problem, I would much rather have SIAI develop the first AGI than anywhere else I can think of.

Hopefully that helps without getting too mired in details :)

Comment author: _ozymandias 30 December 2011 04:20:10AM 1 point [-]

The second is that consciousness is not necessarily even related to the issue of AGI; the AGI certainly doesn't need any code that tries to mimic human thought. As far as I can tell, all it really needs (and even this might be putting on more constraints than are necessary) is code that allows it to adapt to general environments (transferability) that have nice computable approximations it can build using the data it gets through its sensory modalities (these can be anything from something familiar, like a pair of cameras, to something less so, like a geiger counter or some kind of direct feed from thousands of sources at once).

Also, a utility function that encodes certain input patterns with certain utilities, and some [black box] statistical hierarchical feature extraction [/black box] so it can sort out useful/important features in its environment that it can exploit. Researchers in the areas of machine learning and reinforcement learning are working on all of this sort of stuff; it's fairly mainstream.

I am not entirely sure I understood what was meant by those two paragraphs. Is a rough approximation of what you're saying "an AI doesn't need to be conscious, an AI needs code that will allow it to adapt to new environments and understand data coming in from its sensory modules, along with a utility function that will tell it what to do"?

Comment author: _ozymandias 30 December 2011 01:05:11AM 12 points [-]

Before I ask these questions, I'd like to say that my computer knowledge is limited to "if it's not working, turn it off and turn it on again" and the math I intuitively grasp is at roughly a middle-school level, except for statistics, which I'm pretty talented at. So, uh... don't assume I know anything, okay? :)

How do we know that an artificial intelligence is even possible? I understand that, in theory, assuming that consciousness is completely naturalistic (which seems reasonable), it should be possible to make a computer do the things neurons do to be conscious and thus be conscious. But neurons work differently than computers do: how do we know that it won't take an unfeasibly high amount of computer-form computing power to do what brain-form computing power does?

I've seen some mentions of an AI "bootstrapping" itself up to super-intelligence. What does that mean, exactly? Something about altering its own source code, right? How does it know what bits to change to make itself more intelligent? (I get the feeling this is a tremendously stupid question, along the lines of "if people evolved from apes then why are there still apes?")

Finally, why is SIAI the best place for artificial intelligence? What exactly is it doing differently than other places trying to develop AI? Certainly the emphasis on Friendliness is important, but is that the only unique thing they're doing?

Comment author: D_Malik 29 December 2011 10:14:52AM *  0 points [-]

I'm 17 and going to my final year of high school in January. I'm having some trouble making up my mind about what to do after high school and would appreciate some help with this.

I've skimmed a few books on career choice but they all just spout platitudes. I don't think I should do "What Interests Me" because I think I'd become bored of almost anything after a few weeks. I don't think I should do what I'm "talented" at because I doubt talents are specific enough to narrow down career-space enough. (Yes, a person might have high g and thus be good at computer programming, but that same high g would aid them as much with lots of other careers - why choose programming specifically?) Even if talents were specific enough, I don't think my self-assessments of my talents are anywhere near accurate enough to base the next 50+ years of my life on.

It's pretty obvious that most people have no idea what they're doing when they choose a career. So what should I base a career choice on?

Comment author: _ozymandias 29 December 2011 11:33:26PM 3 points [-]

Very few people know what career they want when they're seventeen. Of those who do, a significant proportion end up either doing a different job or displeased with their choice.

This is what I did; it may or may not work for you. Go to a college with a wide variety of class choices and highlight everything in the course book that looks interesting and that you have the prereqs for. Narrow it down to four or five classes by eliminating courses that occur in the same time block as another course you're more interested in, courses with dull or unintelligent teachers, or courses that come from disciplines you've already taken a lot of classes in. (Note: if you have general course requirements, take those courses.) That should give you some data to eliminate majors you're absolutely not interested in; for the rest, assuming you have not gotten an all-consuming obsession with one particular field, look at the BLS statistics to see which one has the best overall job outcomes (income, hours worked, unemployment risk, etc) and major in that one.

General warnings: unlike most people here, I am not a STEM major; my experience applies strictly to the social sciences and the humanities. I also have not attempted to get a job in this economy, so take my advice with a grain of salt.

Comment author: mwengler 29 December 2011 04:50:14PM 0 points [-]

Of course, all this is purely speculative. And the causation might go the other way: instead of adopting a high-cost idea signalling one's membership in the group, it might be that high-cost ideas tend to create groups, because low-cost ideas tend to be adopted by large numbers of people.

My thinking is that the discussion of high-cost ideas being dopey and primarily for signalling membership in a group is only partially correct: it's only part of the story. In the case of physics, engineering, the more applied parts of math and computer science, and probably many forms of understanding of management, politics, and "social engineering," these high-cost ideas have high benefit in terms of what you can manage to do.

Also, I would imagine the causation does go both ways, what with these being naturalistic systems. Nature has never been shy about exploiting valuable causalities just to keep the story simple, it seems to me.

In general, I think a lot of the signalling arguments tend to overstate things, staring so excitedly at the secondary effects of group cohesion and definition that they miss the intrinsic value many of these signals have. If spending 7 years getting a PhD in physics (I enjoyed myself, I wasn't in a rush, that's my story and I'm sticking to it) is signalling my membership in a group I very much want to be in, it has also created in me a bunch of very valuable capabilities in terms of mastering the physical world around me and mastering the intellectual (social, political) world around me in certain narrow ways. I guess I feel as though the REASON I want to be in this group is that the people in this group can do stuff I want to be able to do. That is, I'm impressed by their wizards and want to learn some of their magick.

See what I mean? The religious jargon of signalling and membership seems one way when you are talking about something that you think is BS, but an entirely different way when talking about something that you "believe in." But it is the same human stuff. It's a tool that we benefit from using every bit as much as do the people in other groups. Indeed, if we are to "win", we had better be benefitting from it more than they are.

Comment author: _ozymandias 29 December 2011 07:06:06PM 0 points [-]

I'd suggest that high-cost ideas are generally high-benefit, or at least high-apparent-benefit (see: love-bombing in cults), in order to incentivize people to believe them.

I definitely think it's important to recognize that almost all group beliefs are both signalling and something that people actually believe and that has effects on their life. The PhD's role as a signal of membership in the Physicist Conspiracy doesn't conflict with the PhD's role of learning interesting things about physics; in fact, they're complementary. (However, it's certainly possible to imagine someone who can signal "being a physicist" without having learned interesting things about physics (fake PhD) or vice versa (extremely skilled autodidact), which is why I think they're probably two separate but related functions.)
