Comment author: Desrtopa 27 October 2011 09:13:27PM 2 points [-]

Well, I can affirm that I, at least, don't have the same reaction. Also I use italics similarly myself. I treat it as a mark of emphasis by the authorial voice. I don't mind having a textual marker to show me where the emphasis is, preventing me from having to intuit it, any more than I mind hearing people's emphasis when they talk, rather than having to intuit it as I would if they were using a voice synthesizer. The idea of finding it off-putting is weird to me.

Comment author: lythrum 27 October 2011 09:19:03PM 0 points [-]

Yes, I'm aware it's not universal. I can't really explain why it's so bothersome - it's similar to occasional words being in a bright colour, or someone poking me every so often while I'm trying to read. It's probably a pity because, combined with my laziness, it means I just avoid reading anything with lots of italics. If I'm feeling particularly motivated I'll modify the text to remove all the italics before reading it.

Comment author: antigonus 25 October 2011 02:52:22PM 4 points [-]

Avoid overuse of italics. Try to write so that the reader can intuit where the emphasis goes.

Comment author: lythrum 27 October 2011 09:05:59PM 1 point [-]

I strongly agree. I find Eliezer's italics so off-putting that I avoid reading his writing in formatted text. I don't know why, and I'm sure not everyone has the same reaction, but excess italics just make me twitch.

Comment author: lythrum 24 September 2011 12:10:48AM *  8 points [-]

I am not from Mudd, but I went to a talk by Maria Klawe on this very topic. I feel suddenly potentially useful. Warning: this is all only from memory. I thought I had the slides somewhere, but cannot find them. I'll email Maria, and if I hear back from her, I'll pass it on.

First off, here's the abstract for her talk:

Begin abstract

In 2006, much like at many other institutions, about 10% of HMC’s CS majors were female. At that time only a third of HMC’s students were female, but CS was an aberration. About 20% of the Physics majors and close to 30% of the engineering majors were female. Four years later 42% of HMC’s CS majors were female, exactly the same percentage as the whole HMC student body. This talk describes how the CS department accomplished this change.

End abstract

She emphasised that they tried to change how their first year program was run, with more cooperative group work. She was big on how they did a lot of work to try and get rid of the "macho" attitude among their undergrads in early courses - by "macho" she seemed to mean some sort of arrogant hackerish programmer attitude. She mentioned a bunch of mentorship programs for female undergrads, and programs to help undergrads get to conferences like Grace Hopper.

But! A couple friends and I were bothered by something else she said they did: they changed their undergrad admissions so as to admit more women to computer science in first year. Because they are a small, elite college they were able to do this without affecting the quality of their students, she felt. I thought that probably this was what made most of the difference, but that's only my opinion.

Comment author: James_Miller 19 September 2011 02:45:38PM *  6 points [-]

What are the main objections to the likelihood of the Singularity occurring? What actions might people take to stop a Singularity from occurring? How will competition among businesses and governments impact the amount of care taken with respect to friendly AI? What's the difference between AI and AGI? What organizations are working on the friendly AI problem? How will expectations of friendly/unfriendly AI impact the amount of resources devoted to AI? (For instance, if financial markets expect utopia to arrive in the next decade, savings rates will fall, which will slow tech research.) What are AGI researchers' estimates for when/if AGI will happen, and the likelihood of it being "good" if it does occur? Why do most computer engineers dismiss the possibility of AGI? What is the track record of AGI predictions? Who has donated significant sums to friendly AI research? How much money is being spent on AGI/friendly AGI research?

Comment author: lythrum 20 September 2011 05:32:53AM 4 points [-]

As an academicish person, I suggest a few questions that bothered me at first: Why aren't more artificial intelligence research groups at universities working on FAI? Why doesn't the Singularity Institute publish all of its literature reviews and other work?

In response to Moral enhancement
Comment author: lythrum 20 September 2011 05:10:35AM 1 point [-]

I'm curious: if you're a person interested in "benevolence training", why do you want to have more benevolence or empathy for others? I generally want to be less empathetic, and I'd love to be convinced that I'm wrong.

Comment author: [deleted] 18 July 2011 01:14:05AM 25 points [-]

I wonder:

if you had an agent that obviously did have goals (let's say, a player in a game, whose goal is to win, and who plays the optimal strategy) could you deduce those goals from behavior alone?

Let's say you're studying the game of Connect Four, but you have no idea what constitutes "winning" or "losing." You watch enough games that you can map out a game tree. In state X of the world, a player chooses option A over other possible options, and so on. From that game tree, can you deduce that the goal of the game was to get four pieces in a row?

I don't know the answer to this question. But it seems important. If it's possible to identify, given a set of behaviors, what goal they're aimed at, then we can test behaviors (human, animal, algorithmic) for hidden goals. If it's not possible, that's very important as well; because that means that even in a simple game, where we know by construction that the players are "rational" goal-maximizing agents, we can't detect what their goals are from their behavior.

That would mean that behaviors that "seem" goal-less, programs that have no line of code representing a goal, may in fact be behaving in a way that corresponds to maximizing the likelihood of some event; we just can't deduce what that "goal" is. In other words, it's not as simple as saying "That program doesn't have a line of code representing a goal." Its behavior may encode a goal indirectly. Detecting such goals seems like a problem we would really want to solve.

In response to comment by [deleted] on Secrets of the eliminati
Comment author: lythrum 18 July 2011 11:40:07PM 3 points [-]

If you had lots of end states and lots of non-end states, and were willing to assume both that the game ends when someone has won and that a player only moves into an end state if he's won (neither of which is necessarily true, even in nice pretty games), then you could treat it as a classification problem. In that case, you could throw your favourite classifier learning algorithm at it. I can't think of any publications on machine-learning a winning condition, but that doesn't mean they're not out there.
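The classification idea above can be sketched in miniature. This is a hypothetical toy of my own construction, not from any publication: a one-dimensional "connect four" on a length-6 board, where we recover the hidden winning condition by discarding candidate hypotheses that disagree with observed, labelled end states. (Hypothesis elimination stands in here for a full classifier learning algorithm.)

```python
def run_of(board, player, k):
    """True if `board` (a string like 'XX.OXX') contains a run of k `player` pieces."""
    return player * k in board

# Candidate hypotheses for the unknown winning condition (a made-up toy set).
hypotheses = {
    "three in a row": lambda b: run_of(b, "X", 3),
    "four in a row":  lambda b: run_of(b, "X", 4),
    "five in a row":  lambda b: run_of(b, "X", 5),
}

def consistent_hypotheses(labelled_states):
    """Keep only the hypotheses that agree with every (board, X-won?) example."""
    return [name for name, h in hypotheses.items()
            if all(h(board) == won for board, won in labelled_states)]

# Labelled terminal states, as if observed from game logs;
# the hidden rule is "four X's in a row" on a length-6 board.
observations = [
    ("XXXXO.", True),
    ("XXX.O.", False),
    ("OXXXXO", True),
    ("XX.XXO", False),
    ("OXXXXX", True),
]

print(consistent_hypotheses(observations))  # → ['four in a row']
```

With enough labelled end states, only the true winning condition survives; with too few, several hypotheses remain consistent, which is exactly the sense in which the goal can be underdetermined by behaviour.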

Dr. David Silver used temporal difference learning, via self-play, to learn some important spatial patterns for Go. Self-play is basically watching yourself play lots of games against another copy of yourself, so I can imagine similar ideas being applied to watching someone else play. If you're interested in that, I suggest http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-170.pdf
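To show the flavour of the temporal difference updates involved, here is a minimal tabular TD(0) sketch on the classic five-state random walk from Sutton and Barto's textbook - an illustrative stand-in, not Silver's actual Go setup. Episodes start in the middle state; falling off the left edge yields reward 0, off the right edge reward 1, and the true state values are 1/6 through 5/6.

```python
import random

def td0_random_walk(episodes=5000, alpha=0.1, seed=0):
    """Tabular TD(0) value estimation on the 5-state random walk (gamma = 1)."""
    rng = random.Random(seed)
    V = {s: 0.5 for s in range(1, 6)}  # initial value estimates for states 1..5
    for _ in range(episodes):
        s = 3  # every episode starts in the middle state
        while True:
            s_next = s + rng.choice([-1, 1])
            if s_next == 0:    # fell off the left edge: terminal reward 0
                V[s] += alpha * (0.0 - V[s])
                break
            if s_next == 6:    # fell off the right edge: terminal reward 1
                V[s] += alpha * (1.0 - V[s])
                break
            V[s] += alpha * (V[s_next] - V[s])  # TD(0) update toward next state's value
            s = s_next
    return V

values = td0_random_walk()
print({s: round(v, 2) for s, v in values.items()})
```

After a few thousand episodes the estimates approach the true values 1/6, 2/6, ..., 5/6, with the value rising as you get closer to the rewarding edge. The same bootstrapping idea, with function approximation over board patterns instead of a value table, is what Silver's paper applies to Go.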

On a sadly unpublished (and therefore mostly unreliable) but slightly more related note, we did have a project once in which we were trying to teach bots to play a Mortal Kombat-style game only by observing logs of human play. We didn't tell one of the bots the goal; we just told it when someone had won, and who had won. It seemed to get along ok.

Comment author: Nornagest 18 July 2011 09:04:54PM *  6 points [-]

Reaction to Methods seems highly polarized: almost every review of it I've seen either falls over itself to gush or sees it as pretentious and self-indulgent. Age and gender seem to matter less, by that stage, than contrarian tendencies and tolerance for what tvtropes calls an author tract, but the demographics of fanfiction readers are weighted heavily towards people in their teens and twenties already, so samples of older readers are small. The particular characteristics of Methods do probably push it towards the older end of the scale.

Since that's more or less the demographic that LW attracts already, I'd say that Methods, and the rational fic meme more generally, are effective as advertising but ineffective in broadening the site's appeal.

Comment author: lythrum 18 July 2011 11:00:20PM 2 points [-]

Your first paragraph rings true to me: the complaints I've heard are basically those you mentioned.

My friends are mostly fairly contrarian late-twenties male engineering, computing science and math people. I think that apart from not enjoying Methods, they're pretty much the usual LW demographic. That's part of the reason I was surprised when they didn't like Methods. There are lots of possible reasons for this (to me) surprising result. Maybe they thought I didn't like it, and wanted to mirror that back. Maybe they're a group already biased against LW. Maybe they actually just dislike the writing style. Who knows? If they don't enjoy Eliezer's writing style, then maybe LW is not a good place for them to hang out, so it doesn't matter that it didn't work as advertising on them.

Do you think that LW doesn't need other methods of marketing?

Comment author: Raemon 15 July 2011 10:02:10PM 4 points [-]

The goal here isn't to spread the rationality meme - it's to figure out whether "rationality" is a good word to use to describe the set of ideas contained within the rationality meme.

Methods of Rationality and the Game of Life are only interesting to a narrow portion of the population. My dad's a rational guy and I thought he'd like Methods of Rationality - he hated it. The people who've liked it that I've shown it to are almost exactly in my demographic - 20-something males who are already nerdy and geeky and have a similar sense of humor to mine.

I think Rationality is important enough that we should not be limiting ourselves to that demographic. At the very least it warrants our consideration.

I'm thinking of doing a followup post that takes a step back and talks about the questions I set aside in my first paragraph, to discuss the overall problem more thoroughly before getting too attached to particular solutions.

Comment author: lythrum 18 July 2011 08:19:30PM 1 point [-]

I've had mostly negative reactions to Methods of Rationality from 20-something males (and a few females) who are nerdy and geeky and mostly already like GEB, so I agree that this community needs other methods of marketing.