
Comment author: David_Gerard 19 July 2014 05:36:41PM *  0 points [-]

About a hundred AIDS researchers. Shooting that plane down may kill millions.

Comment author: Adele_L 19 July 2014 09:03:29PM 13 points [-]

That number sounded suspicious to me when I first heard it, and it turns out that according to the International AIDS Society president, it was more like six. Still a terrible loss.

Meetup : MIRIxAtlanta: 19 July 2014

1 Adele_L 18 July 2014 04:04AM

Discussion article for the meetup : MIRIxAtlanta: 19 July 2014

WHEN: 19 July 2014 06:00:00PM (-0400)

WHERE: 2388 Lawrenceville Hwy. Unit L, Decatur, GA 30033

We'll be having the first MIRIx in Atlanta - please come if you are interested in math, programming, or AI. One research program at MIRI is to develop better theoretical foundations for making decisions under uncertainty. We'll look at some recent progress in this field, discuss open problems, and explore possible research directions.

Recommended Reading: Summary of problems: http://intelligence.org/research/#decision Robust Cooperation: http://arxiv.org/abs/1401.5577

P.S. There will be snacks!


Comment author: bramflakes 11 July 2014 02:41:24PM 8 points [-]

Or how a label like 'human biological sex' is treated as if it is a true binary distinction that carves reality at the joints and exerts magical causal power over the characteristics of humans, when it is really a fuzzy dividing 'line' in the space of possible or actual humans, the validity of which can only be granted by how well it summarises the characteristics.

I don't see how sex doesn't carve reality at the joints. In the space of actually really-existing humans it's a pretty sharp boundary and summarizes a lot of characteristics extremely well. It might not do so well in the space of possible humans, but why does that matter? The process by which possible humans become instantiated isn't manna from heaven - it has a causal structure that depends on the existence of sex.

Comment author: Adele_L 11 July 2014 04:02:14PM 16 points [-]

I agree it is a pretty sharp boundary, for all the obvious evolutionary reasons - nevertheless, there are a significant number of actual really-existing humans who are intersex/transgender. This is also not too surprising, given that evolution is a messy process. In addition to the causal structure of sexual selection and the evolution of humans, there are also causal structures in how sex is implemented, and in some cases, it can be useful to distinguish based on these instead.

For example, you could distinguish between karyotype (XX, XY, but also XYY, XXY, XXX, X0, and several others), genotype (e.g. mutations on SRY or AR genes), and phenotypes, like reproductive organs, hormonal levels, various secondary sexual characteristics (e.g. breasts, skin texture, bone density, facial structure, fat distribution, digit ratio), mental/personality differences (like sexuality, dominance, spatial orientation reasoning, nurturing personality, grey/white matter ratio, risk aversion), etc.

Comment author: XiXiDu 10 July 2014 08:29:38AM 0 points [-]

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "very extreme".

By "very extreme" I was referring to the part where he claims that this will happen "moments later".

Comment author: Adele_L 10 July 2014 11:21:06AM 3 points [-]

Yes, that was clear. My point is that it isn't extreme under the mild assumption that the AI has prepared for such an event beforehand.

Comment author: TheAncientGeek 08 July 2014 10:51:16AM *  2 points [-]

contrarians

How much of EY's material has been retracted or amended under critique? AFAICT, the answer is none.

Comment author: Adele_L 09 July 2014 10:15:15PM 0 points [-]

IIRC, he retracted one of his earlier articles on gender because he doesn't agree with it anymore.

Comment author: paper-machine 09 July 2014 12:38:04PM *  0 points [-]

Now that I'm on the job market, I'm considering changing my gmail address, but I'm having trouble deciding between the alternatives.

My current address (created in '05 or so) consists of two words. This has the advantage of being easy to say, but the second word is a bit long and I feel slightly silly writing it on a CV.

On the other hand, it's 2014 and almost every reasonable gmail address has already been taken. The exceptions in my case are a slightly l33t version of my name, a version of my name with vowels removed, and my name followed by a random number.

So, LW, which of the following do you feel is the most useful email address?

I don't use G+ anymore, so I'm ignoring various social costs associated to changing my Google account. If you think of a better alternative, go ahead and list it in the comments.


Comment author: Adele_L 09 July 2014 09:37:53PM 5 points [-]

A good alternative might be to buy your own domain name (only around $20 a year), and put up a small personal site. You can then have your email address get redirected to your normal gmail one (and with gmail, it's easy to have it send messages from your new address also). This may also look more impressive on a CV since it signals some level of technical competence. Of course, you still have to choose a domain name, but it gives you a bit more flexibility.

For example, I have the address adele@<lastname>.org which redirects to my gmail account I've used for years.

Comment author: XiXiDu 09 July 2014 06:51:20PM *  2 points [-]

...to hear that 10% - of fairly general populations which aren't selected for Singulitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.

In the paper human-level AI was defined as follows:

“Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.”

Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...

Fast takeoff / intelligence explosion has always seemed to me to be the most controversial premise, the one most people object to, and the one that most consigned SIAI/MIRI to being viewed as cranks;

What sounds crankish is not that a human level AI might reach a superhuman level within 2 years, but the following. In Yudkowsky's own words (emphasis mine):

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM". Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades;

These kinds of very extreme views are what I have a real problem with. And just to substantiate "extreme views", here is Luke Muehlhauser:

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

Comment author: Adele_L 09 July 2014 09:24:11PM 5 points [-]

It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.

It's not like it's that hard to hack into servers and run your own computations on them through the internet. Assuming the superintelligence knows enough about the internet to design something like this beforehand (likely since it runs in a server cluster), it seems like the limiting factor here would be bandwidth.

I imagine a highly intelligent human trapped in this sort of situation, with similar prior knowledge and resource access, could build a botnet in a few months. Working on it at full capacity, non-stop, could bring this down to a few weeks, and it seems plausible to me that with increased intelligence and processing speed, it could build one in a few moments. And of course, with access to its own source code, it would be trivial to have it run more copies of itself on the botnet.
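To illustrate why bandwidth would be the limiting factor, here is a back-of-envelope sketch; every figure in it is an assumption for illustration, not a claim from the thread:

```python
# Back-of-envelope: how long would it take an AI to copy itself out of
# a server cluster? Both numbers below are illustrative assumptions.

copy_size_gb = 500     # assumed size of the AI's code and data, in GB
uplink_gbps = 1.0      # assumed sustained outbound bandwidth, in Gbit/s

# 1 GB = 8 gigabits, so the transfer time in seconds is:
transfer_seconds = copy_size_gb * 8 / uplink_gbps
transfer_minutes = transfer_seconds / 60

print(transfer_minutes)  # roughly an hour under these assumptions
```

Under these (made-up) numbers the copy takes about an hour, not moments; the timescale is set almost entirely by the uplink, which is the sense in which bandwidth dominates.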

Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "very extreme".

Comment author: shminux 28 June 2014 11:43:35PM 0 points [-]

True, but at this point the nascent FAI will do the correct utility calculation and decide what to do.

Comment author: Adele_L 02 July 2014 04:07:28PM 0 points [-]

Ah yes, that is a good point - assuming the FAI has access to the same ability (the window might be very narrow, for example).

Comment author: selylindi 02 July 2014 03:23:06PM *  2 points [-]

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days? Changes in karma to old posts are no longer an especially useful signal to their author anyway. Such a ban could be a cheap way to mitigate downvote stalking without significantly impacting current discussions.

An attacker could still use multiple accounts to mass-downvote everything from a user in the past 30 days. On the other hand, it's possible that some users' comments were uniformly bad. For the purpose of providing a useful signal, I think we only need enough downvotes to go just a bit negative. People respond more strongly to loss than to gain, after all! The karma of a particular comment could be capped at no worse than, say, -3, regardless of how many downvotes it received. That would be a cheap way to reduce the possibility of malicious mass-downvoting.
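The proposed floor amounts to a one-line cap on displayed karma; a minimal sketch, assuming the -3 figure suggested above:

```python
def displayed_karma(raw_karma, floor=-3):
    """Cap how negative a comment's shown karma can go, so downvotes
    past the floor have no further visible effect."""
    return max(raw_karma, floor)

print(displayed_karma(-20))  # -3: heavy downvoting is capped at the floor
print(displayed_karma(2))    # 2: positive scores are unaffected
```

The raw vote count could still be stored internally; only the displayed number is clamped.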

Comment author: Adele_L 02 July 2014 04:03:44PM 11 points [-]

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days?

This is one of those little things I really like about LW; I would miss it if it was gone. The best content here is on posts that are years old, and discouraging discussion/engagement there would just make the current content problem worse.

The karma of a particular comment could be capped at no worse than, say, -3, regardless of how many downvotes it received. That would be a cheap way to reduce the possibility of malicious mass-downvoting.

This doesn't do anything to solve the problem of one mass-downvoter.

Comment author: shminux 27 June 2014 05:02:03PM *  0 points [-]

And the "final" timeline goes like

Receive strange flash-drive, plug into computer, FOOM.

Almost. The final send-back is unnecessary.

Also, nothing as conspicuous as winning multiple lotteries.

Comment author: Adele_L 28 June 2014 04:02:55PM 0 points [-]

Almost. The final send-back is unnecessary.

But assuming that it works right, you're gaining a five-year head start, which is very significant. For one, you could save all the people who would die in those five years, and also, you would probably be able to colonize more galaxies in the future, etc...
