Charles Stross: Three arguments against the singularity

10 ciphergoth 22 June 2011 09:52AM

I periodically get email from folks who, having read "Accelerando", assume I am some kind of fire-breathing extropian zealot who believes in the imminence of the singularity, the uploading of the libertarians, and the rapture of the nerds. I find this mildly distressing, and so I think it's time to set the record straight and say what I really think.

Short version: Santa Claus doesn't exist.

- Charles Stross, Three arguments against the singularity, 2011-06-22

EDITED TO ADD: don't get your hopes up, this is pretty weak stuff.

London meetup, Sunday 2011-05-15 14:00, near London Bridge

2 ciphergoth 13 May 2011 08:54PM

GiveWell.org interviews SIAI

28 ciphergoth 05 May 2011 04:29PM

Holden Karnofsky of GiveWell.org interviewed Jasen Murray of SIAI and published his notes (Edit: PDF, thanks lukeprog!), with updates from later conversations. Lots of stuff to take an interest in there - thanks to jsalvatier for drawing our attention to it. One new bit of information stands out in particular:

  • Michael Vassar is working on an idea he calls the "Persistent Problems Group" or PPG. The idea is to assemble a blue-ribbon panel of recognizable experts to make sense of the academic literature on very applicable, popular, but poorly understood topics such as diet/nutrition. This would have obvious benefits for helping people understand what the literature has and hasn't established on important topics; it would also be a demonstration that there is such a thing as "skill at making sense of the world."

Reminder: London meetup, Sunday 2pm, near Holborn

4 ciphergoth 28 April 2011 09:26AM

London meetup, Sunday 1 May, 2pm, near Holborn

2 ciphergoth 03 April 2011 09:47AM

London meetup, Sunday 2011-03-06 14:00, near Holborn (reminder)

5 ciphergoth 26 February 2011 08:10AM

Reminder: we're meeting up in London next weekend. Sunday 6th March, at 2pm, in the Shakespeare's Head (official page) on Kingsway near Holborn Tube station.  We'll have a big picture of a paperclip on the table so you can find us; I look like this.  Hope to see lots of you there!

Open Thread, January 2011

4 ciphergoth 10 January 2011 11:14AM

Better late than never, a new open thread. Even with the discussion section, there are ideas or questions too short or inchoate to be worth a post.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

London meetup, Shakespeare's Head, Sunday 2011-03-06 14:00

5 ciphergoth 09 January 2011 03:43PM

Our last London meetup was a fantastic success! We're doing it again, same time same place.  The time: Sunday 6th March, at 2pm.  The place: the Shakespeare's Head (official page) on Kingsway near Holborn Tube station.  As before, we'll have a big picture of a paperclip on the table so you can find us; also, I look like this.  I'm hoping that we can graduate to meeting up every other month, on the first Sunday. Hope to see lots of you there!

Weird characters in the Sequences

5 ciphergoth 18 November 2010 08:27AM

When the sequences were copied from Overcoming Bias to Less Wrong, it looks like something went very wrong with the character encoding.  I found the following sequences of HTML entities in words in the sequences:

’ê d?tre

Å« M?lamadhyamaka

ĂŚ Ph?drus

— arbitrator?i window?and

ĂŞ b?te m?me

… over?and

รก H?jek

ĂƒÂź G?nther

ĂŠ fianc?e proteg?s d?formation d?colletage am?ricaine d?sir

ĂƒÂŻ na?ve na?vely

ō sh?nen

ö Schr?dinger L?b

ยง ?ion

ĂƒÂś Schr?dinger H?lldobler

Ăź D?sseldorf G?nther

– ? Church? miracles?in Church?Turing

’ doesn?t he?s what?s let?s twin?s aren?t I?ll they?d ?s you?ve else?s EY?s Whate?er punish?d There?s Caledonian?s isn?t harm?s attack?d I?m that?s Google?s arguer?s Pascal?s don?t shouldn?t can?t form?d controll?d Schiller?s object?s They?re whatever?s everybody?s That?s Tetlock?s S?il it?s one?s didn?t Don?t Aslan?s we?ve We?ve Superman?s clamour?d America?s Everybody?s people?s you?d It?s state?s Harvey?s Let?s there?s Einstein?s won?t

ĂĄ Alm?si Zolt?n

ĂŤ pre?mpting re?valuate

≠ ?

è l?se m?ne accurs?d

รฐ Ver?andi

→ high?low low?high

’ doesn?t

ā k?rik Siddh?rtha

รถ Sj?berg G?delian L?b Schr?dinger G?gel G?del co?rdinate W?hler K?nigsberg P?lzl

ĂŻ na?vet

  I?understood ? I?was

Ăś Schr?dinger

ĂŽ pla?t

úñ N?ez

Ĺ‚ Ceg?owski

— PEOPLE?and smarter?supporting to?at problem?and probability?then valid?to opportunity?of time?in true?I view?wishing Kyi?and ones?such crudely?model stupid?which that?larger aside?from Ironically?but intelligence?such flower?but medicine?as

‐ side?effect galactic?scale

´ can?t Biko?s aren?t you?de didn?t don?t it?s

≠ P?NP

窶馬 basically?ot

Ĺ‘ Erd?s

Now, an example like "ö Schr?dinger L?b" I can decode: "C3 B6" is the byte sequence for the UTF-8 encoding of "U+00F6 ö LATIN SMALL LETTER O WITH DIAERESIS".  But "úñ" is not a valid UTF-8 sequence - and those that contain entities larger than 255 are very mysterious.  Anyone able to make any guesses?
EDIT: รถ translated into Windows codepage 874 is C3 B6!
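The decoding trick above can be reproduced mechanically. The sketch below (a minimal illustration, not part of the original post; the function names `garble` and `ungarble` are mine) round-trips the mojibake: a character's UTF-8 bytes are mis-decoded with a single-byte legacy codec, producing the garbled pairs in the list, and the damage can be reversed by re-encoding with the wrong codec and decoding as UTF-8:

```python
# Mojibake round-trip: UTF-8 bytes mis-decoded under a legacy codec.
def garble(text: str, wrong_codec: str) -> str:
    """Encode as UTF-8, then decode the bytes with the wrong codec."""
    return text.encode("utf-8").decode(wrong_codec)

def ungarble(garbled: str, wrong_codec: str) -> str:
    """Reverse the damage: re-encode with the wrong codec, decode as UTF-8."""
    return garbled.encode(wrong_codec).decode("utf-8")

print(garble("ö", "latin-1"))   # Ã¶  -- the decodable Latin-1 case
print(garble("ö", "cp874"))     # รถ  -- the Thai-looking case from the EDIT
print(ungarble("รถ", "cp874"))  # ö
```

This confirms the EDIT: the UTF-8 bytes C3 B6 for "ö", read through Windows codepage 874 (Thai), come out as "รถ".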

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It)

32 ciphergoth 30 October 2010 09:31AM

[...] SIAI's Scary Idea goes way beyond the mere statement that there are risks as well as benefits associated with advanced AGI, and that AGI is a potential existential risk.

[...] Although an intense interest in rationalism is one of the hallmarks of the SIAI community, still I have not yet seen a clear logical argument for the Scary Idea laid out anywhere. (If I'm wrong, please send me the link, and I'll revise this post accordingly. Be aware that I've already at least skimmed everything Eliezer Yudkowsky has written on related topics.)

So if one wants a clear argument for the Scary Idea, one basically has to construct it oneself.

[...] If you put the above points all together, you come up with a heuristic argument for the Scary Idea. Roughly, the argument goes something like: If someone builds an advanced AGI without a provably Friendly architecture, probably it will have a hard takeoff, and then probably this will lead to a superhuman AGI system with an architecture drawn from the vast majority of mind-architectures that are not sufficiently harmonious with the complex, fragile human value system to make humans happy and keep humans around.

The line of argument makes sense, if you accept the premises.

But, I don't.

Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It), October 29 2010. Thanks to XiXiDu for the pointer.
