Konkvistador comments on Open Thread, September, 2010-- part 2 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (858)
I don't know what to make of this:
Suicide note
Article
I've begun skimming a few of the chapters (the titles are nothing if not provocative). On the one hand, I am quite predisposed to view the entire work as mostly bunk, because manifestos of this nature often are. On the other hand, the idea of a philosopher driven to death by his learning is an archetype stimulating enough for me to explore this. And yes, I know that, considering he quotes:
It's certain he was playing on that.
I've decided to post this here for rationality detox so I don't pick up any craziness (I'd wager a high probability of there being some there).
He seems to have developed what he terms a sociobiological analysis of the history of liberal democracy, so far reminiscent in parts of Nietzsche's Genealogy of Morals. Judging by a few excerpts of the final chapter, this culminates in a kind of singularitarian view: the inevitability of human extinction at the hands of our self-created transhuman gods.
Mitchell Heisman starts off by saying
This is obviously false - it's up on the internet, it's gotten some press coverage, it quite obviously has not been repressed. But he is right that it won't be judged on its merits, because it's so long that reading it represents a major time commitment, and his suicide taints it with an air of craziness; together, these ensure that very few people will do more than lightly skim it.
The sad thing is, if this guy had simply talked to others as he went along - published his writing a chapter at a time on a blog, or something - he probably could've made a real contribution, with a real impact. Instead, he seems to have gone for 1904 pages with no feedback with which to correct misconceptions, and the result is that he went seriously off the rails.
I just skimmed a few random pages of the book, and ran into this stunning passage:
The small part of the book I've seen so far sounds lucid and shows no signs of craziness, and based on this passage, I would guess that there is a whole lot of interesting stuff in there. I'll try reading more as time permits.
I don't know how much detox this provides, but this blog has comments from three anonymous posters who claim to have known him.
From the document:
Later, elaborating:
I've stumbled upon some references to the ideas of Fukuyama and a Kurzweil reference, but had no idea he was familiar with Yudkowsky's work. Can you tell me from which page you got this?
Is it possible this guy was a poster here?
pp. 226 and 294-296 cover all specific name-drops of Yudkowsky.
He is definitely familiar with the idea of an AI Singularity. I came across the EY references while browsing, but can't find them again. 1900 pages!
Interesting stuff, though. Here are some extended quotes regarding Singularity issues:
From a section titled "The dark side of optimism and the bright side of pessimism":
I'm intrigued to find that there's a PDF viewer without a search function. :)
It is humorous in spots:
Oh, the Mac OS X "Preview" has search, but it didn't seem to work on documents this long. However, my revised hypothesis is that I didn't know how to spell Yudkowsky.
From the section "Does Logic Dictate that an Artificial Intelligence Requires a Religion?":
Interesting. But I note that there is nothing by Yudkowsky in the selected bibliography. I get the impression that his knowledge there is secondhand. Maybe if he'd read a bit about rationality, it could have pulled him back to reality. And maybe if he'd read a bit about what death really is, he wouldn't have taken a several-millennia-old, incorrect Socrates quote as justification for suicide.
The bits about synthetic intelligence mostly seem rather naive - and they seem out of place amidst the long rants about Jesus, Nazis and the Jews. However, a few things are expressed neatly. For example, I liked:
"When it dawns on the most farsighted people that this technology is the future and whoever builds the first AI could potentially determine the future of the human race, a fierce struggle to be first will obsess certain governments, individuals, businesses, organizations, and otherwise."
However, such statements really do need to be followed by noting that Google wasn't the first search engine, and that Windows wasn't the first operating system. Being first often helps - but it isn't everything.
This is precisely the wrong time to apply outside view thinking without considering the reasoning in depth. That isn't an appropriate reference class. The 'first takes all' reasoning you just finished quoting obviously doesn't apply to search engines. It wouldn't be a matter of "going on to say", it would be "forget this entirely and say..."
Computer software seems like an appropriate "reference class" for other computer software to me.
The basic idea is that developing toddler technologies can sometimes be overtaken by other toddlers that develop and mature faster.
Superficial similarities do scary things to people's brains.
I can see you think this is a bad analogy. What isn't so clear is why you think so.
Early attempts at machine intelligence included Eurisko and Deep Blue. It looks a lot as though being first is not everything in the field of machine intelligence either.
"This new car is built entirely out of radioactive metals and plastic explosives. Farsighted people have done some analysis of the structure and concluded that when the car has run at full speed for a short period of time the plastic explosives will ignite, driving key portions of the radioactive metal together such that it produces a nuclear explosion."
However, such statements really do need to be followed by noting that the Model T Ford was the overwhelmingly dominant car of its era and it never leveled an entire city, and Ferraris go really fast and even then they don't explode.
An AI capable of self-improvement has more in common with that idiotic nuclear-warhead transformer car than it does with MS Windows or Deep Blue. The part of the AI that farsighted people can see taking control of the future light cone is a part that is not present in, or even related to, internet searching or a desktop OS.
On a related note...
... You aren't allergic to peanuts I hope!
I think that boils down to saying machine intelligence could be different from existing programs in some large but unspecified way that could affect these first-mover dynamics.
I can't say I find that terribly convincing. If Google develops machine intelligence first, the analogy would be pretty convincing (it would be an exact isomorphism) - and that doesn't seem terribly unlikely.
It could be claimed that the period of early vulnerability shortens with the time dilation of internet time. On the other hand, the rate of innovation is also on internet time - effectively providing correspondingly more chances for competitors to get in on the action during the vulnerable period.
So, I expect a broadly similar first mover advantage effect to the one seen in the rest of the IT industry. That is large - but not necessarily decisive.
Recursive self-improvement instead of continued improvement by the same external agents. You (I infer from this context) have a fundamentally different understanding of how this difference would play out, but if nothing else the difference is specified.
If you mean to refer to the complete automation of all computer-programming-related tasks, then that would probably be a relatively late feature. There will be partial automation before that, much as we see today with refactoring, compilation, code generation, automated testing, lint tools - and so on.
My expectation is that humans will want code reviews for quite a while - so the elimination of the last human from the loop may take a long time. Some pretty sophisticated machine intelligence will likely exist before that happens - and that is mostly where I think there might be an interesting race - rather than one party pulling gradually ahead.
There could be races and competition in the machine world too. We don't yet know if there will be anti-trust organisations there - ones that deliberately act against monopolies. If so, there may be all manner of future races and competition between teams of intelligent machines.
The first thing to come to mind: it's hard for me to think of an action less rational than suicide, given this person's overall situation.
"Suicide? To tell you the truth, I disapprove of suicide more than anything."
-Vash the Stampede
Does anyone know where I might find a copy? suicidenote.info is down.
http://www.scribd.com/doc/38104189/Mitchell-Heisman-Suicide-Note
Thank you very much.