After reading the nextbigfuture article and hackernews thread, I still don't understand what this project plans to do.
They appear to be aiming for whole brain emulation, trying to scale up previous efforts that simulated a rat neocortical column.
Here's another interim report on the longitudinal effects of CR on rhesus monkeys, this one a bit more recent (2009) than the one linked in the OP. From the abstract:
We report findings of a 20-year longitudinal adult-onset CR study in rhesus monkeys aimed at filling this critical gap in aging research. In a population of rhesus macaques maintained at the Wisconsin National Primate Research Center, moderate CR lowered the incidence of aging-related deaths. At the time point reported 50% of control fed animals survived compared with 80% survival of CR animals. Further, CR delayed the onset of age-associated pathologies. Specifically, CR reduced the incidence of diabetes, cancer, cardiovascular disease, and brain atrophy. These data demonstrate that CR slows aging in a primate species.
If I were to say, "Evolution is the idea that men are descended from chimpanzees," would you let me have my definition or would you say I was confused?
Have you read A Human's Guide to Words? You seem to be confused about how words work.
Looking back at your posts in this sequence so far, it seems like it's taken you four posts to say "Philosophers are confused about meta-ethics, often because they spend a lot of time disputing definitions." I guess they've been well-sourced, which is worth something. But it seems like we're still waiting on substantial new insights about metaethics, sadly.
Post any "meta" (i.e. anything that's not "I want to save the world") under here to keep things tidy. Thanks.
"Save the world" has icky connotations for me. I also suspect that it's too vague for there to be much benefit to people announcing that they would like to do so. Better to discuss concrete problems, and then ask who is interested/concerned with those problems and who would like to try to work on them.
You are clearly not capable of thinking rationally with respect to a fundamental belief where evidence makes the question overdetermined. Why should I listen to you?
People who hold obviously incorrect beliefs can still be highly intelligent and productive:
- Peter Duesberg (a professor of molecular and cell biology at the University of California, Berkeley) "claimed that AIDS is not caused by HIV, which made him so unpopular that his colleagues and others have — until recently — been ignoring his potentially breakthrough work on the causes of cancer."
- Francisco J. Ayala, who "has been called the 'Renaissance Man of Evolutionary Biology'", is a geneticist ordained as a Dominican priest. His "discoveries have opened up new approaches to the prevention and treatment of diseases that affect hundreds of millions of individuals worldwide..."
- Francis Collins (geneticist), noted for his landmark discoveries of disease genes and his leadership of the Human Genome Project (HGP) and described by the Endocrine Society as "one of the most accomplished scientists of our time", is an evangelical Christian.
- Georges Lemaître (a Belgian Roman Catholic priest) proposed what became known as the Big Bang theory of the origin of the Universe.
- Kurt Gödel (logician, mathematician and philosopher) suffered from paranoia and believed in the supernatural: "Gödel, by contrast, had a tendency toward paranoia. He believed in ghosts; he had a morbid dread of being poisoned by refrigerator gases; he refused to go out when certain distinguished mathematicians were in town, apparently out of concern that they might try to kill him."
There are many more examples. All of them are outliers, to be sure, and I don't think that calcsam has proven that his achievements and general capability to think clearly in some fields outweigh the heavy burden of being religious. Yet there is evidence that such people do exist, and he offers you the chance to challenge him.
Generally I agree with you, but I also think that calcsam provides a fascinating example of the internal dichotomy of some human minds, and a case study that might provide insight into how the arguments employed by Less Wrong fail in some cases.
Good reminder that reversed stupidity is not intelligence.
Adding to the list: Hans Berger invented the EEG while trying to investigate telepathy, which he was convinced was real. Even fools can make important discoveries.
I have a LOT to say on this topic (as in sequence-of-front-page-posts-quantity); unfortunately I can't exactly say it right now because I'm at a conference this week.
For the moment, I'll just send out a general warning that the temptation to engage in fake explanations or greedy reductionism seems to be nigh-irresistible in this domain (at least among those who don't opt for outright mysterianism).
In particular, be extremely cautious about trying to do something like this without having studied music (to the point where e.g. you've at least heard of Schenker). Otherwise, chances are you simply won't have a rich enough concept-inventory to capture the subtleties involved.
In general, remember that value is complex.
One thing I didn't see you mention is neuroscience. My understanding is that some AGI researchers are currently taking this route; e.g. Shane Legg, mentioned in another comment, is an AGI researcher who is currently studying theoretical neuroscience with Peter Dayan. Demis Hassabis is another person interested in AGI who's taking the neuroscience route (see his talk on this subject from the most recent Singularity Summit). I'm personally interested in FAI, and I suspect that we need to study the brain to understand in more detail the nature of human preference. In terms of a career path, it's possible I'll go to graduate school at some point in the future, but my current plans are to just get a programming job and study neuroscience in my free time.
Have you given any thought to just taking the day-job route? There are some problems, as I've found more than a few journal articles locked behind a paywall, but there are ways of dealing with this. Furthermore, I've found that a surprising number of recent neuro articles are available through open-access journals like PNAS and Frontiers, and through other routes (Google, Google Scholar, CiteSeerX, author websites). If you're interested more in CS research, then I suspect you'll have even less trouble; for some reason recent CS papers seem to almost always be available over the internet.
It says "bad argument", not "bad person shooting at you". Self-defence (or defence of one's family, country, world, whatever) is perfectly acceptable - initiation of violence never is. It's never right to throw the first punch, but it can be right to throw the last.
What about in the case where the first punch constitutes total devastation, and there is no last punch? I.e. the creation of unfriendly AI. It would seem preferable to initiate aggression instead of adhering to "you should never throw the first punch" and subsequently dying/losing the future.
Edit: In concert with this comment here, I should make it clear that this comment is purely concerned with a hypothetical situation, and that I definitely do not advocate killing any AGI researchers.
That's from the document where Yudkowsky described his "transfer of allegiance".
What puzzles me is how the outfit gets any support. I mean, they are a secretive, closed-source machine intelligence outfit that makes no secret of their plan to take over the world. To me, that is like writing BAD GUY in big, black letters on your forehead.
The "He-he - let's construct machine intelligence in our basement" attitude is like something out of Tintin.
Maybe the way to understand the phenomenon is as a personality cult.
What. That quote seems to be directly at odds with the entire idea of "Friendly AI". And of course it is, as a later version of Eliezer repudiated it:
I'm also not sure it makes sense to call SIAI a "closed-source" machine intelligence outfit, given that I'm pretty sure there's no code yet.