All of rwallace's Comments + Replies

What is being proposed is a small pause in progress in a particularly dangerous field.

There are no small pauses in progress. Laws, and the movements that drive them, are not lightbulbs to be turned on and off at the flick of a switch. You can stop progress, but then it stays stopped. The Qeng Ho fleets, for example, once discontinued, did not set sail again twenty years later, or two hundred years later.

There also tend not to be narrow halts in progress. In practice, a serious attempt to shut down progress in AI is going to shut down progress in computers... (read more)

Right, yes, I'm not suggesting the iterated coding activity can or should include 'build an actual full-blown superhuman AGI' as an iterated step.

Are you advocating as option A, 'deduce a full design by armchair thought before implementing anything'? The success probability of that isn't 1%. It's zero, to as many decimal places as makes no difference.

5SarahNibs
We're probably talking past each other. I'm saying "no you don't get to build lots of general AIs in the process of solving the alignment problem and still stay alive" and (I think) you're saying "no you don't get to solve the alignment problem without writing a ton of code, lots of it highly highly related to AI". I think both of those are true.

My argument is not that AI is the same activity as writing a compiler or a search engine or an accounts system, but that it is not an easier activity, so techniques that we know don't work for other kinds of software – like trying to deduce everything by armchair thought, verify after-the-fact the correctness of an arbitrarily inscrutable blob, or create the end product by throwing lots of computing power at a brute force search procedure – will not work for AI, either.

3Donald Hobson
I am not sure what you mean when you say these techniques "don't work". They all seem to be techniques that sometimes produce something, given sufficient resources. They all seem like techniques that have produced something. Researchers have unpicked and understood all sorts of hacker-written malware. The first computer program was written entirely by armchair thought, and programming with pencil and paper continues in some tech company interviews today. Brute force search can produce all sorts of things. In conventional programming, a technique that takes 2x as much programmer time is really bad. In ASI programming, a technique that takes 2x as much programmer time and has 1/2 the chance of destroying the world is pretty good.
3SarahNibs
There is a massive difference between a technique not working and a technique being way less likely to work.
A: 1% chance of working given that we get to complete it; doesn't kill everyone before completing.
B: 10% chance of working given that we get to complete it; 95% chance of killing everyone before completing.
You pick A here. You can't just ignore the "implement step produces disaster" bit. Maybe we're not in this situation (obviously it changes based on what the odds of each bit actually are), but you can't just assume we're not in this situation and say "Ah, well, B has a much higher chance of working than A, so that's all, we've gotta go with B".
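A quick back-of-the-envelope check of those two options (my arithmetic, just plugging in the figures SarahNibs quotes above):

```python
# A rough sketch (my arithmetic, not SarahNibs') plugging in the quoted figures.
p_complete_A = 1.00   # A: assumed not to kill everyone before completion
p_works_A    = 0.01   # 1% chance of working given completion

p_complete_B = 0.05   # B: 95% chance of killing everyone before completion
p_works_B    = 0.10   # 10% chance of working given completion

print(p_complete_A * p_works_A)  # 0.010 -- overall success probability of A
print(p_complete_B * p_works_B)  # 0.005 -- overall success probability of B, on top of the 95% disaster risk
```

On these numbers A has both the higher overall chance of working and the lower chance of disaster, which is the point being made.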
-2sp1ky
This post was ALL about rational debate. This is a highly calculated assessment of the fragility of Moore's Law. This is the kind of stuff government advisors would probably have figured out by now. If you say this helps terrorists (which is ironic, because the conclusion was that only aerial bombing can stop fabrication, and terrorists don't have access to that yet), well, this is also highly useful to anyone who wants to stop terrorists. The conclusion is highly interesting. If a war were to break out today between developed nations, taking out the other side's fabricators and nuclear capabilities would have to be the highest priorities.
5[anonymous]
Gwern wouldn't advocate terrorism as a solution; he already has argued that it's ineffective.
8gwern
This says more about your own beliefs and what you read into the post than anything I actually wrote. I am reminded of a quote from Aleister Crowley, who I imagine would know:
rwallace110

Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.

As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.

0Thrasymachus
Suppose old person and child (perhaps better: young adult) would both gain 2 years, so we equalize payoff. What then? Why not be prioritarian at the margin of aggregate indifference?

That would be cheap and simple, but wouldn't give a meaningful answer for high-cost bugs, which don't manifest in such small projects. Furthermore, with only eight people total, individual ability differences would overwhelmingly dominate all the other factors.

Sorry, I have long forgotten the relevant links.

We know that late detection is sometimes much more expensive, simply because, depending on the domain, some bugs can do harm (letting bad data into the database, making your customers' credit card numbers accessible to the Russian Mafia, delivering a satellite to the bottom of the Atlantic instead of into orbit) that costs far more than fixing the code itself. So it's clear that on average, cost does increase with time of detection. But are those high-profile disasters part of a smooth graph, or is it a step function where the cost of fixing the... (read more)

rwallace150

Because you couldn't. In the ancestral environment, there weren't any scientific journals where you could look up the original research. The only sources of knowledge were what you personally saw and what somebody told you. In the latter case, the informant could be bullshitting, but saying so might make enemies, so the optimal strategy would be to profess belief in what people told you unless they were already declared enemies, but base your actions primarily on your own experience, which is roughly what people actually do.

That's not many worlds, that's quantum immortality. It's true that the latter depends on the former (or would if there weren't other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.

True. The usual reply to that is "we need to reward the creators of information the same way we reward the creators of physical objects," and that was the position I had accepted until I recently realized: certainly we need to reward the creators of information, but not in the same way - by the same kind of mechanism - that we reward the creators of physical objects. (Probably not by coincidence, I grew up during the time of shrink-wrapped software, and only re-examined my position on this matter after that time had passed.)

4Mercy
Property laws aren't based on their owners having created them though. Ted Turner is not in the land reclamation business, and if I go down a disused quarry owned by another and build myself a table, I don't gain ownership of the marble. All defenses of actually existing property rights are answers to the question "how do we encourage people to manage resources sensibly".

To take my own field as an example, as one author remarked, "software is a service industry under the persistent delusion that it is a manufacturing industry." In truth, most software has always been paid for by people who had reason other than projected sale of licenses to want it to exist, but this was obscured for a couple of decades by shrinkwrap software, shipped on floppy disks or CDs, being the only part of the industry visible to the typical nonspecialist. But the age of shrinkwrap software is passing - outside entertainment, how often does the typical customer buy a program these days? - yet the industry is doing fine. We just don't need copyright law the way we thought we did.

2fubarobfusco
Well, a lot of "service" software that you interact with is running on someone else's computer. You could rip off the HTML and CSS of a search engine or a web store and not have anything particularly useful without the backend.
rwallace-10

We can't. We can only sensibly define them in the physical universe which is based on matter, with its limitations of "only in one place at a time" and "wears out with use" that make exclusive ownership necessary in the first place. If we ever find a way to transcend the limits of matter, we can happily discard the notion of property altogether.

I took the post to be asking for opinions sufficiently far outside the mainstream to be rarely discussed even here, and I haven't seen a significant amount of discussion of this one. Then again, that could be because I wasn't particularly looking; I used to be of the opinion "intellectual property law has gone too far and needs to be cut back, but of course we can't do away with it entirely," and only recently looked more closely at the "but of course" part and realized it didn't hold water. If this opinion is more common than I had given it credit for, great!

5MixedNuts
I haven't seen a discussion of the concept of intellectual property that did not include a remark to the effect of "Wait, whence the analogy between property of unique objects and control of easily copied information?".
rwallace210

Not only is intellectual property law in its current form destructive, but the entire concept of intellectual property is fundamentally wrong. Creating an X does not give the creator the right to point a gun at everyone else in the universe who tries to arrange matter under their control into something similar to X. In programming terminology, property law should use reference semantics, not value semantics. Of course it is true that society needs to reward people who do intellectual work, just as much as people who do physical work, but there are better justified and less harmful ways to accomplish this than intellectual property law.
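For readers unfamiliar with the programming terms, here is a minimal toy illustration (my example, not rwallace's) of the reference-vs-value distinction the analogy leans on:

```python
# Reference semantics: a claim attaches to one particular object.
# Value semantics: a claim follows the pattern, so independently made copies are covered too.
original = ["a", "particular", "chair"]

alias = original                   # reference semantics: another name for the very same object
independent_copy = list(original)  # a new object that merely has the same value

alias.append("modified")
print(original)            # ['a', 'particular', 'chair', 'modified'] -- same object, so the owner is affected
independent_copy.append("modified")
print(original)            # unchanged -- the copy is a distinct object the original owner has no claim over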

7JoshuaZ
This seems to be a pretty mainstream position. Not one I agree with, but not that controversial.
4NihilCredo
Such as?
MixedNuts390

The post asked for opinions so repulsive people have a hard time generating them in the first place. This is a relatively common opinion.

cousin_it110

A funny unrelated question that just occurred to me: how can one define property rights in a mathematical multiverse which isn't ultimately based on "matter"?

DanielLC260

Creating an X does not give the creator the right ...

Of course it doesn't. The question is if the world becomes a better place if they do it anyway.

Ill posed does not necessarily mean impossible. Most of the problems we deal with in real life are ill posed, but we still usually manage to come up with solutions that are good enough for the particular contexts at hand. What it does mean is that we shouldn't expect the problem in question to be definitely solved once and for all. I'm not arguing against attempting to test rationality. I'm arguing against the position some posters have taken that there's no point even trying to make progress on rationality until the problem of testing it has been definitely solved.

2[anonymous]
Ok, that's reasonable. I was taking ill-posed to mean like a confused question. Or something like that.

But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better?

Maybe; there are all sorts of caveats to that. But that aside, more directly on the question of tests:

Possibly in a testable way?

You still run into the problem that the outcome depends greatly on context and phrasing. There is the question with turning over cards to test a hypothesis, on which people's performance dramatically improves when you rephrase it as an isomorphic question about social rules. There are the trolley questions and the specks versu... (read more)

0[anonymous]
Those examples are good evidence for us not being able to test coherently yet, but I don't think they are good evidence that the question is ill-posed. If the question is "how can we test rationality?", and the only answers we've come up with are limited in scope and subject to all kinds of misinterpretation, I don't think that means we can't come up with broad tests that measure progress. I am reminded of a quote: "what you are saying amounts to 'if it is possible, it ought to be easy'". I think the place to find good tests is, instead of looking at how well people do against particular biases, to look at what we think rationality is good for and measure something related to that.

Testing rationality is something of an ill posed problem, in part because the result depends greatly on context. People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions. (This is a feature! Given that evolution wasn't able to come up with minds that infallibly distinguish true beliefs from false ones, it's good that at least it came up with a way to reduce the harm from false beliefs.) I'm not sure how ... (read more)

2[anonymous]
But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better? Possibly in a testable way? See MMA. There is still a problem of whether being a good fighter is as important or related to being good at self-defense, but martial arts are now measured at least relative to all fighting styles.

"The price of freedom is eternal vigilance."

It would be wonderful if defending freedom were a one-off job like proving Fermat's Last Theorem. As it turns out, it's an endlessly recurring job like fighting disease; unfortunate, but that's the way it is. And yes, sometimes our efforts fail, and freedoms are lost or people get sick and die. But the answer to that is to work harder and smarter, not to give up.

-1[anonymous]
Who's giving up? You whiners are just playing the politicians' game. Avoidance often works better and wastes less time. Some people have begun work on an alternate domain-name-type system, and others on different kinds of replacements that would be harder to control, for example. If you want to trade quotes, what about
[anonymous]150

it's an endlessly recurring job like fighting disease

Until you eradicate smallpox, or polio, or Congress.

Most of this post, along with the previous posts in the series, is both beautiful and true - the best combination. It's a pity it had to be mixed in with the meme about computers magically waking up with superpowers. I don't think that meme is necessary here, any more than it's necessary to believe the world was created in 4004 BC to appreciate Christmas. Taking it out - discussing it in separate posts if you wish to discuss it - is the major improvement I would suggest.

3Raemon
A few people commented that that section was jarring, and I kept editing it to be less jarring, but if folks on Less Wrong are actively bothered by it then it may simply need to get cut. The ritual book as a whole is meant to reflect my beliefs (and the beliefs of a particular community) at the time that I wrote it. Partly so I can hand it to friends and family who are interested and say "this basically sums up my worldview right now" (possibly modifying them towards my worldview would be a nice plus, but not the main goal). But also so that in 10-20 years I can look back and see a snapshot of what I cared about in 2011. Grappling with the implications of the Singularity was one of the defining things of this process. If it was just a matter of "I care a lot about the world", this whole project wouldn't have had the urgency it did. It would have just been a matter of persuading others to care, or sharing a meme, rather than forcing myself to rebel against a powerful aspect of my psyche. It's important that I took it seriously, and it's important that I ended on a note of "I still do not know the answer to this question, but I think I'd be better able to deal with it if it turned out to be true, and I commit to studying it further." So I'm leaving the Solstice pdf as is. But this post, as a standalone Less Wrong article, may be instrumentally useful as a way to help people take the future seriously in a general sense. I'm going to leave it for now, to get a few more data points on people's reactions, but probably edit it some more in a day or so. There will be a new Solstice book next year, and there's at least a 50% chance that I will dramatically tone down the transhumanist elements to create a more.... secular? (ha) version of it that I can promote to a slightly larger humanist population. One data point btw: My cousin (25-year-old male, educated and nerdy but not particularly affiliated with our meme cluster) said that he found the AI section of this essay j
3nshepperd
If the singularity was magical I'd be a lot more hopeful about the future of humankind (even humans aren't clever enough to implement magic). I agree with you a bit though. ETA: Wait, that's actually technically inaccurate. If I believed the singularity was magical I'd be a lot more hopeful about the future of humankind. But I do hope to believe the truth, whatever is really the case.

Good points, upvoted. But in fairness, I think the ink blot analogy is a decent one.

Imagine you asked the question about the ink blot to a philosopher in ancient Greece, how might he answer? He might say there is no definite number. Or he might say there must be some underlying reality, even though he doesn't know for sure what it is; and the best guess says it's based on atoms; so he might reply that he doesn't know the answer, but hopefully it might be possible in principle to calculate it if you could count atoms.

I think that's about where we are regarding the Born probabilities and number or measure of different worlds in MWI right now.

-2[anonymous]
The ink blot analogy is not quite so good for counting worlds because it implies more ambiguity than there is. In reality, the most ambiguous amplitude-blot is the inside of a quantum computer. The difference between considering that as all one world or many not-quite-decohered worlds is at most a constant factor on the exponentially growing total number. Most different worlds are very much distinct. The amplitude blots are quite small (atom scale) and infinite-dimensional configuration space is huge. All it takes is one photon to have gone a different way and the blobs are lightyears apart. Assuming all branches are intact and active (as implied by conservation of amplitude), the number of worlds is approximately k*2^(r*t), where k is your strictness of what counts as a world, r is how many decoherence events happen per unit time, and t is time. I chose a base of 2 because all decoherence complexes can be approximately reduced to single splits. r can be adjusted if some other base is more natural.

There is a wonderfully evocative term, Stand Alone Complex, from the anime series of the same name, which refers to actions taken by people behaving as though they were part of a conspiracy even though no actual conspiracy is present. It's pretty much tailor-made for this case.

Mencius Moldbug calls this instance the Cathedral, in an insightful series of articles indexed here.

You could also trade off things that were more important in the ancestral environment than they are now. For example, social status (to which the neurotypical brain devotes much of its resources) is no longer the evolutionary advantage that it used to be.

3gwern
You two realize you are just reinventing Bostrom's EOCs, right? People, I wrote a thorough essay all about this! If I left something out, just tell me - you don't need to reinvent the wheel! (This goes for half the comments on this page.)
rwallace110

Only if you take 'ten times smarter' to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that's not a meaningful operation; it's essentially a type error. The obvious interpretation of 'ten times smarter' within the domain of humans is by percentile, e.g. if the author is at the 99% mark, then it would refer to the 99.9% mark.

And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in m... (read more)

2Emile
Yup, that's the way I interpreted it too - going from top 1% to top 0.1%.
0torekp
To me a more natural interpretation from a mathematical POV would use log-odds. So if the author is at the 90% mark, someone 10 times as smart occurs at the frequency of around 1 in 3 billion. But yeah. In context, your way makes more sense, if only because it's more charitable.
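A minimal sketch of the arithmetic behind both readings (my code, not from any of the comments above), assuming IQ is distributed as Normal(100, 15):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

# Reading 1 (percentile): top 1% -> top 0.1%
print(iq.inv_cdf(0.99))    # ~134.9
print(iq.inv_cdf(0.999))   # ~146.4

# Reading 2 (log-odds): start at the 90% mark and multiply the log-odds by ten,
# i.e. raise the 9:1 odds to the tenth power.
odds = 0.90 / 0.10
tail = 1 / (1 + odds ** 10)
print(1 / tail)            # ~3.5e9, i.e. roughly one person in three and a half billion
```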
3Manfred
I agree, that's likely what Carrier was feeling when he wrote that sentence. But that doesn't let him off the hook, because that way is even worse than Logos'! He's using a definition of "times more intelligent" that is only really capable of differentiating between humans, and trying to apply it to something outside that domain.
3Cthulhoo
I'm not sure if the following is already encompassed in Amdahl's law, but I think it is worth a comment. Very intelligent humans still need to operate through society to reach their goals. An IQ of 140 may be enough for you to discover and employ the best tools society puts at your disposal. An IQ of 180 (just an abstract example) may let you recognize new and more efficient patterns, but you then have to bend society to exploit them, and this usually means convincing people not as smart as you are, who may very well take a long time to grasp your ideas. As an analogy, think of being sent into the Stone Age. A Swiss Army knife here is a very useful tool. It's not a revolutionary concept; it's just better than stone knives at cutting meat and working with wood. On the other hand, a set of professional electrical tools, while in principle way more powerful, will be completely useless, since you would first have to find a way to charge their batteries.

There is kidnapping for interrogation, slavery and torture today, so there is no reason to believe there won't be such in the future. But I don't believe it will make sense in the future to commit suicide at the mere thought, any more than it does today.

As for whether such a society will exist, I think it's possible it may. It's possible there may come a day when people don't have to die. And there is a better chance of that happening if we refrain from poisoning our minds with scare stories optimized for appeal to primate brains over correspondence to external reality.

0gwern
At least, not unless you are an upload and your suicide trigger is the equivalent of those tripwires that burn the contents of the safe.

I've been snarky for this entire conversation - I find advocacy of death extremely irritating - but I am not just snarky by any means. The laws of physics as now understood allow no such thing, and even the author of the document to which you refer - a master of wishful thinking - now regards it as obsolete and wrong. And the point still holds - you cannot benefit today the way you could in a post-em world. If you're prepared to throw away billions of years of life as a precaution against the possibility of billions of years of torture, you should be prepared to throw away decades of life as a precaution against the possibility of decades of torture. If you aren't prepared to do the latter, you should reconsider the former.

4XiXiDu
I rather subscribe to how Greg Egan describes what the author is doing:

An upload, at least of the early generations, is going to require a supercomputer the size of a rather large building to run, to point out just one of the reasons why the analogy with playing a pirated MP3 is entirely spurious.

Warhammer 40K is one of those settings that is highly open to interpretation. My interpretation is that it's in a situation where things could be better and could be worse, victory and defeat are both very much on the cards, and hope guided by cold realism is one of the main factors that might tip the balance towards the first outcome. I consider it similar in that regard to the Cthulhu mythos, and for that matter to real life.

rwallace-10

If you postulate ems that can run a million subjective years a minute (which is not at all scientifically plausible), the mainline copies can do that as well, which means talking about wall clock time at all is misleading; the new subjective timescale is the appropriate one to use across the board.

As for the rest, people are just as greedy today as they will be in the future. Organized criminals could torture you until you agree to sign over your property to them. Your girlfriend could pour petrol over you and set you on fire while you're asleep. If you si... (read more)

-2MileyCyrus
Now you're just getting snarky. This document is a bit old, but: No one can hurt me today the way I could be hurt in a post-em world. In a world where human capacity for malevolence is higher, more precaution is required. One should not rule out suicide as a precaution against being tortured for subjective billions of years.
2drethelin
All of those scenarios are not only extremely inconvenient and not very profitable for the people involved, but also have high risks of getting caught. This means that the probability of any of them taking place is marginal, because the incentives just aren't there in almost any situation. On the other hand, a digital file is hugely more easy to acquire, incarcerate, transport, and torture, and also easier to hide from any authorities. If someone gets their hands on a digital copy of you, torturing you for x period of time can be as easy as pressing a button. You might never kidnap an orchestra and force them to play for you, but millions of people download MP3s illegally. I would still rather be uploaded rather than die, but I don't think you're giving the opposing point of view anything like the credit it deserves.

The comment holds regardless. In today's world, you can only be tortured for a few decades, but by the same token you can only lose a few decades of lifespan by committing suicide. If in some future world you can be tortured for a billion years, then you will also be losing a billion years of happy healthy life by committing suicide. If you think the mere possibility of torture - with no evidence that it is at all likely - will be grounds for committing suicide in that future world, then you should think it equally good grounds for committing suicide today. If you agree with me that would be insanely irrational today, you should also agree it will be insanely irrational in that future world.

2MileyCyrus
I, and I suspect the entire human species, am risk averse. Suppose I have to choose between two bets: A: 50% chance of living 100 happy years, 50% chance of living 100 torture years. B: 50% chance of living 1,000,000 happy years, 50% chance of living 1,000,000 torture years. I will pick the first because it has the better bad option. While additional happy years have diminishing additional utility, additional torture years have increasing disutility. I would rather have a 50% chance of being tortured for 10 years than a 10% chance of being tortured for 50 years. When WBE is invented, the stakes will be upped. The good possibilities get much better, and the bad possibilities get much worse. As a risk averse person, this scares me.
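A minimal sketch of the risk-aversion point (my toy utility shapes, not MileyCyrus's; any concave utility for happy years plus faster-than-linear disutility for torture years gives the same ordering):

```python
import math

def utility(happy_years):
    return math.log1p(happy_years)      # diminishing returns on happy years (toy assumption)

def disutility(torture_years):
    return torture_years ** 1.5         # increasing marginal harm from torture (toy assumption)

def expected_utility(years):
    # each bet: 50% happy years, 50% torture years of the same length
    return 0.5 * utility(years) - 0.5 * disutility(years)

print(expected_utility(100))        # bet A: about -498
print(expected_utility(1_000_000))  # bet B: about -5e8, far worse despite the bigger upside
```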
rwallace-10

Also, in the absence of any evidence that this is at all unlikely to occur.

If you think the situation is that symmetrical, you should be indifferent on the question of whether to commit suicide today.

But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility.

If it had been generated as part of an exhaustive listing of all possible scenarios, I would have refrained from comment. As it is, being raised in the context of a discussion on whether one should try for uploading in the unlikely event one l... (read more)

0[anonymous]
Do you have some actual data for me to update on? Otherwise, we're just bickering over unjustifiable priors. That's why I'm withholding judgment. It did come out as this later, but not "obviously" from the original comment.

With the possibility? Of course not. Anything that doesn't involve a logical self-contradiction is possible. My disagreement is with the idea that it is sane or rational to base decisions on fantasies about being kidnapped and tortured in the absence of any evidence that this is at all likely to occur.

5MileyCyrus
Evidence: People are greedy. When people have the opportunity to exploit others, they often take it. If anyone gets a hold of your em, they can torture you for subjective aeons. Anyone who has a copy of your em can blackmail you: "Give me 99% of your property. For every minute you delay, I will torture your ems for a million subjective years." And what if someone actually wants to hurt you, instead of just exploit you? You and your romantic partner get in a fight. In a fit of passion, she leaves with a copy of your em. By the time the police find her the next day, you've been tortured for a subjective period of time longer than the age of the universe. Very few, perhaps no one, will have the engineering skill to upload a copy of themselves without someone else's assistance. When you're dead and Apple is uploading your iEm, you're trusting Apple not to abuse you. Is anyone worthy of that trust? And even if you're uploaded safely, how will you store backup copies? And how will you protect yourself against hackers? Sound more plausible now?
1[anonymous]
Also, in the absence of any evidence that this is at all unlikely to occur. But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility. It seems to me you're disagreeing with some phantasm you imported into the conversation.

If you think that kind of argument holds water, you should commit suicide today lest a sadist kidnap you and torture you in real life.

6gwern
I would point out that the scenario I was writing about was clearly one in which ems are common and em society is stable. If you think that in such a society, there won't be em kidnapping or hacking, for interrogation, slavery, or torture, you hold very different views from mine indeed. (If you think such a society won't exist, that's another thing entirely.) As a human, you can only die once.
3NancyLebovitz
I can make a reasonable estimate of the risk of being kidnapped or arrested and being tortured. There's a lot less information about the risk of ems being tortured, and such information may never be available, since I think it's unlikely that computers can be monitored to that extent. People do commit suicide to escape torture, but it doesn't seem to be the most common response. Also, fake executions are considered to be a form of torture because of the fear of death. So far as I know, disappointment at finding out one hasn't been killed isn't considered to be part of what's bad about fake executions.
0[anonymous]
Do you have some substantial disagreement with the possibility of the scenario?
-1MileyCyrus
My physical body can only be tortured a few decades, tops. An em can be tortured for a billion years, along with a billion em copies of myself.

No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don't overturn the expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can't say anything about what's in front of us one way or the other). For what it's worth, I think the evidence is decisively in favor of the latter view.

I'm perfectly prepared to bite this bullet. Extending the life of an existing person a hundred years and creating a new person who will live for a hundred years are both good deeds, they create approximately equal amounts of utility and I believe we should try to do both.

1torekp
I agree. Note that this is independent of utilitarianism per se.

Thanks for the link, yes, that does seem to be a different opinion (and some very interesting posts).

I agree with you about the publishing and music industries. I consider current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.

That's awfully vague. "Whatever window of time we had", what does that mean?

The current state of the world is unusually conducive to technological progress. We don't know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on it lasting a long time, without evidence that it will do so. I don't think that's a good gamble.

There's one kind of "technological progress" that SIAI opposes as far as I can tell: work

... (read more)
6Morendil
Well, SIAI isn't necessarily a homogenous bunch of people, with respect to what they oppose or endorse, but did you look for instance at Michael Anissimov's entries on MNT? (Focusing on that because it's the topic of Risto's comment and you seem to see that as a confirmation of your thesis.) You don't get the impression that he thinks it's a bad idea, quite the contrary: http://www.acceleratingfuture.com/michael/blog/category/nanotechnology/ Here is Eliezer on the SL4 mailing list: The Luddites of our times are (for instance) groups like the publishing and music industries, the use of that label to describe the opinions of people affiliated with SIAI just doesn't make sense IMO.

Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl's law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
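A minimal illustration of the Amdahl's law arithmetic for the ten-percent case mentioned above (my numbers):

```python
# Total cost, normalized to 1, when a fraction `comm_fraction` of the work is
# communication and that part is sped up by `comm_speedup`.
def total_cost(comm_fraction, comm_speedup):
    return (1 - comm_fraction) + comm_fraction / comm_speedup

print(total_cost(0.10, 1))              # 1.00 -- baseline
print(total_cost(0.10, 10))             # 0.91
print(total_cost(0.10, float("inf")))   # 0.90 -- a 10% reduction, no matter how large the speedup
```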

We've had various kinds of Luddism before, but this one is particularly lethal in being a form that appeals to people who had been technophiles. If it spreads enough, best case scenario is the pool of people willing to work on real technological progress shrinks, worst case scenario is regulation that snuffs out progress entirely, and we get to sit around bickering about primate politics until whatever window of time we had runs out.

3Morendil
That's awfully vague. "Whatever window of time we had", what does that mean? There's one kind of "technological progress" that SIAI opposes as far as I can tell: working on AGI without an explicit focus on Friendliness. Now if you happen to think that AGI is a must-have to ensure the long-term survival of humanity, it seems to me that you're already pretty much on board with the essential parts of SIAI's worldview, indistinguishable from them as far as the vast majority is concerned. Otherwise, there's plenty of tech that is entirely orthogonal with the claims of SIAI: cheap energy, health, MNT, improving software engineering (so-called), and so on.

Well, any sequence of events can be placed in a narrative frame with enough of a stretch, but the fact remains that different sequences of events differ in their amenability to this; fiction is not a random sampling from the space of possible things we could imagine happening, and the Singularity is narratively far stronger than most imaginable futures, to a degree that indicates bias we should correct for. I've seen a fair bit of strong Singularity fiction at this stage, though being, well, singular, it tends not to be amenable to repeated stories by the same author the way Heinlein's vision of nuclear-powered space colonization was.

We should update away from beliefs that the future will resemble a story, particularly a story whose primary danger will be fought by superheroes (most particularly for those of us who would personally be among the superheroes!) and towards beliefs that the future will resemble the past and the primary dangers will be drearily mundane.

7Nornagest
The future will certainly resemble a story -- or, more accurately, will be capable of being placed into several plausible narrative frames, just as the past has. The bias you're probably trying to point to is in interpreting any particular plausible story as evidence for its individual components -- or, for that matter, against. The conjunction fallacy implies that any particular vision of a Singularity-like outcome is less likely than our untrained intuitions would lead us to believe. It's an excellent reason to be skeptical of any highly derived theories of the future -- the specifics of Ray Kurzweil's singularity timeline, for example, or Robin Hanson's Malthusian emverse. But I don't think it's a good reason to be skeptical of any of the dominant singularity models in general form. Those don't work back from a compelling image to first principles; most of them don't even present specific consequential predictions, for fairly straightforward reasons. All the complexity is right there on the surface, and attempts to narrativize it inevitably run up against limits of imagination. (As evidence, the strong Singularity has been fairly poor at producing fiction when compared to most future histories of comparable generality; there's no equivalent of Heinlein writing stories about nuclear-powered space colonization, although there's quite a volume of stories about weak or partial singularities.) So yes, there's not going to be a singleton AI bent on turning us all into paperclips. But that's a deliberately absurd instantiation of a much more general pattern. I can conceive of a number of ways in which the general pattern too might be wrong, but the conjunction fallacy doesn't fly; a number of attempted debunkings, meanwhile, do suffer from narrative fixation issues. Superhero bias is a more interesting question -- but it's also a more specific one.

Okay, to look at some of the specifics:

Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.

The linked article is amusing but misleading; the described 'ultimate laptop' would essentially be a nuclear explosion. The relevant physical limit is ln(2)kT of energy dissipated per bit erased; in SI units at room temperature this is about 3e-21 J. We don't know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit ... (read more)
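A quick check of the Landauer figure (my calculation, assuming room temperature of roughly 300 K):

```python
import math

k = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K

energy_per_bit = math.log(2) * k * T
print(energy_per_bit)       # ~2.9e-21 J dissipated per bit erased
```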

1lessdazed
The best way to colonize Alpha Centauri has always been to wait for technology to improve rather than launching an expedition, but it's impossible for that to continue to be true indefinitely. Short of direct mind-to-mind communication or something with a concurrent halt to AI progress, AI advances will probably outpace human communication advances in the near to medium term. It seems unreasonable to believe human minds, optimized according to considerations such as politicking in addition to communication, will be able to communicate just as well as designed AIs. Human mind development was constrained by ancestral energy availability and head size, etc., so it's unlikely that we represent optimally sized minds to form a group of minds, even assuming an AI isn't able to reap huge efficiencies by becoming essentially as a single mind, regardless of scale.
6Giles
Just out of interest... assume my far beliefs take the form of a probability distribution of possible future outcomes. How can I be "skeptical" of that? Given that something will happen in the future, all I can do is update in the direction of a different probability distribution. In other words, which direction am I likely to be biased in?

I discuss some of it at length here: http://lesswrong.com/lw/312/the_curve_of_capability/

I'll also ask the converse question: given that you can't typically prove a negative (I can't prove the nonexistence of psychic powers or flying saucers either), if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?

if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?

I'm not marchdown, but:

Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:

  • Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
  • Superior serial power: Evidence against would be an inability to increase the ser
... (read more)

I understand perfectly well how a hypothetical perfectly logical system would work (leaving aside issues of computational tractability etc.). But then, such a hypothetical perfectly logical system wouldn't entertain such far mode beliefs in the first place. What I'm discussing is the human mind, and the failure modes it actually exhibits.

So your suggestion is that we should de-compartmentalize, but in the reverse direction to that suggested by the OP, i.e. instead of propagating forward from ridiculous far beliefs, become better at back-propagating and deleting same? There is certainly merit in that suggestion if it can be accomplished. Any thoughts on how?

4drethelin
You don't understand. Decompartmentalization doesn't have a direction. You don't go forwards towards a belief or backwards from a belief, or whatever. If your beliefs are decompartmentalized that means that the things you believe will impact your other beliefs reliably. This means that you don't get to CHOOSE what you believe. If you think the singularity is all important and worth working for, it's BECAUSE all of your beliefs align that way, not because you've forced your mind to align itself with that belief after having it.

That's actually a good question. Let me rephrase it to something hopefully clearer:

Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited as a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his... (read more)

1Morendil
How do you propose that would happen?
3marchdown
What is that evidence against singularity which you're alluding to?
7drethelin
Compartmentalization may make ridiculous far beliefs have less of an impact on the world, but it also allows those beliefs to exist in the first place. If your beliefs about religion depended on the same sort of evidence that underpins your beliefs about whether your car is running, then you could no more be convinced of religion than you could be convinced by a mechanic that your car "works" even though it does not start.

You are worried that, given your assumptions, civilizations might not be willing to pay an extremely high price to do things that aliens would like if they knew about them, which they don't.

But one of your assumptions is that every civilization has a moral system that advocates attacking and enslaving everyone they meet who thinks differently from them.

It would be worrying if a slightly bad assumption led to a very bad conclusion, but a very bad assumption leading to a slightly bad conclusion doesn't strike me as particularly problematic.

Well yes. You give this list of things you claim are universal instrumental values, and it sounds like a plausible idea in our heads, but when we look at the real world, we find humans and other agents tend not, in fact, to possess these, even as instrumental values.

0timtyler
Hmm. Maybe I should give some examples - to make things more concrete.