Right, yes, I'm not suggesting the iterated coding activity can or should include 'build an actual full-blown superhuman AGI' as an iterated step.
Are you advocating as option A, 'deduce a full design by armchair thought before implementing anything'? The success probability of that isn't 1%. It's zero, to as many decimal places as makes no difference.
My argument is not that AI is the same activity as writing a compiler or a search engine or an accounts system, but that it is not an easier activity, so techniques that we know don't work for other kinds of software – like trying to deduce everything by armchair thought, verify after-the-fact the correctness of an arbitrarily inscrutable blob, or create the end product by throwing lots of computing power at a brute force search procedure – will not work for AI, either.
Example: Most people would save a young child instead of an old person if forced to choose, and it is not just because the baby has more years left; part of the reason is that it seems unfair for the young child to die sooner than the old person.
As far as I'm concerned it is just because the baby has more years left. If I had to choose between a healthy old person with several expected years of happy and productive life left, versus a child who was terminally ill and going to die in a year regardless, I'd save the old person. It is unfair that an innocent person should ever have to die, and unfairness is not diminished merely by afflicting everyone equally.
That would be cheap and simple, but wouldn't give a meaningful answer for high-cost bugs, which don't manifest in such small projects. Furthermore, with only eight people total, individual ability differences would overwhelmingly dominate all the other factors.
Sorry, I have long forgotten the relevant links.
We know that late detection is sometimes much more expensive, simply because, depending on the domain, some bugs can do harm (letting bad data into the database, making your customers' credit card numbers accessible to the Russian Mafia, delivering a satellite to the bottom of the Atlantic instead of into orbit) that costs much more than fixing the code itself. So it's clear that, on average, cost does increase with time of detection. But are those high-profile disasters part of a smooth graph, or is it a step function where the cost of fixing the...
Because you couldn't. In the ancestral environment, there weren't any scientific journals where you could look up the original research. The only sources of knowledge were what you personally saw and what somebody told you. In the latter case, the informant could be bullshitting, but saying so might make enemies, so the optimal strategy would be to profess belief in what people told you unless they were already declared enemies, but base your actions primarily on your own experience; which is roughly what people actually do.
That's not many worlds, that's quantum immortality. It's true that the latter depends on the former (or would if there weren't other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.
True. The usual reply to that is "we need to reward the creators of information the same way we reward the creators of physical objects," and that was the position I had accepted until I recently realized that, certainly, we need to reward the creators of information, but not in the same way - by the same kind of mechanism - that we reward the creators of physical objects. (Probably not by coincidence, I grew up during the time of shrink-wrapped software, and only re-examined my position on this matter after that time had passed.)
To take my own field as an example, as one author remarked, "software is a service industry under the persistent delusion that it is a manufacturing industry." In truth, most software has always been paid for by people who had reason other than projected sale of licenses to want it to exist, but this was obscured for a couple of decades by shrinkwrap software, shipped on floppy disks or CDs, being the only part of the industry visible to the typical nonspecialist. But the age of shrinkwrap software is passing - outside entertainment, how often does the typical customer buy a program these days? - yet the industry is doing fine. We just don't need copyright law the way we thought we did.
We can't. We can only sensibly define them in the physical universe which is based on matter, with its limitations of "only in one place at a time" and "wears out with use" that make exclusive ownership necessary in the first place. If we ever find a way to transcend the limits of matter, we can happily discard the notion of property altogether.
I took the post to be asking for opinions sufficiently far outside the mainstream to be rarely discussed even here, and I haven't seen a significant amount of discussion of this one. Then again, that could be because I wasn't particularly looking; I used to be of the opinion "intellectual property law has gone too far and needs to be cut back, but of course we can't do away with it entirely," and only recently looked more closely at the "but of course" part and realized it didn't hold water. If this opinion is more common than I had given it credit for, great!
Sure. My answer is no, it does not.
Not only is intellectual property law in its current form destructive, but the entire concept of intellectual property is fundamentally wrong. Creating an X does not give the creator the right to point a gun at everyone else in the universe who tries to arrange matter under their control into something similar to X. In programming terminology, property law should use reference semantics, not value semantics. Of course it is true that society needs to reward people who do intellectual work, just as much as people who do physical work, but there are better justified and less harmful ways to accomplish this than intellectual property law.
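To make the programming analogy concrete, here's a minimal sketch in Python (purely illustrative; the variables are made up for this example): value semantics treats every copy of the same content as interchangeable with the original, while reference semantics points at one particular object.

```python
# Value semantics: copying produces an independent object with the same
# content. Claiming ownership of a *value* amounts to claiming every
# independent copy of that pattern, wherever it arises.
a = [1, 2, 3]
b = list(a)     # an independent copy with the same content
print(a == b)   # True  -- same pattern
print(a is b)   # False -- distinct objects

# Reference semantics: two names refer to one concrete object. Ownership
# by *reference* covers this particular object and nothing else.
x = [1, 2, 3]
y = x           # the same object under two names
print(x is y)   # True
```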
The post asked for opinions so repulsive people have a hard time generating them in the first place. This is a relatively common opinion.
A funny unrelated question that just occurred to me: how can one define property rights in a mathematical multiverse which isn't ultimately based on "matter"?
Creating an X does not give the creator the right ...
Of course it doesn't. The question is if the world becomes a better place if they do it anyway.
Ill-posed does not necessarily mean impossible. Most of the problems we deal with in real life are ill-posed, but we still usually manage to come up with solutions that are good enough for the particular contexts at hand. What it does mean is that we shouldn't expect the problem in question to be definitively solved once and for all. I'm not arguing against attempting to test rationality. I'm arguing against the position some posters have taken that there's no point even trying to make progress on rationality until the problem of testing it has been definitively solved.
But doesn't it seem that if you decompartmentalized with correct beliefs you should do way better?
Maybe; there are all sorts of caveats to that. But that aside, more directly on the question of tests:
Possibly in a testable way?
You still run into the problem that the outcome depends greatly on context and phrasing. There is the question about turning over cards to test a hypothesis, on which people's performance dramatically improves when you rephrase it as an isomorphic question about social rules. There are the trolley questions and the specks versu...
Testing rationality is something of an ill-posed problem, in part because the result depends greatly on context. People spout all kinds of nonsense in a social context where it's just words, but usually manage to compartmentalize the nonsense in a material context where they will be affected by the results of their actions. (This is a feature! Given that evolution wasn't able to come up with minds that infallibly distinguish true beliefs from false ones, it's good that at least it came up with a way to reduce the harm from false beliefs.) I'm not sure how ...
"The price of freedom is eternal vigilance."
It would be wonderful if defending freedom were a one-off job like proving Fermat's Last Theorem. As it turns out, it's an endlessly recurring job like fighting disease; unfortunate, but that's the way it is. And yes, sometimes our efforts fail, and freedoms are lost or people get sick and die. But the answer to that is to work harder and smarter, not to give up.
it's an endlessly recurring job like fighting disease
Until you eradicate smallpox, or polio, or Congress.
Most of this post, along with the previous posts in the series, is both beautiful and true - the best combination. It's a pity it had to be mixed in with the meme about computers magically waking up with superpowers. I don't think that meme is necessary here, any more than it's necessary to believe the world was created in 4004 BC to appreciate Christmas. Taking it out - discussing it in separate posts if you wish to discuss it - is the major improvement I would suggest.
Good points, upvoted. But in fairness, I think the ink blot analogy is a decent one.
Imagine you put the question about the ink blot to a philosopher in ancient Greece: how might he answer? He might say there is no definite number. Or he might say there must be some underlying reality, even though he doesn't know for sure what it is; and the best guess says it's based on atoms; so he might reply that he doesn't know the answer, but hopefully it might be possible in principle to calculate it if you could count atoms.
I think that's about where we are regarding the Born probabilities and the number or measure of different worlds in MWI right now.
There is a wonderfully evocative term, Stand Alone Complex, from the anime series of the same name, which refers to actions taken by people behaving as though they were part of a conspiracy even though no actual conspiracy is present. It's pretty much tailor-made for this case.
Mencius Moldbug calls this instance the Cathedral, in an insightful series of articles indexed here.
You could also trade off things that were more important in the ancestral environment than they are now. For example, social status (to which the neurotypical brain devotes much of its resources) is no longer the evolutionary advantage that it used to be.
Only if you take 'ten times smarter' to mean multiplying IQ score by ten. But since the mapping of the bell curve to numbers is arbitrary in the first place, that's not a meaningful operation; it's essentially a type error. The obvious interpretation of 'ten times smarter' within the domain of humans is by percentile, e.g. if the author is at the 99% mark, then it would refer to the 99.9% mark.
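As a purely illustrative aside, here is what that percentile reading looks like on the conventional IQ scale (mean 100, SD 15); the 99% and 99.9% figures are just the ones from the example above.

```python
# Shrinking the fraction of people above you by a factor of ten
# (99th -> 99.9th percentile) on a mean-100, SD-15 scale.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(round(iq.inv_cdf(0.99), 1))   # ~134.9 -- 99th percentile
print(round(iq.inv_cdf(0.999), 1))  # ~146.4 -- 99.9th percentile
```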
And given that, his statement is true; it is a curious fact that IQ has diminishing returns, that is, being somewhat above average confers significant advantage in m...
There is kidnapping for interrogation, slavery and torture today, so there is no reason to believe there won't be such in the future. But I don't believe it will make sense in the future to commit suicide at the mere thought, any more than it does today.
As for whether such a society will exist, I think it's possible it may. It's possible there may come a day when people don't have to die. And there is a better chance of that happening if we refrain from poisoning our minds with scare stories optimized for appeal to primate brains over correspondence to external reality.
I've been snarky for this entire conversation - I find advocacy of death extremely irritating - but I am not just snarky by any means. The laws of physics as now understood allow no such thing, and even the author of the document to which you refer - a master of wishful thinking - now regards it as obsolete and wrong. And the point still holds - you cannot benefit today the way you could in a post-em world. If you're prepared to throw away billions of years of life as a precaution against the possibility of billions of years of torture, you should be prepared to throw away decades of life as a precaution against the possibility of decades of torture. If you aren't prepared to do the latter, you should reconsider the former.
An upload, at least of the early generations, is going to require a supercomputer the size of a rather large building to run, to point out just one of the reasons why the analogy with playing a pirate MP3 is entirely spurious.
Warhammer 40K is one of those settings that are highly open to interpretation. My interpretation is that it's in a situation where things could be better and could be worse, victory and defeat are both very much on the cards, and hope guided by cold realism is one of the main factors that might tip the balance towards the first outcome. I consider it similar in that regard to the Cthulhu mythos, and for that matter to real life.
If you postulate ems that can run a million subjective years a minute (which is not at all scientifically plausible), the mainline copies can do that as well, which means talking about wall clock time at all is misleading; the new subjective timescale is the appropriate one to use across the board.
As for the rest, people are just as greedy today as they will be in the future. Organized criminals could torture you until you agree to sign over your property to them. Your girlfriend could pour petrol over you and set you on fire while you're asleep. If you si...
The comment holds regardless. In today's world, you can only be tortured for a few decades, but by the same token you can only lose a few decades of lifespan by committing suicide. If in some future world you can be tortured for a billion years, then you will also be losing a billion years of happy healthy life by committing suicide. If you think the mere possibility of torture - with no evidence that it is at all likely - will be grounds for committing suicide in that future world, then you should think it equally good grounds for committing suicide today. If you agree with me that would be insanely irrational today, you should also agree it will be insanely irrational in that future world.
Also, in the absence of any evidence that this is at all unlikely to occur.
If you think the situation is that symmetrical, you should be indifferent on the question of whether to commit suicide today.
But notice the original poster does not dwell on the probability of this scenario, only on its mere possibility.
If it had been generated as part of an exhaustive listing of all possible scenarios, I would have refrained from comment. As it is, being raised in the context of a discussion on whether one should try for uploading in the unlikely event one l...
With the possibility? Of course not. Anything that doesn't involve a logical self-contradiction is possible. My disagreement is with the idea that it is sane or rational to base decisions on fantasies about being kidnapped and tortured in the absence of any evidence that this is at all likely to occur.
If you think that kind of argument holds water, you should commit suicide today lest a sadist kidnap you and torture you in real life.
No. The mainstream expectation has pretty much always been that locations conducive to life would be reasonably common; the results of the last couple of decades don't overturn the expectation, they reinforce it with hard data. The controversy has always been on the biological side: whether going from the proverbial warm little pond to a technological civilization is probable (in which case much of the Great Filter must be in front of us) or improbable (in which case we can't say anything about what's in front of us one way or the other). For what it's worth, I think the evidence is decisively in favor of the latter view.
I'm perfectly prepared to bite this bullet. Extending the life of an existing person a hundred years and creating a new person who will live for a hundred years are both good deeds, they create approximately equal amounts of utility and I believe we should try to do both.
Thanks for the link, yes, that does seem to be a different opinion (and some very interesting posts).
I agree with you about the publishing and music industries. I consider current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.
That's awfully vague. "Whatever window of time we had", what does that mean?
The current state of the world is unusually conducive to technological progress. We don't know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on it lasting a long time, without evidence that it will do so. I don't think that's a good gamble.
...There's one kind of "technological progress" that SIAI opposes as far as I can tell: work
Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl's law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
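For concreteness, a minimal sketch of that arithmetic (illustrative only; the cost function is made up for this example):

```python
# Amdahl's law applied to cost: if a component is only 10% of total cost,
# even an unbounded improvement to it saves at most that 10%.
def total_cost(fraction, speedup, baseline=1.0):
    """Cost remaining when `fraction` of the baseline is improved by `speedup`."""
    return baseline * ((1 - fraction) + fraction / speedup)

print(total_cost(0.10, 10))    # ~0.91 -- a tenfold improvement saves ~9%
print(total_cost(0.10, 1e12))  # ~0.90 -- the floor, no matter the speedup
```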
We've had various kinds of Luddism before, but this one is particularly lethal in being a form that appeals to people who had been technophiles. If it spreads enough, best case scenario is the pool of people willing to work on real technological progress shrinks, worst case scenario is regulation that snuffs out progress entirely, and we get to sit around bickering about primate politics until whatever window of time we had runs out.
Well, any sequence of events can be placed in a narrative frame with enough of a stretch, but the fact remains that different sequences of events differ in their amenability to this; fiction is not a random sampling from the space of possible things we could imagine happening, and the Singularity is narratively far stronger than most imaginable futures, to a degree that indicates a bias we should correct for. I've seen a fair bit of strong Singularity fiction at this stage, though being, well, singular, it tends not to be amenable to repeated stories by the same author the way Heinlein's vision of nuclear-powered space colonization was.
We should update away from beliefs that the future will resemble a story, particularly a story whose primary danger will be fought by superheroes (most particularly for those of us who would personally be among the superheroes!) and towards beliefs that the future will resemble the past and the primary dangers will be drearily mundane.
Okay, to look at some of the specifics:
Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
The linked article is amusing but misleading; the described 'ultimate laptop' would essentially be a nuclear explosion. The relevant physical limit is ln(2)kT energy dissipated per bit erased; in SI units at room temperature this is about 3e-21 joules. We don't know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit ...
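For what it's worth, that figure comes out of a one-line calculation (sketch only; T = 300 K is my assumption for room temperature):

```python
# Landauer limit: minimum energy dissipated per bit erased, ln(2) * k_B * T.
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, K
print(math.log(2) * k_B * T)   # ~2.9e-21 J per bit erased
```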
I discuss some of it at length here: http://lesswrong.com/lw/312/the_curve_of_capability/
I'll also ask the converse question: given that you can't typically prove a negative (I can't prove the nonexistence of psychic powers or flying saucers either), if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?
if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?
I'm not marchdown, but:
Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:
I understand perfectly well how a hypothetical perfectly logical system would work (leaving aside issues of computational tractability etc.). But then, such a hypothetical perfectly logical system wouldn't entertain such far mode beliefs in the first place. What I'm discussing is the human mind, and the failure modes it actually exhibits.
So your suggestion is that we should de-compartmentalize, but in the reverse direction to that suggested by the OP, i.e. instead of propagating forward from ridiculous far beliefs, become better at back-propagating and deleting same? There is certainly merit in that suggestion if it can be accomplished. Any thoughts on how?
That's actually a good question. Let me rephrase it to something hopefully clearer:
Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited as a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his...
You are worried that, given your assumptions, civilizations might not be willing to pay an extremely high price to do things that aliens would like if they knew about them, which they don't.
But one of your assumptions is that every civilization has a moral system that advocates attacking and enslaving everyone they meet who thinks differently from them.
It would be worrying if a slightly bad assumption led to a very bad conclusion, but a very bad assumption leading to a slightly bad conclusion doesn't strike me as particularly problematic.
Well yes. You give this list of things you claim are universal instrumental values, and it sounds like a plausible idea in our heads, but when we look at the real world, we find that humans and other agents do not in fact tend to possess these, even as instrumental values.
There are no small pauses in progress. Laws, and the movements that drive them, are not lightbulbs to be turned on and off at the flick of a switch. You can stop progress, but then it stays stopped. The Qeng Ho fleets, for example, once discontinued, did not set sail again twenty years later, or two hundred years later.
There also tend not to be narrow halts in progress. In practice, a serious attempt to shut down progress in AI is going to shut down progress in computers...