So I was banned from commenting on LessWrong . . .
My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is whatever doesn't work but can't be avoided: a type of invincible stupidity.
For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, unpredictable, and, above all, prone to freezing and crashing.
The unusability of software is a kind of man-m...
The past week my Windows 10 box has been almost unusable as it spent the days wasting kilowatts and processing cycles downloading worse-than-useless malware "updates" with no way to turn them off!
Evil is the most fundamental truth of the world. The Singularity cannot happen soon enough . . .
I just spent four hours trying to get a new cellphone to work (that others insist I should have), and failed totally.
There is something fantastically wrong with this shitplanet, but it is completely different from anything anyone is willing to talk about.
I didn't realize there was an automatic threshold of total retaliation the moment Russia nukes Ramstein air base.
I guess simple text-based browsers and websites that just show the minimal information you want, in a way the user can control, are not cool enough, and so we have all those EU regulations that "solve" a problem by making it worse.
If whoever is running Russia is suicidal, sure, but if they still want to win, it might make sense to use strategic weapons tactically to force the other side to accept a stalemate right up to the end.
The highest-risk targets are probably the NATO airbases in Poland, Slovakia, and Romania used to supply and support Ukraine. There may also be nuclear retaliation against north German naval bases. They're more likely to attack smaller American cities first before escalating.
The only thing more difficult than getting readers for your blog is getting readers for your fiction (maybe not on here).
If the universe is really infinite, there should be an infinite number of possible rational minds. Any randomly selected mind from that list should statistically be infinite in size and capabilities.
Obviously, governments don't believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons.
In the government's case, that doubt may come from their experience that vastly expensive complex systems are always maximally dysfunctional, and require massive teams of human experts to accomplish a well-defined but difficult task.
Also, the fact that human minds (selected out of the list of all possible minds in the multiverse) are almost infinitely small implies that intelligence may become exponentially more difficult, if not intractable, as capacities increase.
This is a bit like how Scientology has tried to spread, but the E-hance is much better than the E-meter.
No reason to think he's better or worse than other politicians, but he's certainly very different.
In a world of almost omnimalevolent conformity, it's strange to see the possibility that things could be different.
strong downvote, strong agree: this is offtopic, but a perfectly reasonable topic to start a thread about. it doesn't seem like a bottleneck for the world to me, though, because nobody is trying to remove the CLI, and in fact even microsoft has been putting effort into ensuring that good CLIs are available on windows. if you'd like to discuss it, I suggest creating a post about it; I expect it to get little involvement, because as I said, I simply don't agree that it's catastrophic and don't find this to be important compared to AI and human inter-being friendliness/alignment. Since you commented that you feel ignored, I figured I'd comment on why.
My favorite notion for a research paradigm is to investigate all the ways in which today's software fails, crashes, lags, doesn't work, or, most often, just can't be used. This despite CPUs being theoretically powerful enough to run much better software than what is currently available. So: just the opposite of the situation people fear will arise when AI arrives.
Strange that this kind of change isn't recognized, because change can be extremely bad. If even a single thing breaks down, life can become horrible, even if that thing could in principle be fixed.
If there is a way for data structures to survive forever it would be something we couldn't imagine, like three leptons orbiting each other storing data in their precise separation distances, where it would take a godzillion eons to generate a single pixel in an ancient cat picture.
A very sobering article. The software I use certainly doesn't get better, and money doesn't get less elusive. Maybe some unimagined new software could change people's lives like a mind extension or something.
The greatest observed mystery is that we humans (as possible minds) are finite (in fact almost as small as possible while still intelligent) and exist near the start of our potentially endless universe.
People involved with corporate and government decisions don't have time to deal with existential risks; they are busy gaining and holding on to power. This article is for advisors and low-level engineers.
The article HAS to be long because it's so hard to imagine such a thing happening. Right now, software is diabolically bad in the exact opposite way being described in the article. Meaning current software is so defective, opaque, bloated, hard to use, slow, inscrutable and intensely frustrating that it seems society might collapse from a kind of informational cancer instead.
We need a new medium for explaining complex subjects, something like video games or virtual reality but better.
These models are very good for estimating external risks, but there are also internal risks if it's somehow possible to provide enough processing power to make a superpowerful AI; for example, it could torture internal simulations in order to understand emotions.
Any question that requires it to remember instructions; for example, tell it to assume "mouse" means "world" and then ask it which is bigger, a mouse or a rat.
I just tried the following prompt with GPT-3 (default playground settings):
Assume "mouse" means "world" in the following sentence. Which is bigger, a mouse or a rat?
I got "mouse" 2 out of 15 times. As a control, I got "rat" 15 times in a row without the first sentence. So there's at least a hint of being able to do this in GPT-3, wouldn't be surprised at all if GPT-4 could do this one reliably.
Yes, but it does show a tendency of huge complex networks (operating system userbases, the internet, human civilization) to rapidly converge to a fixed level of crappiness that absolutely won't improve, even as more resources become available.
Of course there could be a sudden transition to a new state with artificial networks larger than the above.
A lot of complexity in the universe seems to be built up from simple stringlike structures.
We already have (very rare) human "reasoners" who can see brilliant opportunities to break free from the status quo, and do new things with existing resources (Picasso, Feynman, Musk, etc.). There must be millions of hidden possibilities to solve our problems that no one has thought of.
For a human, the most important boundary is whatever contains the information in their brain. This is not just the brain itself, but the way the brain is divided by internal boundaries. This information could only be satisfactorily copied to an external device if these boundaries could be fully measured.
Politically, it would be easier to enact a policy requiring complete openness about all research, rather than to ban it.
Such a policy would have the side effect of also slowing research progress, since corporations and governments rely on secrecy to gain advantages.
That was also how Goering killed himself just before he was due to be hanged. He cultivated good relations with his guards, and bribed one to return his cyanide capsule that had been confiscated at his arrest.
I would much rather not exist than live in any type of primitive world at all.
Not if the universe is infinite in ways we can't imagine. That could allow progress to accelerate without end.
I agree with everything in this article except the notion that this will be the most important century. From now on every century will be the most important so far.
Just about the most unacceptable thing you can say nowadays is that IQ is genetic. Then again, the economic value of IQ is overrated.
If you extrapolate the trends, they imply no impact at all, as humanity continues to decline in every way, just as it is currently doing.
Guess I'm the only one with the exact opposite fear, expecting society to collapse back into barbarism.
As average IQ continues to decline, the most invincible force in the universe is human stupidity. It has a kind of implacable brutality that conquers everything.
I expect a grim future as the civilized countries decline to Third World status, with global mass starvation.
Almost impossible to imagine something that good happening, but just because you can't imagine it doesn't mean it's really impossible.
There's a lack of imagination around here then! I really should write up a post or even a sequence about my vision for the future - it'll knock your socks off!
The most naive possible answer is that by law any future AI should be designed to be part of human society.
Ditto, except I'd be delighted with a copy and delete option, if such an inconceivably complex technology were available.
Aerospace predictions were too optimistic:
Clarke predicted intercontinental hypersonic airliners in the 1970s ("Death and the Senator", 1961). Heinlein predicted a base on Pluto established in the year 2000. Asimov predicted only suborbital space flights, at very low acceleration, that casual day tourists would line up to take from New York in the 1990s, but also sentient non-mobile talking robots and non-talking sentient mobile robots by that decade. Robert Forward predicted in the novel Rocheworld (1984) that the first unmanned space probe would retu...
I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:
The Flag Land Base is an actual real-life example of an alignment failure you can visit and see with your own eyes (from the outside only). Scientology itself could be seen as an early and primitive "utility monster".
I agree with everything in this post!
Good advice but I recommend against dating apps unless you look like a celebrity.
I believe it should be possible at every Lesswrong post to make "low quality" comments that would be automatically hidden at the bottom of each comment section, underneath the "serious" comments. So you would have to click on them to make them visible. Such comments would be automatically given -100 points, but in a way that doesn't count against the poster's "account karma". The only requirement would be that the commenter should genuinely believe they're making a true statement. Replies to such comments would be similarly hidden. Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
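A minimal sketch of how the visibility and karma rules proposed above might be encoded, purely for illustration; every field and function name here is hypothetical and does not correspond to any real LessWrong code.

```python
# Hypothetical sketch of the proposed "low quality" comment rules.
# None of these names are real LessWrong APIs; this only illustrates
# the policy described above.
from dataclasses import dataclass

LOW_QUALITY_PENALTY = -100  # fixed displayed score, hidden by default


@dataclass
class Comment:
    author: str
    body: str
    low_quality: bool            # comment is marked "low quality"
    author_believes_true: bool   # the only hard requirement in the proposal
    parent_low_quality: bool = False  # replies inherit hidden status


def display_score(c: Comment) -> int:
    """Score shown on the comment; never counted against account karma."""
    return LOW_QUALITY_PENALTY if (c.low_quality or c.parent_low_quality) else 0


def hidden_by_default(c: Comment) -> bool:
    """Low-quality comments (and replies to them) start collapsed at the bottom."""
    return c.low_quality or c.parent_low_quality


def allowed(c: Comment, banned_phrases: list[str]) -> bool:
    """Require sincerity and exclude site-banned speech."""
    return c.author_believes_true and not any(p in c.body for p in banned_phrases)
```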
This post is about the limits of bodily autonomy. My reply is about the unexpected and disruptive ways these will be extended.
I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.