So I was banned from commenting on LessWrong . . .
My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is whatever doesn't work but can't be avoided: a kind of invincible stupidity.
For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, unpredictable, and above all prone to freezing and crashing.
The unusability of software is a kind of man-made evil.
Obviously, governments don't believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons.
In governments' case, that doubt may come from their experience that vastly expensive complex systems are always maximally dysfunctional and require massive teams of human experts to accomplish even a well-defined but difficult task.
Strong downvote, strong agree: this is off-topic, but a perfectly reasonable topic to start a thread about. It doesn't seem like a bottleneck for the world to me, though, because nobody is trying to remove the CLI; in fact, even Microsoft has been putting effort into ensuring that good CLIs are available on Windows. If you'd like to discuss it, I suggest creating a post about it. I expect it to get little involvement because, as I said, I simply don't agree that it's catastrophic and don't find this important compared to AI and human inter-being friendliness/alignment. Since you commented that you feel ignored, I figured I'd comment on why.
My favorite notion for a research paradigm is to investigate all the ways in which today's software fails, crashes, lags, doesn't work, or, most often, simply can't be used, even though CPUs are theoretically powerful enough to run much better software than what is currently available. It's exactly the opposite of the situation people fear will arise when AI arrives.
The article HAS to be long because it's so hard to imagine such a thing happening. Right now, software is diabolically bad in exactly the opposite way from what the article describes: current software is so defective, opaque, bloated, hard to use, slow, inscrutable, and intensely frustrating that it seems society might collapse from a kind of informational cancer instead.
I just tried the following prompt with GPT-3 (default playground settings):
Assume "mouse" means "world" in the following sentence. Which is bigger, a mouse or a rat?
I got "mouse" 2 out of 15 times. As a control, I got "rat" 15 times in a row without the first sentence. So there's at least a hint of being able to do this in GPT-3, wouldn't be surprised at all if GPT-4 could do this one reliably.
Yes, but it does show a tendency of huge complex networks (operating system userbases, the internet, human civilization) to rapidly converge to a fixed level of crappiness that absolutely won't improve, even as more resources become available.
Of course, there could be a sudden transition to a new state once artificial networks grow larger than any of the above.
For a human, the most important boundary is whatever contains the information in their brain. This is not just the brain itself, but the way the brain is divided by internal boundaries. This information could only be satisfactorily copied to an external device if these boundaries could be fully measured.
Guess I'm the only one with the exact opposite fear, expecting society to collapse back into barbarism.
As average IQ scores continue to decline, the most invincible force in the universe is human stupidity. It has a kind of implacable brutality that conquers everything.
I expect a grim future as the civilized countries decline to Third World status, with global mass starvation.
Aerospace predictions were too optimistic:
Clarke predicted intercontinental hypersonic airliners in the 1970s ("Death and the Senator", 1961). Heinlein predicted a base on Pluto established in the year 2000. Asimov predicted only suborbital space flights at very low acceleration, which casual day tourists would line up to take from New York in the 1990s, but also sentient non-mobile talking robots and non-talking sentient mobile robots by that decade. Robert Forward predicted in the novel Rocheworld (1984) that the first unmanned space probe would retu...
I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:
Good advice, but I recommend against dating apps unless you look like a celebrity.
I believe it should be possible at every LessWrong post to make "low quality" comments that would be automatically hidden at the bottom of each comment section, underneath the "serious" comments; you would have to click on them to make them visible. Such comments would automatically be given -100 points, but in a way that doesn't count against the poster's "account karma". The only requirement would be that the commenter genuinely believes they're making a true statement. Replies to such comments would be similarly hidden. Certain types of "unacceptable" speech could also be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.