All of Flaglandbase's Comments + Replies

I used to believe the world is so unimaginably horrible that we should do everything possible to accelerate AI progress, regardless of the risk, even if a runaway AI inadvertently turns the earth into a glowing orb dedicated to dividing by zero. I still believe that, but I also used to believe that in the past.

So I was banned from commenting on LessWrong . . .

My whole life I've been ranting about how incomprehensibly evil the world is. Maybe I'm the only one who thinks things shouldn't be difficult in the way they are.
Evil is that which doesn't work but can't be avoided: a type of invincible stupidity.

For example, software is almost supernaturally evil. I've been tortured for a quarter century by computer systems that are inscrutable, deliberately dysfunctional, and unpredictable; above all, the freezing and crashing.
The unusability of software is a kind of man-m... (read more)

7Raemon
Hey Flagland, I feel a bit bad about how this played out, but after thinking more and reading this, the mod team has decided to fully restrict your commenting permissions. I don't really expect you posting about your interests here on shortform to be productive for you or for LW. We're also experimenting more with moderating in public so it's clearer to everyone where our boundaries are. (I expect this to feel a bit more intense as a person-getting-moderated, but to probably be better overall for transparency.) To be clear, I think your topics have been totally fine things to think about and discuss on LessWrong. The problem is that, well, ranting and hate-filled screeds just aren't very productive most of the time. If it seemed like you were here to think clearly and figure out solutions, that'd be a pretty different situation.

For the past week my Windows 10 box has been almost unusable, spending its days wasting kilowatts and processing cycles downloading worse-than-useless malware "updates" with no way to turn them off!

Evil is the most fundamental truth of the world. The Singularity cannot happen soon enough . . .

I just spent four hours trying to get a new cellphone to work (that others insist I should have), and failed totally.

There is something fantastically wrong with this shitplanet, but completely different than anyone is willing to talk about. 

I didn't realize there was an automatic threshold of total retaliation the moment Russia nukes Ramstein air base.

1the gears to ascension
it's called mutually assured destruction for a reason.

I guess simple text-based browsers, and websites that just show the minimal information you want in a way the user can control, are not cool enough; so we have all those EU regulations that "solve" a problem by making it worse.

If whoever is running Russia is suicidal, sure, but if they still want to win, it might make sense to use strategic weapons tactically to force the other side to accept a stalemate right up to the end.

4Dagon
Yup, it's tricky to know what's "tolerable", and there's also an option for deniable terrorist use of "stolen" nukes.  But in any case, it won't be a gradual escalation - use of nukes against a NATO member or on US soil is the classic Schelling line that can't be crossed slowly.   (Literally; that's a large part of Thomas Schelling's work).
Answer by Flaglandbase

The highest-risk targets are probably the NATO airbases in Poland, Slovakia, and Romania used to supply and support Ukraine. There may also be nuclear retaliation against north German naval bases. Russia would be more likely to attack smaller American cities first before escalating.

3Dagon
Disagree. A nuclear attack on NATO, let alone on US cities, is already an escalation to full retaliatory engagement. Nuclear war won't be a gradual escalation - it'll be small-scale and "tolerable" by the US and allies until it's not, at which point it's a step-change to armageddon.

The only thing more difficult than getting readers for your blog is getting readers for your fiction (maybe not on here).

2Henrik Karlsson
You need communities to start out, and you need to hone the craft, but it is not nearly as hard as getting readers for fiction!

If the universe is really infinite, there should be an infinite number of possible rational minds. Any randomly selected mind from that list should statistically be infinite in size and capabilities. 

3the gears to ascension
not if measure decreases faster than linearly as size increases
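(A sketch, not from the thread, of how the measure point blocks the inference, assuming minds are weighted by a power-law measure over description length:)

If a mind of description length $n$ has measure $\mu(n) \propto n^{-\alpha}$, then

$$\sum_{n=1}^{\infty} n^{-\alpha} < \infty \quad \text{iff} \quad \alpha > 1,$$

so any normalizable measure already decays faster than linearly; under it a randomly sampled mind is finite with probability 1, and even the expected size $\sum_n n\,\mu(n)$ stays finite whenever $\alpha > 2$.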

Obviously, governments don't believe in autonomous AI risk, only in the risk that AI can be used to invent more powerful weapons. 

In the government's case, that doubt may come from its experience that vastly expensive complex systems are always maximally dysfunctional, and require massive teams of human experts to accomplish a well-defined but difficult task.

Also, the fact that human minds (selected out of the list of all possible minds in the multiverse) are almost infinitely small implies that intelligence may become exponentially more difficult, if not intractable, as capacities increase.

2porby
How so? It may suggest that hitting a perfectly humanlike mind out of all possible minds is hard (which I'd agree with), but hitting any functional mind would be made easier with more available paths. If you're including completely dysfunctional "minds" that can't do anything in the set of possible minds, I suppose that could pose a larger challenge for finding them using something like random search. Except our search isn't random; it's guided by pretty powerful optimizers (gradient descent, obviously, but also human intelligence). Also, random search works weirdly well sometimes, which is evidence against even this version of the idea.

This is a bit like how Scientology has tried to spread, but the E-hance is much better than the E-meter.

No reason to think he's better or worse than other politicians, but he's certainly very different. 

In a world of almost omnimalevolent conformity, it's strange to see the possibility that things could be different.

strong downvote, strong agree: this is offtopic, but a perfectly reasonable topic to start a thread about. it doesn't seem like a bottleneck for the world to me, though, because nobody is trying to remove the CLI, and in fact even microsoft has been putting effort into ensuring that good CLIs are available on windows. if you'd like to discuss it, I suggest creating a post about it; I expect it to get little involvement, because as I said, I simply don't agree that it's catastrophic and don't find this to be important compared to AI and human inter-being friendliness/alignment. Since you commented that you feel ignored, I figured I'd comment on why.

My favorite notion for paradigm research is to investigate all the ways in which today's software fails, crashes, lags, doesn't work, or most often just can't be used, despite CPUs being theoretically powerful enough to run much better software than what is currently available. In other words, just the opposite of the situation feared when AI arrives.

Strange that change isn't recognized, because change can be extremely bad. If even a single thing breaks down, life can become horrible, even if that thing could be fixed.

Answer by Flaglandbase

If there is a way for data structures to survive forever it would be something we couldn't imagine, like three leptons orbiting each other storing data in their precise separation distances, where it would take a godzillion eons to generate a single pixel in an ancient cat picture. 

A very sobering article. The software I use certainly doesn't get better, and money doesn't get less elusive. Maybe some unimagined new software could change people's lives like a mind extension or something.

The greatest observed mystery is that we humans (as possible minds) are finite (in fact almost as small as possible while still intelligent) and exist near the start of our potentially endless universe.

3avturchin
This mystery has a simple and unpleasant possible explanation: we will go extinct soon. The Doomsday argument is normal, as I wrote in another post.
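(For reference, a standard statement of the argument avturchin invokes, under the self-sampling assumption; the figures below are the usual textbook ones, not from the comment:)

If your birth rank $n$ is uniformly distributed among all $N$ humans who will ever live, then

$$P(N < 20\,n) = 0.95,$$

so with roughly $n \approx 10^{11}$ humans born so far, $N < 2 \times 10^{12}$ at 95% confidence.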

People involved with corporate and government decisions don't have time to deal with existential risks but are busy gaining and holding on to power. This article is for advisors and low-level engineers.

The article HAS to be long because it's so hard to imagine such a thing happening. Right now, software is diabolically bad in exactly the opposite way from the one described in the article. Meaning, current software is so defective, opaque, bloated, hard to use, slow, inscrutable and intensely frustrating that it seems society might collapse from a kind of informational cancer instead.

2the gears to ascension
reasonable-ish, though I would claim that the article needing to be long doesn't obviate the need for a hook in the intro that justifies itself honestly. honestly, though, it seems to me that a superintelligent system would have exactly the same kind of informational cancer, the worry could be poetically summarized as superintelligence is like injecting a massive overdose of growth hormone into an already cancer-afflicted patient.

We need a new medium to explain complex subjects, something like video games or virtual reality but better.

These models are very good for estimating external risks, but there are also internal risks if it's possible to somehow provide enough processing power to make a super-powerful AI; for example, it could torture internal simulations in order to understand emotions.

Any question that requires it to remember instructions; for example, tell it to assume "mouse" means "world", and then ask it which is bigger, a mouse or a rat.

3Matt Goldenberg
Using the prompt that the other commenter used, GPT solved this: If we replace the word "mouse" with "world" in the given context, the question would now read: "Which is bigger, a world or a rat?" In this context, a world is bigger than a rat.

I just tried the following prompt with GPT-3 (default playground settings):

Assume "mouse" means "world" in the following sentence. Which is bigger, a mouse or a rat?

I got "mouse" 2 out of 15 times. As a control, I got "rat" 15 times in a row without the first sentence. So there's at least a hint of being able to do this in GPT-3, wouldn't be surprised at all if GPT-4 could do this one reliably.

Yes, but it does show a tendency of huge complex networks (operating system userbases, the internet, human civilization) to rapidly converge to a fixed level of crappiness that absolutely won't improve, even as more resources become available.
Of course there could be a sudden transition to a new state with artificial networks larger than the above.

A lot of complexity in the universe seems to be built up from simple stringlike structures.

6the gears to ascension
haaaa string theory joke I get it

We already have (very rare) human "reasoners" who can see brilliant opportunities to break free from the status quo, and do new things with existing resources (Picasso, Feynman, Musk, etc.). There must be millions of hidden possibilities to solve our problems that no one has thought of. 

For a human, the most important boundary is whatever contains the information in their brain. This is not just the brain itself, but the way the brain is divided by internal boundaries. This information could only be satisfactorily copied to an external device if these boundaries could be fully measured. 

Politically, it would be easier to enact a policy requiring complete openness about all research, rather than to ban it. 

Such a policy would have the side effect of also slowing research progress, since corporations and governments rely on secrecy to gain advantages.

3NickGabs
They rely on secrecy to gain relative advantages, but absolutely speaking, openness increases research speed; it increases the amount of technical information available to every actor.

That was also how Goering killed himself just before he was due to be hanged. He cultivated good relations with his guards, and bribed one to return his cyanide capsule that had been confiscated at his arrest. 

I would much rather not exist than live in any type of primitive world at all.

Not if the universe is infinite in ways we can't imagine. That could allow progress to accelerate without end.

3Noosphere89
The problem is you need to be alive to experience infinity, and most proposals for infinite lifetimes rely on whole brain emulations and black holes/wormholes, and biological life doesn't survive the trip. That's why this is the most important century: we have a chance of achieving infinity by creating Transformative AI in a Big World.

I agree with everything in this article except the notion that this will be the most important century. From now on every century will be the most important so far.

4Shiroe
Yes, but this century will be the decisive one. The phrase "most important century" isn't claiming that future centuries lack moral significance, but the contrary.

Just about the most unacceptable thing you can say nowadays is that IQ is genetic. Then again the economic value of IQ is overrated.

2David Hugh-Jones
I think almost everyone (who isn't daft) accepts IQ is partly genetic - and the author does too. But the question is whether there's a gene-environment interaction in parenting styles, which is slightly different.

If you extrapolate the trends it implies no impact at all, as humanity continues to decline in every way like it currently is doing. 

2Mohammed Choudhary
And yet GDP per capita is 10 times higher than it was two centuries ago, on average across the world, and IQ is over 30 points higher on average than a century ago. Could it be that you are allowing personal feelings about how bad things are to muddle your reasoning about how bad the world actually is?
Answer by Flaglandbase

Guess I'm the only one with the exact opposite fear, expecting society to collapse back into barbarism. 
As average IQ continues to decline, the most invincible force in the universe is human stupidity. It has a kind of implacable brutality that conquers everything.
I expect a grim future as the civilized countries decline to Third World status, with global mass starvation.

2PipFoweraker
This implies your timelines for any large impact from AI would span multiple future generations, is that correct?

Almost impossible to imagine something that good happening, but just because you can't imagine it doesn't mean it's really impossible.

MSRayne

There's a lack of imagination around here then! I really should write up a post or even a sequence about my vision for the future - it'll knock your socks off!

Answer by Flaglandbase

The most naive possible answer is that by law any future AI should be designed to be part of human society. 

0Richard_Kennaway
You are completely correct. That is indeed the most naive possible answer. And also the most X, for various values of X, none of them being good things for an answer to be.

Ditto, except I'd be delighted with a copy and delete option, if such an inconceivably complex technology were available.

Aerospace predictions were too optimistic: 

Clarke predicted intercontinental hypersonic airliners in the 1970s ("Death and the Senator", 1961). Heinlein predicted a base on Pluto established in the year 2000. Asimov predicted only suborbital space flights, at very low acceleration, that casual day tourists would line up to take from New York in the 1990s, but also sentient non-mobile talking robots and non-talking sentient mobile robots by that decade. Robert Forward predicted in the novel Rocheworld (1984) that the first unmanned space probe would retu... (read more)

I'm completely opposed to any type of censorship whatsoever, but this site might have two restrictions:

  • Descriptions of disruptive or dangerous new technology that might threaten mankind
  • Politically or socially controversial speech considered beyond the pale by the majority of members or administrators

The Flag Land Base is an actual real-life example of an alignment failure you can visit and see with your own eyes (from the outside only). Scientology itself could be seen as an early and primitive "utility monster". 

4MSRayne
Cults in general are like that. They're essentially maximizer demons running on human wetware. (As, I would suggest, are corporations.)

I agree with everything in this post!

2MSRayne
Seriously? Wow. Thanks lol. Btw, every time I've seen your name on this site I've wanted to ask... why did you name your account after the capital of Scientology?

Good advice but I recommend against dating apps unless you look like a celebrity.

  • EDIT: of course the above advice against dating sites only applies if you're male.

I believe it should be possible on every LessWrong post to make "low quality" comments that would be automatically hidden at the bottom of each comment section, underneath the "serious" comments. So you would have to click on them to make them visible. Such comments would be automatically given -100 points, but in a way that doesn't count against the poster's "account karma". The only requirement would be that the commenter should genuinely believe they're making a true statement. Replies to such comments would be similarly hidden. Also certain types of "unacceptable" speech could be banned by the site. This would stimulate out-of-the-box discussion and brainstorming.
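(A minimal sketch of the proposed mechanism; all names are hypothetical and this is not LessWrong's actual data model or API.)

```python
# Sketch: "low quality" comments get an automatic display-only penalty,
# are collapsed at the bottom, and don't affect account karma.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    score: int = 0
    low_quality: bool = False  # author marks the comment as a "low quality" aside

def post_comment(author: str, text: str, low_quality: bool = False) -> Comment:
    comment = Comment(author, text, low_quality=low_quality)
    if low_quality:
        comment.score = -100  # automatic penalty, for display ordering only
    return comment

def account_karma(comments: list[Comment]) -> int:
    # Scores on "low quality" comments are excluded from account karma.
    return sum(c.score for c in comments if not c.low_quality)

def render_order(comments: list[Comment]) -> list[Comment]:
    # Serious comments sorted by score; low-quality ones collapsed at the bottom.
    serious = sorted((c for c in comments if not c.low_quality),
                     key=lambda c: -c.score)
    hidden = [c for c in comments if c.low_quality]
    return serious + hidden
```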

8MondSemmel
By which mechanism do you expect to improve discussion by introducing censorship?

This post is about the limits of bodily autonomy. My reply is about the unexpected and disruptive ways these will be extended.

5ChristianKl
Given that the post is not specifically about the ruling, a comment that talks about the ruling and not about the post should have no place here. 