All of BeyondTheBorg's Comments + Replies

It's learned helplessness. People have seen loved ones die and remember they could do nothing to stop it. Past longevity research has not panned out, and people have grown rightfully skeptical about a cure for what has up to this point just been the human condition. Though I suspect they'd gladly take such a cure if one existed.

We also think of death as a great equalizer that allows new (maybe better) people to succeed the old (bad) people (e.g. Supreme Court justices). Death currently settles tough questions about labor, retirement, marriage, population, and democracy, and our existing political institutions are not remotely ready to answer them in its absence.

 

Very comprehensive. I can think of a few more:

Transcendent AI: AGI discovers exotic physics beyond human comprehension and ways to transcend physical reality, and largely leaves us alone in our plane of reality. Kind of magical thinking, but this is the canonical explanation for AI friendliness in Iain M. Banks' Culture series, with the Sublime.

Matrix AI: We're in a Simulation of "the peak of humanity" and the laws of the Simulation prevent AGI.

Pious AI: AGI adopts one of the major human religions and locks in its values. Vast amounts of superintelligent c... (read more)

5Bart Bussmann
Thanks, good suggestions! I've added the following: Pious AI: Humanity builds AGI and adopts one of the major religions. Vast amounts of superintelligent cognition are devoted to philosophy, theology, and prayer. AGI proclaims itself to be some kind of Messiah, or merely God's most loyal and capable servant on Earth and beyond. I think Transcendent AI is close enough to Far far away AI, where in this case far far away means another plane of physics. Similarly, I think your Matrix AI scenario is captured by the existing scenario where the weird reason is, in this case, that we live in the Matrix.

If we don't have AGI at the level of diamondoid nanotech bacteria, it may be possible to reliably identify humans using some kind of physical smart card system requiring frequent or continuous re-authentication via biometric sensors, similar to breathalyzers / ignition interlock devices installed in the cars of DUI offenders.

Not the most practical or least invasive method that could be deployed for online services, but it is fairly secure if you're in a lab trying to keep an AGI in a box.
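As a rough illustration, here's a minimal sketch of the interlock-style loop this implies (Python; the sensor and matcher functions are hypothetical placeholders, not any real device's API):

```python
import time

REAUTH_INTERVAL_S = 30   # how often the user must re-prove presence
MAX_FAILURES = 3         # consecutive failed checks before lockout

def read_biometric_sample() -> bytes:
    """Hypothetical stand-in for a real sensor read (breath, fingerprint, camera)."""
    return b"sample"

def matches_enrolled_template(sample: bytes) -> bool:
    """Hypothetical stand-in for matching against the enrolled biometric template."""
    return sample == b"sample"

def run_session() -> None:
    failures = 0
    while failures < MAX_FAILURES:
        if matches_enrolled_template(read_biometric_sample()):
            failures = 0   # presence re-proved; session stays alive
        else:
            failures += 1  # a failed or missed check counts toward lockout
        time.sleep(REAUTH_INTERVAL_S)
    # Lockout reached: revoke the session token here, ignition-interlock style.
```

The point is just the shape of the loop: access is never granted once and kept; it decays unless the biometric check keeps passing.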

As for online solutions not requiring new hardware, recently I had to t... (read more)

1MrThink
'identify humans using some kind of physical smart card system requiring frequent or continuous re-authentication via biometric sensors' This is a really fascinating concept. Maybe the captcha could work in a way like "make a circle with your index finger" or some other strange movement, and the chip would use that data to somehow verify that the action was done. If no motion is required, I guess you could simply store the data output at one point and reuse it? Or a hacker could use their own smart chip to authenticate, without them actually having to do anything... Deepfakes are still detectable using AI, especially if you do complicated motions like putting your hand on your face, or talk (which also gives us sound to work with).
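For what it's worth, the replay worry ("store the data output at one point and reuse it") is exactly what a fresh random challenge is meant to rule out. A minimal sketch, assuming the card holds a secret key and only signs after a successful biometric check (HMAC here is just a stand-in for whatever signature scheme a real card would use):

```python
import hashlib
import hmac
import secrets

CARD_SECRET = secrets.token_bytes(32)  # provisioned inside the tamper-resistant card

def card_sign(challenge: bytes) -> bytes:
    """What the card computes once its biometric check passes."""
    return hmac.new(CARD_SECRET, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(card_sign(challenge), response)

# Fresh nonce each round, so a recorded response is useless later:
challenge = secrets.token_bytes(16)
response = card_sign(challenge)
assert server_verify(challenge, response)

next_challenge = secrets.token_bytes(16)  # next round's new nonce
assert not server_verify(next_challenge, response)  # replay fails
```

This doesn't stop the "hacker uses their own chip" attack, though; that requires binding the enrolled biometric template to a specific identity when the card is issued.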

Oops, turns out I confused r > g with something else I heard. Going to retract, maybe I can salvage this and rewrite for the next open thread.

0Douglas_Knight
Not only are your examples different from r > g, they imply g > r.
6Viliam
I generally agree with you, but the part "when Americans think of socialism or communism, they think of authoritarian interpretations" has some good reasons behind it. By the way, I am not an American, but that statement is still true for me, maybe even more so: I remember the regime where I grew up, and I imagine that the same humans would most likely produce the same outcome. I am not saying it's inevitable; only that the burden of proof is on the people who say "trust me, this time it will be completely different".

I have the impression that when people propose something called "socialism", they usually don't even think about how specifically they would design the system to prevent the standard historical outcome (a few million people killed or starved to death). They just optimistically assume that this time the problem will magically solve itself. Because they are nice people, or something. (Like that would change something; there were also many nice people in Soviet Russia, but they were not able to stop Stalin.)

It's like talking with a guy who has already built three nuclear power plants, and within a month each of them exploded and killed everyone around. But the guy just shrugs, says it was probably some irrelevant random technical issue, and then proposes to build another nuclear power plant in your backyard. Giving the previous failures some thought is the least one can do in such a situation.

Another thing that feeds my distrust is that when groups who want to build some kind of "socialism" contain more than a dozen members, they usually already have some authoritarian personalities in positions of power. The corruption is already there, even while their power is almost zero compared to what they aim to achieve, and they cannot fix it now, but they believe the problem will disappear later. It works exactly the other way round: the more power you get, the more psychopaths will be attracted to join you and climb to the top. Similarly, if someone talks about how free spe... (read more)

IRC and Study Hall lurker here, thought I'd post a Reddit-tier ramble not up to par with the rest of this site. Without further ado, my first post:

I've been all over the spectrum. I'm highly skeptical of big corporate capitalism these days, but I do believe in free markets. The rules of classical economics are logically sound, but they're not very humane.

The sad truth is that employees are expendable, and only paid as much as there are people able and willing to do the job. Today's job shortage and labor surplus mean low wages and benefits for those lu... (read more)

[This comment is no longer endorsed by its author]
1BeyondTheBorg
Oops, turns out I confused r > g with something else I heard. Going to retract, maybe I can salvage this and rewrite for the next open thread.