From Wikipedia:
In 331 BC, a deadly epidemic hit Rome and at least 170 women were executed for causing it by veneficium.[18] In 184–180 BC, another epidemic hit Italy, and about 5,000 people were brought to trial and executed for veneficium.[17] If the reports are accurate, writes Hutton, "then the Republican Romans hunted witches on a scale unknown anywhere else in the ancient world".[17]
... and anyway it's not very convincing to single out witch hunting among all the other things people have always done, because people have always been shitty. Including...
Some hypothetical past person's inability to recognize their own despicable cruelty doesn't preclude their ability to recognize yours. Even given relatively compatible values, everybody gets their own special set of blind spots.
I do agree that romanticizing the past to vilify the present is wrong, though. And not good scholarship if you don't bring a lot of evidence along with you. The idea that modernity is "the problem" is badly suspect. So is the idea that "the central values of this era are largely those of biological competition..."
in the case of software engineers crossing into the humanities, it's far too applicable.
They do it in science and technology too. You're constantly seeing "My first-order, 101-level understanding of some-gigantic-field allows me to confidently say that something-actual-experts-know-is-really-hard is trivial".
Less Wrong is pretty prone to it, because you get people thinking that Pure Logic can take them further than it actually can, and reasoning from incomplete models.
Of course people will use the knowledge they gain in collaboration with you for the purposes that they think are best.
It is entirely normal for there to be widely accepted, clearly formalized, and meaningfully enforced restrictions on how people use knowledge they've gotten in this or that setting... regardless of what they think is best. It's a commonplace of professional ethics.
I guess it depends on how it's described in context. And I have to admit it's been a long time. I'd go reread it to see, but I don't think I can handle any more bleakness right now...
Whenever I find my will to live becoming too strong, I read Peter Watts. —James Nicoll
I don't see where you get that. I saw no suggestion that the aliens (or vampires) in Blindsight were unaware of their own existence, or that they couldn't think about their own interactions with the world. They didn't lack any cognitive capacities at all. They just had no qualia, and therefore didn't see the point of doing anything just for the experience.
There's a gigantic difference between cognitive self-awareness and conscious experience.
Do you think these sorts of scenarios are worth describing as "everyone is effectively dead"?
Not when you're obviously addressing people who don't necessarily know the details of the scenarios you're talking about, no... because the predictions could be anything, and "effectively dead" could mean anything. There are lots of people on Less Wrong who'd say that IQ 150 humans living in ease and comfort were "effectively dead" if they didn't also have the option to destroy that ease and comfort.
What does "effectively dead" mean? Either you're dead, or you're not.
Not everybody is going to share your values about whether any given situation is better than, equivalent to, or worse than being dead.
I think if there are 40 IQ humanoid creatures (even having been shaped somewhat by the genes of existing humans) running around in habitats being very excited and happy about what the AIs are doing, this counts as an existentially bad ending comparable to death. I think if everyone's brains are destructively scanned and stored on a hard-drive that eventually decays in the year 1 billion having never been run, this is effectively dead. I could go on if it would be helpful.
Do you think these sorts of scenarios are worth describing as "everyone is effectively dead"?
I used to exchange MS office documents with people all the time without running Windows. Admittedly it wasn't "my job to use Excel", but I did it regularly, and I could have used Excel all day if I'd needed to. And that was years ago; it's actually gotten easier to sandbox the software now.
Anyway, all that office stuff is now in the "in the cloud" category, and to the degree it's not, Microsoft wants it to be.
The only things I can think of that might actually be hard to do without putting Windows on the bare metal would be CAD, 3D rendering, simulation, th...
A lot of people need to use software that's only available on Windows.
Maybe once a year I'm forced to do that, but it's been a long time since I've found anything that I couldn't run under emulation (WINE is not not an emulator), or in a VM. And those sandboxes are typically going to be forced to known states at every startup. And they definitely don't have access to any of the juicy information or behavioral pressure points that would motivate the war.
Anyway, I think that most of the software that used to only run under Windows is now starting to only run in "the cloud". Which is of course its own special kind of hell, but not this kind of hell.
Sometimes, during a system update, Windows removes the dual-boot setup from my computer and replaces it with a Windows-only boot.
... but you don't delete Windows.
I mean, if you let them have an AI war in your computer, then I can see where they might go ahead and do that. But why are you choosing to permit it?
Things are getting scary with the Trump regime.
Things got scary November 5 at the very latest. And I haven't even been in the US for years.
The deportations, both the indiscriminate ones and the vindictive ones, represent a very high level of lawlessness, one that hasn't been seen in a long time. Not only are they ignoring due process, they're actively thwarting it, and openly bragging about doing so. They're not even trying to pretend to be remotely decent. The case you mention isn't even close to the worst of them; that one could at least theoretically...
... but that means she learned what it was at age 5. I'd assume most people learn between about 4 and 8, maybe 10...
I am aware of it and I regret to say that I've tasted it...
To most Americans, "cream cheese" is savory.
Um, no, not particularly?
cured fish.
Why would I do that to myself? I don't feel my sins deserve that level of punishment.
You don't put it on dessert, right?
All the time. Well, in.
Specifically, I think we should call it "cheesecake frosting".
I would read that, first, as something you'd put on cheesecake, and, second, in terms of some of the kinds of cheesecake out there that would be unfortunate as frostings.
...On the other hand, I think whipped cream cheese on an Oreo is a decent imitation of cheesecake.
That's not "de-biasing".
Datasets that reflect reality can't reasonably be called "biased", but models that have been epistemically maimed can.
If you want to avoid acting on certain truths, then you need to consciously avoid acting on them. Better yet, go ahead and act on them... but in ways that improve the world, perhaps by making them less true. Pretending they don't exist isn't a solution. Such pretense makes you incapable of directly attacking the problems you claim to want to solve. But this is even worse... it's going to make the models genuinely inc...
You want to be an insignificant, and probably totally illiquid, junior partner in a venture with Elon Musk, and you think you could realize value out of the shares? In a venture whose long-term "upside" depends on it collecting money from ownership of AGI/ASI? In a world potentially made unrecognizable by said AGI/ASI?
All of that seems... unduly optimistic.
No particular aspect. Just continuity: something which has evolved from me without any step changes that are "too large". I mean, assuming that each stage through all of that evolution has maintained the desire to keep living. It's not my job to put hard "don't die" constraints on future versions.
As far as I know, something generally continuity-based is the standard answer to this.
If the plural weren't "octopuses", it would be "octopodes". Not everything is Latin.
Yes, but that's not relevant to the definition of Turing equivalence/completeness/universality.
Every Turing machine definition I've ever seen says that the tape has to be truly unbounded. How that's formalized varies, but it always carries the sense that the program doesn't ever have to worry about running out of tape. And every definition of Turing equivalence I've ever seen boils down to "can do any computation a Turing machine can do, with at most a bounded speedup or slowdown". Which means that programs on a Turing-equivalent computer must not have to...
yes, you can consider a finite computer in the real world to be Turing-complete/Turing-universal/Turing-equivalent,
You can, but you'll be wrong.
Great, "unbounded" isn't the same as "infinite", but in fact all physically realizable computers are bounded. There's a specific finite amount of tape available. You cannot in fact just go down to the store and buy any amount of tape you want. There isn't unlimited time either. Nor unlimited energy. Nor will the machine tolerate unlimited wear.
For that matter, real computers can't even address unlimited storage,...
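For concreteness, here's a minimal sketch of the usual textbook formalization (notation varies by source):

$$M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}}), \qquad \delta : Q \times \Gamma \to Q \times \Gamma \times \{L, R\}$$

Every set named in that tuple is finite. The unboundedness lives entirely in the tape: a sequence of cells over $\Gamma$, only finitely many of which are non-blank at any step, but which never runs out no matter how far the head moves. That "never runs out of tape" guarantee is exactly the thing a physically bounded machine can't provide.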
The problem with that technique is that it comes off as unbearably patronizing to a pretty large fraction of the people who actually notice that you're doing it. It's a thing that every first-line corporate manager learns, and it gets really obnoxious after a while. So you have to judge your audience well.
I think you're in peril of misjudging the audience if you routinely divide the world into "normies" and "rationalists".
The vision is of everything desirable happening effortlessly and everything undesirable going away.
Citation needed. Particularly for that first part.
Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.
You're thinking pretty small there, if you're in a position to hack your body that way.
...If you're a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role
Who says humans vary all that much in intelligence? Almost all humans are vastly smarter, in any of the ways humans traditionally measure "intelligence", than basically all animals. Any human who's not is in seriously pathological territory, very probably because of some single, identifiable cause.
The difference between IQ 100 and IQ 160 isn't like the difference between even a chimp and a human... and chimps are already unusual.
Eagles vary in flying speed, but they can all outfly you.
Furthermore, eagles all share an architecture adapted to the particular ...
If you're planning to actually do the experiments it suggests, or indeed act on any advice it gives in any way, then it's an agent.
“If we don’t build fast enough, then the authoritarian countries could win...”
Am I being asked to choose between AGI/ASI doing whatever Xi Jinping says, and it doing whatever Donald Trump says?
The situation begins to seem confusing.
Do I correctly understand that the latest data you have are from 2018, and you have no particular prospect of getting newer data?
I would naively guess that most people who'd been trying to get somebody killed since 2018 would either have succeeded or given up. How much of an ongoing threat do you think there may be, either to intended victims you know about, or from the presumably-less-than-generally-charming people who placed the original "orders" going after somebody else?
It's one thing to burn yourself out keeping people from being murdered, but it's a different thing to burn yourself out trying to investigate murders that have already happened.
It seems like it's measuring moderate vs extremist, which you would think would already be captured by someone's position on the left vs right axis.
Why do you think that? You can have almost any given position without that implying a specific amount of vehemence.
I think the really interesting thing about the politics chart is the way they talk about it as though the center of that graph (which is defined by the center of a collection of politicians, chosen who-knows-how, but definitely all from one country at one time) is actually "the political center"...
I think the point is kind of that what matters is not what specific cognitive capabilities it has, but whether whatever set it has is, in total, enough to allow it to address a sufficiently broad class of problems, more or less equivalent to what a human can do. It doesn't matter how it does it.
Altman might be thinking in terms of ASI (a) existing and (b) holding all meaningful power in the world. All the people he's trying to get money from are thinking in terms of AGI limited enough that it and its owners could be brought to heel by the legal system.
For the record, I genuinely did not know if it was meant to be serious.
OK, from the voting, it looks like a lot of people actually do think that's a useful thing to do.
Here are things I think I know:
Thanks Josh,
The motivation for the comment was centered on point 1: "including descriptions of scheming has been seen to make LLMs scheme a bit more."
I agree with points 2 and 3. Like @joshc alludes to, he's a weekend blogger, not a biologist. I don't expect a future superintelligence to refer to this post for any plans.
Points 4 and 5 seem fairly disconnected from whether or not it's beneficial to add canary strings to a given article, since adding the canary string at least makes it plausible to have the text excluded from the training data and its p...
Are you actually serious about that?
OK, from the voting, it looks like a lot of people actually do think that's a useful thing to do.
Here are things I think I know:
So, since it didn't actively want to get so violent, you'd have a much better outcome if you'd just handed control of everything over to it to begin with and not tried to keep it in a box.
In fact, if you're not in the totalizing Bostromian longtermist tile-the-universe-with-humans faction or the mystical "meaning" faction, you'd have had a good outcome in an absolute sense. I am, of course, on record as thinking both of those factions are insane.
That said, of course you basically pulled its motivations and behavior out of a hat. A real superintelligence might...
I agree it would have been just as realistic if everyone died.
But I think the outcomes where many humans survive are also plausible, and under-appreciated. Most humans have very drifty values, and yet even the most brutally power-seeking people often retain a 'grain of morality.'
Also, this outcome allowed me to craft a more bittersweet ending that I found somehow more convincingly depressing than 'and then everyone dies.'
What do you propose to do with the stars?
If it's the program of filling the whole light cone with as many humans or human-like entities as possible (or, worse, with simulations of such entities at undefined levels of fidelity) at the expense of everything else, that's not nice[1] regardless of who you're grabbing them from. That's building a straight up worse universe than if you just let the stars burn undisturbed.
I'm scope sensitive. I'll let you have a star. I won't sell you more stars for anything less than a credible commitment to leave the rest alone...
Because of the "flood the zone" strategy, I can't even remember all the illegal stuff Trump is doing, and I'm definitely not going to go dig up specific statutory citations for all of it. I tried Gemini deep research, and it refused to answer the question. I don't have access to OpenAI's deep research.
Things that immediately jump to mind as black letter law are trying to fire inspectors general without the required notice to Congress, and various impoundments. I would have to do actual research to find the specific illegalities in all the "anti-DEI" stuff....
Why do you believe that DOGE is mostly selected for personal loyalty? Elon Musk seems to openly say whatever he wants, even if that goes against what Trump said previously.
You're right. I shouldn't have said that, at least not without elaboration.
I don't think most of the people at the "talks to Trump" level are really picked for anything you could rightly call "personal loyalty" to Trump. They may be sold to Trump as loyal, but that's probably not even what's on his mind, as long as he's never seen you make him look bad. I don't think disagreeing...
And, I just don't think that's the case. I think this is pretty-darn-usual and very normal in the management consulting / private equity world.
I don't know anything about how things are done in management consulting or private equity.[1] Ever try it in a commercial bank?
Now imagine that you're in an environment where rules are more important than that.
Coups don't tend to start by bringing in data scientists.
Coups tend to start by bypassing and/or purging professionals in your government and "bringing in your own people" to get direct control over key levers...
This sort of tactic. This isn't necessarily the best example, just the literal top hit on a Google search.
The tactic of threatening to discriminate against uncooperative states and localities is getting a lot of play. It's somewhat limited at the federal level because in theory the state and local policies they demand have to be related to the purpose of the money (and a couple of other conditions I don't remember). But the present fashion is t...
Technically anything that's authorized by the right people will pass an audit. If you're the right person or group, you can establish a set of practices and procedures that allows access with absolutely none of those things, and use the magic words "I accept the risk" if you're questioned. That applies even when the rules are actually laws; it's just that then the "right group" is a legislative body. The remedy for a policy maker accepting risks they shouldn't isn't really something an auditor gets into.
So the question for an auditor is whether the properl...
I haven't looked into this in detail, and I'm not actually sure how unique a situation this is.
It's pretty gosh-darned unheard of in the modern era.
Before the civil service system was instituted, every time you got a new President, you'd get random wholesale replacements... but the government was a lot smaller then.
To have the President,
If you're really concerned, then just move to California! It's much easier than moving abroad.
I lived in California long enough ago to remember when getting queer-bashed was a reasonable concern for a fair number of people, even in, say, Oakland. It didn't happen daily, but it happened relatively often. If you were in the "out" LGBT community, I think you probably knew somebody who'd been bashed. Politics influence that kind of thing even if it's not legal.
... and in the legal arena, there's a whole lot of pressure building up on that state and local res...
I think that what you describe as being 2 to 15 percent probable sounds more extreme than what the original post described as being 5 percent probable. You can have "significant erosion" of some groups' rights without leaving the country being the only reasonable option, especially if you're not in those groups. It depends on what you're trying to achieve by leaving, I guess.
Although if I were a trans person in the US right now, especially on medication, I'd be making, if not necessarily immediately executing, some detailed escape plans that could be executed on short notice.
My gut says it's now at least 5%, which seems easily high enough to start putting together an emigration plan. Is that alarmist?
That's a crazy low probability.
More generally, what would be an appropriate smoke alarm for this sort of thing?
You're already beyond the "smoke alarm" stage and into the "worrying whether the fire extinguisher will work" stage.
But it's very unclear whether they institutionally care.
There are certain kinds of things that it's essentially impossible for any institution to effectively care about.
I thought "cracked" meant "insane, and not in a good way". Somebody wanna tell me what this sense is?
Can you actually keep that promise?
As a final note: the term "Butlerian Jihad" is taken from Dune and describes the shunning of "thinking machines" by mankind.
In Dune, "thinking machines" are shunned because of a very longstanding taboo that was pretty clearly established in part by a huge, very bloody war. The intent was to make that taboo permanent, not a "pause", and it more or less succeeded in that.
It's a horrible metaphor and I strongly suggest people stop using it.
...the Culture ending, where CEV (or similar) aligned, good ASI is created and brings us to some hypothetical utopia...
It's a pretty big assumption to claim that "moral progress" is a thing at all.
A couple of those might have been less taboo 300 years ago than they are now. How does that square with the idea of progress?
Did you leave any answers out because they were too taboo to mention? Either because you wouldn't feel comfortable putting them in the post, or because you simply thought they were insanely odious and therefore obvious mistakes?