I’m not sure I understand your weighting argument. Some capabilities are “convergently instrumental” because they are useful for achieving a lot of purposes. I agree that AI construction techniques will target obtaining such capabilities, precisely because they are useful.
But if you gain a certain convergently instrumental capability, it then automatically allows you to do a lot of random stuff. That’s what the words mean. And most of that random stuff will not be safe.
I don’t get what the difference is between “the AI will get convergently instrumental c...
Exactly. You can’t generalize from “natural” examples to adversarial examples. If someone is trying hard to lie to you about something, verifying what they say can very well be harder than finding the truth would have been absent their input, particularly when you don’t know whether they want to lie, or about what.
I’m not an expert in any of these fields and I’d welcome correction, but I’d expect verification to be at least as hard as “doing the thing yourself” in cases like espionage, hacking, fraud, and corruption.
AI accelerates the timetable for things we know how to point AI at
It also accelerates the timetable for random things that we don’t expect and don’t even try to point the AI at but that just happen to be easier for incrementally-better AI to do.
Since the space of stuff that helps alignment seems much smaller than the space of dangerous things, you’d expect that most things the AI randomly accelerates, without us pointing it at them, will be dangerous.
See above. Don’t become a munitions engineer, and, being aware that someone else will take that role, try to prevent anyone from taking that role. (Hint: That last part is very hard.)
The conclusions might change if planet-destroying bombs are necessary for some good reason, or if you have the option of safely leaving the planet and making sure nobody who comes with you wants to build planet-destroying bombs. (Hint: That last part is still hard.)
For what it’s worth, the grammar and spelling were much better than is usual for even the native English part of the Internet. That’s probably fainter praise than it deserves: I don’t remember actually noticing any such fault, which probably means there are few of them.
The phrasing and wording did sound weird, but I guess that’s at least one reason why you’re writing, so congratulations and I hope you keep it up! I’m quite curious to see where you’ll take it.
Indeed, the only obvious “power” Harry has that is (as far as we know) unique to him is Partial Transfiguration. I’m not sure if Voldie “knows it not”; as someone mentioned last chapter, Harry used it to cut trees when he had his angry outburst in the Forbidden Forest, and in Azkaban as well. In the first case Voldie was nearby, allegedly to watch out for Harry, but far enough away to be undetectable via their bond, so it’s possible he didn’t see what exact technique Harry used. In Azkaban, too, he was allegedly unconscious.
I can’t tell if he could ha...
Well, we only know that Harry feels doom when near Q and/or his magic, and that in one case in Azkaban something weird happened when Harry’s Patronus interacted with what appeared to be an Avada Kedavra bolt, and that Q appears to avoid touching Harry.
Normally I’d say that faking the doom sensations for a year, and faking being incapacitated while trying to break someone out of Azkaban, would be too complicated. But in this case...
Both good points, thank you.
Thank you, that was very interesting!
I sort of get your point, but I’m curious: can you imagine learning (with thought-experiment certainty) that there is actually no reality at all, in the sense that no matter where you live, it’s simulated by some “parent reality” (which in turn is simulated, etc., ad infinitum)? Would that change your preference?
most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 Billion years older than the Earth
How do you know? (Not rhetorical, I have no idea and I’m curious.)
If the final goal is of local scope, energy acquisition from out-of-system seems to be mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate at. (That is, assuming no FTL and the like.)
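To put a rough number on those delays (the 4.2 light-year distance to the nearest star system is real; the 0.1c probe speed is just an illustrative assumption):

$$ t = \frac{d}{v} = \frac{4.2\ \mathrm{ly}}{0.1c} = 42\ \mathrm{years\ one\ way} $$

So the round trip for any energy return is on the order of a century, an eternity on fast AI time-scales.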
Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but would do so because it needs energy for something inside its system of origin?
I might see one perhaps extending to a few neighboring systems in a very dense cluster for some strange reason, but I can’t...
Honestly, I can’t really find anything significant in this comment I disagree with.
It's a bit like opening a thread arguing that the Spanish inquisition was right for torturing nonbelievers because they acted under the assumption that they could save souls from eternal damnation by doing so.
But the OP didn’t argue in support of torturing people, as far as I can tell. In the terms of your analogy, my reading of the OP was a bit like:
“Hey, if the Spanish Inquisition came to you and offered the following two options, would you pick either of them, or refuse both? The options are (1) you’re excommunicated, then you get all the ca...
Sure, but then why do you expect memory and experience would also behave in a common sense manner? (At least, that’s what I think you did in your first comment.)
I interpreted the OP as “I’m confused about memory and experience; let’s try a thought experiment about a very uncommon situation just to see what we think would happen”. And your first comment reads to me as “you picked a bad thought experiment, because you’re not describing a common situation”. Which seems to completely miss the point; the whole purpose of the thought experiment was to investi...
Once AI is developed, it could "easily" colonise the universe.
I was wondering about that. I agree with the could, but is there a discussion of how likely it is that it would decide to do that?
Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes in trying to create military AI, and intentional creation by...
The problem with that is that life on Earth appeared about 4 billion years ago, while the Milky Way is more than 13 billion years old. If life were somewhat common, we wouldn’t expect to be the first, because there was time for it to evolve several times in succession, and it had lots of solar systems where it could have done it.
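Roughly, with round numbers (13 Gyr for the galaxy and 4 Gyr for life on Earth, both as stated above):

$$ 13\ \mathrm{Gyr} - 4\ \mathrm{Gyr} = 9\ \mathrm{Gyr} \approx 2 \times 4\ \mathrm{Gyr} $$

i.e., enough lead time for about two more Earth-length evolutionary runs back to back.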
A possible answer could be that there was a very strong early filter during the first part of the Milky Way’s existence, and that filter lessened in intensity in the last few billion years.
The only examples I can think of are elem...
Your rephrasing essentially says that you torture an identical copy of a person for a week.
If you read it carefully, my first rephrasing actually says that you torture the original person for a week, and then you (almost) perfectly erase their memories (and physical changes) during that week.
This is not changing the nature of the thought experiment in the OP; it is exactly the same experiment, plus a hypothetical example of how it could be achieved technically, because you implied that the experiment in the OP is impossible to achieve and thus ill-pose...
It seems rather silly to argue about that, when the thought experiment starts with Omega and billion-dollar bets. That allows glossing over a lot of details. Your position is like objecting to a physics thought experiment that assumes frictionless surfaces, while the same thought experiment also assumes mass-less objects.
As a simple example: Omega might make a ridiculously precise scan of your entire body, subject you to the experiment (depending on which branch you chose), then restore each molecule to the same position and state it was ...
perhaps costly, but worth the price
How about extending the metaphor and calling these techniques "Rituals" (they require a sacrifice, and even though it’s not as “permanent” as in HPMOR, it’s usually dangerous), reserving “Dark” for the arguably-immoral stuff?
The nice thing about hacking instrumental goals into terminal goals is that while they’re still instrumental you can easily change them.
In your case: You have the TG of becoming fit (BF), and you previously decided on the IG of going to the gym (GG). You’re asking about how to turn GG into a TG, which seems hard.
But notice that you picked GG as an instrument towards attaining BF before thinking about Terminal Goal Hacking (TGH), which suggests it’s not optimal for attaining BF via TGH. The better strategy would be to first ask yourself if another IG woul...
It doesn’t work if you just click the link, but if you copy the link address and paste it in a browser then it works. (Because there isn’t a Referer header anymore.)
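A minimal sketch of the mechanism, assuming the server rejects requests whose Referer points at an outside page (the URLs and class name here are hypothetical, purely for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReferrerDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Clicking the link: the browser attaches the current page as a
        // Referer header, which the server can inspect and reject.
        HttpRequest clicked = HttpRequest.newBuilder(URI.create("https://example.com/page"))
                .header("Referer", "https://forum.example.org/thread")
                .build();

        // Pasting the address into the address bar: no Referer header at all,
        // so the same server-side check never triggers.
        HttpRequest pasted = HttpRequest.newBuilder(URI.create("https://example.com/page"))
                .build();

        System.out.println(client.send(clicked, HttpResponse.BodyHandlers.ofString()).statusCode());
        System.out.println(client.send(pasted, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```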
Medical issues that make life miserable but can be fixed with ~$1M would be a (bit more concrete) example. Relatively rare, as you said.
I have a rare but recurring dream that resembles very much what you describe.
There's no good reason to assume
I agree, but I’m not sure the examples you gave are good reasons to assume the opposite. They’re certainly evidence of intelligence, and there are even signs of something close to self-awareness (some species apparently can recognize themselves in mirrors).
But emotions are a rather different thing, and I’m rather more reluctant to assume them. (Particularly because I’m even less sure about the word than I am about “intelligence”. But it also just occurred to me that between people emotions seem much easier to fake than i...
Personally I agree, but if I were a devil I’d just fall in love with the kind of double-think you’d need to pull this off. After all, I wouldn’t actually want to suppress faith, I’d just want to create in people’s minds associations between atheism and nice places like Stalinist Russia. Phrases like “scientific socialism” would just send nice little shivers of pleasure down any nice devil’s spine, wouldn’t they?
Funny how if I were a devil, and I tried to make the world miserable through faith, and I were getting concerned about those dangerous anti-faith ideas, I’d try to create a horrible political regime based on suppressing faith ;)
I see your point (I sometimes get the same feeling), but if you think about it, it’d be much more astonishing if someone built a universal computer before having the idea of a universal computer. It’s not really common to build something much more complex than a hand ax by accident. Natural phenomena are often discovered like that, but machines are usually imagined a long time before we can actually build them.
Everyone can remember a phone number because it's three numbers, where they might have problems remembering ten separate digits
This is slightly irrelevant, but for some reason I can’t figure out at all, pretty much all phone numbers I learned (and, incidentally, the first thirty or so decimals of π) I learned digit-by-digit rather than in groups. The only exception was when I moved to France: I learned my French number by-separate-digits (i.e., five-eight instead of fifty-eight) in my native language, but grouped in tens (i.e., by pairs) in French. This isn’t a characteristic of my native language, either; even in my family nobody does this.
You’re right, I remember now.
Hmm, it still sounds like they should be used more often. If you’re falsely accused and about to be condemned to Azkaban, wouldn’t you sacrifice a portion of your magic if it could compel your accuser to confess? As corrupt as the Wizengamot is, it should still happen on occasion.
Yeah, but I think he was mentioned before (and he shows up in most of the guards books). Vetinari is awesome in kind of an obvious way, but he’s not very relevant outside the city. (Well, except for a few treaties with dwarves and the like.)
In contrast, Granny (and sometimes the other witches) arguably saved the entire world several times. There are other characters who do that, but it’s more... luck I guess. The witches actually know what they’re doing, and work hard to achieve their goals.
(For example, though it’s never explicitly said, I got a very strong suspicion that Granny remained a life-long virgin specifically because she expected that it might be useful against unicorns.)
I’ve seen people here repeatedly mention the City Watch books, but I’m surprised the witches books are almost never mentioned. Seriously, am I the only one who thought Granny Weatherwax and her team are basically the most useful people on the Disc?
Perhaps, although “story logic” can imply parents being willing to sacrifice for their children. That’s a problem with thinking of the world in terms of stories, you can find a trope to justify almost anything. Authors always can (and often do) pull deus ex machinas out of their nether regions.
I wouldn’t be surprised if it did happen, at least once or twice. After all, it happened with the adults too, e.g. Juergen or whatever his name was.
Well, yes, but the whole point of building AI is that it work for our gain, including deciding what that means and how to balance between persons. Basically if you include in “US legal system” all three branches of government, you can look at it as a very slow AI that uses brains as processor elements. Its friendliness is not quite demonstrated, but fortunately it’s not yet quite godlike.
A couple more recent thoughts:
Dodging Death Eaters (at least competent ones) on a broom is not something I expect to happen in MoR. Well, not unless it’s rocket-powered, and I wouldn’t expect that to work more than once either.
Most of the big, non-line-of-sight weapons we (muggles) have arose for the purpose of killing lots of people in big battles (even though we’re using them for other stuff now), which isn’t really useful for wizards due to their low numbers, but:
The Interdict of Merlin is MoR-specific, and it comes up at the beginning of the Ministry chapters.
Well, she certainly was, and plausibly will be. I’m not quite sure about is, but it’s mostly because my intuition seems to think that either evaluating age() on a currently-not-living person should throw an IllegalStateException, or comparing its result with that for living persons should throw ClassCastException. But that’s probably just me :)
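A toy model of that intuition in actual Java (entirely hypothetical, just to spell out the joke):

```java
// Toy model: age() is only well-defined for the currently-living.
class Person {
    private final int birthYear;
    private Integer deathYear; // null while alive

    Person(int birthYear) { this.birthYear = birthYear; }

    void die(int year) { this.deathYear = year; }

    int age(int currentYear) {
        // Option 1 from the comment: refuse to evaluate age() at all for a
        // currently-not-living person. (The ClassCastException variant would
        // instead return a hypothetical PosthumousAge type whose comparison
        // with a living person's age fails.)
        if (deathYear != null) {
            throw new IllegalStateException("age() undefined while not living");
        }
        return currentYear - birthYear;
    }
}
```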
Well, regardless of whatever other plotting goes on among them, all participants actually do have a very good reason to join: their kids still go to Hogwarts, they want to keep them safe but at the same time groom them to inherit the family fortunes, and, as was pointed out explicitly in the chapter, there are still good reasons, both political and safety-related, not to go to the other schools. An (at least temporary) alliance for the protection of their children is actually quite logical for all concerned.
Well, Hermione is (well, was) slightly older than Harry, and she seemed to have entered the romantic stage already. A couple years to let Harry catch up might not be such a bad thing.
I agree with your analysis, but I also thought this was intended as a straightforward signal to the other students that “we have to fight for ourselves” is not just the usual adult “lording over” the kids. I think it was meant to reinforce solidarity, defuse instinctive teenage rebellion against “the adults’ rules”, and also reinforce the message that the professors are no longer to be trusted to handle things.
I think the rapid expansion when the transfiguration ends would be enough to set it off.
Also: Metallic sodium and potassium, as well as phosphorus, are quite reactive with human tissue. For bonus points you can make a mixed projectile with two transfigurations, e.g. a core transfigured from ice surrounded by a shell transfigured from sodium, which will explode once the two transfigurations end.
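For reference, the reaction doing the work once the sodium shell reverts next to the (former) ice core is the standard one:

$$ 2\,\mathrm{Na} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{NaOH} + \mathrm{H_2} $$

It’s strongly exothermic, and the released hydrogen tends to ignite on its own.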
To be fair, it ate her legs, not just her feet.
To be even fairer, that might be just because the legs were bite-sized, and polite trolls are taught by their mothers not to nibble their food.
only person Dumbledore knows and has access to that really matters to Harry
Well, he could have killed Harry’s parents. It might not trigger Harry’s “kill death by any means necessary” reaction, but then I don't think anyone would have anticipated that in-universe, given that even Q was surprised by the prophecy.
but of higher status than craftsmen and peasants.
I don’t think that was intrinsic to being a merchant, just a consequence of (some of them) being richer.
With regards to (2), I think you’re confusing first-year war games with actual combat magic.
Actual “I really want to kill you” spells are probably much more powerful. Fiendfyre for example has at least the destructive potential of a tank, and in canon even Goyle could cast it. (It’s hard to control, but then again so is a tank.) Avada Kedavra can probably kill you even through a nuclear bunker wall, and it can be used by at least some teenagers. Sectumsempra is probably an instant-kill against a muggle, even with body armor, and it was invented by Snape wh...
ask whether wizard shields actually do prevent inert lumps of lead from hitting their caster
Almost certainly they do. Minerva mentions that guns aren’t a big threat to a prepared witch, and even if you assume she’s not really knowledgeable, I’m pretty sure someone would have tried throwing (with magic) hard, heavy things at their opponent during life-and-death fights. Or at least using bows and arrows.
In the end, at the highest level, their life is a story
I wouldn’t put it past Eliezer to find a way of having Harry be “the End of the World” literally by just ending the story somehow. But I can’t think of any explanation in that vein for destroying the stars, other than maybe breaking the ceiling in the Hogwarts hall, which doesn’t fit. And style-wise it doesn’t feel right.
and Draco can attest to these under Veritaserum!
Technically speaking, Draco can only attest that Harry claimed those things. (Harry’s an Occlumens, and the way Occlumency works in MoR implies that an Occlumens is very good at lying. So he can plausibly claim that he lied to his enemies.)
I don’t remember, does Eliezer allow unbreakable vows, or are those nerfed in MoR like Felix Felicis? Because I’m pretty sure that once he’s vowed to tell the truth, even an Occlumens can’t lie without suffering the penalty.
You might want to know that I took a look through the site, and was curious, but I just closed the page the moment the “Calculate your contribution” form refused to show me the pricing options unless I gave it an email address.