All of Arcayer's Comments + Replies

Moldova isn't the only plausible option or anything. My reasoning is just: it has good land; the population is low enough that they could be bought out at a price that isn't too absurd; they're relatively poor and could use the money; it's a relatively new country with a culture similar to a number of other countries; and it's squarely in western territory and thus shouldn't be much of a source of conflict.

Cultural identity, in any reasonable world, is about the people around you and your way of life, not where you are on a map.

(In theory you could buy a piece of land, but in practice, countries are unwilling to sell.)

Buying land from governments really hasn't been a very legitimate concept from the beginning. Even if they are willing to sell, the people living there probably don't want you ruling them, and where they don't want to sell, I fail to see the crime against humanity in paying people to move to another country until there are few enough left that you can walk in, become the super majority, and declare yourself the new government.

Of course, that doesn't mean men with gu... (read more)

Answer by Arcayer2-9

Realistically, Israel and the west already have their plans laid and aren't going to change them. In that sense, there are no options.

Unrealistically, Israel should relocate. To Moldova, specifically. As for the Moldovans, buy them out. Offer up enough money and choices for new citizenship that the vast majority accept and leave, and Israel can accept the remainder as full citizens without having to worry about cultural dilution/losing democratic elections/etc.

In an even more unrealistically reasonable world, Middle Eastern countries would be willing to fund this, as they're the main beneficiaries.

On that note, Taiwan should relocate next.

1Jay
I don't know about Moldova, but it seems obvious that the creation of modern Israel depended on the idea that the Palestinians could be managed, and equally obvious that it hasn't worked out that way. The only real endgames are genocide or leaving, and personally I'd vote for leaving.
5Yair Halberstadt
In a world where Jews have so little cultural identity that they're happy to relocate Israel to Moldova, Palestinians and Israelis might as well have so little national identity that they're happy to live together in a one state solution.

Somewhat nitpicking

this has not led to a biological singularity.

I would argue it has. Fooms have a sort of relativistic element, where being inside a foom does not feel special. History running millions of times faster than before doesn't really feel like anything from the inside.

With all of that said, what is and isn't a foom is somewhat blurry at the edges, but I'd argue that biology, brains, and farming all qualify, and, relatedly, that more has happened in the last couple of centuries than in the previous couple of eons. Of course, this claim is heavily dependent ... (read more)

3snewman
So to be clear, I am not suggesting that a foom is impossible. The title of the post contains the phrase "might never happen". I guess you might reasonably argue that, from the perspective of (say) a person living 20,000 years ago, modern life does in fact sit on the far side of a singularity. When I see the word 'singularity', I think of the classic Peace War usage of technology spiraling to effectively infinity, or at least far beyond present-day technology. I suppose that led me to be a bit sloppy in my use of the term.

The point I was trying to make by referencing those various historical events is that all of the feedback loops in question petered out short of a Vingian singularity. And it's a fair correction that some of those loops are actually still in play. But many are not – forest fires burn out, the Cambrian explosion stopped exploding – so we do have existence proofs that feedback loops can come to a halt. I know that's not any big revelation, I was merely attempting to bring the concept to mind in the context of RSI.

In any case, all I'm really trying to do is to argue that the following syllogism is invalid:

1. As AI approaches human level, it will be able to contribute to AI R&D, thus increasing the pace of AI improvement.
2. This process can be repeated indefinitely.
3. Therefore, as soon as AI is able to meaningfully contribute to its own development, we will quickly spiral to a Vingian singularity.

This scenario is certainly plausible, but I frequently see it treated as a mathematical certainty. And that is simply not the case. The improvement cycle will only exhibit a rapid upward spiral under certain assumptions regarding the relationship of R&D inputs to gains in AI capability – the r term in Davidson's model.

(Then I spend some time explaining why I think r might be lower than expected during the period where AI is passing through human level. Again, "might be".)
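To make that last point concrete, here is a minimal toy sketch in Python (my own illustration with made-up numbers, not Davidson's actual model): whether the improvement loop spirals upward or peters out depends entirely on the assumed returns parameter r.

```python
# Toy feedback loop (illustrative only): each step, the AI contributes R&D effort
# proportional to its current capability, and capability gains scale as effort**r.
def run_loop(r, steps=40, capability=1.0, cap=1e30):
    history = [capability]
    for _ in range(steps):
        effort = capability              # AI contributes R&D in proportion to capability
        capability += effort ** r        # returns to effort: diminishing (r<1) or accelerating (r>1)
        history.append(capability)
        if capability > cap:             # treat runaway growth as a "foom" and stop
            break
    return history

for r in (0.5, 1.0, 1.5):
    hist = run_loop(r)
    print(f"r={r}: {len(hist) - 1} steps, final capability ~ {hist[-1]:.2e}")
# r<1: roughly polynomial growth (the loop peters out relative to a singularity);
# r=1: exponential growth; r>1: super-exponential growth that hits the cap almost immediately.
```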
1guy_from_finland
I don't think that there has been foom related to biology or brains. Earth is 4.5 billion years old. Single-celled life has existed about 3.5 billion years. Multicellular life is about 1.5 billion years old. Animals with brains have existed about 500 million years. This is not a foom timeline of events.
Arcayer1-8

I direct skepticism at boosters supporting timelines fast enough to reach AGI in the near future; that sounds like a doomer-only position.

In the end, children are still humans.

Half of childhood is a social construct. (In particular, most of the parts pertaining to the teenage years)

Half of the remainder won't apply to a given particular child. Humans are different.

A lot of that social construct was created as part of a jobs program. You shouldn't expect it to be sanely optimized towards excuses made up fifty years after the fact.

Childhood has very little impact on future career/social status/college results. They've done all sorts of studies, and various nations have more or less education, ... (read more)

1Sable
I agree to some extent with what you're saying - but in today's society (at least in the U.S. and, to my understanding, many parts of East Asia), children are subjected to optimization pressures from colleges and other selective institutions. I think there's a lack of clarity of thought in society at large about the effect this has on children, and more importantly, what childhood ought to be. To your point, less optimization pressure on children does not seem to result in less achievement in adulthood - so perhaps that's the direction we ought to be aiming for?

I note one of my problems with "trust the experts" style thinking, is a guessing the teacher's password problem.

If the arguments for flat earth and round earth sound equally intuitive and persuasive to you, you probably don't actually understand either theory. Sure, you can say "round earth correct", and you can get social approval for saying correct beliefs, but you're not actually believing anything more correct than "this group I like approves of these words."

2jimmy
It's not that flat earth arguments sound equally persuasive to people (they don't). It's that the reason they don't sound persuasive is that "this group they like" says not to take the arguments seriously enough to risk being persuaded by them, and they recognize that they don't actually understand things well enough for it to matter. The response to a flat earth argument is "Haha! What a silly argument!", but when you press them on it, they can't actually tell you what's wrong with it. They might think they can, but if pressed it falls apart.

This is more subtle than the "guessing the teacher's password" problem, because it's not like the words have no meaning to them. People grasp what a ball is, and how it differs from a flat disk. People recognize basic things like "If you keep going long enough in the same direction, you'll end up back where you started instead of falling off". It's just that the reasoning required to figure out which is true isn't something they really understand. In order to reason about what it implies when things disappear over the horizon, you have to contend with atmospheric lensing effects, for example.

In a case like that, you actually have to lean on social networks. Reasoning well in such circumstances has to do with how well and how honestly you're tracking what is convincing you and why.

My experience is that rationalists are hard-headed and immune to evidence?

More specifically, I find that the median takeaway from rationalism is that thinking is hard, and you should leave it up to paid professionals to do that for you. If you are a paid professional, you should stick to your lane and never bother thinking about anything you're not being paid to think about.

It's a serious problem with rationalism that half of the teachings are about how being rational is hard, doesn't work, and takes lots of effort. It sure sounds nice to be a black belt truth ... (read more)

2Jiro
I'd say more "jumps on one idea and follows it to its conclusion without doing any sanity checks and while refusing to discard the idea when it produces absurd results". Not far from this post is a post about how we should care a great deal about fish suffering.

This is sort of restating the same argument in a different way, but:

it is not in the interests of humans to be Asmodeus's slaves.

From there I would state, does assigning the value [True] to [Asmodeus], via [Objective Logic], prove that humans should serve Asmodeus, or does it prove that humans should ignore objective logic? And if we had just proven that humans should ignore objective logic, were we ever really following objective logic to begin with? Isn't it more likely that this thing we called [Objective Logic] was, in fact, not objective logic to b... (read more)

Answer by Arcayer-2-6

Because AI safety sucks?

Yes, yes, convenient answer, but the phrasing of the question seriously does make me think the other side should take this as evidence that AI safety is just not a reasonable concern. This is basically saying that there's a strong correlation between having a negative view of X and being reliable on issues that aren't X, which would make a lot of sense if X were bad.

Arcayer3-9

So, a number of issues stand out to me, some have been noted by others already, but:

My impression is that there are also less endorsable or less altruistic or more silly motives floating around for this attention allocation.

A lot of this list looks to me like the sort of heuristics where, societies that don't follow them inevitably crash, burn and become awful. A list of famous questions where the obvious answer is horribly wrong, and there's a long list of groups who came to the obvious conclusion and became awful, and it's become accepted wisdom to not d... (read more)

Something I would Really really like anti-AI communities to consider is that regulations/activism/etc. aimed at harming AI development and slowing AI timelines do not have equal effects on all parties. Specifically, I argue that the time until the CCP develops CCP-aligned AI is almost invariant, whilst the time until Blender reaches sentience potentially varies greatly.

I have Much much more hope for likeable AI via open source software rooted in a desire to help people and make their lives better, than for (worst case scenario) malicious government actors, or (second) ... (read more)

2Donald Hobson
I don't think AGI is on the CCP radar. 

However, these hypotheses are directly contradicted by the results of the "win-win" condition, where participants were given the ability to either give to their own side or remove money from the opposition.

I would argue this is a simple "stealing is bad" heuristic. I would also generally expect subtraction to anger the enemy and cause them to stab more kittens.

Republicans are the party of the rich, and they get so much money that an extra $1,000,000 won’t help them.

Isn't this a factual error?

5DirectedEvolution
It's just supposed to represent a thought process someone might go through as an illustrative example, not to be factually accurate. Sorry that wasn't clear!

With the standard warning that this is just my impression and is in no way guaranteed to be actually good advice:

My largest complaint is that the word to content ratio is too high. As an example:

It was an hour and a half trip for this guy when he flew and pushed himself, and about two and a half at what he thought was a comfortable pace.

Could drop one half and be almost as informative. Just:

This guy could've made the trip within a few hours at a comfortable pace.

Would've been fine. It can be inferred that he can go faster if that's a comfortable pace, and ... (read more)

1Timothy Underwood
While I think there are cases where condensing world details is better writing, I think in general that is more of a style preference than actually good or bad. Some people like jargon-heavy fantasy/sci-fi, and I'm one of them. But the second point, that I should pay more attention to how what the character notices says about him, is completely right, and shifting that around more is probably a strong way to improve the viewpoint.

Zelensky clearly stated at the Munich Security Conference that if the west didn't give him guarantees that he wasn't going to get, he would withdraw from the Budapest Memorandum. This is a declared intent to develop nuclear weapons, and it is neither in doubt nor vague in meaning.

Russia also accuses Ukraine of developing bioweapons. All of the evidence for this comes through Russia, so I wouldn't expect someone who didn't already believe Russia's narrative to believe said accusations, but in any case, bioweapons development is held by Russia to be among the primary justifications of the invasion.

Arcayer-40

One thing I've been noting, which seems like the same concept as this is:

Most "alignment" problems are caused by a disbalance between the size of the intellect and the size of the desire. Bad things happen when you throw ten thousand INT at objective: [produce ten paperclips].

Intelligent actors should only ever be asked intelligent questions. Anything less leads at best to boredom, at worst, insanity.

3Viliam
Here we need the two dimensional voting, because these statements are all technically true. For example, considering point C, almost every country is producing or researching something that could be used as a weapon of mass destruction.

2: No.

If an AI can do most things a human can do (which is achievable using neurons apparently because that's what we're made of)

Implies that humans are deep learning algorithms. This assertion is surprising, so I asked for confirmation that that's what's being said, and if so, on what basis.

3: I'm not asking what makes intelligent AI dangerous. I'm asking why people expect deep learning specifically to become (far more) intelligent (than they are). Specifically within that question, adding parameters to your model vastly increases use of memory. If I unde... (read more)

1Yonatan Cale
2. I don't think humans are deep learning algorithms. I think human (brains) are made of neurons, which seems like a thing I could simulate in a computer, but not just deep learning.
3. I don't expect just-deep-learning to become an AGI. Perhaps [in my opinion: probably] parts of the AGI will be written using deep learning though; it does seem pretty good at some things. [I don't actually know, I can think out loud with you.]

1: This doesn't sound like what I'm hearing people say? Using the word sentience might have been a mistake. Is it reasonable to expect that the first AI to foom will be no more intelligent than, say, a squirrel?

2a: Should we be convinced that neurons are basically doing deep learning? I didn't think we understood neurons to that degree?

2b: What is meant by [most things a human can do]? This sounds to me like an empty statement. Most things a human can do are completely pointless flailing actions. Do we mean, most jobs in modern America? Do we expect roombas... (read more)

1Yonatan Cale
1. The relevant thing in [sentient / smart / whatever] is "the ability to achieve complex goals"
2. a. Are you asking if an AI can ever be as "smart" [good at achieving goals] as a human?
3. b. The dangerous parts of the AGI being "smart" are things like "able to manipulate humans" and "able to build an even better AGI"

Does this answer your questions? Feel free to follow up
1Sphinxfire
In a sense, yeah, the algorithm is similar to a squirrel that feels a compulsion to bury nuts. The difference is that in an instrumental sense it can navigate the world much more effectively to follow its imperatives.

Think about intelligence in terms of the ability to map and navigate complex environments to achieve pre-determined goals. You tell DALL-E2 to generate a picture for you, and it navigates a complex space of abstractions to give you a result that corresponds to what you're asking it to do (because a lot of people worked very hard on aligning it). If you're dealing with a more general-purpose algorithm that has access to the real world, it would be able to chain together outputs from different conceptual areas to produce results - order ingredients for a cake from the supermarket, use a remote-controlled module to prepare it, and sing you a birthday song it came up with all by itself!

This behaviour would be a reflection of the input in the distorted light of the algorithm, however well aligned it may or may not be, with no intermediary layers of reflection on why you want a birthday cake or decision being made as to whether baking it is the right thing to do, or what would be appropriate steps to take for getting from A to B and what isn't. You're looking at something that's potentially very good at getting complicated results without being a subject in a philosophical sense and being able to reflect into its own value structure.
1[comment deleted]

A lot of predictions about AI psychology are premised on the AI being some form of deep learning algorithm. From what I can see, deep learning requires geometric growth in computing power for linear gains in intelligence, and thus (practically speaking) cannot scale to sentience.

For a more expert/in-depth take, look at: https://arxiv.org/pdf/2007.05558.pdf

Why do people think deep learning algorithms can scale to sentience without unreasonable amounts of computational power?
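To make the shape of that claim concrete, here is a minimal sketch in Python (my own illustration; the 10x-per-unit factor and the baseline FLOPs are assumptions made up for the example, not figures from the linked paper): if each additional unit of "intelligence" multiplies the compute required by a constant factor, cost grows geometrically while capability grows only linearly.

```python
# Illustrative only: compute needed if every extra capability unit costs `factor` times more.
def compute_required(capability_units, factor=10.0, baseline_flops=1e15):
    """Geometric compute cost for linear capability gains (assumed numbers)."""
    return baseline_flops * factor ** capability_units

for units in range(6):
    print(f"capability +{units}: {compute_required(units):.1e} FLOPs")
# +0: 1.0e+15 ... +5: 1.0e+20 FLOPs -- each linear step costs 10x the previous one.
```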

1Yonatan Cale
1. An AGI can be dangerous even if it isn't sentient
2. If an AI can do most things a human can do (which is achievable using neurons apparently because that's what we're made of), and if that AI can run x10,000 as fast (or if it's better in some interesting way, which computers sometimes are compared to humans), then it can be dangerous

Does this answer your question? Feel free to follow up

Humans are easily threatened. They have all sorts of unnecessary constraints on their existence, dictated by the nature of their bodies. They're easy to kill and deathly afraid of it too, they need to eat and drink, they need the air to be in a narrow temperature range. Humans are also easy to torture, and there's all sorts of things that humans will do almost anything to avoid, but don't actually kill said human, or even affect them outside of making them miserable.

Threatening humans is super profitable and easy, and as a result, most humans are miserable... (read more)

2TekhneMakre
Ok, adding (my interpretation). 
Answer by Arcayer170

First, a meta complaint- People tend to think that complicated arguments require complicated counter arguments. If one side presents entire books' worth of facts, math, logic, etc., a person doesn't expect that to be countered in two sentences. In reality, many complex arguments have simple flaws.

This becomes exacerbated as people in the opposition lose interest and leave the debate. Because the opposition position, while correct, is not interesting.

The negative reputation of doomerism is, in large part, due to the fact that doomist arguments tend to be longe... (read more)

We're moving towards factual disputes that aren't easy to resolve in logical space, and I fear any answers I give are mostly repeating previous statements. In general I hold that you're veering toward a maximally wrong position with completely disastrous results if implemented. With that said:

But unfortunately, it's way more complicated than that.

I dispute this.

how to control something a) more intelligent than ourselves, b) that can re-write its own code and create sub-routines therefore bypassing our control mechanisms.

Place an image of the status quo in ... (read more)

1superads91
Ps: 2 very important things I forgot to touch.

"This also implies that every trick you manage to come up with, as to how to get a C compiler adjacent superintelligence to act more human, is not going to work, because the other party isn't C compiler adjacent. Until we have a much better understanding of how to code efficiently, all such efforts are at best wasted, and likely counterproductive."

Not necessarily. Even the first steps on older science were important to the science of today. Science happens through building blocks of paradigms. Plus, there are mathematical and logical notions which are simply fundamental and worth investigating, like decision theory.

"Humans have certain powers and abilities as per human nature. Math isn't one of them. I state that trying to solve our problems with math is already a mistake, because we suck at math. What humans are good at is image recognition. We should solve our problems by "looking" at them."

Ok, sorry, but here you just fall into plain absurdity. Of course it would be great just to look at things and "get" them! Unfortunately, the language of computers, and of most science, is math. Should we perhaps drop all math in physics and just start "looking" instead? Please don't actually say yes...

(To clarify, I'm not devaluing the value of "looking", aka philosophy/rationality. Even in this specific problem of AI alignment. But to completely discard math is just absurd. Because, unfortunately, it's the only road towards certain problems (needless to say there would be no computers without math, for instance)).
1superads91
I'm actually sympathetic towards the view that mathematically solving alignment might be simply impossible. I.e. it might be unsolvable. Such is the opinion of Roman Yalmpolsky, an AI alignment researcher, who has written very good papers on its defense.

However, I don't think we lose much by having a couple hundred people working on it. We would only implement Friendly AI if we could mathematically prove it, so it's not like we'd just go with a half-baked idea and create hell on Earth instead of "just" a paperclipper. And it's not like Friendly AI is the only proposal in alignment either. People like Stuart Russell have a way more conservative approach, as in, "hey, maybe just don't build advanced AI as utility maximizers since that will invariably produce chaos?". Some of these concepts might even be dangerous, or worse than doing nothing. Anyway, they are still in research and nothing is proven.

To not try to do anything is just not acceptable, because I don't think that the FIRST transformative/dangerous AI will be super virtuous. Maybe a very advanced AI would necessarily/logically be super virtuous. But we will build something dangerous before we get to that. Say, an AI that is only anything special in engineering, or even a specific type of engineering like nanotechnology. Such AI, which might even not be properly AGI, might already be extremely dangerous, for the obvious reason of having great power (from great intelligence in some key area(s)) without great values (orthogonality thesis).

"Furthermore, implementation of AI regulation is much easier than its removal. I suspect that once you ban good men from building AI, it's over, we're done, that's it."

Of course it wouldn't be just any kind of regulation. Say, if you restrict access/production to supercomputers globally, you effectively slow AI development. Supercomputers are possible to control, laptops obviously aren't. Or, like I also said, a narrow AI nanny. Are these and other similar measures dan

I agree that personal values (no need to mystify) are important

The concepts used should not be viewed as mystical, but as straightforward physical objects. I don't think personal values is a valid simplification. Or rather, I don't think there is a valid simplification, hence why I use the unsimplified form. Preferably, egregore or hyperbeing, or shadow, or something, should just become an accepted term, like dog, or plane. If you practice "seeing" them, they should exist in a completely objective and observable sense. My version of reality isn't like a mo... (read more)

1superads91
"I hold that the most important action a person is likely to make in his life is to check a box on a survey form." If only life (or, more specifically, our era, or even more specifically, AI alignment) was that simple. Yes, that's the starting point, and without it you can do nothing good. And yes, the struggle between the fragility of being altruistic and the less-fragility of being machiavellic has never been more important. But unfortunately, it's way more complicated than that. Way more complicated than some clever mathematical trick too. In fact, it's the most daunting scientific task ever, which might not even be possible. Mind you that Fermat's last theorem took 400 years to prove, and this is more than 400 times more complicated. It's simple: how to control something a) more intelligent than ourselves, b) that can re-write its own code and create sub-routines therefore bypassing our control mechanisms. You still haven't answered this. You say that we can't control something more intelligent than ourselves. So where does that leave us? Just create the first AGI, tell it to "be good" and just hope that it won't be a sophist? That sounds like a terrible plan, because our experience with computers tells us that they are the biggest sophists. Not because they want to! Simply because effectively telling them how to "do what I mean" is way harder than telling another human. Any programmer would agree a thousand times. Maybe you anthropomorphize AGI too much. Maybe you think that, because it will be human-level, it will also be human like. Therefore it will just "get" us, we just need to make sure that the first words it hears is "be good" and never "be evil". If so, then you couldn't be more mistaken. Nothing tells us that the first AGI (in fact I dislike the term, I prefer transformative AI) will be human-like. In all probability (considering 1) the vast space of possible "mind types", and 2) how an advanced computer will likely function much more similarly

Note A- I assert that what the original author is getting at is extremely important. A lot of what's said here is something I would have liked to say but couldn't find a good way to explain, and I want to emphasize how important this is.

Note B- I assert that a lot of politics is the question of how to be a good person. Which is also adjacent to religion and more importantly, something similar to religion but not religion, which is basically, which egregore should you worship/host. I think that the vast majority of a person's impact in this world is what hy... (read more)

1superads91
" I think that the vast majority of a person's impact in this world is what hyperbeings he chooses to host/align to, with object level reality, barely even mattering." I agree that personal values (no need to mystify) are important, but action is equally important. You can be very virtuous, but if you don't take action (by, for instance, falling for the Buddhist-like fallacy that sitting down and meditating will eventually save the world by itself), your impact will be minor. Specially in critical times like this. Maybe sitting down and meditating would be ok centuries ago where no transformative technologies were in sight. Now, with transformative technologies decade(s) off, it's totally different. We do have to save the world. "I assert that alignment is trivially easy." How can you control something vastly more intelligent than yourself (at least in key areas), or that can simply re-write its own code and create sub-routines, therefore bypassing your control mechanisms? Doesn't seem easy at all. (In fact, some people like Roman Yalmpolsky have been writing papers on how it's in fact impossible.) Even with the best compiler in the world (not no mention that nothing guarantees that progress in compilers will accompany progress in black boxes like neural networks). "If our a measurement of "do I like this" is set to "does it kill me" I see this at best ending in a permanent boxed garden where life is basically just watched like a video and no choices matter, and nothing ever changes, and at worst becoming an eternal hell (specifically, an eternal education camp), with all of the above problems but eternal misery added on top." I agree with this. The alignment community is way over-focused on x-risk and way under-focused on s-risk. But after this, your position becomes a bit ambiguous. You say: "We should work together, in public, to develop AI that we like, which will almost certainly be hostile, because only an insane and deeply confused AI would possibly be

I observe a higher correlation between "people said mean things about X" and "X is murdering people now" than you make out? Most countries do go to war after they're sufficiently vilified/provoked? The difficult question to me seems to be more the direction of causation. I.e., the west claims it insults people because they're villains; I claim that around half our enemies became so because we insulted them.

The problem with attacking the army in east Ukraine and ignoring Kyiv is that it doesn't result in an apology from the west. They physically prevent further acts ... (read more)

Russia wants Ukraine to stop doing ethnic cleansing in Donbas? I'll propose that the world makes perfect sense and everything fits together if you believe the Russian narrative, whilst nothing makes sense and nothing fits together if you believe the western narrative.

Going forward from there, the problem is, so, Ukraine shot a plane down, and the west blames Russia. Do the Russians swallow their pride and put up with this, or do they escalate? And of course they escalate. So, the Russians "suddenly and without provocation" seize Crimea and Donbas in a wild... (read more)

8Brendan Long
I'm confused about why Russia needs to conquer the entirety of Ukraine if their goal is just to make their side win in Donbas. Wouldn't they have just invaded Donbas if that was the goal? The "people said mean things about Russia so obviously the Russian military is murdering people now" theory seems to explain too much. People say mean things about a lot of countries, but most countries don't start wars like this. What's special about Russia here?
3superads91
Very interestingly put, haha! "Ukraine shot a plane down, and the west blames Russia. Do the Russians swallow their pride and put up with this, or do they escalate? And of course they escalate." And to think the assassination of some Archduke was a petty starting event for a world war... (No sarcasm, the world is indeed stupid, like that.)

A large issue I'm noting here, all of this assumes that sanctions are undesirable, and not desirable.

Yet, my reading of history is, sanctions massively drive up support for politicians, military and government. Sure they hurt the economy, but you can more than make up for that with higher taxes, which people are willing to pay now, due to their heightened patriotism. Which brings me to the further statement that being sanctioned is not a costly side effect, but the end goal. That Russia, and specifically, Russian politicians and the Russian government are acting in a way specifically designed to rile the west, because doing so is profitable.

We should be willing to end sanctions iff Russia calls off the invasion and brings its soldiers home

I'll just note that this would mean, assuming Ukraine surrenders in the next few weeks, and Putin then does exactly what he says he'll do and withdraws, our sanctions are withdrawn almost immediately after implementation, and Russia is implicitly vindicated. Which is an outcome I would be fine with? It seems like it would end with lots of small wars and no nuclear extinction, no genocide and no global empire.

But I would be very surprised to see a general consensus that it's fine to invade another country and force them to sign treaties so long as you aren't occupying them or [X,Y,Z].

1Dumbledore's Army
Yeah, maybe I should have worded that better. I was thinking more of the scenario where one of Putin's underlings assassinates him, takes power, cancels the invasion, and returns to the status quo ante. Which is behaviour that we do want to incentivise.

I can see arguments as to why some people would feel cheated at 20 thousand. I wouldn't agree. People have gotten too used to fake wars, and are too willing to call just about anything total warfare.

I don't think the modern warfare thing is enough to change anything. World War two was pretty deadly. Vietnam had millions of deaths.

I should be clear I was thinking all deaths caused by the war, on both sides, civilian and military. The question is how hard the Ukrainians will fight, not how effectively. My general perception is that Iraq is not generally cons... (read more)

No. I'm going to judge my prediction by the number of deaths, not days (or weeks, or months. Years would really mess with my idea of what's going on.)

Insignificant: Less than 20,000. If single battles in the American Civil War are larger than this entire conflict, then the sides must not have been fighting very hard.

Total War: I would normally say millions. I expect that the original prediction did not actually mean that? So, I'll say the other side was right if it's over a hundred thousand. More right than me at above 50,000. Of course, I'm also wrong if ... (read more)

7spkoc
These numbers are absurd, in my opinion. 10s of thousands of military dead are massive numbers in a modern context. You cannot compare 1800s warfare to modern war, people literally lined up in a square and shot at each other until half of them were dead/injured back then. And due to crap med tech tons of injured didn't survive. Modern conflicts have MUCH MUCH lower death ratios.

America finished the conquest of Iraq with like 150 dead (granted Iraqi army folded). Over the course of the whole occupation (2003-2011) America lost around 4500 soldiers. If Russia loses like 1000 soldiers before taking over Ukraine that's absolutely brutal resistance. Iraqi force's losses were much higher, but still not over 20k during the invasion. Keep in mind there WAS a lot of resistance. The invasion took like a month or something, so wasn't just a trivial walk through the country. Source: https://en.wikipedia.org/wiki/Iraq_War

Yemeni civil war isn't even at 20k yet after 8 years, as far as I can tell: https://en.wikipedia.org/wiki/Yemeni_Civil_War_(2014%E2%80%93present)

I think 20k combined military and civilian deaths in the next 2 weeks would be absolutely massive resistance and probably the bloodiest war in decades.

The real question to me is if the Ukrainians are holding all major cities by the end of this week. At that point substantial military aid from the EU will be steadily flowing in through the west and it becomes a lot less clear how Russia makes progress. Mass bombardment of cities... doesn't do anything if people are angry and stubborn enough to keep fighting.
Arcayer420

The Ukrainian government will fight a total war to defend its sovereignty.

Counterprediction: The Ukrainian government will fold without a (significant) fight.

6Jayson_Virissimo
Care to specify over what time horizon you expect(ed) it to fold?
lc*360

For what it's worth, I think this counter-prediction already seems almost certainly wrong.

lsusr500

I appreciate you registering your counterprediction on a public forum.

Answer by Arcayer-10

Human utility is basically a function of image recognition. Which is sort of not a straightforward thing where I can say, "This is that." Sure, computers can do image recognition; what they are doing is that which is image recognition. However, what we can currently describe algorithmically is only a pale shadow of the human function, as proven by recaptchas everywhere.

Given this, the complex confounder is that our utility function is part of the image.

Also, we like images that move.

In sum, modifying our utility function is natural and normal, and is ac... (read more)

they would then only need a slight preponderance of virtue over vice

This assumes that morality has only one axis, which I find highly unlikely. I would expect the seed to quickly radicalize, becoming good in ways that the seed likes, and becoming evil in ways that the seed likes. Under this model, if, on a random axis, the seed comes up good 51% of the time, I would expect the aligned AI to remain 51% good.

Assuming the axes do interact, if they do so inconveniently, for instance if we posit that evil has higher evolutionary fitness, or that self destruct... (read more)
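A minimal sketch of the one-axis-versus-many-axes point, under the assumption that the axes are independent and each is simply amplified in whatever direction the seed already leans (my own illustration, not a model from the thread):

```python
# Toy model: a seed that is good on ~51% of independent moral axes, each amplified
# in its own direction, yields an agent that is still only ~51% good per axis,
# rather than a slight preponderance of virtue tipping the whole agent good.
import random

random.seed(0)
AXES = 10_000
P_GOOD = 0.51

seed = [1 if random.random() < P_GOOD else -1 for _ in range(AXES)]  # +1 = good, -1 = evil
amplified = [axis * 1_000 for axis in seed]  # radicalize each axis in its own direction

frac_good = sum(a > 0 for a in amplified) / AXES
print(f"fraction of axes still 'good' after amplification: {frac_good:.3f}")  # ~0.51
```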

2Mitchell_Porter
This is a good and important point. A more realistic discussion of aligning with an idealized human agent might consider personality traits, cognitive abilities, and intrinsic values as among the properties of the individual agent that are worth optimizing, and that's clearly a multidimensional situation in which the changes can interact, even in confounding ways.

So perhaps I can make my point more neutrally as follows. There is both variety and uniformity among human beings, regarding properties like personality, cognition, and values. A process like alignment, in which the corresponding properties of an AI are determined by the properties of the human being(s) with which it is aligned, might increase this variety in some ways, or decrease it in others. Then, among the possible outcomes, only certain ones are satisfactory, e.g. an AI that will be safe for humanity even if it becomes all-powerful.

The question is, how selective must one be, in choosing who to align the AI with. In his original discussions of this topic, back in the 2000s, Eliezer argued that this is not an important issue, compared to identifying an alignment process that works at all. He gave as a contemporary example, Al Qaeda terrorists: with a good enough alignment process, you could start with them as the human prototype, and still get a friendly AI, because they have all the basic human traits, and for a good enough alignment process, that should be enough to reach a satisfactory outcome. On the other hand, with a bad alignment process, you could start with the best people we have, and still get an unfriendly AI.

Well, again we face the fact that different software architectures and development methodologies will lead to different situations. Earlier, it was that some alignment methodologies will be more sensitive to initial conditions than others. Here it's the separability of intelligence and ethics, or problem-solving ability and problems that are selected to be solved. There are def

Given: Closed source Artificial General Intelligence requires all involved parties to have no irreconcilable differences.

Thence: The winner of a closed source race will inevitably be the party with the highest homogeneity times intelligence.

Thence: Namely, the CCP.

Given: Alignment is trivial.

Thence: The resulting AI will be evil.

Given: Alignment is difficult.

Given: It's not in the CCP's character to care.

Thence: Alignment will fail.

Based on my model of reality, closed sourcing AI research approaches the most wrong and suicidal decisions possible (if you're... (read more)

2Mitchell_Porter
One formula for friendly AI is, that it should be the kind of moral agent a human being would become, if they could improve themselves in a superhuman fashion. (This may sound vague; but imagine that this is just the informal statement, of a precise recipe expressed in terms of computational neuroscience.)

If one were using a particular human or class of humans as the "seed", they would then only need a slight preponderance of virtue over vice (benevolence over malevolence? reason over unreason?) to be suitable, since such a being, given the chance to self-improve, would want to increase its good and decrease its bad; and, in this scenario, the improved possible person is what the AI is supposed to align with, not the flawed actual person.

One of the risks of open sourcing arises from the separation between general problem-solving algorithms, and the particular values or goals that govern them. If you share the code for the intelligent part of your AI, you are allowing that problem-solving power to be used for any goal at all. One might therefore wish to only share code for the ethical part of the AI, the part that actually makes it civilized.