Cultural identity, in any reasonable world, is about the people around you and your way of life, not where you are on a map.
(In theory you could buy a piece of land, but in practice, countries are unwilling to sell.)
Buying land from governments was never a very legitimate concept to begin with. Even if they are willing to sell, the people living there probably don't want you ruling them. And where they don't want to sell, I fail to see the crime against humanity in paying people to move to another country until few enough remain that you can walk in, become the supermajority, and declare yourself the new government.
Of course, that doesn't mean men with gu...
Realistically, Israel and the west already have their plans laid and aren't going to change them. In that sense, there are no options.
Unrealistically, Israel should relocate. To Moldova, specifically. As for the Moldovans, buy them out. Offer up enough money and choices for new citizenship that the vast majority accept and leave, and Israel can accept the remainder as full citizens without having to worry about cultural dilution, losing democratic elections, etc.
In an even more unrealistically reasonable world, Middle Eastern countries would be willing to fund this, as they're the main beneficiaries.
On that note, Taiwan should relocate next.
Somewhat nitpicking
this has not led to a biological singularity.
I would argue it has. Fooms have a sort of relativistic element, where being inside a foom does not feel special. History running millions of times faster than before doesn't really feel like anything from the inside.
With all of that said, what is and isn't a foom is somewhat blurry at the edges, but I'd argue that biology, brains, and farming all qualify, and, correspondingly, that more has happened in the last couple of centuries than in the previous couple of eons. Of course, this claim is heavily dependent ...
I direct skepticism at boosters supporting timelines fast enough to reach AGI in the near future; that sounds like a doomer-only position.
In the end, children are still humans.
Half of childhood is a social construct. (In particular, most of the parts pertaining to the teenage years)
Half of the remainder won't apply to any given particular child. Humans are different.
A lot of that social construct was created as part of a jobs program. You shouldn't expect it to be sanely optimized towards excuses made up fifty years after the fact.
Childhood has very little impact on future career/social status/college results. They've done all sorts of studies, and various nations have more or less education, ...
I note that one of my problems with "trust the experts" style thinking is a guessing-the-teacher's-password problem.
If the arguments for flat earth and round earth sound equally intuitive and persuasive to you, you probably don't actually understand either theory. Sure, you can say "round earth correct", and you can get social approval for saying correct beliefs, but you're not actually believing anything more correct than "this group I like approves of these words."
My experience is that rationalists are hard-headed and immune to evidence?
More specifically, I find that the median takeaway from rationalism is that thinking is hard, and you should leave it up to paid professionals to do that for you. If you are a paid professional, you should stick to your lane and never bother thinking about anything you're not being paid to think about.
It's a serious problem with rationalism that half of the teachings are about how being rational is hard, doesn't work, and takes lots of effort. It sure sounds nice to be a black belt truth ...
This is sort of restating the same argument in a different way, but:
it is not in the interests of humans to be Asmodeus's slaves.
From there I would ask: does assigning the value [True] to [Asmodeus] via [Objective Logic] prove that humans should serve Asmodeus, or does it prove that humans should ignore objective logic? And if we had just proven that humans should ignore objective logic, were we ever really following objective logic to begin with? Isn't it more likely that this thing we called [Objective Logic] was, in fact, not objective logic to b...
Because AI safety sucks?
Yes, yes, convenient answer, but the phrasing of the question seriously does make me think the other side should take this as evidence that AI safety is just not a reasonable concern. This is basically saying that there's a strong correlation between having a negative view of X and being reliable on issues that aren't X, which would make a lot of sense if X were bad.
So, a number of issues stand out to me; some have been noted by others already, but:
My impression is that there are also less endorsable or less altruistic or more silly motives floating around for this attention allocation.
A lot of this list looks to me like the sort of heuristics where societies that don't follow them inevitably crash, burn, and become awful. A list of famous questions where the obvious answer is horribly wrong, where there's a long list of groups who came to the obvious conclusion and became awful, and where it's become accepted wisdom to not d...
Something I would really, really like anti-AI communities to consider is that regulations/activism/etc. aimed at harming AI development and slowing AI timelines do not have equal effects on all parties. Specifically, I argue that the time until the CCP develops CCP-aligned AI is almost invariant, whilst the time until Blender reaches sentience potentially varies greatly.
I have much, much more hope for likeable AI via open source software rooted in a desire to help people and make their lives better, than via (worst case scenario) malicious government actors, or (second) ...
However, these hypotheses are directly contradicted by the results of the "win-win" condition, where participants were given the ability to either give to their own side or remove money from the opposition.
I would argue this is a simple stealing-is-bad heuristic. I would also generally expect subtraction to anger the enemy and cause them to stab more kittens.
Republicans are the party of the rich, and they get so much money that an extra $1,000,000 won’t help them.
Isn't this a factual error?
With the standard warning that this is just my impression and is in no way guaranteed to be actually good advice:
My largest complaint is that the word-to-content ratio is too high. As an example:
It was an hour and a half trip for this guy when he flew and pushed himself, and about two and a half at what he thought was a comfortable pace.
Could drop one half and be almost as informative. Just:
This guy could've made the trip within a few hours at a comfortable pace.
Would've been fine. It can be inferred that he can go faster if that's a comfortable pace, and ...
Zelensky clearly stated at the Munich Security Conference that if the West didn't give him guarantees that he wasn't going to get, he would withdraw from the Budapest Memorandum. This is a declared intent to develop nuclear weapons, and it is neither in doubt nor vague in meaning.
Russia also accuses Ukraine of developing bioweapons. All of the evidence for this comes through Russia, so I wouldn't expect someone who didn't already believe Russia's narrative to believe said accusations, but in any case, bioweapons development is held by Russia to be among the primary justifications of the invasion.
One thing I've been noting, which seems like the same concept as this, is:
Most "alignment" problems are caused by a disbalance between the size of the intellect and the size of the desire. Bad things happen when you throw ten thousand INT at objective: [produce ten paperclips].
Intelligent actors should only ever be asked intelligent questions. Anything less leads at best to boredom, at worst, insanity.
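As a minimal sketch of that imbalance (my own toy construction, with hypothetical agents and numbers, not anything from the parent post): a maximizer pointed at a tiny bounded desire never gets a "stop" signal from its score, while a satisficer's bounded desire caps its footprint.

```python
# Toy sketch of intellect/desire imbalance. All names and numbers are
# hypothetical illustrations, not a real alignment proposal.

def run_agent(resources: int, target: int, bounded_desire: bool) -> tuple[int, int]:
    """Spend resources making paperclips; return (paperclips, resources left)."""
    paperclips = 0
    while resources > 0:
        if bounded_desire and paperclips >= target:
            break                # the desire is satisfied, so stop acting
        resources -= 1           # convert one unit of the world into a clip
        paperclips += 1
    return paperclips, resources

print(run_agent(10_000, 10, bounded_desire=False))  # (10000, 0): world consumed
print(run_agent(10_000, 10, bounded_desire=True))   # (10, 9990): world left intact
```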
2: No.
If an AI can do most things a human can do (which is achievable using neurons apparently because that's what we're made of)
This implies that humans are deep learning algorithms. That assertion is surprising, so I asked for confirmation that that's what's being said, and if so, on what basis.
3: I'm not asking what makes intelligent AI dangerous. I'm asking why people expect deep learning specifically to become (far more) intelligent (than they are). Specifically within that question, adding parameters to your model vastly increases memory use. If I unde...
1: This doesn't sound like what I'm hearing people say? Using the word sentience might have been a mistake. Is it reasonable to expect that the first AI to foom will be no more intelligent than say, a squirrel?
2a: Should we be convinced that neurons are basically doing deep learning? I didn't think we understood neurons to that degree?
2b: What is meant by [most things a human can do]? This sounds to me like an empty statement. Most things a human can do are completely pointless flailing actions. Do we mean, most jobs in modern America? Do we expect roombas...
A lot of predictions about AI psychology are premised on the AI being some form of deep learning algorithm. From what I can see, deep learning requires geometrically increasing computing power for linear gains in intelligence, and thus (practically speaking) cannot scale to sentience.
For a more expert/in-depth take, see: https://arxiv.org/pdf/2007.05558.pdf
Why do people think deep learning algorithms can scale to sentience without unreasonable amounts of computational power?
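To make the scaling worry concrete, here's a back-of-envelope sketch. The exponents are hypothetical placeholders, not the linked paper's actual fits; it assumes error falls polynomially with compute, error ≈ k · compute^(−alpha), so each halving of error multiplies the required compute by 2^(1/alpha).

```python
# Back-of-envelope sketch of polynomial compute scaling. The alpha values
# below are hypothetical, chosen only to show how fast costs blow up.

def compute_multiplier_per_error_halving(alpha: float) -> float:
    """Factor by which compute must grow to cut the error rate in half,
    assuming error ~ k * compute**(-alpha)."""
    return 2.0 ** (1.0 / alpha)

for alpha in (0.5, 0.1, 0.05):
    mult = compute_multiplier_per_error_halving(alpha)
    print(f"alpha={alpha}: each error halving costs {mult:,.0f}x more compute")
```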
Humans are easily threatened. They have all sorts of unnecessary constraints on their existence, dictated by the nature of their bodies. They're easy to kill and deathly afraid of it too, they need to eat and drink, they need the air to be in a narrow temperature range. Humans are also easy to torture, and there's all sorts of things that humans will do almost anything to avoid, but don't actually kill said human, or even affect them outside of making them miserable.
Threatening humans is super profitable and easy, and as a result, most humans are miserable...
First, a meta-complaint: people tend to think that complicated arguments require complicated counterarguments. If one side presents entire books' worth of facts, math, logic, etc., a person doesn't expect that to be countered in two sentences. In reality, many complex arguments have simple flaws.
This becomes exacerbated as people in the opposition lose interest and leave the debate, because the opposition position, while correct, is not interesting.
The negative reputation of doomerism is in large part, due to the fact that doomist arguments tend to be longe...
We're moving towards factual disputes that aren't easy to resolve in logical space, and I fear any answers I give are mostly repeating previous statements. In general I hold that you're veering toward a maximally wrong position with completely disastrous results if implemented. With that said:
But unfortunately, it's way more complicated than that.
I dispute this.
how to control something a) more intelligent than ourselves, b) that can re-write its own code and create sub-routines therefore bypassing our control mechanisms.
Place an image of the status quo in ...
I agree that personal values (no need to mystify) are important
The concepts used should not be viewed as mystical, but as straightforward physical objects. I don't think "personal values" is a valid simplification. Or rather, I don't think there is a valid simplification, hence why I use the unsimplified form. Preferably, "egregore" or "hyperbeing" or "shadow" or something should just become an accepted term, like "dog" or "plane". If you practice "seeing" them, they should exist in a completely objective and observable sense. My version of reality isn't like a mo...
Note A- I assert that what the original author is getting at is extremely important. A lot of what's said here is something I would have liked to say but couldn't find a good way to explain, and I want to emphasize how important this is.
Note B- I assert that a lot of politics is the question of how to be a good person. Which is also adjacent to religion and, more importantly, to something similar to religion but not religion, which is basically: which egregore should you worship/host? I think that the vast majority of a person's impact in this world is what hy...
I observe a higher correlation between "people said mean things about X" and "X is murdering people now" than you make out? Most countries do go to war after they're sufficiently vilified/provoked? The difficult question to me seems to be the direction of causation. I.e., the West claims it insults people because they're villains; I claim that around half our enemies became so because we insulted them.
The problem with attacking the army in east Ukraine and ignoring Kyiv is that it doesn't result in an apology from the West. They physically prevent further acts ...
Russia wants Ukraine to stop doing ethnic cleansing in Donbas? I'll propose that the world makes perfect sense and everything fits together if you believe the Russian narrative, whilst nothing makes sense and nothing fits together if you believe the western narrative.
Going forward from there, the problem is: Ukraine shot a plane down, and the West blames Russia. Do the Russians swallow their pride and put up with this, or do they escalate? And of course they escalate. So, the Russians "suddenly and without provocation" seize Crimea and Donbas in a wild...
A large issue I'm noting here: all of this assumes that sanctions are undesirable to Russia, not desirable.
Yet my reading of history is that sanctions massively drive up support for politicians, the military, and the government. Sure, they hurt the economy, but you can more than make up for that with higher taxes, which people are now willing to pay due to their heightened patriotism. Which brings me to the further claim that being sanctioned is not a costly side effect but the end goal: that Russia, and specifically Russian politicians and the Russian government, are acting in a way specifically designed to rile the West, because doing so is profitable.
We should be willing to end sanctions iff Russia calls off the invasion and brings its soldiers home
I'll just note that this would mean that, assuming Ukraine surrenders in the next few weeks and Putin then does exactly what he says he'll do and withdraws, our sanctions are withdrawn almost immediately after implementation, and Russia is implicitly vindicated. Which is an outcome I would be fine with? It seems like it would end with lots of small wars and no nuclear extinction, no genocide and no global empire.
But I would be very surprised to see a general consensus that it's fine to invade another country and force them to sign treaties so long as you aren't occupying them or [X,Y,Z].
I can see arguments as to why some people would feel cheated at 20,000. I wouldn't agree. People have gotten too used to fake wars, and are too willing to call just about anything total warfare.
I don't think the modern-warfare thing is enough to change anything. World War Two was pretty deadly. Vietnam had millions of deaths.
I should be clear: I was thinking of all deaths caused by the war, on both sides, civilian and military. The question is how hard the Ukrainians will fight, not how effectively. My general perception is that Iraq is not generally cons...
No. I'm going to judge my prediction by the number of deaths, not days (or weeks, or months. Years would really mess with my idea of what's going on.)
Insignificant: Less than 20,000. If single battles in the American Civil War are larger than this entire conflict, then the sides must not have been fighting very hard.
Total War: I would normally say millions. I expect that the original prediction did not actually mean that? So, I'll say the other side was right if it's over a hundred thousand. More right than me at above 50,000. Of course, I'm also wrong if ...
The Ukrainian government will fight a total war to defend its sovereignty.
Counterprediction: The Ukrainian government will fold without a (significant) fight.
For what it's worth, I think this counter-prediction already seems almost certainly wrong.
I appreciate you registering your counterprediction on a public forum.
Human utility is basically a function of image recognition. Which is not a straightforward thing that I can point at and say, "this is that." Sure, computers can do image recognition; what they are doing is that which is image recognition. However, what we can currently describe algorithmically is only a pale shadow of the human function, as proven by every reCAPTCHA everywhere.
Given this, the complex confounder is that our utility function is part of the image.
Also, we like images that move.
In sum, modifying our utility function is natural and normal, and is ac...
they would then only need a slight preponderance of virtue over vice
This assumes that morality has only one axis, which I find highly unlikely. I would expect the seed to quickly radicalize, becoming good in the ways that the seed likes and evil in the ways that the seed likes. Under this model, if, given a random axis, the seed comes up good 51% of the time, I would expect the aligned AI to remain 51% good.
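A minimal simulation of that independent-axes model (my own toy framing; the axis count and the 51% figure are illustrative assumptions):

```python
import random

# Toy model: morality has many independent axes, and the seed radicalizes
# each axis in whichever direction it already leans. With a 51% chance of
# leaning good on any given axis, the radicalized AI is still ~51% good.

def radicalized_good_fraction(n_axes: int, p_good: float = 0.51) -> float:
    """Fraction of axes that end up fully good after radicalization."""
    good = sum(random.random() < p_good for _ in range(n_axes))
    return good / n_axes

print(f"good fraction across 100,000 axes: {radicalized_good_fraction(100_000):.3f}")  # ~0.510
```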
Assuming the axes do interact, if they do so inconveniently, for instance if we posit that evil has higher evolutionary fitness, or that self destruct...
Given: Closed source Artificial General Intelligence requires all involved parties to have no irreconcilable differences.
Thence: The winner of a closed source race will inevitably be the party with the highest homogeneity times intelligence.
Thence: Namely, the CCP.
Given: Alignment is trivial.
Thence: The resulting AI will be evil.
Given: Alignment is difficult.
Given: It's not in the CCP's character to care.
Thence: Alignment will fail.
Based on my model of reality, closed sourcing AI research approaches the most wrong and suicidal decisions possible (if you're...
Moldova isn't the only plausible option or anything; my reasoning is just: it has good land; the population is low enough that they could be bought out at a price that isn't too absurd; they're relatively poor and could use the money; it's a relatively new country with a culture similar to a number of other countries; and it's squarely in Western territory and thus shouldn't be much of a source of conflict.