All of bokov's Comments + Replies

bokov10

The closest I can come to examples might be ones where the two-box outcome is so much worse than the one-box outcome that I have nothing to lose by choosing the path of hope.

E.g., picking one box, even though I and everybody else know I'm a two-boxer, if I believe that in this case two-boxing will kill me.

Or, cooperating when unilateral defection, unilateral cooperation, and mutual defection all have results vastly worse than mutual cooperation.

Are these on the right track?
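For concreteness, here's the kind of payoff structure I have in mind for the second example (a minimal Python sketch with made-up numbers; every outcome except mutual cooperation is catastrophic, so cooperating weakly dominates and even a CDT agent has nothing to lose by it):

```python
# Hypothetical payoff matrix (my payoff, their payoff), made-up numbers.
# Every outcome except mutual cooperation is vastly worse, so cooperation
# weakly dominates: I can't do better by defecting no matter what they play.
payoffs = {
    ("C", "C"): (100, 100),      # mutual cooperation: both do well
    ("C", "D"): (-1000, -900),   # unilateral cooperation: disaster
    ("D", "C"): (-900, -1000),   # unilateral defection: disaster
    ("D", "D"): (-1000, -1000),  # mutual defection: disaster
}

for (me, them), (mine, theirs) in payoffs.items():
    print(f"I play {me}, they play {them}: I get {mine}, they get {theirs}")
```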

bokov10

Because, based on the behavior of people here whose intelligence and ideas I have come to respect, this is an important topic.

Clearly I completely lack the background to understand the full theoretical argument. I also lack the background to understand the full theoretical argument behind general relativity and quantum uncertainty. Yet there are many real-world practical examples that I do understand and can work backwards from to get a roughly correct intuition about these ideas.

Every example I have seen for CDT falling short has been a hypothetical scenario that almost certainly never happened. But if the only scenarios where CDT is a dominated strategy are hypothetical ones, I wouldn't expect smart people on LW to spend so much time and energy on them.

bokov10

Thank you for responding to my post despite its negative rating.

Can you, as a human, give any practical real-world examples that do not rely on non-existent tech where anything outperforms non-naive CDT?

By non-naive I mean CDT that isn't myopically just trying to maximize the immediate payoff, but rather trying to maximize the long-term value to the player, taking into account future interactions, reputation, uncertainty about causal relationships, etc.
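As a toy illustration of the distinction (my own sketch, with made-up payoffs): in an iterated prisoner's dilemma against a grim-trigger opponent, the myopic version defects for a one-round gain and forfeits all future cooperation, while the non-naive version accounts for reputation and comes out far ahead:

```python
# Iterated prisoner's dilemma vs. a grim-trigger opponent (made-up payoffs).
# The opponent cooperates until our first defection, then defects forever.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(my_strategy, rounds=100):
    total, their_move, betrayed = 0, "C", False
    for _ in range(rounds):
        my_move = my_strategy(their_move)
        total += PAYOFF[(my_move, their_move)]
        betrayed = betrayed or my_move == "D"
        their_move = "D" if betrayed else "C"
    return total

naive = play(lambda _: "D")      # maximizes each round in isolation: 5 + 99*1 = 104
non_naive = play(lambda _: "C")  # values future interactions: 100*3 = 300
print(naive, non_naive)
```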

bokov10

In other words, what Putin has already been doing more and more, but with a specific deadline attached?

2RHollerith
With a specific deadline and a specific threat of a nuclear attack on the US.
bokov10

Perhaps we should brainstorm leading indicators of nuclear attack.

3RHollerith
The strongest sign an attack is coming that I know of is firm evidence that Russia or China is evacuating her cities. Another sign that would get me to flee immediately (to a rural area of the US: I would not try to leave the country) is a threat by Moscow that Moscow will launch an attack unless Washington takes action A (or stops engaging in activity B) before specific time T.
bokov10

I always found that aspect weak. It is clearly and sadly evident that utility pessimization (I assume roughly synonymous with coercion?) is effective and stable, both on Golarion and Earth. Yet half the book seems to be gesturing at what a suboptimal strategy it is without actually spelling out how you can defeat an agent who pursues such a strategy (without having magic and some sort of mysterious meta-gods on your side).

bokov10

Update:

I went and read the background material on acausal trade and narrowed down even further where I'm confused. It's this paragraph:

> Another objection: Can an agent care about (have a utility function that takes into account) entities with which it can never interact, and about whose existence it is not certain? However, this is quite common even for humans today. We care about the suffering of other people in faraway lands about whom we know next to nothing. We are even disturbed by the suffering of long-dead historical people, and wish that, counterfactually, the suffering had not happened. We even care about entities that we are not sure exist. For example: We might be concerned by a news report that a valuable archaeological artifact was destroyed in a distant country, yet at the same time read other news reports stating that the entire story is a fabrication and the artifact never existed. People even get emotionally attached to the fate of a fictional character.

My problem is lack of evidence that genuine caring about entities with which one can never interact really is "quite common even for humans today", after factoring out indirect benefits/costs and social signalling. How common, sincerely felt, and motivating should caring about such entities be for acausal trade to work?

Can you still use acausal trade to resolve various game-theory scenarios with agents whom you might later contact, while putting zero priority on agents that are completely causally disconnected from you? If so, then why so much emphasis on permanently un-contactable agents? What does it add?

bokov40

> Acausally separate civilizations should obtain our consent in some fashion before invading our local causal environment with copies of themselves or other memes or artifacts.

Aha! Finally, there it is, a statement that exemplifies much of what I find confusing about acausal decision theory.

1. What are acausally separate civilizations? Are these civilizations we cannot directly talk to, so we model their utility functions, and their modelling of our utility functions, etc., and treat that as a proxy for interviewing them?

2. Are these civilizations we haven't... (read more)

bokov10

What is meant by 'reflecting'?

  • reflecting on {reflecting on whether to obey norm x, and if that checks out, obeying norm x} and if that checks out, obeying norm x

Is this the same thing as saying "Before I think about whether to obey norm x, I will think about whether it's worth thinking about it and if both are true, I will obey norm x"? 
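If it helps pin down my question, here's the nesting as I understand it, rendered as hypothetical pseudocode (my own sketch, not the post's definition):

```python
# My reading of the bullet above: level 2 reflects on the whole level-1
# procedure, and only runs it if that reflection checks out.
def obey(norm):
    print(f"obeying {norm}")

def level1(norm, reflect):
    if reflect(f"should I obey {norm}?"):
        obey(norm)

def level2(norm, reflect):
    if reflect(f"is it even worth deliberating about {norm}?"):
        level1(norm, reflect)

level2("norm x", reflect=lambda question: True)  # stand-in reflector that always approves
```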
 

bokov32

I've been struggling to understand acausal trade and related concepts for a long time. Thank you for a concise and simple explanation that almost gets me there, I think...

Am I roughly correct in the following interpretation of what I think you are saying?

Acausal norms amount to extrapolating the norms of people/aliens/AIs/whatever whom we haven't met yet and know nothing about other than what can be inferred from us someday meeting them. If we can identify norms that are likely to generalize to any intelligent being capable of contact and negotiation and... (read more)

bokov20

Would you mind sharing how you allocated the ratio of these positions?

bokov20

Maybe the key is not to assume the entire economy will win, but to make some attempt to distinguish winners from losers, and then find ETFs and other instruments that approximate these sectors.

So, some wild guesses...

  • AI labs and their big-tech partners: winners
  • Cloud hosting: winners
  • Commercial real estate specializing in server farms: winners
  • Whoever comes up with tractable ways to power all these server farms: winners
  • AI-enabling hardware companies: winners until the Chinese blockade Taiwan and impose an embargo on raw materials... after that... maybe losers
... (read more)
bokov50

I'm trying out this strategy on Investopedia's simulator (https://www.investopedia.com/simulator/trade/options)

The January 15 2027 call options on QQQ look like this as of posting (current price 481.48):

| Strike | Black-Scholes | Ask |
|-------:|--------------:|------:|
| 485 | 64.244 | 77.4 |
| 500 | 57.796 | 69.83 |
| ... | ... | ... |
| 675 | 14.308 | 14 |
| 680 | 13.693 | 13.5 |
| 685 | 13.077 | 12.49 |
| ... | ... | ... |
| 700 | 11.446 | 10.5 |
| ... | ... | ... |
| 720 | 9.702 | 8.5 |

So, if you were following this strategy and buying today, would you buy 485 because it has the lowest out-of-the-money (OTM) strike price? Would you buy 675 because it's the lowe... (read more)
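(To make my question concrete, here's the screen I'm implicitly applying: just the ratio of ask to Black-Scholes value for the strikes quoted above. The deep out-of-the-money strikes trade below model value here, while the near-the-money strikes trade well above it. Whether this ratio is the right criterion is exactly what I'm asking.)

```python
# Ask price vs. Black-Scholes value for the QQQ Jan 2027 calls quoted above.
# A ratio below 1 means the market ask is under the model's theoretical value.
quotes = [  # (strike, black_scholes, ask)
    (485, 64.244, 77.40), (500, 57.796, 69.83),
    (675, 14.308, 14.00), (680, 13.693, 13.50), (685, 13.077, 12.49),
    (700, 11.446, 10.50), (720, 9.702, 8.50),
]

for strike, bs, ask in sorted(quotes, key=lambda q: q[2] / q[1]):
    print(f"strike {strike}: ask/model = {ask / bs:.3f}")
# -> 720 (0.876) and 700 (0.917) look cheapest relative to the model;
#    485 (1.205) and 500 (1.208) are the most expensive.
```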

bokov10

So, how can we improve this further?

Some things I'm going to look into, please tell me if it's a waste of time:

  • Seeing if there are any REITs that specialize in server farms or chip fabs and have long-term options
  • Apparently McKinsey has a report about what white-collar jobs are most amenable to automation. Tracking down this report (they have lots) if it's not paywalled or at least learning enough about it to get the gist of which (non-AI) companies would save the most money by "intelligent automation".
    • From first principles I'd expect companies/industr
... (read more)
bokov10

A risk I see is China blockading Taiwan and/or limiting trade with the US and thus slowing AI development until a new equilibrium is reached through onshoring (and maybe recycling or novel sources of materials or something?)

On the other hand, maybe even the current LLMs already have the potential to eliminate millions of jobs, and it's just going to take companies a while to do the planning and integration work necessary to actually do it.

So one question is, will the resulting increase in revenue offset the revenue losses from a proxy war with China?

bokov10

I guess scenarios where humans occupy a niche analogous to animals that we don't value but either cannot exterminate or choose not to.

2ChristianKl
Humans need a lot of space to grow food and live, space that an AGI could use for other things. Humans don't do "niche" well.
bokov10

Parfit's Hitchhiker and transparent Newcomb: So is the interest in UDT motivated by the desire for a rigorous theory that explains human moral intuitions? Like, it's not enough that feelings of reciprocity must have conveyed a selective advantage at the population level; we need to know whether/how they are also net beneficial to the individuals involved?

bokov10

What should one do if in a Newcomb's paradox situation but Omega is just a regular dude who thinks they can predict what you will choose, by analysing data from thousands of experiments on e.g. Mechanical Turk?

Do UDT and CDT differ in this case? If they differ, does it depend on how inaccurate Omega's predictions are and in what direction they are biased?
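For what it's worth, the naive expected-value arithmetic here is simple (my sketch, assuming the standard $1,000,000/$1,000 Newcomb payoffs): evidential reasoning flips to one-boxing once the predictor's accuracy exceeds 50.05%, while CDT two-boxes at any accuracy, since the prediction is already fixed by the time you choose.

```python
# Expected value against an imperfect predictor with accuracy p,
# assuming the standard Newcomb payoffs ($1M opaque box, $1k transparent box).
def ev_one_box(p):
    return p * 1_000_000                 # opaque box is full iff predicted one-boxing

def ev_two_box(p):
    return (1 - p) * 1_000_000 + 1_000   # full only if the predictor guessed wrong

for p in (0.5, 0.5005, 0.6, 0.9):
    print(f"p={p}: one-box {ev_one_box(p):,.0f} vs two-box {ev_two_box(p):,.0f}")
# Break-even: p*1e6 = (1-p)*1e6 + 1e3  =>  p = 0.5005
```

Bias direction matters too: what drives this calculation is P(box is full | I one-box) versus P(box is full | I two-box), so a predictor that errs mostly in one direction shifts the break-even point.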

bokov10

Thank you for answering.

I'm excluding simulations by construction.

Amnesia: So does UDT, roughly speaking, direct you to weigh your decisions based on your guesstimate of what decision-relevant facts apply in that scenario? And then choose among available options randomly, but weighted by how likely each option is to be optimal in whatever scenario you have actually found yourself in?
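Concretely, is the amnesia case something like this (my own toy sketch, made-up numbers)?

```python
# I don't know which scenario I'm in, so I score each action by its
# utility averaged over my guesstimated credences (toy numbers).
scenarios = {"A": 0.7, "B": 0.3}          # my credences over decision-relevant facts
utility = {("left", "A"): 10, ("left", "B"): -5,
           ("right", "A"): -2, ("right", "B"): 8}

def expected_utility(action):
    return sum(p * utility[(action, s)] for s, p in scenarios.items())

for action in ("left", "right"):
    print(action, expected_utility(action))
# left: 0.7*10 + 0.3*(-5) = 5.5; right: 0.7*(-2) + 0.3*8 = 1.0 -> pick left
```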

Identical copies, (non-identical but very similar players?), players with aligned interests,: I guess this is a special case of dealing with a predictor agent where our predictions... (read more)

bokov10

Are there any practical applications of UDT that don't depend on uncertainty as to whether or not I am a simulation, nor on stipulating that one of the participants in a scenario is capable of predicting my decisions with perfect accuracy?

3cousin_it
Simulations; predictors (not necessarily perfect); amnesia; identical copies; players with aligned interests.
bokov10

I appreciate your feedback and take it in the spirit it is intended. You are in no danger of shitting on my idea because it's not my idea. It's happening with or without me.

My idea is to cast a broad net looking for strategies for harm reduction and risk mitigation within these constraints.

I'm with you that machines practising medicine autonomously is a bad idea, and doctors agree. Idealistically, because they got into this work in order to help people; cynically, because they don't want to be rendered redundant.

The primary focus looks like workflow management, ... (read more)

3Yaakov T
https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem https://www.lesswrong.com/tag/vnm-theorem 
Answer by bokov159

The first step is to see a psychiatrist and take the medication they recommend. For me it was an immediate night-and-day difference. I don't know why the hell I wasted so much of my life before I finally went and got treatment. Don't repeat my mistake.

bokov10

I actually tried running your essay through ChatGPT to make it more readable, but it's way too long. Can you at least break it into non-redundant sections of not more than 3000 words each? Then we can do the rest.

bokov10

I second that. I actually tried to read your other posts because I was curious to find out why you are getting downvoted-- maybe I can learn something outside the LW party-line from you.

But unfortunately, you don't explain your position in clear, easy-to-understand terms, so I'm going to have to put off sorting through your stuff until I have more time.

3the gears to ascension
Hmm! Interesting point. Yes, I have been having trouble explaining my position in clear and easy terms. I'll think about how I could do that, thanks for the push! edit: oh wait, were you talking about OP? I suppose it's good advice for both of us, isn't it :)
bokov10

I meant prepping metaphorically, in the sense of being willing to delve into the specifics of a scenario most other people would dismiss as unwinnable. The reason I posted this is that, though it's obvious the bunker approach isn't really the right one, I'm drawing a blank as to what the right approach would even look like.

That being said, I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI. Are you saying scenarios where many but not all people die due to political/economic/environmental consequences ... (read more)

2ChristianKl
After the nuclear war caused by the AI, there's likely still an unaligned AI out there. That AI is likely going to kill the survivors of the nuclear war. 
bokov10

It's ironic that you're so excited about autonomous weapons but the first video you posted is a dramatic depiction created by a YouTube account called "Stop Autonomous Weapons".

I think the idea of this video was to scare the public by how powerful, precise, and possibly opaque these weapons are.

But I agree with you-- ethical or not, groups that limit their use of these weapons will be at a disadvantage against groups that do not. That's a microcosm of the whole AI regulatory problem right there.

bokov10

I'm sad to see him go. I don't know enough about LW's history, and have too little experience with forum moderation, to agree or disagree with your decision. Though LW has been around for a very long time without imploding, so that's evidence you guys know what you're doing.

Please don't take down his post though. I believe somewhere in there is a good faith opinion at odds with my own. I want to read and understand it. Just not ready for this much reading tonight.

I wish I could write so prolifically! Or maybe it's a curse rather than a blessing because then it becomes an obstacle to people understanding your point of view.

4Ruby
I am a bit sad too. You might be reassured to know that we are generally very reluctant to remove content once posted and practically never do so excepting spam, even if we didn't think it was great content.
bokov30

Are there any links we can read about non-appeasing de-escalation strategies?

Either theoretical ones or ones that have been tried in the past are fine.

bokov2619

There have been "Nuclear first-use and threats or advocacy thereof" and those are easy to condemn. But as far as I know they are coming unilaterally from the Russian side and already being widely condemned by those not on the Russian side. But it sounds like you are looking for some broader consensus to condemn escalation on both sides.

Unfortunately neither this post nor the open letter you linked give any specifics about what other behaviours you are asking us to condemn. I'm reluctant to risk endorsing a false-equivalence argument by signing a blank chec... (read more)

3ChristianKl
I agree that specifics would be useful. It's bad to be too vague to be wrong. The more vague an open letter happens to be, the easier it is to ignore it. As it stands, the effects of the letter likely don't go beyond signaling, because in the abstract anyone can agree with it, but that's not going to change anyone's actions.

When it comes to nuclear first use, the US does threaten Iran with a nuclear first strike by saying that nuclear first-strike capability is one element of US national power. As far as I know, past attempts to get the US to explicitly rule out using a nuclear first strike against Iran and North Korea have always failed. If you actually want the US to stop making nuclear first-strike threats, being explicit about the threats against Iran not being okay would be taking a stance. You would likely get some opposition for taking the stance, but at least it's something concrete.

When it comes to "reckless escalation", I find it likely that neither the US nor Ukraine would say they engage in reckless escalation. If you want them to change what they are doing, you likely need to be more concrete.
bokov11

The EU approach to getting Ukraine to protect the rights of minorities seems more... sustainable... than Russia's approach, so I propose a different compromise:

How about Russia withdraw all its troops back to the 2014 borders and we all give the slow, non-violent path a chance to work.

2ChristianKl
That's unlikely to happen in the real world. When thinking about the world, it makes sense to think in terms of what's actually possible.
bokov10

I'm not equating the West and Anti-West in terms of power. I agree that the Anti-West is much weaker. That doesn't mean it's incapable of becoming a threat in the future. 

bokov107

Furthermore, it's up to the Ukrainian people to confront their dark past. Not Russians to do it for them. 

Just like it's up to Americans to confront and atone for America's history of slavery. Not some neighbouring country to roll in with tanks and turn our historical/cultural/political problem into a military one.

bokov3031

This is basically a false equivalence "there are good/bad people on both sides" type of argument. 

If some other country sent troops inside Russia's borders and held a referendum for whether or not the regions they occupied want to be annexed, I would consider Russia to be the victim no matter how screwed up its internal politics are. Furthermore, such a referendum would not be legitimate no matter how honestly executed it is because the presence of foreign troops and displacement of civilians already hopelessly biases the outcome. 

For the same re... (read more)

-7ChristianKl
bokov10

A decisively defeated Russia will have fewer resources with which to coerce him. And if he's smart and keeps his powder dry like he has, he will have more resources with which to resist.

And if he gets overthrown in a color revolution, the Belarussians have not yet gotten so much blood on their hands as to preclude support from the West.

bokov1-1

> So I support a ceasefire and I oppose sponsorship of insurgency in Russia. But my opinions don't count.

Your opinions count, though most of us disagree with you. Thus, the replies.

Let's suppose that supporting Ukraine does further empower "our globe-spanning military-industrial complex". But failing to support Ukraine empowers the rival globe-spanning military-industrial complex, which, in addition to Russia, includes Iran, Syria, and China.

A ceasefire that results in Russia keeping more Ukrainian land than it started with will empower this rival military-indust... (read more)

1Mitchell_Porter
One may live under a variety of political orders. Life becomes difficult when you're caught between two systems fighting each other. As an Australian, I had no problems with the rise of China, until the Trump presidency forced Australia to choose between its economic provider and its security provider.

Actually, while he was campaigning, Trump had an advisor, Carter Page, who proposed an entente between China, Russia, and America. But Page was purged along with all the Russophiles, and Trump wanted his trade war with China, and now under Biden, the idea that all nations should be liberal democracies has been restored to the list of reasons why east and west are at odds. And maybe the odds were always against a LaRouche-style peaceful coexistence of such different powers.

The way I see it, America has had supreme power in the world twice, and has a chance at a third time. First was in 1945, when only the USA had the bomb, and everywhere else was in ruins. Then came 1991, when American information society was suddenly the only serious political and economic model remaining. The third chance is due to artificial intelligence, although perhaps it's more accurate to say that, whatever posthuman order characterizes the era of AI, it's most likely to first take shape on the territory of America.

So personal preferences aside, there is a sense in which I judge the meta-alliance of "NATO+Quad" as more likely to win than "SCO+Iran". But winning only because of AI, and only in the sense that it gets to be ground zero of the AI-driven transformation of the world. If it weren't for AI, I would not expect America to ever be on top again.
2clone of saturn
It's absurd to equate the shaky and informal coalition of Russia, China, Iran, and Syria with the 750+ extraterritorial bases, worldwide naval dominance, and global surveillance network of the US Military.
Answer by bokov10

I wonder about the feasibility of a group of LW-ers somehow putting a charter flight to NZ on retainer?

bokov70

How would a nuclear test demonstrate that Putin is not bluffing?

It only demonstrates that he has nukes, which we already know.

7Viliam
It would stop people joking "I bet their nukes probably aren't working either". I don't think that the military takes these jokes seriously. But for the general population of NATO countries, this kind of humor helps reduce the anxiety about WW3. A nuclear test would help restore the anxiety.
4Big Tony
Conducting a nuclear test indicates a much higher willingness to use nuclear weapons than just keeping them in storage does.
bokov0-1

> I'm also biting the bullet and saying that this is probably what we should aim for, barring pivotal acts because I see AGI development as mostly inevitable, and there are far worse outcomes than this.

Dead is dead, whether due to AGI or due to a sufficient percentage of smart people convincing themselves that destructive uploading is good enough and continuity is a philosophical question that doesn't matter.

bokov10

Now, if synchronizing minds is possible, it would address this problem.

But I don't see nearly as much attention being put into that as into uploading. Why?

bokov10

> A copy of you ceases to exist and then another copy comes into existence with the exact same sense of memories/continuity of self etc. That's like going to sleep and waking up.

Even when it becomes possible to do this at sufficient resolution, I see no reason it won't be like going to sleep and never waking up.

It's not as if there is a soul to transfer or share between the two instances. No way to sync the experiences of the two instances.

So I don't see a fundamental difference between "You go to sleep and an uploaded you wakes up" vs "You go to sleep and a... (read more)

2Shamash
Consider the following thought experiment: You discover that you've just been placed into a simulation, and that every night at midnight you are copied and deleted instantaneously, and in the next instant your copy is created where the original once was. Existentially terrified, you go on an alcohol and sugary treat binge, not caring about the next day. After all, it's your copy who has to suffer the consequences, right? Eventually you fall asleep.

The next day you wake up hungover as all hell. After a few hours of recuperation, you consider what has happened. This feels just like waking up hungover before you were put into the simulation. You confirm that the copy and deletion did occur. It is confirmed. Are you still the same person you were before?

You're right that it's like going to sleep and never waking up, but Algon was also right about it being like going to sleep and waking up in the morning, because from the perspective of "original" you those are both the same experience.
1green_leaf
Your instance is the pattern, and the pattern is moved to the computer. Since consciousness is numerically identical to the pattern (or, more precisely, the pattern being processed), the question of how to get my consciousness in the computer after the pattern is already there doesn't make sense. The consciousness is already there, because the consciousness is the pattern, and the pattern is already there.
bokov10

What I like about this story is that it makes more accessible the (to me) obvious fact that, in the absence of technology to synchronize/reintegrate memories from parallel instances, uploading does not solve any problems for you-- it at best spawns a new instance of you that doesn't have those problems, but you still do.

Yet uploading is so much easier than fixing death/illness/scarcity in the physical world that people want to believe it's the holy grail. And may resist evidence to the contrary.

Destructive uploads are murder and/or suicide.

1Noosphere89
I note a distributional shift issue, in that the concept of a single, continuous you only exists due to limitations of biology, and once digital uploads can happen, the concept of personality can get very weird indeed. The real question is: does it matter then? Well, that's a question that won't be solved by philosophers. So the real lesson is to be wary of distributional shift mucking up your consciousness. I'm also biting the bullet and saying that this is probably what we should aim for, barring pivotal acts, because I see AGI development as mostly inevitable, and there are far worse outcomes than this.
5Algon
Wait, why are destructive uploads murder/suicide? A copy of you ceases to exist and then another copy comes into existence with the exact same sense of memories/continuity of self etc. That's like going to sleep and waking up. Non-destructive uploads are plausibly like murder/suicide, but you don't need to go down that route.
bokov60

Are there any specific examples of anybody working on AI tools that autonomously look for new domains to optimize over?

  • If no, then doesn't the path to doom still amount to a human choosing to apply their software to some new and unexpectedly lethal domain or giving the software real-world capabilities with unexpected lethal consequences? So then, shouldn't that be a priority for AI safety efforts?
  • If yes, then maybe we should have a conversation about which of these projects is most likely to bootstrap itself, and the likely paths it will take?
bokov10

Now we know more than nothing about the real-world operational details of AI risks, albeit mostly from banal everyday AI that we can't imagine harming us at scale. So maybe that's what we should try harder to imagine and prevent.

Maybe these solutions will not generalize out of this real-world already-observed AI risk distribution. But even if not, which of these is more dignified? 

  • Being wiped out in a heartbeat by some nano-Cthulhu in pursuit of some inscrutable goal that nobody genuinely saw coming
  • Being killed even before that by whatever is the most
... (read more)