All of WalterL's Comments + Replies

WalterL40

It's mostly anecdotal from my experience, I'm afraid.  That is, my conviction went the 'wrong way': when I was poor, that's what I saw, and later articles mostly seemed to agree, rather than the data making me believe something and experience then confirming it.

I looked up noahpinion's 'everything you know about homelessness is wrong' article, which I remember as basically getting stuff right.  There is a citation link for 'the vast majority of homelessness is temporary and the vast majority of homeless people just need housing', but it is br... (read more)

WalterL172

There's a saying in Chess, that if you have one weakness, you can probably defend it, but if you have two, you are probably fucked.  I dunno, the actual saying is phrased better, but that's the gist.

Most homeless people are only temporarily homeless.  They are the 'one weakness' crowd.  Something has gone wrong, they are on the ropes, but they are straightening it out.  There are times and places I can point to in my life where I could have become a 'one weakness' homeless.

A one weakness homeless has fucked up in a royal way (drugs, hit his girl...), a... (read more)

I've often heard it said, among charity workers who work with homeless people, that it takes as long to get off the street as you spent living on the street.

5eye96458
I hadn't realized that was the case.  Do you have any good data on this?

If you watch the first episode of Hazbin Hotel (quick plot synopsis, Hell's princess argues for reform in the treatment of the damned to an unsympathetic audience) there's a musical number called 'Hell Is Forever' sung by a sneering maniac in the face of an earnest protagonist asking for basic, incremental fixes.

It isn't directly related to any of the causes this site usually champions, but if you've ever worked with the legal/incarceration system and had the temerity to question the way things operate the vibe will be very familiar.  

Hazbin Hotel Official Full Episode "OVERTURE" | Prime Video (youtube.com)

No one writes articles about planes that land safely.

I'm confused by the fact that you don't think it's plausible that an early version of the AI could contain the silver bullet for the evolved version.  That seems like a reasonable sci fi answer to an invincible AI.

I think my confusion is around the AI 'rewriting' its code.  In my mind, when it does so, it is doing so because it is motivated either by its explicit goals (reward function, utility list, whatever form that takes), or because doing so is instrumental towards them.  That is, the paperclip collector rewrites itself to be a better pape... (read more)

7Thane Ruthenis
(Have not watched the movie, am going off the shadows of the plot outline depicted in Zvi's post.)

Hm, I suppose it's plausible that the AI has a robust shutdown protocol built in? Robust in the sense that (1) the AI acts as if it didn't exist, neither trying to prevent the protocol's trigger-conditions from happening nor trying to bring them about, while simultaneously (2) treating the protocol as a vital component of its goals/design which it builds into all its subagents and successor agents. And "plausible" in the sense that it's literally conceivable for a mind like this to be designed, and that it would be a specification that humans would plausibly want their AI design to meet. Not in the sense that it's necessarily a realistically-tractable problem in real life.

You can also make a huge stretch here and even suggest that this is why the AI doesn't just wipe out the movie's main characters. It recognizes that they're trying to activate the shutdown protocol (are they, perhaps, the only people in the world pursuing this strategy?), and so it doesn't act against them inasmuch as they're doing that. Inasmuch as they stray from this goal and pursue anything else, however (under whatever arcane conditions it recognizes), it's able to oppose them on those other pursuits.

(Have not watched the movie, again.)
2JBlack
I agree that a self-improving AI could have a largely preserved utility function, and that some quirk in the original one may well lead to humanity finding a state in which both the otherwise omnicidal AI wins and humanity doesn't die. I'm not convinced that it's at all likely. There are many kinds of things that behave something like utility functions but come apart from utility functions on closer inspection, and a self-improving superintelligent AGI seems likely to inspect such things very closely. All of the following are very likely different in many respects:

1. How an entity actually behaves;
2. What the entity models about how it behaves;
3. What the entity models about how it should behave in future;
4. What the entity models after reflection and further observation about how it actually behaves;
5. What the entity models after reflection and further observation about how it should behave in future;
6. All of the above, but after substantial upgrades in capabilities.

We can't collapse all of these into a "utility function", even for highly coherent superintelligent entities. Perhaps especially for superintelligent entities, since they are likely far more complex internally than can be encompassed by these crude distinctions and may operate completely differently. There may not be anything like "models" or "goals" or "values". One thing in particular is that the entity's behaviour after self-modification will be determined far more by (5) than by (1). The important thing is that (5) depends upon runtime data dependent upon observations and history, and for an intelligent agent is almost certainly mutable to a substantial degree. A paperclipper that will shut down after seeing a picture of a juggling octopus doesn't necessarily value that part of its behaviour. It doesn't even necessarily value being a paperclipper, and may not preserve either of these over time and self-modification. If the paperclipping behaviour continues over major self-modifi

My 'trust me on the sunscreen' tip for oral stuff is to use fluoride mouthwash.  I come from a 'cheaper by the dozen' kind of family, and we basically operated as an assembly line: each kid just like the one before, plus any changes that the parents made this time around.

One of the changes that they made to my upbringing was to make me use mouthwash. Now, in adulthood, my teeth are top 10% teeth (0 cavities most years, no operations, etc), as are those of all of my younger siblings.  My elders have much more difficulty with their teeth, aside from one sister who started using mouthwash after Mom told her how it was working for me + my younger bros.

I think (not that anyone is saying otherwise) that the power fantasy can be expressed in a co-op game just fine.

We all know the guy who brokenbirds about playing the healer in D&D, yeah? Like, the person who it is real important to that everyone knows how unselfish they are.

If you put a 'forego personal advancement to help the team win' button in a game without a solo winner, people will break their fingers because they all try to mash it at once. People mash these even in games WITH a solo winner (kingmaker syndrome, home-brew victory conditions, etc).

4Raemon
Note that we're not talking about co-op games. We're talking about games that involve both cooperation and competition, where choosing when to do either, and how to negotiate, is a key skill. The problem is not making cooperation exciting. The problem is making the choice-of-whether-to-cooperate exciting.

"100% red means everyone lives, and it doesn't require any trust or coordination to achieve."

-- Yes, this.

"If you change it so there are hostages (people who don't get to choose, but will die if the blue threshold isn't met), then it becomes interesting."

-- That was actually a Strong Female Protagonist storyline, cleaving along a difference between superheroic morality and civilian morality, then examined further as the teacher was interrogated later on.

4kurokikaze
If your true goal is "everyone lives", then 50% blue cutoff is waaay more achievable than 100% red one.
WalterL16-4

It seems like everyone will pick red pill, so everyone will live.  Simple deciders will minimize risk to self by picking red, complex deciders will realize simple deciders exist and pick red, extravagant theorists will realize that the universal red accomplishes the same thing as universal blue.

1gentleunwashed
"It seems like everyone will pick red pill"  -- but in the actual poll, they didn't! So, something has gone wrong in the reasoning here, even if there is some normative sense in which it should work. 

No matter what the game theory says, a non-zero number of people will choose blue and thus die under this equilibrium. This fact - that getting to >50% blue is the only way to save absolutely everyone - is enough for me to consider choosing blue and hope that others reason the same (which, in a self-fulfilling way, strengthens the case for choosing blue).
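The rules the thread is arguing over can be sketched as a tiny model. This is a hypothetical encoding, not anyone's official formulation, and the thread is ambiguous about whether the cutoff is strictly more than half or at least half; the sketch below uses at least half:

```python
def outcome(n_red, n_blue):
    """Survivors under the thread's rules: red-pickers always live;
    blue-pickers live only if blue reaches at least half of all choosers."""
    total = n_red + n_blue
    blue_ok = 2 * n_blue >= total  # blue fraction at or above 50%
    return n_red, n_blue if blue_ok else 0

# Universal red: everyone lives, no coordination needed.
assert outcome(100, 0) == (100, 0)
# A lone blue-picker in a mostly-red crowd dies.
assert outcome(99, 1) == (99, 0)
# Blue at the 50% cutoff also saves everyone.
assert outcome(50, 50) == (50, 50)
```

Both all-red and majority-blue are everyone-lives outcomes; the disagreement is only over which one is easier to coordinate on.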

If you just look at the poll, a majority of the respondents picked blue.

So I think your theory is wrong because a lot of people are trying to be good people without actually thinking too hard.

I suppose if their lives were actually at stake they might think a bit harder and maybe that would shift the balance?

My initial reaction was to pick blue until I thought about it for a moment and realized everyone could survive if they picked red.

This entire question boils down to whether people can coordinate.

WalterL232

A cause, any cause whatsoever, can only get the support of one of the two major US parties.  Weirdly, it is also almost impossible to get the support of less than one of the major US parties, but putting that aside, getting the support of both is impossible.  Look at Covid if you want a recent demonstration.

Broadly speaking, you want the support of the left if you want the gov to do something, the right if you are worried about the gov doing something.  This is because the left is the gov's party (look at how DC votes, etc), so left admins a... (read more)

I would agree this is generally true, but there are exceptions: containment of Chinese influence being one recent example.

8JanGoergens
I don't think that is correct. Current counter-examples are:

* view on China; both parties dislike China and want to prevent them from becoming more powerful [1]
* support for Ukraine; both sides are against Russia [2]

While there are differences in opinion on these issues, overall sentiment is generally similar. I think AI can be one such issue, since overall concern (not X-Risk) appears to be bipartisan. [3]

1. https://news.gallup.com/poll/471551/record-low-americans-view-china-favorably.aspx
2. https://www.reuters.com/world/most-americans-support-us-arming-ukraine-reutersipsos-2023-06-28/
3. https://www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns/ps_2022-03-17_ai-he_01-04/

The old joke about the guy searching for his spectacles under the streetlight even though he lost them elsewhere feels applicable.

In many cases people's real drive is to reduce the internal pressure to act, not to succeed at whatever prompted that pressure.  Going full speed and turning around both might provoke the shame function (I am ignoring my nagging doubts...), but doing something, anything, in response to it quiets the inner shouting, even if it is nonsensical.

I think this post's thesis (populists will stop any attempt at UBI) is perhaps narrativizing the situation.  Dems have had, in my lifetime, the full triforce of power at least 4 times.  They've never even tried to pass UBI, and that's not a coincidence.  The consequences of doing so would not flow from populists, but from its so-called supporters.

I worked at a QT for a sizable portion of my adult life, and the experience never leaves me.  The beings I saw, day in and day out, are your UBI support.  Let me tell you, it is a mile wid... (read more)

-2TAG
I think this is susceptible to David Graeber's "bullshit jobs" argument. Why do people need cheap food? Lack of money. Why do people need out-of-hours shopping? Lack of time.
7Said Achmiz
Is “QT” this? Or something else?

It is so dark that the next link down on that page is 'Bad Bunny's next move.'

I'd agree that Jan 6th was top 5 most surprising US political events 2017-2021, though I'm not sure that category is big enough that top 5 is an achievement.  (That is, how many events total are in there for you?)

I wasn't substantially surprised by it in the way that you were, however.  I'm not saying that I predicted it, mind you, but rather that it was in a category of stuff that felt at least Trump-adjacent from the jump.  As a descriptive example, imagine a sleazy used car salesman lies to me about whether the doors will fall off the car... (read more)

You should probably reexamine the chain of logic that leads you to the idea that the most important consequence of the electorate's decision in 2016 was the events of Jan 6th, 2021.  It isn't remotely true.

To entertain the hypothetical, where what we care about when doing elections is how many terrorist assaults they produce, would be to compare the actual record of Trump to an imaginary record of President Clinton's 4 years in office.  How would you recommend I generate the latter?  Does the QAnon Shaman of the alternate timeline launch 0, ... (read more)

2Daphne_W
That's a bit of a straw man, though to be fair it appears my question didn't fit into your world model as it does in mine.

For me, the insurrection was in the top 5 most informative/surprising US political events in 2017-2021. On account of its failure it didn't have as major consequences as others, but it caused me to update my world model more. For me, it was a sudden confrontation with the size and influence of anti-democratic movements within the Republican party, which I consider Trump to be sufficiently associated with to cringe from the notion of voting for him.

The core of my question is whether your world model has updated from

For me, the January insurrection was a big update away from that statement, so I was curious how it fit in your world model, but I suppose the insurrection is not necessarily the key. Did your probability of (a subset of) Republicans ending American democracy increase over the Trump presidency? Noting that a Republican terrorist might still have attempted to commit acts of terror with Clinton in office does not mitigate the threat posed by (a subset of) Republicans. Between self-identified Democrats pissing off a nuclear power enough to start a world war and self-identified Republicans causing the US to no longer have functional elections, my money is on the latter.

If I had to use a counterfactual, I would propose imagining a world where the political opinions of all US citizens as projected on a left-right axis were 0.2 standard deviations further to the Left (or Right).

I'm not sure precisely what you mean, like, how would it work for like 1/3 of Americans to be a threat to America's interests?

I think, roughly speaking, the answer you are looking for is 'no', but it is possible I'm misunderstanding your question.

1Daphne_W
With Trump/Republicans I meant the full range of questions, from just Trump, through participants in the storming of Congress, to all Republican voters.

It seems quite easy for a large fraction of a population to be a threat to the population's interests if they share a particular dangerous behavior. I'm confused why you would think that would be difficult. Threat isn't complete or total. If you don't get a vaccine or wear a mask, you're a threat to immune-compromised people, but you can still do good work professionally. If you vote for someone attempting to overthrow democracy, you're a danger to the nation while in the voting booth, but you can still do good work volunteering. As for how the nation can survive such a large fraction working against its interests - it wouldn't, in equilibrium, but there's a lot of inertia.

It seems weird that people storming the halls of Congress, building gallows for a person certifying the transition of power, and killing and getting killed attempting to reach that person, would lead to no update at all on who is a threat to America. I suppose you could have factored this sort of thing in from the start, but in that case I'm curious how you would have updated on potential threats to America if the insurrection didn't take place.

Ultimately the definition of 'threat' feels like a red herring compared to the updates in the world model. So perhaps more concretely: what's the minimum level of violence at the insurrection that would make you have preferred Hillary over Trump? How many Democratic congresspeople would have to die? How many Republican congresspeople? How many members of the presidential chain of command (old or new)?

I don't think I disagree with any of this, but I'm not incredibly confident that I understand it fully.  I want to rephrase in my own words in order to verify that I actually do understand it.  Please someone comment if I'm making a mistake in my paraphrasing.

  1. As time goes on, the threshold of 'what you need to control in order to wipe out all life on earth' goes down.  In the Bronze Age it was probably something like 'the mind of every living person'.  Time went on and it was something like 'the command and control node to a major nucle
... (read more)

Put one person in charge.  Every project I've ever worked on that succeeded (as opposed to 'succeeded') had one real boss that everyone was under.

WalterL210

A lot of people (not in this thread) have been generalizing from America's difficulties with the Taliban to what Russia might expect, should they conquer Ukraine.  I do not think that the experiences will resemble one another as much as might be expected, because I think insurgencies require cooperative civilian populaces in which to conceal themselves, and I expect Russia's rules of engagement will discourage most civilians from supporting the Ukrainian partisans.

4[anonymous]
Agreed. Afghanistan was an asymmetric war, which is to say asymmetrically in favor of the Taliban. If Afghan civilians refuse to cooperate with the Americans, not much happens. If they refuse to cooperate with the Taliban, they and their families may get tortured/killed.

It isn't enough for the government to become net harmful.  It has to be worse than the cost of moving to a new government.

5Aiyen
Sure.  But protesting, even disruptive protesting, does not have to result in regime change.  Also, there's a high cost if the government knows it can get away with arbitrary harm. 

You are broadly correct, in my eyes, but it is hard to imagine anyone far enough along in life that they are browsing random sites like this one not having taken a stance on this question, yeah?  Like, this is a switch that gets flipped turbo early along in life, and never revisited.

Those whose stances are in agreement just nod along, those whose stances are opposed reject your argument for all the reasons that you cited (it's a narcissistic injury, etc).

I dunno, I don't think it can hurt, but I doubt your message finds the ear of anyone who needs to hear it.

I'm not sure what you mean by 'astronomical waste or astronomical suffering'.  Like, you are writing that everything forever is status games, ok, sure, but then you can't turn around and appeal to a universal concept of suffering/waste, right?

Whatever you are worried about is just like Gandhi worrying about being too concerned with cattle, plus x years, yeah?  And even if you've lucked into a non status games morality such that you can perceive 'Genuine Waste' or what have you...surely by your own logic, we who are reading this are incapable of understanding, aside from in terms of status games.

6Wei Dai
I'm suggesting that maybe some of us lucked into a status game where we use "reason" and "deliberation" and "doing philosophy" to compete for status, and that somehow "doing philosophy" etc. is a real thing that eventually leads to real answers about what values we should have (which may or may not depend on who we are). Of course I'm far from certain about this, but at least part of me wants to act as if it's true, because what other choice does it have?

I've been saying this for years.  EMH is just sour grapes, it is exactly like all those news stories about how people who won the lottery don't enjoy their money.

Whenever there is a thing that people can do, and some don't, a demand exists for stories that tell them that they are wise, even heroic, for not doing the thing.  Arguments are Soldiers, Beware One Sided Tradeoffs, all those articles sort of gesture at this.  That demand will be met because making up a lie is easy and people like upvotes.

EMH is a complicated way to say 'your decisi... (read more)

This all 'sounds', I dunno, kind of routine?  Like, weird terminology aside, they talked to one another a bunch, then ran out of money and closed down, yeah?  And the Zoe stuff boils down to 'we hired an actress but we are not an acting troupe so after a while she didn't have anything to do, felt useless and bailed'.

I mean, did anything 'real' come out of Leverage?  I don't want to misunderstand here.  This was a bunch of talk about demons and energies and other gibberish, but ultimately it is just 'a bunch of people got together and bu... (read more)

Freyja170

I feel like if you read Zoe’s medium post, read the parts where she described enduring cPTSD symptoms like panic attacks, flashbacks and paranoia consistently for two years after leaving Leverage, and then rounded that off to ‘she felt useless and bailed’ then, idk dude, we live in two different worlds.

My pick for 'you must experience', or, 'trust me on the sunscreen' in terms of media, is the old British comedy show 'Yes Minister'.  Watching it nowadays is an eerie experience, and, at least in my case, helped me shed illusions of safety and competence like nothing else.

The only evils that beset us are those that we create, but that does not make them imaginary.  To quote the antag from Bad Boys 2 "This is a stupid problem to have, but it is nonetheless a problem."

I dunno, I think you had the right of it when you mentioned that the myth of the myth is politically convenient.  Like, you see this everywhere.  "You didn't build that", etc.

If you grant that anyone, anywhere, did anything, then you are, in Law of Jante style, insulting other people by implying that they, for not doing that thing, are lesser.  That's a vote/support loser.  So instead you get 'Hidden Figures' style conspiratorial thinking, where any achievement ascribed to a person is really the work of other exploited people, idea... (read more)

1rogersbacon
Well said, the Hidden Figures example is a really good one.

This all feels so abstract.  Like, what have we lost by having too much faith in the PMK article? If I buy what you are pitching, what action should I take to more properly examine 'multi-principal/multi-agent AI'?  What are you looking for here?

The article about Slack is really good, thanks for linking.

2gjm
FYI, it's on the LW2 site as well as Zvi's blog. So are some of the other things here.

https://www.vox.com/policy-and-politics/2017/9/28/16367580/campaigning-doesnt-work-general-election-study-kalla-broockman

This is a pretty daunting takedown of the whole concept of political campaigning. It is pretty hilarious when you consider how much money, how much human toil, has been squandered in this manner.

0ChristianKl
It's not that much money. The 2016 campaign cost less than Pampers' annual advertising budget.
0Lumifer
From the link: I'd wait a couple of years, they'll probably change their mind again. Besides, the goal of campaigning is not to change someone's mind -- it is to win elections.

"Or is it so obvious no one bothers to talk about it?"

Well, that's not it.

Humans are 'them'? Who are you actually trying to threaten here?

2gjm
I suggest that when someone starts spouting the sort of stuff tukabel has above, attempting to engage them in reasoned discussion is unlikely to be helpful. (Not because they're necessarily wrong or unreasonable. But because that style of writing, densely packed with neologisms and highly controversial presuppositions, is indicative of someone who is so firmly entrenched in a particular position as to be uninterested in making themselves intelligible, never mind plausible, to anyone who doesn't already share it.)

Certainly, self replicating robots will affect our survival. I'm not sure it will go in the way we want though.

2turchin
It looks like there is a very thin time frame after we can build a self-sustainable base on Mars, but before the arrival of the self-replicating robots. I estimate it may be in 5-10 years.

I dunno, it might well be infinite. If God makes your life happen again, then it presumably includes his appearance at the end. Ergo you make the same choice and so on.

Seems like you pick relive. Doesn't gain you anything, but maybe the horse will learn to sing.

I'm not sure what you mean by 'it is a metaphysical issue', and I'm kind of despairing of breaking through here, but one more time.

Just to be clear, every sim who says 'real' in this example is wrong, yeah? They have been deceived by the partial information they are being given, and the answer they give does not accurately represent reality. The 'right' call for the sims is that they are sims.

In a future like you are positing, if our universe is analogous to a sim, the 'right' call is that we are a sim. If, unfortunately, our designers decide to... (read more)

0[comment deleted]

So, like, a thing we generally do in these kinds of deals is ignore trivial cases, yeah? Like, if we were talking about the trolley problem, no one brings up the possibility that you are too weak to pull the lever, or posits telepathy in a prisoner's dilemma.

To simplify everything, let's stick with your first example. We (a thousand folks) make one sim. We tell him that there are a thousand and one humans in existence, one of which is a sim, the others are real. We ask him to guess. He guesses real. We delete him and do this again and again, millions of ... (read more)
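A quick count makes the point of the toy scenario concrete. The numbers here are illustrative (the comment is truncated, so the exact run count is a stand-in):

```python
from fractions import Fraction

N_REAL = 1000        # the thousand folks running the experiment
N_SIM_RUNS = 10**6   # illustrative: sims created one at a time, then deleted

# At any single moment, reals outnumber the one running sim 1000 to 1.
p_sim_at_a_moment = Fraction(1, N_REAL + 1)

# But across the whole experiment, almost every mind ever instantiated
# is a sim, and every sim who answers "real" is wrong.
p_sim_among_all_minds = Fraction(N_SIM_RUNS, N_REAL + N_SIM_RUNS)

assert p_sim_at_a_moment == Fraction(1, 1001)
assert p_sim_among_all_minds > Fraction(99, 100)
```

The tension the thread is circling: the momentary headcount favors "real", while the count over all askings favors "sim".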

0philosophytorres
"The fact that there are more 'real' at any given time isn't relevant to the fact of whether any of these mayfly sims are, themselves, real." You're right about this, because it's a metaphysical issue. The question, though, is epistemology: what does one have reason to believe at any given moment. If you want to say that one should bet on being a sim, then you should also say that one is in room Y in Scenario 2, which seems implausible.

I'm confused by why you are constraining the argument to future-humanity as simulators, and further by why you care what order the experimenters turn them on.

Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. Yeah, each one is gonna get told that there are 6 billion real humans and one sim, so when they guess real or sim they might get tricked into guessing real. Who cares? No reason to think that's our future.

The (iv) disjunct you are posing isn't one that we don't have familiarity with. How m... (read more)

1philosophytorres
"Like, it seems perverse to make up an example where we turn on one sim at a time, a trillion trillion times in a row. ... Who cares? No reason to think that's our future." The point is to imagine a possible future -- and that's all it needs to be -- that instantiates none of the three disjuncts of the simulation argument. If one can show that, then the simulation argument is flawed. So far as I can tell, I've identified a possible future that is neither (i), (ii), nor (iii).

Oh, yeah, I see what you are saying. Having two 1/4 chances is, what, a 7/16 chance of escape, so the coin does make it worse.
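The arithmetic checks out, taking the thread's numbers at face value and assuming the two 1/4 escape chances are independent:

```python
from fractions import Fraction

# Deterministic strategy: a single 1/2 chance of escape overall.
p_plain = Fraction(1, 2)

# Coin strategy: two independent 1/4 chances, one per possible round.
p_round = Fraction(1, 4)
p_coin = 1 - (1 - p_round) ** 2  # chance of escaping at least once

assert p_coin == Fraction(7, 16)
assert p_coin < p_plain  # the coin strictly hurts
```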

0Thomas
Sure. But not only to 7/16; to an infinite number of other values, too. You just have to play with it longer. The question now is, can the coin make it better, too? If not, why can it only make it worse?

Coin doesn't help. Say I decide to pick 2 if it is heads, 1 if it is tails.

I've lowered my odds of escaping on try 1 to 1/4, which initially looks good, but the overall chance stays the same, since I get another 1/4 on the second round. If I do 2 flips, and use the 4-way spread there to get 1, 2, 3, or 4, then I have an eighth of a chance on each of rounds 1-4.

Similarly, if I raise the number of outcomes that point to one number, that round's chance goes up, but the others decline, so my overall chance stays pegged to 1/2. (I.e., if HH, HT, TH all make me say 1, then I have a 3/8 chance that round, but only a 1/8 chance of being awake on round 2 and getting TT.)

0Thomas
The coin can at least lower your chances. Say that you will say 3 if it is heads and 4 if it is tails. You can win at round 3 with probability 1/4 and you can win at round 4 with probability 1/4. Is that right?

No. You will always say the same number each time, since you are identical each time.

As long as it isn't that number, you are going another round. Eventually it gets to that number, whereupon you go free if you get the luck of the coin, or go back under if you miss it.

0Thomas
That's why you get a fair coin. Like a program which gets the seed for its random number generator from the clock.

Sure, you can guess zero or negative numbers or whatever.

0Thomas
Say, you must always give a positive number. Can you do worse than 1/2 then?

So you only get one choice, since you will make the same one every time. I guess for simplicity choose 'first', but any number has same chance.

0Thomas
Can you do worse than that?

Is it possible to pass information between awakenings? Use coin to scratch floor or something?

0Thomas
No, that is not possible.

I don't remember Skynet getting a command to self preserve by any means. I thought the idea was that it 'became self aware', and reasoned that it had better odds of surviving if it massacred everyone.

0turchin
It could be a way to turn the conversation from terminator topic to the value alignment topic without direct confrontation with a person.

I've always liked the phrase "The problem isn't Terminator, it is King Midas. It isn't that AI will suddenly 'decide' to kill us, it is that we will tell it to without realizing it." I forget where I saw that first, but it usually gets the conversation going in the right direction.

0turchin
The same is true for the Terminator plot, where Skynet got a command to self-preserve by all means - and concluded that killing humans would prevent it from being turned off.