jimmy

Comments

jimmy20

I think this is correct as a conditional statement, but I don't think one can deduce the unconditional implication that attempting to price some externalities in domains where many externalities are difficult to price is generally bad.

 

It's not "attempting to price some externalities where many are difficult to price is generally bad", it's "attempting to price some externalities where the difficult to price externalities on the other side is bad". Sometimes the difficulty of pricing them means it's hard to know which side they primarily lie on, but not necessarily.

The direction of legible/illegible externalities might be uncorrelated on average, but that doesn't mean that ignoring the bigger piece of the pie isn't costly. If I offer "I'll pay you twenty dollars, and then make up some rumors about you which may or may not be true and may greatly help or greatly harm your social standing", you don't think "Well, the difficult part to price is a wash, but twenty dollars is twenty dollars".
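To put toy numbers on it (purely illustrative): suppose the rumors are equally likely to help or hurt your standing by the equivalent of a thousand dollars. Then

\[
\mathbb{E}[\text{payoff}] \;=\; \$20 \;+\; \tfrac{1}{2}(+\$1000) \;+\; \tfrac{1}{2}(-\$1000) \;=\; \$20,
\]

so the illegible part really is "a wash" in expectation, but any actual outcome is dominated by the plus-or-minus $1000 swing rather than the $20 you can price.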

you can just directly pay the person who stops the shooting,

You still need a body.

Sure, you can give people like Elisjsha Dicken a bunch of money, but that's because he actually blasted someone. If we want to pay him $1M per life he saved though, how much do we pay him? We can't simply go to the morgue and count how many people aren't there. We have to start making assumptions, modeling the system, and paying out based on our best guesses of what might have happened in what we think to be the relevant hypothetical. Which could totally work here, to be clear, but it's still a potentially imperfect attempt to price the illegible, and it's not a coincidence that this was left out of the initial analysis that I'm responding to.

But what about the guy who stopped a shooting before it began, simply by walking around looking like the kind of guy that would stop a spree killer before he accomplished much? What about the good role models in the potential shooter's life that led him onto the right track and stopped a shooting before it was ever planned? This could be ten times as important and you wouldn't even know without a lot of very careful analysis. And even then you could be mistaken, and good luck creating enough of a consensus on your program to pay out what you believe to be the appropriate amount to the right people who have no concrete evidence to stand on. It's just not gonna work.

I don't agree that most of the benefits of AI are likely to be illegible. I expect plenty of them to take the form of new consumer products that were not available before, for example.

Sure, there'll be a lot of new consumer products and other legible stuff, but how are you estimating the amount of illegible stuff and determining it to be smaller? That's the stuff that by definition is going to be harder to recognize, so you can't just say "all of the stuff I recognize is legible, therefore legible>>illegible".

For example, what's the probability that AI changes the outcome of future elections and political trajectory, is it a good or bad change, and what is the dollar value of that compared to the dollar value of ChatGPT?

jimmy40

I think my main point would be that Coase's theorem is great for profitable actions with externalities, but doesn't really work for punishment/elimination of non-monetary-incented actions where the cost is very hard to calculate. 

 

This brings up another important point, which is that a lot of externalities are impossible to calculate, and therefore such approaches end up fixating on the part that seems calculable without accounting for (or even noticing) the incalculable part. If the calculable externalities happen to be opposed to larger incalculable externalities, then you can end up worse off than if you had never tried.

As applied to the gun externality question, you could theoretically offer a huge payday to the gun shop that sold the firearm used to stop a spree shooting in progress, but you still need a body to count before paying out. It's really hard to measure the number of murders which didn't happen because the guns you sold deterred the attacks. And if we accept the pro 2A arguments that the real advantage of an armed populace is that it prevents tyranny, that's even harder to put a real number on.

I think this applies well to AI, because absent a scenario where gray goo rearranges everyone into paperclips (in which case everyone pays with their life anyway), a lot of the benefits and harms are likely to be illegible. If AI chatbots end up swaying the next election, what is the dollar value we need to stick on someone? How do we know if it's even positive or negative, or if it even happened? If we latch onto the one measurable thing, that might not help.

jimmy3621

The frustrating thing about the origins discussion is that people seldom show recognition of the priorities here, and instead all get lost in the weeds.

You can get n layers deep into the details, and if the bottom is at n+1 you're fucked. To give an example people keep bringing up in this debate, "The lab was working on doing gain of function to coronaviruses just like this!" sounds pretty damning, but "actually the grant was denied, do you think they'd be working on it in secret after they were denied funding?" completely reverses it. Then after the debate, "Actually, labs frequently write grant proposals for work they've already done, and frequently are years behind in publishing" reverses it again. Even if there's an odd number of remaining counters, the debate doesn't demonstrate it. If you're not really really careful about this stuff, it's very easy to get lost and not realize where you've overextended on shaky ground.

Scott talks about how Saar is much more careful about these "out of model" possibilities and feels ripped off because his opponent wasn't, but at least judging from Scott's summary it doesn't appear he really hammered on what the issue is here and how to address it.

Elsewhere in the comments here, Saar is criticized for failing to fact check the dead cat thing, and I think that's a good example of the issue. It's not that any individual thing is too difficult to fact check; it's that when all the evidence is pointing in one direction (so far as you can tell), you don't really have a reason to fact check every little thing that makes total sense, so of course you're likely not to do it. If someone argues that clay bricks weigh less than an ounce, you're going to weigh the first brick you see to prove them wrong, and you're not going to break it open to confirm that it's not secretly filled with something other than clay. And if it turns out it is, that doesn't actually matter, because your belief didn't hinge on this particular brick being clay in the first place.

If a lot of your predictions turn out to be based on false presuppositions, that might be an issue. If it turns out the trend you based your perspective on just isn't there, then yeah, that's a problem. But if that's not actually the evidence that formed your beliefs, and they're just tentative predictions that aren't required by the belief under question, then it means much less. Doubly so if we're at "there exists a seemingly compelling counterargument" and not "we've gotten to the bottom of this, and there are no more seemingly compelling counter-counterarguments".

So Saar didn't check if the grant was actually approved. And Peter didn't check if labs sometimes do the work before writing grant proposals. Or they did, and it didn't come through in the debate. And Saar missed the cat thing. Peter did better on this game of "whack-a-mole" of arguments than Saar did, and more than I expected, but what is it worth? Truth certainly makes this easier, but so does preparation and debate skill, so I'm not really sure how much to update here.


What I want to see, more than "who can paint an excessively detailed story that doesn't really matter and have it stand up to surface level scrutiny better", is people focusing on the actual cruxes underlying their views. Forget the myriad of implications n steps down the road which we don't have the ability to fully map out and verify; what are the first few things we can actually know, and what can we learn from this by itself? If we're talking about a controversial "relationship guru", postpone discussions of whether clips were "taken out of context" and what context might be necessary until we settle whether this person is on their first marriage or fifth. If we're wondering if a suspect is guilty of murder, don't even bother looking into the credibility of the witness until you've settled the question of whether the DNA matches.

If there appears to be a novel coronavirus outbreak right outside a lab studying novel coronaviruses, is that actually the case? Do we even need to look at anything else, and can looking at anything else even change the answer?

To exaggerate the point to highlight the issue, if there were unambiguously a million wet markets that are all equivalent, and one lab, and the outbreak were to happen right between the lab and the nearest wet market, you're done. It doesn't matter how much you think the virus "doesn't look engineered", because you can't get to a million to one that way. Even if you somehow manage to make what you think is a 1000:1 case, a) even if your analysis is sound, it still came from the lab, and b) either your analysis there or the million-to-one starting premise is flawed. And if we're looking for a flaw in our analyses, it's going to be a lot easier to find flaws in something relatively concrete like "there are a million wet markets just like this one" than whatever is going into arguing that it "looks natural".
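To make the arithmetic explicit in odds form (the numbers are just the illustrative ones above, treating the location as a likelihood ratio and starting from even prior odds):

\[
\underbrace{\frac{P(\text{lab}\mid E)}{P(\text{market}\mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{10^{6}}{1}}_{\text{location: one lab vs. }10^{6}\text{ markets}}
\;\times\;
\underbrace{\frac{1}{10^{3}}}_{\text{``looks natural'' evidence}}
\;=\;
\frac{10^{3}}{1},
\]

so even granting the full 1000:1 for "doesn't look engineered", the posterior still favors the lab a thousand to one; the only way the conclusion flips is if one of the two inputs is wrong.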

So I really wish they'd sit down and hammer out the most significant and easiest to verify bits first. How many equally risky wet markets are there? How many labs? What is the quantitative strength of the 30,000 foot view "It looks like an outbreak of chocolatey goodness in Hershey Pennsylvania"? What does it actually take to have arguments that contain leaks to this degree, and can we realistically demonstrate that here?
 

jimmy20

The difference between what I strive for (and would advocate) and "epistemic learned helplessness" is that it's not helpless. I do trust myself to figure out the answers to these kinds of things when I need to -- or at least, to be able to come to a perspective that is worth contending with.

The solution I'm pointing at is simply humility. If you pretend that you know things you don't know, you're setting yourself up for failure. If you don't wanna say "I dunno, maybe" and can't say "Definitely not, and here's why" (or "That's irrelevant and here's why" or "Probably not, and here's why I suspect this despite not having dived into the details"), then you were committing arrogance by getting into a "debate" in the first place.

Easier said than done, of course.

jimmy20

I think "subject specific knowledge is helpful in distinguishing between bullshit and non-bullshit claims." is pretty clear on its own, and if you want to add an example it'd be sufficient to do something simple and vague like "If someone cites scientific studies you haven't had time to read, it can sound like they've actually done their research. Except sometimes when you do this you'll find that the study doesn't actually support their claim".

"How to formulate a rebuttal" sounds like a very different thing, depending on what your social goals are with the rebuttal.

I think I'm starting to realize the dilemma I'm in. 

Yeah, you're kinda stuck between "That's too obvious of a problem for me to fall into!" and "I don't see a problem here! I don't believe you!". I'd personally err on the side of the obvious, while highlighting why the examples I'm picking are so obvious.

I could bring out the factual evidence and analyze it if you like, but I don't think that was your intention

Yeah, I think that'd require a pretty big conversation and I already agree with the point you're trying to use it to make.

jimmy54

I did get feedback warning that the Ramaswamy example was quite distracting (my beta reader recommended flat eartherism or anti-vaxxing instead). In hindsight it may have been a better choice, but I'm not too familiar with geology or medicine, so I didn't think I could do the proper rebuttal justice.


My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with "My counterarguments were bullshit, did you catch it?".

After going back and skimming a bit, it's still not clear to me that they're not.

The uninformed judge cannot tell him from someone with a genuine understanding of geopolitics.

The thing is, this applies to you as well. Looking at this bit, for example:

What about Ukraine? Ukrainians have died in the hundreds of thousands to defend their country. Civil society has mobilized for a total war. Zelensky retains overwhelming popular support, and by and large the populace is committed to a long war.  

Is this the picture of a people about to give up? I think not.  

This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn't really mean a country isn't about to give up. Maybe it's the reason they are about to give up; there's always a line, and who's to say it isn't in the hundreds of thousands? Zelensky having popular support does seem to support your point, and I could go check primary sources on that, but even if I did, your point about "selecting the right facts and omitting others" still stands, and there's no easy way to find out if you're full of shit here or not.

So it's kinda weird to see it presented as if we're supposed to take your arguments at face value... in a piece purportedly teaching us to defend against the dark art of bullshit. It's not clear to me how this section helps even if we do take it at face value. Okay, so Ramaswamy said something you disagree with, and you might even be right that his thoughts don't hold up to scrutiny? But even if so, that doesn't mean he's "using dark arts" any more than it means he just doesn't think things through well enough to get to the right answer, and I don't see what that teaches us about how to avoid BS besides "Don't trust Ramaswamy".

To be clear, this isn't at all "your post sucks, feel bad". It's partly genuine curiosity about where you were trying to go with that part, and mostly that you seem to genuinely appreciate feedback.

My own answer to "how to defend against bullshit" is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take how seriously, I track how much people are able to engage with other worldviews, and which worldviews hold up and don't require avoidance techniques in order to preserve the worldview.
 

jimmy31

The frequency explanation doesn't really work, because men do sometimes get excess compliments and it doesn't actually become annoying; it's just background. Also, when women give men the kind of compliments that men tend to give women, it can be quite unwanted even when infrequent.

The common thing, which you both gesture at, is whether it's genuinely a compliment or simply a bid for sexual attention, borne out of neediness. The validation given by a compliment is of questionable legitimacy when paired with some sort of tug for reciprocation, and it's simply much easier to have this kind of social interaction when sexual desire is off the table the way it is between same sex groups of presumably straight individuals.

For example, say you're a man who has gotten into working out and you're visiting your friend whom you haven't seen in a while. If your friend goes wide eyed, saying "Wow, you look good. Have you been working out?" and starts feeling your muscles, that's a compliment because it's not too hard for your friend to pull off "no homo". He's not trying to get in your pants. If that friend's new girlfriend were to do the exact same thing, she'd have to pull off "no hetero" for it to not get awkward, and while that's doable it's definitely significantly harder. If she's been wanting an open relationship and he hasn't, it gets that much harder to take it as "just a compliment" and this doesn't have to be a recurring issue in order for it to be quite uncomfortable to receive that compliment. As a result, unless their relationship is unusually secure she's less likely to compliment you than he is -- and when she does she's going to be a lot more restrained than he can be.

The question, to me, is to what extent people are trying to "be sexy for their homies" because society has a semi-intentional way of doing division of labor to allow formation of social hierarchies without having to go directly through the mess of sexual desires, and to what extent people are simply using their homies as a proxy for what the opposite sex is into and getting things wrong because they're projecting a bit. The latter seems sufficient and a priori expected, but maybe it leads into the former.

jimmy4630

 I want there to be a way to trade action for knowledge- to credibly claim I won't get upset or tell anyone if a lizardman admits their secret to me- but obviously the lizardman wouldn't know that I could be trusted to keep to that, 

 

The thing people are generally trying to avoid, when hiding their socially disapproved-of traits, isn't so much "People are going to see me for what I am", but that they won't.

Imagine you and your wife are into BDSM, and it's a completely healthy and consensual thing -- at least, so far as you see. Then imagine your aunt says "You can tell me if you're one of those BDSM perverts. I won't tell anybody, nor will I get upset if you're that degenerate". You're still probably not going to be inclined to tell her, because even if she's telling the truth about what she won't do, she's still telling you that she's already written the bottom line that BDSM folks are "degenerate perverts". She's still going to see you differently, and she's still shown that her stance gives her no room for understanding what you do or why, so her input -- hostile or not -- cannot be of use.

In contrast, imagine your other aunt tells you about how her friend's relationship benefitted a lot from BDSM dynamics which match your own quite well, and then mentions that they stopped doing it because of a more subtle issue that was causing problems they hadn't recognized. Imagine your aunt goes on to say "This is why I've always been opposed to BDSM. It can be so much fun, and healthy and massively beneficial in the short term, but the longer term hidden risks just aren't worth it". That aunt sounds worth talking to, even if she might give pushback that the other aunt promised not to. It would be empathetic pushback, coming from a place of actually understanding what you do and why you do it. Instead of feeling written off and misunderstood, you feel seen and heard -- warts and all. And that kind of "I got your back, and I care who you are even if you're not perfect" response is the kind of response you want to get from someone you open up to.

So for lizardmen, you'd probably want to start by understanding why they wouldn't be so inclined to show their true faces to most people. You'd want to be someone who can say "Oh yeah, I get that. If I were you I'd be doing the same thing" for whatever you think their motivation might be, even if you are going to push back on their plans to exterminate humanity or whatever. And you might want to consider whether "lizardmen" really captures what's going on or if it's functioning in the way "pervert" does for your hypothetical aunt.

jimmy20

I get that "humans are screwed up" is a sequences take, that you're not really sure how to carve up the different parts of your mind, etc. What I'm pointing at here is substantive, not merely semantic. 

  1. The dissociation of saying "humans are messed up"/"my brain is messed up" feels different than saying "I am messed up". The latter is speaking from a perspective that is associated with the problem and has the responsibility to fix it from the first person. This perspective shift is absolutely crucial, and trying to solve your problems "from the outside" gets people very very caught up in additional meta level problems and unable to touch the object level problem. This is a huge topic.
  2. I had as strong an aversion to homework as anyone, including homework which I knew to be important. It's not a matter of "finding a situation where you notice part of your mind attempting to write the bottom line first", but of noticing why that part of your mind will try to write the bottom line first, and relating to yourself in a way that eliminates the motivation to do so in the first place. I don't have situations where part of my mind attempts to write the bottom line first... that I'm aware of, at least. There are things that I'm attached to, which is what causes the "bottom line first" issues and which is still an obstacle to be overcome in itself, but the motivation to write the bottom line first can be completely obsoleted by stopping and giving more attention to the possibility that you've been trying to undervalue something that you can sense is critically important. This mental move shifts all of your "my brain is being irrational" problems into "I don't know what to do on the object level"/"I don't know why this is so important to me" problems, which are still problems, but they are much nicer because they highlight rather than obscure the path to solution.
  3. "I want some kind of language to distinguish the truth seeking part from the biased part". I don't think such a distinction exists in any meaningful sense.

In my model, there's a part of your brain that recognizes that something is important (e.g. social time), and a part of your brain that recognizes that something else is important (e.g. doing homework), and that neither are "truth seeking" or "biased", but simply tugging you towards a particular goal. Then there's a part of your brain which feels tugged in both directions and has to mediate and try to form this incoherent mess into something resembling useful behavior.

This latter part wants to get out of the conflict, and there are many strategies to do this. This is another big topic, but one way to get out of the conflict is to simply give in to the more salient side and shut out the less salient side. This strategy has obvious and serious problems, so making an explicit decision to use this strategy itself can cause conflict between the desire "I want to not deal with this discomfort" and "I want to not drive my life into the ground by ignoring things that might be important". 

One way to attempt to resolve that conflict is to decide "Okay, I'll 'be rational', 'use logic and evidence and reason', and then satisfy the side which is more logical and shut out the side that is 'irrational and wrong'". This has clear advantages over the "be a slave to impulses" strategy, but it has its own serious issues. One is that the side that you judge to be "irrational" isn't always the side that's easier to shut out, so attempting to do so can be unsuccessful at the actual goal of "get out of this uncomfortable conflict".

A more successful strategy for resolving conflicts like these is to shut out the easy-to-shut-out side, and then use "logic and reason" to justify it if possible, so that the "I don't want to run my life into the ground by making bad decisions" part is satisfied too. The issue with this one comes up when part of you notices that the bottom line is getting written first and that the pull isn't towards truth -- but so long as you fail to notice, this strategy actually does quite well, so every time your algorithm that you describe as "logical and reasoned" drifts in this direction it gets rewarded and you end up sliding down this path. That's why you get this repeating pattern of "Dammit, my brain was writing the bottom line again. I shall keep myself from doing that next time!".

It's simply not the case that you have a "truth seeking part" and a "biased part". You contain a multitude of desires, and strategies for achieving these desires and mediating conflicts between them. The strategies you employ, which call for shutting out desires which retain power over you unless they can come up with sufficient justification, require you to come up with justifications and find them sufficient in order to get what you want. So that's what you're motivated to do, and that's what you tend to do.

Then you notice that this strategy has problems, but so long as you're working within this strategy, the extra desire of "but don't fool myself here!" becomes simply another desire that can be rationalized away if you succeed in coming up with a justification that you're willing to deem sufficient ("Nah, I'm not fooling myself this time! These reasons are sound!", "Shit, I did it again didn't I. Wow, these biases sure can be sneaky!").

The framing itself is what creates the problems. By the time you are labeling one part "truth seeking" and one part "biased, and therefore important to not listen to", you are writing the bottom line. And if your bottom line includes "there is a problem with how my brain is working", then that's gonna be in your bottom line.

The alternative is to not purport to know which side is "truth seeking" and which side is "biased", and simply look, until you see the resolution.

jimmy20

1) You keep saying "My brain", which distances you from it. You say "Human minds are screwed up", but what are you if not a human mind? Why not say "I am screwed up"? Notice how that one feels different and weightier? Almost like there's something you could do about it, and a motivation to do it?


2) Why does homework seem so unfun to you? Why do you feel tempted to put off homework and socialize? Have you put much thought into figuring out if "your brain" might be right about something here?

In my experience, most homework is indeed a waste of time, some homework very much is not, and even that very worthwhile homework can be put off until the last minute with zero downside. I decided to stop putting it off to the last minute once it actually became a problem, and that day just never came. In hindsight, I think "my brain" was just right about things. 

How sure are you that you'd have noticed if this applies to you as well?

3) "If your brain was likely to succeed in deceiving you".

You say this as if you are an innocent victim, yet I don't think you'd fall for any of these arguments if you didn't want to be deceived. And who can blame you? Some asshole won't let you have fun unless you believe that homework isn't worthwhile, so of course you want to believe it's not worth doing.

Your "trick" works because it takes off the pressure to believe the lies. You don't need to dissociate from the rest of your mental processes to do this, and you don't have to make known bad decisions in order to do this. You simply need to give yourself permission to do what you want, even when you aren't yet convinced that it's right.

Give yourself that permission, and there's no distortionary pressure so you can be upfront about how important you think doing your homework tonight really is. And if you decide that you'd rather not put it off, you're allowed to choose that too. As a general rule, rationality is improved by removing blocks to looking at reality, not adding more blocks to compensate for other blocks.

It's not that "human minds are messed up" in some sort of fundamental architectural way and there's nothing you can do about it, it's that human minds take work to organize, people don't fully recognize this or how to do it, and until that work you're going to be full of contradictions.
