Comments

zoop

I think you've made a motte-and-bailey argument:

  • Motte: The payoff structure of the cosmic flip/St. Petersburg Paradox applied to the real world is actually much better than double-or-nothing, and therefore you should play the game.
  • Bailey: SBF was correct in saying you should play the double-or-nothing St. Petersburg Paradox game. 

Your motte is definitely defensible. Obviously, you can alter the payoff structure of the game to a point where you should play it. 

That does not mean "there's no real paradox"; it just means you are no longer talking about the paradox. SBF literally said he would take the game in the specific case where it was double-or-nothing. Totally different!

This ends my issue with your argument, but I'll also share my favorite anti-St. Petersburg Paradox argument since you didn't really touch on any of the issues it connects to. In short: the definition of expected value as the mean outcome is inappropriate in this scenario and we should instead use the median outcome. 

This paper makes the argument better than I can if you're curious, but here's my concise summary:

  • Mean values are perhaps appropriate if we play the game many (or infinitely many) times. In these situations, by the law of large numbers, the average outcome of the games played will approach the mean interpretation of expected value.
  • For a single play-through (as in the thought experiment) the mean is not appropriate, as the law of large numbers does not apply. Instead, we should value the game by its median outcome: the outcome one should reasonably expect.
  • Indeed, if you have people actually play this game, their betting behavior is more consistent with an intuition of median expected value (this is tested in the paper).
  • There's an argument that median EV is the better interpretation even when playing multiple times. In these situations you can think of the game as "playing the game multiple times, once." This resolves the paradox in all but the infinite cases.
  • If you use the median interpretation of EV for finite trials of the game, there is no paradox. 

A personal gripe: I find it more than a little stupid that the "expected value" is a value you don't actually "expect" to observe very frequently when sampling highly skewed distributions.

Mathematicians and economists have taken issue with the mean definition of EV basically for as long as it has existed. Regardless of whether or not you agree with that critique, it seems pretty obvious to me that it is inappropriate to use the mean to value single-trial outcomes.
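To make this concrete, here is a minimal simulation sketch (my own illustration assuming the standard double-or-nothing payoff structure, not taken from the paper): the sample mean of independent single plays keeps creeping upward as rare enormous payoffs get sampled, while the sample median stays at a small, stable value.

```python
import random
import statistics

def play_st_petersburg() -> int:
    """One play of the standard game: the pot starts at $2 and doubles on
    each consecutive heads; the game ends and pays out on the first tails."""
    pot = 2
    while random.random() < 0.5:  # heads with probability 1/2
        pot *= 2
    return pot

# Many independent single plays of the game.
payoffs = [play_st_petersburg() for _ in range(100_000)]

# The sample mean is dragged upward by rare huge payoffs (and keeps growing
# as the number of plays increases); the median stays small and stable.
print("mean payoff:  ", statistics.mean(payoffs))
print("median payoff:", statistics.median(payoffs))
```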

So maybe in the real world we should play the game, but I firmly believe we should value the game using medians and not means. Do we get to play the world outcome optimization game multiple/infinite times? Obviously not. 

zoop

I made an error of omission in just saying "sampling from noisy posteriors"; note that I didn't say they were performing unbiased sampling.

To extend the Psychology example: a study could be considered a sampling technique of the noisy posterior. You appear to be arguing that the extent to which this is a biased sample is a "skill issue." 

I'm arguing that it is often very difficult to perform unbiased sampling in some fields; the issue might be a property of the posterior and not that the researcher has a weak prefrontal cortex. In this framing it would totally make sense if two researchers studying the same/correlated posterior(s) are biased in the same direction; it's the same posterior!
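Here's a toy simulation sketch of what I mean (my own illustrative setup, not anything from the original post): if the bias lives in the sampling process itself, say a filter that only "publishes" statistically significant results, then two independent researchers sampling the same posterior end up biased in the same direction, with no individual skill issue required.

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1   # the real (small) effect in the posterior being studied
n_per_study = 20

def published_estimates(n_studies: int) -> np.ndarray:
    """Run many small studies; only 'significant' ones survive the filter."""
    kept = []
    for _ in range(n_studies):
        sample = rng.normal(true_effect, 1.0, n_per_study)
        est = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(n_per_study)
        if est / se > 1.96:  # crude significance filter = biased sampling
            kept.append(est)
    return np.array(kept)

# Two researchers independently studying the same posterior through the same
# filter both overestimate the effect, and in the same direction.
researcher_a = published_estimates(2_000)
researcher_b = published_estimates(2_000)
print(researcher_a.mean(), researcher_b.mean())  # both far above the true 0.1
```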

zoop

Eh, this feels wrong to me. Specifically, this argument feels over-complicated.

As best I can tell, the predominant mode of science in replication-crisis affected fields is that they do causal inference by sampling from noisy posteriors. 

The predominant mode of science in non-replication-crisis affected fields is that they don't do this or do this less. 

Most of the time it seems like science is conducted that way in those fields because it has to be. Can you come up with a better way of doing psychology research? "Science in hard fields is hard" is definitely a less sexy hypothesis, but it seems obviously true?

zoop

I really, really, really did not like this post. I found it to be riddled with bad assumptions, questionable unsupported claims, and critical omissions. I don't think any of the core arguments survive close scrutiny.

Moreover, I took serious issue with the tone throughout. The first half hand-waves some seriously questionable claims into existence with strong confidence, while the second half opines that everyone who ever thought otherwise is some combination of sycophantic, incurious, brainwashed, or an idiot. I would have appreciated more intellectual humility. 

***

My read is that this post totally whiffed on the entire subject of die casting cost savings.

 The chassis of cars is a relatively small fraction of their cost. The cost of aluminum die casting and stamped steel is, on Tesla's scale, similar. Yet, there were so many articles saying gigacasting was a major advantage of Tesla over other companies.


To be clear: the cost-savings argument for die casting has little to do with the cost of the chassis itself; it's mostly an argument about the cost of body assembly.

In an automotive assembly line, one of the most labor-intensive, challenging, and expensive steps is the "body shop," where a car's structural components are assembled into a "body in white." Die casting saves time and money by reducing the number of welds, bolts, etc. required to go from components to body. It also cuts down on total weight, on waste material from manufacturing a larger number of components, and on the number of steps at which tolerance errors can be introduced.

Here is an example from the Model 3. Switching from traditional assembly to die casting cuts out 169 separate metal parts and 1600 welds. Those costs add up! Look at the difference in estimated variable costs. 

In short, your claim that "The cost of aluminum die casting and stamped steel is, on Tesla's scale, similar" both seems to miss the entire point and runs against literally everything I have seen written about this. You need citations for this claim; I am not going to take your word for it.

***

The price thing alone seems like a post-invalidating miss, but I was pretty alarmed by the sheer number of other strong assertions made with weak or no supporting evidence. Some of these seemed obviously wrong.

Tesla has been widely criticized for stuff not fitting together properly on the car body. My understanding is that the biggest reason for that is their large aluminum castings being slightly warped.

Tesla's panel-gap issues predate the giga press by like a decade and have always been attributed to wide tolerances for all parts and lazy QA (de-prioritized in favor of R&D). I have absolutely no idea how you got to this "understanding." Citation, please?

As for voids, they can create weak points; I think they were the reason the cybertruck hitch broke off in this test.

Or the geometry of the frame was insufficiently optimized for vertical shear. I do not understand how you reached this conclusion.

BYD is still welding stamped steel sheets together, and that's why it can't compete on price with Tesla.

Hold on, it seems... BYD prices are actually lower than Tesla's? Much lower?

Price alone doesn't really say anything about the giga press. Perhaps BYD's efficiency could be explained by some of the other few thousand things that go into making a car? What about all the other stamped steel chassis companies BYD is way more efficient than?

Also, production costs are the thing that actually matters for this argument, not price. Tesla has 6x the profit per car of BYD, which obviously factors into its higher prices.

Oh, and Tesla is no longer planning single unitary castings for future vehicles?

This is a bit misleading. Tesla doesn't currently do unitary castings, so this is a suspension of future R&D, not a change to what they currently do. Importantly, this means they will keep giga casting their chassis for the foreseeable future.

Money is a factor, of course; PR agencies drive a lot of the articles in media. I assume Tesla pays some PR firms and people there presumably decided to push the Giga Press.

You should stop assuming! Tesla spent essentially nothing on marketing until 2023, well after this assumed PR would have been taking place. By nothing I mean that the estimate for their total 2022 marketing spend (literally all marketing, including PR if there was any at all) was $175k.

zoop

Actually, my read of the data is that the mountain west is not more environmentally conscious than the rest of the US. 

The mountain west poll does not include national numbers, so I have no idea where your national comparisons are coming from. If I did, I'd check that they use the same year and the same question, but because I don't know where they're from, I can't.

Take a look at this cool visualization of different state partisan splits from 2018: https://climatecommunication.yale.edu/visualizations-data/partisan-maps-2018/

The mountain west appears neither significantly more nor significantly less partisan on any of the climate change related questions than the rest of the US. 

My main point, which I don't think you've contradicted (even if I accept that the mountain west is unique), is that you're making an argument about "environmentalism" partisanship using primarily "climate change" polling data. The charts from the 2013 paper you've posted sort of confirm this take: climate change is obviously a uniquely partisan issue.

The intro to your sequence states the following:

The partisanship we see today is unusual, compared to other issues, other countries, or even the US in the 1980s.

Basically, I have not seen evidence that this is true for issues beyond climate change (or other countries!), and I think your sequence would benefit from explicitly comparing:

  • the partisan split of non-climate-change environmental issues (e.g. rain forest protection) to 
  • the partisan split of non-environmental issues (e.g. taxation)
zoop

My initial reaction, admittedly light on evidence, is that the numbers you present are at least partially due to selection bias. You've picked a set of issues, like climate change, that are not representative of the entire scope of "environmentalism." It shouldn't surprise anybody that "worry about global warming" is a blue issue, but the much more conservative-y "land use," "protection of fish and wildlife," and "conservation" issues are, for whatever reason, often not measured. In short, it feels a little to me like your actual argument is that liberal-coded environmental issues are partisan.

More than half of state wildlife conservation funding comes from hunting licenses and firearms taxes. I assure you, these fees mostly come from Republicans in Republican states. Here is some polling done in the west on environmental issues. It shouldn't be a surprise that Republican voters in Wyoming and rural Colorado care a lot about the environment, but one shouldn't expect them to think about the issues in the same way as latte-drinking knowledge workers in coastal cities.

It also might interest some to read how Nixon talked about the environment. This 1972 message to Congress on environmental policy has some interesting passages, including the following:

PROTECTING OUR NATURAL HERITAGE

Wild places and wild things constitute a treasure to be cherished and protected for all time. The pleasure and refreshment which they give man confirm their value to society. More importantly perhaps, the wonder, beauty, and elemental force in which the least of them share suggest a higher right to exist--not granted them by man and not his to take away. In environmental policy as anywhere else we cannot deal in absolutes. Yet we can at least give considerations like these more relative weight in the seventies, and become a more civilized people in a healthier land because of it.

I've paid attention to politics for a long time, but I've never heard a Democrat talk like this about the environment. Just this one paragraph contains three progressive blasphemies, nearly one per sentence:

  • The idea that the environment belongs in any way, shape, or form to a nation or a people (is our heritage)
  • The idea that the environment derives its value from the "pleasure and refreshment" they "give man"
  • A higher right to exist not granted by man?????! 
zoop

I hear what you're saying. I probably should have made the following distinction:

  1. A technology in the abstract (e.g. nuclear fission, LLMs)
  2. A technology deployed to do a thing (e.g. nuclear in a power plant, LLM used for customer service)

The question I understand you to be asking is essentially: how do we make safety cases for AI agents generally? I would argue that's more situation 1 than situation 2, and as I understand it, safety cases are basically only ever applied to case 2. The nuclear facilities document you linked is definitely case 2.

So yeah, admittedly the document you were looking for doesn't exist, but that doesn't really surprise me. If you start looking for narrowly scoped safety principles for AI systems, you start finding them everywhere. For example, a search for "artificial intelligence" on the ISO website returns 73 standards.

Just a few relevant standards, though I admit, standards are exceptionally boring (also many aren't public, which is dumb):

  • UL 4600, the standard for autonomous vehicles
  • ISO/IEC TR 5469, a standard for AI safety generally (this one is decently interesting)
  • ISO/IEC 42001, which covers what you do if you set up a system that uses AI

You also might find this paper a good read: https://ieeexplore.ieee.org/document/9269875 

zoop

I've published in this area so I have some meta comments about this work.

First the positive: 

1. Assurance cases are the state of the art for making sure things don't kill people in a regulated environment. Ever wonder why planes are so safe? Safety cases. Because the actual process of making one is so unsexy (GSNs make me want to cry), people tend to ignore them, so you deserve lots of credit for somehow getting x-risk people to upvote this. More LessWrong types should be thinking about safety cases.

2. I do think you have good / defensible arguments overall, minus minor quibbles that don't matter much.

Some bothers:

1. Since I used to be a little involved, I am perhaps a bit too aware of the absolutely insane amount of relevant literature that was not mentioned. To me, the introduction made it sound a little bit like the specifics of applying safety cases to AI systems have not been studied. That is very, very, very not true.

That's not to say you don't have a contribution! Just that I don't think it was placed well in the relevant literature. Many have done safety cases for AI, but they usually do it as part of concrete applied work on drones or autonomous vehicles, not x-risk pie-in-the-sky stuff. I think your arguments would be greatly improved by referencing back to this work.

I was extremely surprised to see so few of the (to me) obvious suspects referenced, particularly from York. Some labs with people who publish a lot in this area:

  • University of York Institute for Safe Autonomy
  • NASA Intelligent Systems Division
  • Waterloo Intelligent Systems Engineering Lab
  • Anything funded by the DARPA Assured Autonomy program

2. My second issue is a little more specific, related to this paragraph:

To mitigate these dangers, researchers have called on developers to provide evidence that their systems are safe (Koessler & Schuett, 2023; Schuett et al., 2023); however, the details of what this evidence should look like have not been spelled out. For example, Anderljung et al vaguely state that this evidence should be “informed by evaluations of dangerous capabilities and controllability” (Anderljung et al., 2023). Similarly, a recently proposed California bill asserts that developers should provide a “positive safety determination” that “excludes hazardous capabilities” (California State Legislature, 2024). These nebulous requirements raise questions: what are the core assumptions behind these evaluations? How might developers integrate other kinds of evidence?

The reason the "nebulous requirements" aren't explicitly stated is that when you make a safety case, you assure the safety of a system against the specific relevant hazards for the system you're assuring. These are usually identified by performing a HAZOP analysis or similar. Not all AI systems have the same list of hazards, so it's obviously dubious to expect you can list requirements a priori. This should have been stated, imo.
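To illustrate (a made-up sketch with hypothetical hazards and file names, not anything from the paper or a real HAZOP): the claims a safety case has to argue fall out of the hazard list for the specific system, which is exactly why generic "requirements" are hard to state up front.

```python
from dataclasses import dataclass, field

@dataclass
class Hazard:
    description: str
    mitigation: str
    evidence: list[str] = field(default_factory=list)

# Two different AI systems produce two different hazard lists...
chatbot_hazards = [
    Hazard("Model gives harmful medical advice",
           "Refusal policy plus red-team evaluations",
           evidence=["red_team_report.pdf"]),
]
warehouse_robot_hazards = [
    Hazard("Robot collides with a human worker",
           "Speed limits plus LIDAR safety interlock",
           evidence=["interlock_test_log.csv"]),
]

def safety_case_claims(hazards: list[Hazard]) -> list[str]:
    """...and therefore two different sets of claims to argue with evidence."""
    return [f"'{h.description}' is adequately mitigated by: {h.mitigation}"
            for h in hazards]

print(safety_case_claims(chatbot_hazards))
print(safety_case_claims(warehouse_robot_hazards))
```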

zoop

I don't think it works if there isn't a correct answer, e.g. predicting the future, but I'm positive this is a good way to improve how convincing your claims are to others.

If there isn't ground truth about a claim to refer to, any disagreement around it is going to be about how convincing and internally/externally consistent the claim is. As we keep learning from prediction markets, good rationales don't always lead to correctness; there are many cases of good heuristics (priors) doing extremely well.

If you want to be correct, good reasoning is often a nice-to-have, not a need-to-have. 

zoop

I very strongly disagree. In my opinion, this argument appears fatally confused about the concept of "software." 

As others have pointed out, this post seems to be getting at a distinction between code and data, but many of the examples of software given by OP contain both code and data, as most software does. Perhaps the title should have been "AI is Not Code," but since it wasn't I think mine is a legitimate rebuttal. 

I'm not trying to make an argument by definition. My comment is about properties of software that I think we would likely agree on. I think OP both ignores some properties that software can have and assumes all software shares certain other properties, to the detriment of the argument.

I think the post is correct in pointing out that traditional software is not similar to AI in many ways, but that's where my agreement ends.

 

1: Software, I/O, and such

Most agree on the following basic definition: software is a set of both instructions and data, hosted on hardware, that governs how input data is transformed to some sort of output. As you point out, inputs and outputs are not software.

For example, photos of a wedding or a vacation aren’t software, even if they are created, edited, and stored using software.

Yes.

Second, when we run the model, it takes the input we give it and performs “inference” with the model. This is certainly run on the computer, but the program isn’t executing code that produces the output, it’s using the complicated probability model which grew, and was stored as a bunch of numbers. 

No! It is quite literally executing code to produce the output! Just because this specific code and the data it interacts with specify a complicated probability model does not mean it is not software.

Every component of the model is software. Even the pseudorandomness of the model outputs is software (torch.randn(), often). No part of the inference process that generates outputs is anything other than software. To run inference is only to run software.
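To put it in code terms (a minimal sketch of my own, not any real model's implementation): the "bunch of numbers" is just data, and inference is ordinary instructions executed over that data, pseudorandom sampling included.

```python
import torch

torch.manual_seed(0)                       # fix the seed and the "stochastic" output is reproducible
vocab_size, hidden = 100, 16
weights = torch.randn(hidden, vocab_size)  # the learned "bunch of numbers", stored as data

def sample_next_token(hidden_state: torch.Tensor, temperature: float = 1.0) -> int:
    logits = hidden_state @ weights                      # plain matrix multiply: code acting on data
    probs = torch.softmax(logits / temperature, dim=-1)  # more code
    return int(torch.multinomial(probs, num_samples=1))  # pseudorandom draw: still code

print(sample_next_token(torch.randn(hidden)))
```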

 

2: Stochasticity

The model responds to input by using the probability model to estimate the probability of difference responses, in order to output something akin to what the input data did - but it does so in often unexpected or unanticipated ways.

Software is often, but not necessarily, deterministic. Software can have stochastic or pseudorandom outputs. For example, software that generates pseudorandom numbers is still software. The fact that AI generates stochastic outputs that humans don't expect does not make it not software.
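As a trivial sketch of that point (standard textbook constants, nothing specific to the post): a pseudorandom number generator produces output that looks unpredictable, yet every value comes from plain deterministic instructions.

```python
def lcg(seed: int, n: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32) -> list[float]:
    """A linear congruential generator: 'random-looking' output from ordinary code."""
    values, state = [], seed
    for _ in range(n):
        state = (a * state + c) % m
        values.append(state / m)  # scale into [0, 1)
    return values

print(lcg(seed=42, n=5))
```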

Also, software is not necessarily interpretable and outputs are not necessarily expected or expectable.

 

3: Made on Earth by Humans

First, we can talk about how it is created. Developers choose a model structure and data, and then a mathematical algorithm uses that structure and the training data to “grow” a very complicated probability model of different responses... The AI model itself, the probability model which was grown, is generating output based on a huge set of numbers that no human has directly chosen, or even seen. It’s not instructions written by a human.

Neither a piece of software's code nor its data is necessarily generated by humans.

 

4: I have bad news for you about software engineering

Does software work? Not always, but if not, it fails in ways that are entirely determined by the human’s instructions.

This is just not true: many bugs are caused by specific interactions between inputs and the code + data, and some are caused by interactions among inputs, code, data, and hardware (buffer overflows being the canonical example). You could get an error due to a cosmic-ray bit flip, which has nothing to do with humans or instructions at all! Data corruption... I could go on and on.

For example, unit tests are written to verify that the software does what it is expected to do in different cases. The set of cases are specified in advance, based on what the programmer expected the software to do. 

... or the test is incorrect. Or both the test and the software are incorrect. Of course this assumes you wrote tests, which you probably didn't. Also, who said you can't write unit tests for AI? You can, and people do. All you have to do is fix the temperature parameter and random seed. One could argue benchmarks are just stochastic tests...
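A minimal sketch of what such a test could look like (my own toy example with a fixed seed, not any particular model's test suite): pin the randomness and the stochastic call becomes as unit-testable as anything else.

```python
import unittest
import torch

class FixedSeedSamplingTest(unittest.TestCase):
    def test_sampling_is_reproducible_with_fixed_seed(self):
        probs = torch.tensor([0.1, 0.2, 0.7])
        torch.manual_seed(1234)
        first = torch.multinomial(probs, num_samples=5, replacement=True)
        torch.manual_seed(1234)
        second = torch.multinomial(probs, num_samples=5, replacement=True)
        # With the seed pinned, the "stochastic" output is exactly reproducible.
        self.assertTrue(torch.equal(first, second))

if __name__ == "__main__":
    unittest.main()
```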

If it fails a single unit test, the software is incorrect, and should be fixed.

Oh dear. I wish the world worked like this. 

Badly written, buggy software is still software. Not all software works, and it isn't always software's fault. Not all software is fixable or easy to fix.

 

5: Implications

What we call AI in 2024 is not software. It's kind of natural to put it in the same category as other things that run on a computer, but thinking about LLMs, or image generation, or deepfakes as software is misleading, and confuses most of the ethical, political, and technological discussions.

In my experience, thinking of AI as software leads to higher-quality conversations about the issues. Everyone understands at some level that software can break, be misused, or be otherwise suboptimal for any number of reasons.

I have found that when people begin to think AI is not software, they often devolve into dorm room philosophy debates instead of dealing with its many concrete, logical, potentially fixable issues. 
