All of DanArmak's Comments + Replies

It's impossible to prove that an arbitrary program, which someone else gave you, is correct. That's equivalent to the halting problem (cf. Rice's theorem).

Yes, we can prove various properties of programs we carefully write to be provable, but the context here is that a black-box executable Crowdstrike submits to Microsoft cannot be proven reliable by Microsoft.
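To illustrate the standard argument, a minimal Python sketch of the Rice's-theorem-style reduction (the decider `decides_always_zero` is hypothetical; the point is exactly that it cannot exist):

```python
def decides_always_zero(f):
    """Hypothetical total decider for the property "f computes the constant 0".
    Rice's theorem says no such decider exists for any nontrivial property."""
    raise NotImplementedError

def halts(program, x):
    # If decides_always_zero existed, this would decide the halting problem:
    def stub(y):
        program(x)  # runs forever iff `program` fails to halt on input x
        return 0    # otherwise stub behaves as the constant-0 function
    # stub computes the constant 0 if and only if `program` halts on x
    return decides_always_zero(stub)
```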

There are definitely improvements we can make. Counting just the ones made in some other (bits of) operating systems, we could:

  • Rewrite in a memory-safe language like Rust
  • Move more stuff to userspace
... (read more)
3mishka
That is, of course, true. The chances that an arbitrary program is correct are very close to zero, and one can't prove a false statement. So one should not even try. An arbitrary program someone gave you is almost certainly incorrect.

The standard methodology for formally correct software is joint development of a program and of a proof of its correctness. One starts from a specification, and refines it into a proof and a program in parallel. One can't write a program and then try to prove its correctness as an afterthought. The goal of having formally verified software needs to be present from the start, and then there are methods to accomplish the task of creating this kind of software jointly with a proof of its correctness (but these methods are currently very labor-expensive).

(And yes, perhaps the Windows environment is too messy to deal with formally. Although one would think that fire control for fleet missile defense would be fairly messy as well, yet people claimed that they created verified Ada code for that back in the 1990s (or, perhaps, late 1980s, I am not sure). The numbers they quoted during a mid-1990s talk were 500 thousand lines of Ada and 50 million dollars (it would be way more expensive today).)
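A toy illustration of this joint development in Lean 4 (my example, far below the scale of real verified systems, but the same discipline: the definition and the theorem about it are checked together):

```lean
-- A tiny program and its specification, developed and checked together.
def myMax (a b : Nat) : Nat :=
  if a ≤ b then b else a

theorem myMax_ge_left (a b : Nat) : a ≤ myMax a b := by
  unfold myMax
  split
  · assumption           -- case a ≤ b: the goal is exactly the hypothesis
  · exact Nat.le_refl a  -- case ¬(a ≤ b): myMax a b reduces to a
```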

Addendum 2: this particular quoted comment is very wrong, and I expect this is indicative of the quality of the quoted discussion, i.e. these people do not know what they are talking about.

Luke Parrish: Microsoft designed their OS to run driver files without even a checksum and you say they aren’t responsible? They literally tried to execute a string of zeroes!

Luke Parrish: CrowdStrike is absolutely to blame, but so is Microsoft. Microsoft’s software, Windows, is failing to do extremely basic basic checks on driver files before trying to load them and

... (read more)
7mishka
That's not true. This industry has a lot of experience formally proving the correctness of mission-critical software, even in the previous century (the Aegis missile cruiser fire control system is one example; software for Ariane rockets is another, although the Ariane V maiden flight disaster illustrates that even a formal proof does not fully guarantee safety; nevertheless, having a formal proof tends to dramatically increase the level of safety).

But this kind of formal proof is very expensive; it increases the cost of software by at least an order of magnitude, and probably more than that. That's why Zvi's suggestion does not seem realistic. One does need heavy AI assist to bring the cost of making formally verified software to levels which are affordable.

Addendum: Crowdstrike also has MacOS and Linux products, and those are a useful comparison in the matter of whether we should be blaming Microsoft.

On MacOS they don't have a kernel module (called a kext on MacOS), for two reasons: first, kexts are now disabled by default (I think you have to go to recovery mode to turn them on?) and second, the kernel provides APIs to accomplish most things without having to write a kext. So Crowdstrike doesn't need to (hypothetically) guard against malicious kexts because those are not a threat nearly as much as malicious... (read more)

Did Microsoft massively screw up by not guarding against this particular failure mode? Oh, absolutely, everyone agrees on that.

I'm sorry, this is wrong, and that everyone thinks so is also wrong - some people got this right.

Normal Windows kernel drivers are sandboxed to some extent. If a driver segfaults, it will be disabled on the next boot and the user informed; if that fails for some reason, you can tell the computer to boot into 'safe mode', and if that fails, there is recovery mode. None of these options require the manual, tedious, error-prone... (read more)

1mishka
Thanks! I wonder if there are players in that space who are on that quality level. Or would the only reliable solution be for Microsoft itself to provide this functionality as one of their own optional services? (Microsoft should hopefully be able to maintain the quality level on par with their own core Windows kernel.)
6DanArmak
Addendum: Crowdstrike also has MacOS and Linux products, and those are a useful comparison in the matter of whether we should be blaming Microsoft.

On MacOS they don't have a kernel module (called a kext on MacOS), for two reasons: first, kexts are now disabled by default (I think you have to go to recovery mode to turn them on?) and second, the kernel provides APIs to accomplish most things without having to write a kext. So Crowdstrike doesn't need to (hypothetically) guard against malicious kexts because those are not a threat nearly as much as malicious or plain buggy kernel drivers are on Windows. One reason why this works well is that MacOS only supports a small first-party set of hardware, so they don't need to allow a bunch of third-party vendor drivers like Windows does.

Microsoft can't forbid third-party kernel drivers; there are probably tens of thousands of legitimate ones that can't be replaced easily or at all, even if someone were available to port old code to hypothetical new userland APIs. (Although Microsoft could provide much better userland APIs for new code; e.g. WinUSB seems to be very limited.) (Note: I am not a Mac user and this part is not based on personal expertise.)

On Linux, Crowdstrike uses eBPF, which is a (relatively novel) domain-specific language for writing code that will execute inside the Linux kernel at runtime. eBPF is sandboxed in the kernel, and while it can (I think) crash it, it cannot e.g. access arbitrary memory. And so you can't use eBPF to guard against malicious Linux kernel modules.

This is indeed a superior approach, but it's hard to blame Microsoft for not having an innovation in place that nobody had ten years ago and that hasn't exactly replaced most preexisting drivers even on Linux, and removing support for custom drivers entirely on Windows would probably stop it from working on most of the hardware out there. Then again, most Linux systems aren't running a hardened configuration, and getting userspace roo
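For readers unfamiliar with eBPF, a minimal sketch using the bcc frontend (assuming a Linux machine with bcc installed; the kprobe__ name prefix is bcc's auto-attach convention):

```python
from bcc import BPF

# The C snippet below runs inside the kernel, but only after the kernel's
# verifier has checked it; that verification is the sandboxing in question.
prog = r"""
int kprobe__sys_clone(void *ctx) {
    bpf_trace_printk("clone called\n");
    return 0;
}
"""

BPF(text=prog).trace_print()  # stream kernel trace output until interrupted
```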

About the impossibility result, if I understand correctly, that paper says two things (I'm simplifying and eliding a great deal):

  1. You can take a recognizable, possibly watermarked output of one LLM, use a different LLM to paraphrase it, and not be able to detect the second LLM's output as coming from (transforming) the first LLM.

  2. In the limit, any classifier that tries to detect LLM output can be beaten by an LLM that is sufficiently good at generating human-like output. There's evidence that LLMs can soon become that good. And since emulating human

... (read more)
2Thomas Kwa
A more recent paper shows that an equally strong model is not needed to break watermarks through paraphrasing. It suffices to have a quality oracle and a model that achieves equal quality with positive probability.

Charles P. Steinmetz saw a two-hour working day on the horizon—he was the scientist who made giant power possible

What is giant power? I can't figure this out.

6jasoncrawford
It basically means large-scale, widely distributed electrical power generation. More narrowly, it can refer to specific proposals from around the 1920s by the progressives of that era for the buildout of electric power infrastructure: see e.g. “Giant Power: A Progressive Proposal of the Nineteen-Twenties” 

So we can imagine AI occupying the most "cushy" subset of former human territory

We can definitely imagine it - this is a salience argument - but why is it at all likely? Also, this argument is subject to reference class tennis: humans have colonized much more and more diverse territory than other apes, or even all other primates.

Once AI can flourish without ongoing human support (building and running machines, generating electricity, reacting to novel environmental challenges), what would plausibly limit AI to human territory, let alone "cushy" human te... (read more)

Nuclear power has the highest chance of The People suddenly demanding it be turned off twenty years later for no good reason. Baseload shouldn't be hostage to popular whim.

Thanks for pointing this out!

A few corollaries and alternative conclusions to the same premises:

  1. There are two distinct interesting things here: a magic cross-domain property that can be learned, and an inner architecture that can learn it.
  2. There may be several small efficient architectures. The ones in human brains may not be like the ones in language models. We have plausibly found one efficient architecture; this is not much evidence about unrelated implementations.
  3. Since the learning is transferable to other domains, it's not language specific. Large
... (read more)

Thanks! This, together with gjm's comment, is very informative.

How is the base or fundamental frequency chosen? What is special about the standard ones?

4jefftk
There isn't really anything special: you could take almost any piece of music and shift it up or down a few percent without affecting how people experience it very much. On the other hand, if you have multiple instruments together, it matters a lot that they agree on what frequencies to use. We've generally standardized on setting A=440Hz, and everything else relative to that.

Aside: this was a real missed opportunity, because it puts C very close to 256Hz but not quite there. We could have had 2^N Hz be C for all N!

Some instruments have notes or keys that they are best at. For example, a singer will have some minimum and maximum note, and perhaps some areas in between that sound better or worse, which means that for any given piece of music there is an (often narrow) range of keys where it will fit best. Other instruments, like the flute or trumpet, have some keys that fall very naturally on the instrument (D and Bb respectively), while some other keys require awkward fingerings. Some instruments (bagpipes, anglo concertina, tin whistle) can only be played in one or a few keys, because they are missing notes that would be needed for other keys.
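For concreteness, the equal-temperament arithmetic behind these numbers (a small sketch; the helper name is mine):

```python
# Equal temperament: each semitone multiplies frequency by 2^(1/12),
# anchored at A4 = 440 Hz.
def freq(semitones_from_a4, a4=440.0):
    return a4 * 2 ** (semitones_from_a4 / 12)

print(round(freq(-9), 2))  # middle C, 9 semitones below A4: 261.63 Hz (not 256)
print(round(freq(3), 2))   # C5, 3 semitones above A4: 523.25 Hz (not 512 = 2^9)
```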

the sinking of the Muscovy

Is this some complicated socio-political ploy denying the name Moskva / Moscow and going back to the medieval state of Muscovy?

1Cmrde
Calm down, they just mistranslated the name of the ship that went... okay, sorry, they mistranslated "Moskva"

I'm a moral anti-realist; it seems to me to be a direct inescapable consequence of materialism.

I tried looking at definitions of moral relativism, and it seems more confused than moral realism vs. anti-realism. (To be sure there are even more confused stances out there, like error theory...)

Should I take it that Peterson and Harris are both moral realists and interpret their words in that light? Note that this wouldn't be reasoning about what they're saying, for me, it would be literally interpreting their words, because people are rarely precise, and mora... (read more)

4lsusr
Yes. I believe neither Peterson nor Harris is a moral anti-realist. Yes. I think you understand the debate correctly.

When Peterson argues religion is a useful cultural memeplex, he is presumably arguing for all of (Western monotheistic) religion. This includes a great variety of beliefs, rituals, practices over space and time - I don't think any of these have really stayed constant across the major branches of Judaism, Christianity and Islam over the last two thousand years. If we discard all these incidental, mutable characteristics, what is left as "religion"?

One possible answer (I have no idea if Peterson would agree): the structure of having shared community beliefs ... (read more)

They are both pro free speech and pro good where "good" is what a reasonable person would think of as "good".

I have trouble parsing that definition. You're defining "good" by pointing at "reasonable". But people who disagree on what is good will not think each other reasonable.

I have no idea what actual object-level concept of "good" you meant. Can you please clarify?

For example, you go on to say:

They both agree that religion has value.

I'm not sure whether religion has (significant, positive) value. Does that make me unreasonable?

2lsusr
I mean Peterson and Harris both support "good" as opposed to "moral relativism", where there is no good and there is no evil. Moral relativism is a philosophy without objective goodness, e.g. Nietzsche: there is only will to power.

There are many competing definitions of "good". Peterson and Harris agree that the concept of "good" shouldn't be thrown away entirely. Which definition of "good" we use is not important to the Peterson-Harris debate.

In the context of this debate, no. I'm using "reasonable" the way legal scholars use "reasonable": to avoid having to define nebulous-yet-commonly-used words.

Amazon using an (unknown secret) algorithm to hire or fire Flex drivers is not an instance of "AI", not even in the buzzword sense of AI = ML. For all we know it's doing something trivially simple, like combining a few measured properties (how often they're on time, etc.) with a few manually assigned weights and thresholds. Even if it's using ML, it's going to be something much more like a bog-standard Random Forest model trained on 100k rows with no tuning, than a scary powerful language model with a runaway growth trend.

Even if some laws are passed about ... (read more)
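To make "trivially simple" concrete, a sketch of what such a scoring rule could look like (every feature, weight, and threshold here is invented for illustration):

```python
# A mundane weighted-threshold rule: no learning, no model, just arithmetic.
def driver_score(on_time_rate, customer_rating, cancel_rate):
    return 0.5 * on_time_rate + 0.4 * (customer_rating / 5) - 0.3 * cancel_rate

def should_deactivate(score, threshold=0.55):
    return score < threshold

print(should_deactivate(driver_score(0.90, 4.7, 0.05)))  # False: keep driver
print(should_deactivate(driver_score(0.60, 3.2, 0.30)))  # True: flag driver
```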

Epistemic status: wild guessing:

  1. If the US has submarine locators (or even a theory or a work-in-progress), it has to keep them secret. The DoD or Navy might not want to reveal them to any Representatives. This would prevent them from explaining to those Representatives why submarine budgets should be lowered in favor of something else.

  2. A submarine locator doesn't stop submarines by itself; you still presumably need to bring ships and/or planes to where the submarines are. If you do this ahead of time and just keep following the enemy subs around, they

... (read more)

Now write the scene where Draco attempts to convince his father to accept Quirrel points in repayment of the debt.

"You see, Father, Professor Quirrel has promised to grant any school-related wish within his power to whoever has the most Quirrel points. If Harry gives his points to me, I will have the most points by far. Then I can get Quirrel to teach students that blood purism is correct, or that it would be rational to follow the Dark Lord if he returns, or to make me the undisputed leader of House Slytherin. That is worth far more than six thousand gall... (read more)

I don't see an advantage

A potential advantage of an inactivated-virus vaccine is that it can raise antibodies for all viral proteins and not just a subunit of the spike protein, which would make it harder for future strains to evade the immunity. I think this is also the model implicitly behind this claim that natural immunity (from being infected with the real virus) is stronger than the immunity gained from subunit (e.g. mRNA) vaccines. (I make no claim that that study is reliable, and just on priors it probably should be ignored.)

direct sources are more and more available to the public... But simultaneously get less and less trustworthy.

The former helps cause the latter. Sources that aren't available to the public, or are not widely read by the public for whatever reason, don't face the pressure to propagandize, either to influence the public or to be seen as ideologically correct by it.

Of course, influencing the public is only one of several drives to distort or ignore the truth, and less public fora are not automatically trustworthy.

Suppose that TV experience does influence dreams - or the memories or self-reporting of dreams. Why would it affect specifically and only color?

Should we expect people who watch old TV to dream in low resolution and non-surround sound? Do people have poor reception and visual static in their black-and-white dreams? Would people who grew up with mostly over-the-border transmissions dream in foreign languages, or have their dreams subtitled or overdubbed? Would people who grew up with VCRs have pause and rewind controls in their dreams?

Some of these effects... (read more)

7davidad
My pet theory would answer as follows.

  1. Color vision is an aspect of sensory experience that we can do without for most upstream perception, largely because our eyes actually have a grayscale mode for low-light conditions.
  2. People are surprisingly insensitive to sound quality or even mono vs stereo under ordinary conditions. This theory predicts that the population in question would in fact dream in mono sound, but might have a hard time noticing or reporting that fact.
  3. Visual static or noise tends to be filtered out by perception, except when it overwhelms the signal. This theory would predict that people who live in areas with decent reception would not have static or noise in their dreams, while people who live in areas with unacceptable reception also wouldn't (because their brains wouldn't even be able to entrain to the story), but in a critical band where TV signal strength is highly variable dependent on atmospheric conditions, people might sometimes experience bursts of visual static in their dreams.
  4. Visual artifacts of film/video/TV that are plausible to the brain as actual optical phenomena, such as softly glowing halos around bright edges, might sometimes be incorporated into the experience of dreaming.
  5. People whose experiences of storytelling predominantly involve a foreign language would tend to dream in that language.
  6. When consuming media with subtitles, the visual stimulus of the subtitles is removed (and potentially even inpainted, as with the eyes' blind spots), with the subtitle information rerouted to a language-processing area, which uses it to fill in the missing meanings from the audio stream. So, in reversed-perception dreaming, people whose experience with storytelling was heavily subtitled would experience dreams where the visuals lack subtitles, speech sounds like the foreign language, and yet speech also somehow has a meaning that feels more like the subtitle language.
  7. Rewinding and pausing are "out of band" ele

Epistemic status: anecdote.

Most of the dreams I've ever had (and remembered in the morning) were not about any kind of received story (media, told to me, etc). They were all modified versions of my own experiences, like school, army, or work, sometimes fantastically distorted, but recognizably about my experiences. A minority of dreams were about stories (e.g. a book I read), usually from a first-person point of view (e.g. a self-insert into the book).

So for me, dreams are stories about myself. And I wonder: if these people had their dreams influenced by ... (read more)

4Measure
I've definitely dreamed about being in a Minecraft world doing Minecraft things (actually in the world myself - not sitting at a computer), and likewise for other video games I've played extensively.

He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.

Being concerned implies that 1) something has experiences, 2) they can be negative / disliked in a meaningful way, and 3) we morally care about that.

I'd like to ask about the first condition: what is the set of things that might have experience, things whose experiences we might try to unders... (read more)

This is a question similar to "am I a butterfly dreaming that I am a man?". Both statements are incompatible with any other empirical or logical belief, or with making any predictions about future experiences. Therefore, the questions and belief-propositions are in some sense meaningless. (I'm curious whether this is a theorem in some formalized belief structure.)

For example, there's an argument about Boltzmann brains (B-brains) that goes: simple fluctuations are vastly more likely than complex ones; therefore almost all B-brains that fluctuate into existence will exist for ... (read more)
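(For context, the usual statistical-mechanics estimate behind "simple fluctuations are vastly more likely": the probability of a thermal fluctuation falls off exponentially in the entropy decrease ΔS it requires,

$$P(\text{fluctuation}) \sim e^{-\Delta S / k_B},$$

so a lone brain, which needs a far smaller ΔS than a whole ordered world, is overwhelmingly the more common kind of fluctuation.)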

1Quintin Pope
Also typo: “reportis” (“is” belongs to a hyperlink, so the typo may look like “report[is” depending on your editor).

Let's take the US government as a metaphor. Instead of saying it's composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary

Both are useful models of different levels of the US government. Is the claim here that there is no useful model of the brain as a few big powerful modules that aggregate sub-modules? Or is it merely that others posit merely a few large modules, whereas Kurzban thinks we must model both small and large agents at once?

We don't ask "what

... (read more)
2PeterMcCluskey
Kurzban doesn't directly address the question of whether it's ever useful to model the mind as made of a few big parts. I presume he would admit they can sometimes be reasonable models to use. He's mostly focused on showing that those big parts don't act like very unified agents. That seems consistent with sometimes using simpler, less accurate models. He certainly didn't convince me to stop using the concepts of system 1 and system 2. I took his arguments as a reminder that those concepts were half-assed approximations.

He's saying that it's extremely hard to answer those questions about edge detectors. We have little agreement on whether we should be concerned about the experiences of bats or insects, and it's similarly unobvious whether we should worry about the suffering of edge detectors.

Finding the percentage of "immigrants" is misleading, since it's immigrants from Mexico and Central America who are politically controversial, not generic "immigrants" averaged over all sources.

I'm no expert on American immigration issues, but I presume this is because most immigrants come in through the (huge) southern land border, and are much harder for the government to control than those coming in by air or sea.

However, I expect immigrants from any other country outside the Americas would be just as politically controversial if large numbers of them s... (read more)

2Jiro
The question is whether immigrants have different political positions than natives. Latinos (and especially non-Cuban Latinos) absolutely have different political positions than average natives, and immigration consisting largely of them would in fact have the effect that Caplan denies. I expect that if lots of them (or their descendants) voted for Republicans, they wouldn't be politically controversial, because the Democrats and the left are spearheading the push for more immigration, and they would abruptly stop doing so. (This would not be compensated by Republicans pushing for them, because Republicans have no power to make such a push.)

immigrants are barely different from natives in their political views, and they adopt a lot of the cultural values of their destination country.

The US is famous for being culturally and politically polarized. What does it even mean for immigrants to be "barely different from natives" politically? Do they have the same (polarized) spread of positions? Do they all fit into one of the existing political camps without creating a new one? Do they all fit into the in-group camp for Caplan's target audience?

And again:

[Caplan] finds that immigrants are a tin

... (read more)
4Jiro
Finding the percentage of "immigrants" is misleading, since it's immigrants from Mexico and Central America who are politically controversial, not generic "immigrants" averaged over all sources. Statistics show that Latinos vote around 2/3 for the Democrats. That's a pretty big imbalance. And it's even more imbalanced than those statistics show because Cubans are likely to vote Republican, and the immigrants who are the center of current political controversy don't include Cubans.

bad configurations can be selected against inside the germinal cells themselves or when the new organism is just a clump of a few thousand cells

Many genes and downstream effects are only expressed (and can be selected on) after birthing/hatching, or only in adult organisms. This can include whole organs, e.g. mammal fetuses don't use their lungs in the womb. A fetus could be deaf, blind, weak, slow, stupid - none of this would stop it from being carried to term. An individual could be terrible at hunting, socializing, mating, raising grandchildren - non... (read more)

When you get an allele from sex, there are two sources of variance. One is genes your (adult) partner has that are different from yours. The other is additional de novo mutations in your partner's gametes.

The former has already undergone strong selection, because it was part of one (and usually many) generations' worth of successfully reproducing organisms. This is much better than getting variance from random mutations, which are more often bad than good, and can be outright fatal.

Selecting through many generations of gametes, like (human) sperm do, isn't... (read more)

3George3d6
  Neither of which is guaranteed to yield viable offspring; the latter won't carry all or maybe any of the benefits when mixed with your genes. Indeed, chances are most won't. On the other hand, just getting random mutations on a constant set of genes seems like it has a much higher chance of still yielding a viable combination.

How many generations of reproduction do you get in absentia of recombination with other lineages before they are no longer compatible sexually? The answer varies based on the mutation rate, but it boils down to "surprisingly few". A lineage getting a new mutation often results in speciation. Also, see, most mammals mate with individuals on average 1 to 10 generations removed from them, with few exceptions in rather "primitive" animals.

Why? The vast majority of potential children in most mammals die because of structural issues that are "detected" very early on (i.e. at the stage when they are still a clump of cells). One reason why old animals become infertile even if germinal cells are still present. Could this mechanism not do any better?

More importantly, I think you're missing the point when I say "in some species the vast majority of resources go towards mate selection". Get rid of that and allow every individual to reproduce and you'd get the ability to have many more offspring to "test" stuff in.

Some bacteria and archaea do little to no LGT, and many viruses don't either. They have been here for potentially billions of years and will likely outlive us. The same can be said for many plants that reproduce mainly via cloning. If you want to take a homo-centric POV and assume we are the end-all-be-all of biological life, fine, but even then you ought to keep in mind that sex might not be a requirement for that. Other strategies exist, and they don't involve sexual reproduction; the fact they did not evolve us may be chance.

At any rate, I think your view might be mostly right, but it's not

I propose using computational resources as the "reference" good.

I don't understand the implications of this, can you please explain / refer me somewhere? How is the GDP measurement resulting from this choice going to be different from another choice like control of matter/energy? Why do we even need to make a choice, beyond the necessary assumption that there will still be a monetary economy (and therefore a measurable GDP)?

In the hypothetical future society you propose, most value comes from non-material goods.

That seems very likely, but it's not a... (read more)

7Vanessa Kosoy
The nominal GDP is given in units of currency, but the value of currency can change over time. Today's dollars are not the same as the dollars of 1900. When I wrote the previous comment, I thought that's handled using a consumer price index, in which case the answer can depend on which goods you include in the basket. However, real GDP is actually defined using something called the GDP deflator, which is apparently based on a variable "basket" consisting of those goods that are actually traded, in proportion to the total market value traded in each one.

AFAIU, this means GDP growth can theoretically be completely divorced from actual value. For example, imagine there are two goods, A and B, s.t. during some periods A is fashionable and its price is double the price of B, whereas during other periods B is fashionable and its price is double the price of A. Assume also that every time a good becomes fashionable, the entire market switches to producing almost solely this good. Then, every time the fashion changes the GDP doubles. It thus continues to grow exponentially while the real changes are just circling periodically over the same place. (Let someone who understands economics correct me if I misunderstood something.)

Given the above, we certainly cannot rule out indefinite exponential GDP growth. However, I think that the OP's argument that we live in a very unusual situation can be salvaged by using a different metric. For example, we can measure the entropy per unit of time produced by the sum total of human activity. I suspect that for the history so far, it tracks GDP growth relatively well (i.e. very slow growth for most of history, relatively rapid exponential growth in modern times). Since the observable universe has finite entropy (due to the holographic principle), there is a bound on how long this phenomenon can last.
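As a toy check of the fashion example above (my own sketch; the quantities are invented, and chained real GDP in practice approximates a Fisher index):

```python
# Two-good economy from the comment: each period, the fashionable good costs
# twice as much as the other and captures nearly all production.
periods = [
    {"qA": 100, "qB": 1, "pA": 2, "pB": 1},  # period 0: A is fashionable
    {"qA": 1, "qB": 100, "pA": 1, "pB": 2},  # period 1: B is fashionable
]

def value(q, p):  # quantities q valued at prices p
    return q["qA"] * p["pA"] + q["qB"] * p["pB"]

t0, t1 = periods
laspeyres = value(t1, t0) / value(t0, t0)  # new quantities at old prices
paasche = value(t1, t1) / value(t0, t1)    # new prices throughout
fisher = (laspeyres * paasche) ** 0.5      # geometric mean of the two

print(laspeyres, paasche, fisher)  # ~0.507, ~1.971, 1.0
```

Whether the fashion flip registers as growth depends on the index formula: in this toy case the Paasche measure nearly doubles, the Laspeyres measure nearly halves, and the Fisher chain shows no growth at all.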

I think that most people would prefer facing a 10^-6 probability of death to paying 1000 USD.

The sum of 1000 USD comes from the average wealth of people today. Using (any) constant here encodes the assumption that personal wealth (and with it GDP per capita) won't keep growing.

If we instead suppose a purely relative limit, e.g. that a person is willing to pay a 10^-6 part of their personal wealth to avoid a 10^-6 chance of death, then we don't get a bound on total wealth.
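Spelling out the contrast (using the numbers above): the absolute assumption fixes the value of a statistical life, while the relative one scales it with wealth W, so no constant bound follows:

$$\text{VSL}_{\text{absolute}} = \frac{\$1000}{10^{-6}} = \$10^{9}, \qquad \text{VSL}_{\text{relative}} = \frac{10^{-6}\,W}{10^{-6}} = W$$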

2Vanessa Kosoy
Let $U(W)$ denote the utility of a person with wealth $W$, $U_{\max}$ the maximal utility of a person (i.e. $\lim_{W \to \infty} U(W)$), and $\bar{W}$ the median wealth of a modern person. My argument establishes that

$$U_{\max} \le U(\bar{W}) + \left.\frac{dU}{dW}\right|_{W=\bar{W}} \cdot \$10^{25}$$

But, can we translate this to a bound on GDP? I'm not sure. Part of the problem is, how do we even compare GDPs in different time periods? To do this, we need to normalize the value of money. Standard ways of doing this in economics involve using "universally valuable" goods such as food. But, food would be worthless in a future society of brain emulations, for example.

I propose using computational resources as the "reference" good. In the hypothetical future society you propose, most value comes from non-material goods. However, these non-material goods are produced by some computational process. Therefore, buying computational resources should always be marginally profitable. On the other hand, the total amount of computational resources is bounded by physics. This seems like it should imply a bound on GDP.

you imagine that the rate at which new "things" are produced hits diminishing returns

The rate at which new atoms (or matter/energy/space more broadly) are added will hit diminishing returns, at the very least due to speed of light.

The rate at which new things are produced won't necessarily hit diminishing returns because we can keep cannibalizing old things to make better new things. Often, re-configurations of existing atoms produce value without consuming new resources except for the (much smaller) amount of resources used to rearrange them. If I inve... (read more)

2Vanessa Kosoy
I think it's more than an argument from incredulity. Let's try another angle.

I think that most people would prefer facing a 10^-6 probability of death to paying 1000 USD. I also think there's nothing so good that a typical person would accept a 1 - 10^-6 probability of everyone dying to get it with the remaining probability of 10^-6. Moreover, a typical person is "subutilitarian" (i.e. considers n people dying at most n times as bad as themself dying). Hence, subjective value is bounded by 1000 × 10^6 × 10^6 × 10^10 = 10^25 USD. Combined with physics, this limits GDP growth on a relevant timeframe.
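Unpacking the arithmetic: $1000 per 10^-6 risk values one life at $10^9; the second 10^6 factor is the Pascal-mugger odds ratio; 10^10 is roughly the number of people:

$$\underbrace{\$1000 \times 10^{6}}_{\text{one life}} \times \underbrace{10^{6}}_{\text{mugger odds}} \times \underbrace{10^{10}}_{\text{people}} = \$10^{25}$$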

I agree, and want to place a slightly different emphasis. A "better" education system is a two-place function; what's better for a poor country is different from what's better for a rich Western one. And education in Western countries looked different back when they were industrializing and still poor by modern standards.

(Not that the West a century ago is necessarily a good template to copy. The point is that the education systems rich countries have today weren't necessarily a part of what made them rich in the first place.)

A lot (some think most) of Wes... (read more)

Great point, thanks!

Please see my other reply here. Yes, value is finite, but the number of possible states of the universe is enormously large, and we won't explore it in 8000 years. The order of magnitude is much bigger.

(Incidentally, our galaxy is ~ 100,000 light years across; so even expanding to cover it would take much longer than 8000 years, and that would be creating value the old-fashioned way by adding atoms, but it wouldn't support continued exponential growth. So "8000 years" and calculations based off the size of the galaxy shouldn't be mixed together. But the or... (read more)

2awenonian
In much the same way, estimates of value and calculations based on the number of permutations of atoms shouldn't be mixed together. There being a googolplex possible states in no way implies that any of them has a value over 3 (or any other number). It does not, by itself, imply that any particular state is better than any other, let alone that any particular state should have value proportional to the total number of states possible.

Restricting yourself to atoms within 8000 light years, instead of the galaxy, just compounds the problem as well, but you noted that yourself. The size of the galaxy wasn't actually a relevant number, just a (maybe) useful comparison. It's like when people say that chess has more possible board states than there are atoms in the observable universe times the number of seconds since the Big Bang. It's not that there's any specifically useful interaction between atoms and seconds and chess; it's just to recognize the scale of the problem.

in their expected lifespan

Or even in the expected lifetime of the universe.

perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.

That's a good point, but how would we know? We would need to prove that a given configuration is of maximal (and tile-able) utility without evaluating the (exponentially bigger) number of configurations of bigger size. And we don't (and possibly can't, or shouldn't) have an exact (mathematical) definition of a Pan-Human Utility Function.

However, a proof isn't needed to... (read more)

2DirectedEvolution
Indeed. I think that a serious search for an answer to these questions is probably best left for the "Long Reflection."

In the limit you are correct: if a utility function assigns a value to every possible arrangement of atoms, then there is some maximum value, and you can't keep increasing value forever without adding atoms because you will hit the maximum at some point. An economy can be said to be "maximally efficient" when value can't be added by rearranging its existing atoms, and we must add atoms to produce more value.

However, physics provides very weak upper bounds on the possible value (to humans) of a physical system of given size, because the number of possible p... (read more)
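A sketch of the scale involved (the numbers are illustrative only):

```python
import math

# Configuration counts grow combinatorially with system size, which is why
# bounds on "value per atom" derived this way are so weak.
n = 100
print(math.factorial(n))    # orderings of just 100 atoms: ~9.33e157
print(math.comb(2 * n, n))  # ways to pick 100 occupied sites of 200: ~9.05e58
```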

2DirectedEvolution
That’s a nice conceptual refinement. It actually swings me in the other direction, making it seem plausible that humans might not have nearly enough time to find the optimum arrangement in their expected lifespan and that this might be a central question. One possibility is that there is a maximal value tile that is much smaller than “all available atoms” and can be duplicated indefinitely to maximize expected value. So perhaps we don’t need to explore all combinations of atoms to be sure that we’ve achieved the limit of value.

The OP's argument is general: it says essentially that (economic) value is bounded linearly by the number of atoms backing the economy. Regardless of how the atoms are translated to value. This is an impossibility argument. My rebuttal was also general, saying that value is not so bounded.

Any particular way of extracting value, like electronics, usually has much lower bounds in practice than 'linear in the amount of atoms used' (even ignoring different atomic elements). So yes, today's technology that depends on 'rare' earths is bounded by the accessible a... (read more)

The rate of value production per atom can be bounded by physics. But the amount of value ascribed to the thing being produced is only strictly bounded by the size of the number (representing the amount of value) that can be physically encoded, which is exponential in the number of atoms, and not linear.

2ESRogs
To me, just ascribing more value to things without anything material about the situation changing sounds like inflation, not real growth.
2Vanessa Kosoy
So, you imagine that the rate at which new "things" are produced hits diminishing returns, but every new generation of things is more valuable than the previous generation s.t. exponential growth is maintained. But, I think this value growth has to hit a ceiling pretty soon anyway, because things can only be so valuable. Arguably, nothing is so valuable that you can be Pascal-mugged into paying 1000 USD for someone promising to produce it by magic. Hence, the maximally valuable thing is worth no more than 1000 USD divided by the tiny probability that a Pascal mugger is telling the truth. I admit that I don't know how to quantify this, but it does point at a limit to such growth.
4Vladimir_Nesov
The natural numbers that can be physically encoded are not bounded by an exponent of the number of bits if you don't have to be able to encode all smaller numbers as well in the same number of bits. If you define a number, you've encoded it, and it's possible to define very large numbers indeed.
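A concrete instance of this point, sketched in Python (the function is my illustration; it implements Knuth's up-arrow notation):

```python
def knuth(a, n, b):
    """a ↑^n b in Knuth's up-arrow notation: a short definition that
    denotes astronomically large numbers."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

print(knuth(3, 1, 3))  # 3^3 = 27
print(knuth(3, 2, 3))  # 3^(3^3) = 7625597484987
# knuth(3, 3, 3) is a power tower of 3s about 7.6e12 levels high, far beyond
# 2^(number of atoms in the observable universe). (Don't actually call it.)
```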

By "proportionately more" I meant more than the previous economic-best use of the same material input, which the new invention displaced (modulo increasing supply). For example, the amount of value derived by giving everyone (every home? every soldier? every car?) a radio is much greater than any other value the same amount of copper, zinc etc. could have been used for before the invention of radio. We found a new way to get more value from the same material inputs.

For material outputs (radio sets, telegraph wire, computers), of course material inputs are ... (read more)

-1CynthesisToday
Thank you for clarifying the definition you're using for "proportionately more". Two points come to mind:

  • The material waste products of the electronics ecosystem between the 1990s and now have shifted from mass/toxic atoms (cathode-ray tubes/lead, mercury) to less mass but more rare(er) earth elements such as indium and cobalt. [1] The problem of "this can't go on" may not be limited by the total of all atoms but by the total of electronically important elements that can be mined "sustainably" on earth. All atoms are not equal. As you're probably aware, "rare earth" is not always about the total amount of atoms of said element in the earth but about how the element is dispersed (or not) and, thus, how "easily" it can be mined ("easily" includes physical as well as political impediments [2]). Electronic waste stream efforts are very likely to shift from dealing with mass/toxicity to harvesting the rare earth elements from electronic waste. I can imagine the trade-off graph between all of the costs of more pit mines in more politically diverse areas for harvesting virgin rare earth elements vs harvesting electronic waste. I can't imagine either being anywhere close to all of the atoms on earth, much less the entire universe. Orders of magnitude seem likely, but I could be persuaded otherwise.
  • The idea of "modern technology (=value)" seems to have a presumption of that value being only positive. When I see that kind of blanket statement about technology I am reminded of the 2012 cover of The MIT Technology Review with Buzz Aldrin saying "You promised me Mars colonies. Instead, I got Facebook". No argument from me that atom-light applications are valued in the stock market. No argument from me regarding the excitement/"value" of blockchain and its use of more electricity than many countries. Humans used to be pretty thrilled about tulips, too. Maybe the point of downsides of modern technology, including the exploitation of human nature wrt self-image (Instagram), in-group

GDP growth is measured in money, a measure of value. Value does not have to be backed by a proportional amount of matter (or energy, space or time) because we can value things as much as we like - more than some constant times utilon per gram second.

Suppose I invent an algorithm that solves a hard problem and sell it as a service. The amount people will be willing to pay for it - and the amount the economy grows - is determined by how much people want it and how much money there is, but nobody cares how many new atoms I used to implement it. If I displace ... (read more)

5awenonian
I still think the argument holds in this case, because even computer software isn't atom-less. It needs to be stored, or run, or something, somewhere.

I don't doubt that you could drastically reduce the number of atoms required for many products today. For example, you could in future get a chip in your brain that makes typing without a keyboard possible. That chip is smaller than a keyboard, so represents lots of atoms saved. You could go further, and have that chip be an entire futuristic computer suite: by reading and writing your brain inputs and outputs directly it could replace the keyboard, mouse, monitors, speakers, and entire desktop, plus some extra stuff, like also acting as a VR headset, or video game console, or whatever. Let's say you manage to squeeze all that into a single atom. Cool. That's not enough. For this growth to go on for those ~8000 years, you'd need to have that single-atom brain chip be as valuable as everything on Earth today. Along with every other atom in the galaxy.

I think at some point, unless the hottest thing in the economy becomes editing humans to value specific atoms arbitrary amounts (which sounds bad, even if it would work), you can't get infinite value out of things. I'm not even sure human minds have the capability of valuing things infinitely. I think even with today's economy, you'd start to hit some asymptotes (i.e. if one person had everything in the world, I'm not sure what they'd do with it all. I'm also not sure they'd actually value it any more than if they just had 90% of everything, except maybe the value of saying "I have it all", which wouldn't be represented in our future economy).

And still, the path to value per atom has to come from somewhere, and in general it's going to be making stuff more useful, or smaller, but there's only so useful a single atom can be, and there's only so small a useful thing can be. (I imagine some math on the number of ways you could arrange a set of particles, multiplied by the
3interstice
There's some discussion of this in a followup post.
5Vanessa Kosoy
Value is not obviously bounded by atoms, yes. However, GDP measures production of value. And, the entities producing value are made of atoms. Today these entities are humans. In the future, they might be something much more efficient. However, it seems at least plausible that their efficiency (i.e. rate of value production per atom) is somehow bounded by physics.

As a concrete example, let's imagine that sending an email is equivalent to sending a letter. Let's ignore the infrastructure required to send emails (computers, satellites, etc) vs. letters (mail trucks, post offices, etc), and assume they're roughly equal to each other. Then the invention of email eliminated the vast majority of letters, and the atoms they would have been made from.

Couple this with the fact that emails are more durable, searchable, instantaneous, free, legible, compatible with mixed media, and occupy only a minuscule amount of physical r... (read more)

-13malicious
-3CynthesisToday
"Many recent developments that produced a lot of value, like radio, computing, and the Internet, didn't do it by using proportionally more atoms." There are vacuum electronic tube production facilities (late 18th century onward), many billion dollar semiconductor factories (late 1970s onward), and piles and piles of electronic waste that say this isn't true.

Sorry, who is GBS?

Also: if Orwell thought vegetarians expected to gain 5 years of life, that would be an immense effect well worth some social disruption. And boo Orwell for mocking them merely for being different and not for any substance of the way they were different. It's not as if people eating different food intrudes on others (or even makes them notice, most of the time), unlike e.g. nudists, or social-reforming feminists.

1TAG
George Bernard Shaw. 1856-1950.

I strongly agree that the methodology should have been presented up front. lsusr's response is illuminating and gives invaluable context.

But my first reaction to your comment was to note the aggressive tone and what feels like borderline name-calling. This made me want to downvote and ignore it at first, before I thought for a minute and realized that yes, on the object level this is a very important point. It made it difficult for me to engage with it.

So I'd like to ask you what exactly you meant (because it's easy to mistake tone on the internet) and why. Cal... (read more)

In addition to this there is the horrible—the really disquieting—prevalence of cranks wherever Socialists are gathered together. One sometimes gets the impression that the mere words 'Socialism' and 'Communism' draw towards them with magnetic force every fruit-juice drinker, nudist, sandal-wearer, sex-maniac, Quaker, 'Nature Cure' quack, pacifist and feminist in England.

It's interesting to see how this aged. 85 years later, sex-maniacs and quacks are still considered 'cranks'; pacifists and nudists are not well tolerated by most societies, whereas sandal... (read more)

9Viliam
I guess that Orwell's objection was something like "these people seem incapable of toning down their middle-class signalling". They ostentatiously care about things that working-class people do not have the capacity to care about. They utterly fail at empathy with the workers... and yet presume to speak in their name.

The worker is trying not to starve, and to have enough strength for daily 16-hour work at the factory. Vegetarianism is a luxury he can't afford. Will a healthier diet really make him live longer? His main risk factors are falling off the scaffolding, mutilation by an engine, suffocation in a mine, et cetera; how does eating f-ing tofu protect against that? For a working-class woman, the lack of the right to vote is also not very high on her list of priorities, I suppose.

Therefore, talking about these topics too much is like saying that actual working-class people are not invited to the debate.
1TAG
GBS got a good lifespan out of his vegetarian diet.

The link to "Israeli data" is wrong; it goes to the tweet by @politicalmath showing the Houston graph you inlined later.

What is the most rational way to break ice?

  1. Does the cost to get a drug approved depend on how novel or irreplaceable it might be? Did it cost the same amount to approve Silenor for insomnia as it would cost to approve a really novel drug much better at combating insomnia than any existing one?

    If the FDA imposes equal costs on any new drug, then it's not "imposing [costs] on a company trying to [...] parasitize the healthcare system". It's neutrally imposing costs on all companies developing drugs. And this probably does a lot more harm on net (fewer drugs marketed) than it does good (punishes so

... (read more)

Did it cost the same amount to approve Silenor for insomnia as it would cost to approve a really novel drug much better at combating insomnia than any existing one?

The FDA wants proof that drugs have a statistically significant effect. The stronger the effect of your drug happens to be, the fewer people you need in your phase III trials.
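A rough illustration of that trade-off (my sketch, using the textbook normal-approximation sample-size formula, not anything FDA-specific):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.8):
    """Approximate subjects per arm for a two-arm trial detecting a mean
    difference `delta` with outcome standard deviation `sigma`."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

print(n_per_arm(0.2))  # weak effect (0.2 SD): ~393 subjects per arm
print(n_per_arm(0.5))  # strong effect (0.5 SD): ~63 subjects per arm
```

Since the required sample scales as 1/delta^2, halving the effect size roughly quadruples the trial size, which is much of what makes weakly effective drugs expensive to prove.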

Bullshit is what comes out of the mouth of someone who values persuasion over truth. [...] The people with a need to obscure the truth are those with a political or social agenda.

Almost all humans, in almost all contexts, value persuasion over truth and have a social agenda. Condemning all human behavior that is not truth-seeking is condemning almost all human behavior. This is a strong (normative? prescriptive? judgmental?) claim that should be motivated, but you seem to take it as given.

Persuasion is a natural and desirable behavior in a social, coop... (read more)
