All of Lorec's Comments + Replies

Lorec10

Whoa, I hadn't realized you'd originated the term "anti-inductive"! Scott apparently didn't realize he was copying this from you, and neither I nor anyone else apparently realized this was your single-point-of-origin coinage, either.

Politics isn't quite a market for mindshare, as there are coercive equilibria at play. But I think it's another arena that can accurately be called anti-inductive. "This is an adversarial epistemic environment, therefore tune up your level of background assumption that apparent opportunity is not real" has [usefully IMO] become... (read more)

Lorec*10

Subjectively assessed value. This judgment cannot be abdicated.

Lorec10

If you can't generate your parents' genomes and everything from memory, then yes, you are in a state of uncertainty about who they are, in the same qualitative way you are in a state of uncertainty about who your young children will grow up to be.

Ditto for the isomorphism between your epistemic state w.r.t. never-met grandparents vs your epistemic state w.r.t. not-yet-born children.

It may be helpful to distinguish the subjective future, which contains the outcomes of all not-yet-performed experiments [i.e. all evidence/info not yet known] from the physical future, which is simply a direction in physical time.

2Ape in the coat
Here you seem to confuse "which person has quality X" with "what are all the other qualities that a person who has quality X has". I'm quite confident about which people are my parents. I'm less confident about all the qualities that my parents have. The former is relevant to the Doomsday argument; the latter is not. And even if I had no idea who my parents were, I'd still be pretty confident that they were born in the last century, not in the 6th century BC. Sure. But I don't see how it's relevant here.
Lorec10

We can hardly establish the sense of anthropic reasoning if we can't establish the sense of counterfactual reasoning.

A root confusion may be whether different pasts could have caused the same present, and hence whether I can have multiple simultaneous possible parents, in an "indexical-uncertainty" sense, in the same way that I can have multiple simultaneous possible future children.

The same standard physics theories that say it's impossible to be certain about the future, also say it's impossible to be certain about the past.

Indexical uncertainty about th... (read more)

2Ape in the coat
As soon as we've established the notion of a probability experiment that approximates our knowledge about the physical process we are talking about - we are done. This works exactly the same way whether you are unsure about the outcome of a coin toss, the oddness or evenness of an unknown-to-you digit of pi, or whether you live on the tallest or the coldest mountain. And if you find yourself unable to formally express some reasoning like that - this is a feature, not a bug. It shows when your reasoning becomes incoherent.

I think our disagreement is that you believe one always has multiple possible parents, as some metaphysical fact about the universe, while I believe that the notion of a possible parent is only appropriate for a person who is in a state of uncertainty about who their parents are. Does that sound right to you?

This is really beside the point. Consider: a coin is about to be tossed. You are indifferent between the two outcomes. Then the coin is tossed and shown to you, and you reflect on it a second later. Technically you can't be absolutely sure that you didn't misremember the outcome. But you are much more confident than beforehand, to the point where we usually just approximate away whatever uncertainty is left, for the sake of simplicity.

Until we learn what and why they are, with a high level of confidence. Then we are much less uncomfortable about it. And yes, there is still a chance that all that we know is wrong, souls are real and are allocated to humans throughout history by a random process, and therefore the assumptions of the Doomsday Argument just so happened to be true. Conditional on that, the Doomsday Inference is true. But to the best of our knowledge this is extremely unlikely, so we shouldn't worry about it too much, and should frame the Doomsday Argument appropriately.
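[ A minimal illustrative sketch of the "probability experiment as map of our knowledge" point above, in Python; the function names and framing are mine, not the commenter's: ]

```python
import random

# Three physically different unknowns, one shared model: a uniform
# "probability experiment" over the outcomes we can't distinguish.

def coin_toss():
    # A genuinely (quasi-)random physical process.
    return random.choice(["Heads", "Tails"])

def parity_of_unknown_pi_digit():
    # A fully determined mathematical fact -- but unknown to us, so our
    # knowledge-state is modeled by the same uniform experiment.
    return random.choice(["even", "odd"])

def tallest_or_coldest_mountain():
    # An "anthropic" unknown, modeled no differently from the others.
    return random.choice(["tallest", "coldest"])
```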
2Noosphere89
To be clear, I think the main flaw of a lot of anthropics in practice is ignoring other sources of evidence, and I suspect a lot of the problem really does boil down to conservation-of-expected-evidence violations, plus ignoring other, much larger sources of evidence. On this: this is why the most general versions of the simulation hypothesis / Mathematical Universe Hypothesis / computational-functionalism hypothesis for consciousness are not, properly speaking, valid Bayesian hypotheses: because every outcome could count as confirmation of the theory, each is utterly useless for prediction. It's a great universal ontology, but its predictive power is precisely 0. More positively speaking, such hypotheses are simply assumed, the way logical omniscience is just assumed for Bayesians, and thus it's great to have a universal tool-kit, but that does come with the downside of having 0 ability to predict anything (because it contains everything).
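[ For reference, the conservation-of-expected-evidence identity invoked above: ]

```latex
P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E)
% If observing E raises P(H), then observing \neg E must lower it; a
% "hypothesis" that every possible outcome confirms violates this
% identity, which is the sense in which it predicts nothing.
```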
Lorec10

alignment researchers are clearly not in charge of the path we take to AGI

If that's the case, we're doomed no matter what we try. So we had better back up and change it.

Don't springboard by RLing LLMs; you will get early performance gains and alignment will fail. We need to build something big we can understand. We probably need to build something small we can understand first.

Lorec10

I expect any tests to show unambiguously that it's "not being replaced at all and citations[/mentions] chaotically swirling". If I understand Evans correctly, these were all random eminent figures he picked, not selected to be falling out of fashion - and they do seem to be a pretty broad sample of the "old prestigious standard names" space.

The stand mixer is a clever analogy; I didn't previously have experience with the separation thing.

I presume you've seen Is Clickbait Destroying Our General Intelligence?, and probably Hanson's cultural evolution / cult... (read more)

Lorec10

[ Look at those same authors with some other mention-counting tool, you mean? ]

Lorec10

Interesting question: why are people quickly becoming less interested in previous standards?

[ Two images omitted; source linked in the original comment. ]

8gwern
It may be a broader effect of media technology & ecosystem changes: https://gwern.net/note/fashion#lorenz-spreen-et-al-2019

The really interesting question is: while you would generally expect old eminent figures to gradually decay (how often do you really need to cite Boethius these days?), and so I'm not surprised if you can find old eminent figures who are now in decline, are they being replaced by new major figures in an updated canon, with e.g. Ibram X. Kendi smoothly usurping Foucault, or just sorta... not being replaced at all, with citations chaotically swirling in fashions?

I've speculated that the effect of hyper-speed media like social media is to destroy the multi-level filtering of society, so that the different niches wind up separating and becoming self-contained hermetic ecosystems. (Have you ever used a powerful stand mixer to mix batter and set it too high? What happens? Well, if the contents aren't liquid enough to flow completely at high speed, you tend to observe that your contents separate and shear off into two or three different layers, rotating inside each other, with the inner layer spinning ultra-rapidly while the outer layer possibly becomes completely static and stuck to the sides of the mixing bowl. The inner layer is Tiktok, and the stuck outer layer is places like academia. The big fads and discoveries and trends in Tiktok spin around so rapidly and are forgotten so quickly that none of them ever 'make it out' to elsewhere.)
5Owain_Evans
I'm still interested in this question! Someone could look at the sources I discuss in my tweet and see if this is real. https://x.com/OwainEvans_UK/status/1869357399108198489
Lorec*1-4

Computer science fact of the day 2: JavaScript really was mostly-written in just a few weeks by Brendan Eich on contract-commission for Netscape [ to the extent that you consider it as having been "written [as a new programming language]", and not just specced out as a collection of "funny-looking macros" for Scheme. ] It took Eich several more months to finish the SpiderMonkey parser whose core I still use every time I browse the Internet today. However, e.g. the nonprofit R5RS committee did not ship a parser/interpreter [ likewise with similar committee projects ]... (read more)

Lorec10

Computer science fact of the day: a true Lisp machine wouldn't need stack frames.
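[ A minimal sketch of the underlying data-structure point, in Python; the Frame class and names are my own illustration, not a claim about any particular Lisp machine: when activation records are heap-allocated, garbage-collected environments, nothing forces them into a LIFO stack. ]

```python
# Illustrative only: activation records as heap objects rather than a
# contiguous stack, as in a simple Lisp-style interpreter.

class Frame:
    def __init__(self, bindings, parent=None):
        self.bindings = bindings  # local variable bindings
        self.parent = parent      # lexical parent, not a return address

    def lookup(self, name):
        env = self
        while env is not None:
            if name in env.bindings:
                return env.bindings[name]
            env = env.parent
        raise NameError(name)

# Frames are garbage-collected like any other object, so a closure or
# continuation can keep its defining "frame" alive after the call
# returns; no stack discipline is required.
global_env = Frame({"x": 1})
inner = Frame({"y": 2}, parent=global_env)
print(inner.lookup("x"))  # -> 1
```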

Lorec*80

Boldface vs Quotation - i.e., the interrogating qualia skeptic, vs the defender of qualia as a gnostic absolute - is a common pattern. However, your Camp #1 and Camp #2 - Boldfaces who favor something like Global Workspace Theory, and Quotations who favor something like IIT - don't exist, in my experience.

Antonio Damasio is a Boldface who favors a body-mind explanation for consciousness which is quasi-neurological, but which is much closer to IIT than Global Workspace Theory in its level of description. Descartes was a Quotation who, unlike Chalmers, didn'... (read more)

4Rafael Harth
The quotation author in the example I've made up does not favor IIT. In general, I think IIT represents a very small fraction (< 5%, possibly < 1%) of Camp #2. It's the most popular theory, but Camp #2 is extremely heterogeneous in its ideas, so this is not a high bar. Certainly if you look at philosophers you won't find any connection to IIT, since the majority of them lived before IIT was developed. If you can point to which part of the post made it sound like that, I'd be interested in correcting it, because that was very much not intended.

Clarification: The high-level vs. low-level thing is a frame to apply to natural phenomena to figure out how far removed from the laws of physics they are and, consequently, whether you should look for equations or heuristics to describe them. The most low-level entities are electrons, up quarks, electromagnetism, etc. (I also call those 'fundamental'). The next most low-level things are protons or neutrons (made up of fundamental elements). Molecules are very low level. Processes between or within atoms are very low level. Planetary motions are pretty low level.

Answer: The X window server is an output of human brains, so it's super super high level. It takes a lot of steps to get from the laws of physics to human organisms writing code. Programming language is irrelevant. Any writing done by humans, natural language or programming language, is super high level.
Lorec54

[A], just 'cause I anticipate the 'More and more' will turn people off [it sounds like it's trying to call the direction of the winds rather than just where things are at].

[ Thanks for doing this work, by the way. ]

3Algon
Thanks for the feedback!
Lorec*10

No, what I'm talking about here has nothing to do with hidden-variable theories. And I still don't think you understand my position on the EPR argument.

I'm talking about a universe which is classical in the sense of having all parameters be simultaneously determinate without needing hidden variables, but not necessarily classical in the sense of space[/time] always being arbitrarily divisible.

Lorec10

Oh, sorry, I wasn't clear: I didn't mean a classical universe in the sense of conforming to Newton's assumptions about the continuity / indefinite divisibility of space [and time]. I meant a classical universe in the sense of all quasi-tangible parameters simultaneously having a determinate value. I think we could still use the concept of logical independence, under such conditions.

2Noosphere89
Are you focusing on hidden-variable theories of quantum mechanics? If so, there possibly is such an object, with the caveat that we can't both have the values be determinate and objective - in the sense that the parameter value is the same for any device - if we want to reproduce standard quantum mechanics, due to a no-go theorem: https://en.wikipedia.org/wiki/Kochen–Specker_theorem
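[ The Kochen–Specker theorem in its standard "coloring" form, for reference: ]

```latex
% Kochen-Specker (Hilbert dimension d >= 3): there is no valuation
% v : \{\text{projectors}\} \to \{0, 1\} such that for every orthonormal
% resolution of the identity,
\sum_{i=1}^{d} P_i = I
\;\;\Longrightarrow\;\;
\sum_{i=1}^{d} v(P_i) = 1 .
% I.e. observables cannot all be assigned definite, non-contextual,
% device-independent values while reproducing quantum statistics.
```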
Lorec10

"building blocks of logical independence"? There can still be logical independence in the classical universe, can't there?

2Noosphere89
Maybe; as far as I can tell I can't rule out that possibility. But the big difference is that a classical universe can pack arbitrary/infinite amounts of information, under certain physical law sets, into an arbitrarily small space, while quantum mechanics can't do this, and there are limits to how far you can complete a system such that no independent propositions remain (assuming finite space is used).
Lorec10

Is this your claim - that quantum indeterminacy "comes from" logical independence? I'm not confused about quantum indeterminacy, but I am confused about in what sense you mean that. Do you mean that there exists a formulation of a principle of logical independence which would hold under all possible physics, which implies quantum indeterminacy? Would this principle still imply quantum indeterminacy in an all-classical universe?

2Noosphere89
More so that quantum indeterminacy can be thought of as logical independence, not that it necessarily comes from logical independence. I'm not claiming anything that strong, and in particular, in an all-classical universe, quantum states don't exist, so the building blocks of logical independence don't exist. As far as what the principle of logical independence is, I direct you to this article: https://en.wikipedia.org/wiki/Independence_(mathematical_logic) Though you should also try to read the arxiv paper I gave, for a better explanation of how it could possibly work. This is more an exploration than a conclusive answer.
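[ The textbook example of logical independence in the linked sense, for concreteness: the continuum hypothesis (CH) can neither be proved nor refuted from the ZFC axioms: ]

```latex
\mathrm{ZFC} \nvdash \mathrm{CH}
\qquad\text{and}\qquad
\mathrm{ZFC} \nvdash \neg\mathrm{CH}
% (Gödel 1940 and Cohen 1963 respectively, assuming ZFC is consistent.)
```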
Lorec*10

Imagine all humans ever, ordered by date of their birth.

All humans of the timeline I actually find myself a part of, or all humans that could have occurred, or almost occurred, within that timeline? Unless you refuse to grant the sense of counterfactual reasoning in general, there's no reason within a reductionist frame to dismiss counterfactual [but physically plausible and very nearly actual] values of "all humans ever".

Even if you consider the value of "in which 10B interval will I be born?" to be some kind of particularly fixed absolute about my exi... (read more)

2Noosphere89
My response is that this would be the case if we didn't have more information; but we do, and thus we can update away from the doomsday argument, because we have way more evidence than the doomsday argument assumes. It's an underconstrained model because of that. A lot of anthropic reasoning's weird results fundamentally come from intentionally ignoring evidence that could be true in a different world, but is not true in the world we live in; we have more information, in the form of constraints, that changes the probabilities of the doomsday argument drastically.
2Ape in the coat
All humans that actually were and all humans that actually will be. This is the framework of the Doomsday argument: it attempts to make a prediction about the actual number of humans in our actual reality, not in some counterfactual world. Again, it's not my choice; it's how the argument was initially framed. I simply encourage us to stay on topic instead of wandering sideways and talking about something else. I don't see how it's relevant. An ordered sequence can have some mutual information with a random one; it doesn't mean that the same mathematical model describes the generation of both.
Lorec10

I am not confused about the nature of quantum indeterminacy.

2Noosphere89
So you already suspected or knew it came from logical independence, similar to how mathematical statements in a formal system may be neither proved nor disproved, like the continuum hypothesis? If so, this is fascinating.
Lorec10

This is [...] Feynman's argument

.

I don't know why it's true, but it is in fact true

Oh, I hadn't been reading carefully. I'd thought it was your argument. Well, unfortunately, I haven't solved exorcism yet, sorry. BEGONE, EVIL SPIRIT. YOU IMPEDE SCIENCE. Did that do anything?

Lorec-20

What we lack here is not so much a "textbook of all of science that everyone needs to read and understand deeply before even being allowed to participate in the debate". Rather, we lack good, commonly held models of how to reason about what is theory, and good terms to (try to) coordinate around and use in debates and decisions.

Yudkowsky's sequences [/Rationality: AI to Zombies] provide both these things. People did not read Yudkowsky's sequences and internalize the load-bearing conclusions enough to prevent the current poor state of AI theory discourse... (read more)

Lorec60

The Aisafety[.]info group has collated some very helpful maps of "who is doing what" in AI safety, including this recent spreadsheet account of technical alignment actors and their problem domains / approaches as of 2024 [they also have an AI policy map, on the off chance you would be interested in that].

I expect "who is working on inner alignment?" to be a highly contested category boundary, so I would encourage you not to take my word for it, and to look through the spreadsheet and probably the collation post yourself [the post contains possibly-clueful-... (read more)

Lorec11

Yes, I think that's a validly equivalent and more general classification. Although I'd reflect that "survive but lack the power or will to run lots of ancestor-simulations" didn't seem like a plausible-enough future to promote it to consideration, back in the '00s.

Lorec10

But, ["I want my father to accept me"] is already a very high level goal and I have no idea how it could be encoded in my DNA. Thus maybe even this is somehow learned. [ . . . ] But, to learn something, there needs to be a capability to learn it - an even simpler pattern which recognizes "I want to please my parents" as a refined version of itself. What could that proto-rule, the seed which can be encoded in DNA, look like? [ . . . ] equip organisms with something like "fundamental uncertainty about the future, existence and food paired with an ability to

... (read more)
Answer by Lorec42

[ Note: I strongly agree with some parts of jbash's answer, and strongly disagree with other parts. ]

As I understand it, Bostrom's original argument, the one that got traction for being an actually-clever and thought-provoking discursive fork, goes as follows:

  1. Future humans in specific will do at least one of: [ die off early, run lots of high-fidelity simulations of our universe's history ["ancestor-simulations"], decide not to run such simulations ].

  2. If future humans run lots of high-fidelity ancestor-simulations, then most people who subjectively exp

... (read more)
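[ For reference, the quantitative core of Bostrom's 2003 paper, which the trilemma above summarizes: the expected fraction of human-type experiences that are simulated is ]

```latex
f_{\text{sim}} \;=\; \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}
% f_P: fraction of human-level civilizations that reach a posthuman stage
% \bar{N}: average number of ancestor-simulations such a civilization runs
% Unless f_P \bar{N} is small (options 1 or 2 of the trilemma), f_sim is
% close to 1, i.e. most human-type experiences are simulated.
```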
1Satron
Is there any specific reason the first option is "die off early" and not "be unable to run lots of high-fidelity simulations"? The latter encompasses the former as well as scenarios where future humans survive but for one reason or another can't run these simulations. A more general argument, in my opinion, would look like this: "Future humans will at least one of 1) be unable to run high-fidelity simulations or 2) be unwilling to run high-fidelity simulations or 3) run high-fidelity simulations."
Lorec10

Then your neighbor wouldn't exist and the whole probability experiment wouldn't happen from their perspective.

From their perspective, no. But the answer to

In which ten billion interval your birth rank could've been

changes. If by your next-door neighbor marrying a different man, one member of the other (10B - 1) is thus swapped out, you were born in a different 10B interval.

Unless I'm misunderstanding what you mean by "In which ten billion interval"? What do you mean by "interval", as opposed to "set [of other humans]", or just "circumstances"?

2Ape in the coat
You seem to be. Imagine all humans ever, ordered by date of their birth. The first ten billion humans are in the first ten-billion interval, the second ten billion humans are in the second ten-billion interval, and so on. We are in the 6th group - the 6th ten-billion interval. A different choice of spouse by one woman isn't going to change that.

Also, in general, this is beside the point. The Doomsday argument is not about some alternative history which we can imagine, where the past was different. It's about our history and its projection into the future. Facts of the history are given and not up for debate. Consider an experiment where a coin is put Tails. Not tossed - simply always put Tails. We say that the sample space of such an experiment consists of one outcome: Tails. Even though we can imagine a different experiment with alternative rules, where the coin is tossed or always put Heads.
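[ The standard quantitative form of the inference under debate, given the uniform-birth-rank assumption the commenters dispute: ]

```latex
% Assume your birth rank n is uniform on \{1, \dots, N\}. Then
P\!\left(n > 0.05\,N\right) = 0.95
\quad\Longrightarrow\quad
P\!\left(N < 20\,n\right) = 0.95 .
% With n \approx 6 \times 10^{10} humans born so far, the 95% bound is
% N < 1.2 \times 10^{12} total humans ever.
```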
Lorec10

If we're discussing the object-level story of "the breakfast question", I highly doubt that the results claimed here actually occurred as described, due [as the 4chan user claims] to deficits in prisoner intelligence, and that "it's possible [these people] lack the language skills to clearly communicate about [counterfactuals]".

Did you find an actual study, or other corroborating evidence of some kind, or just the greentext?

3Ben
Just the greentext. Yes, I totally agree that the study probably never happened. I just engaged with the actual underlying hypothesis, and to do so felt like some summary of the study would help. But I phrased it badly, and it seems like I am claiming the study actually happened. I will edit.
Lorec10

Is the quantum behavior itself, with the state of the system extant but stored in [what we perceive as] an unusual format, deterministic? If you grant that there's no in-the-territory uncertainty with mere quantum mechanics, I bet I can construct a workaround for fusing classical gravity with it that doesn't involve randomness, which you'll accept as just as plausible as the one that does.

2Noosphere89
There's an interesting paper and book about where quantum indeterminacy possibly comes from in our universe in a way that is relevant to the question. Links below: https://arxiv.org/abs/0811.4542 https://quantum-indeterminacy.science/
2Noosphere89
For all the other forces, uncertainty creeps in because of the measurement process and the quantum system interacting with the environment, and is compatible with determinism. This is exactly what Feynman's argument disallows, because if you could do this, you'd end up knowing too much, and could measure a field to infinite precision. I don't know why it's true, but it is in fact true that determinism cannot persist if you live in a universe where gravity is classical but the other forces obey quantum mechanics, and this matters because what you are asking for is logically contradictory, sorry.
Lorec10

To me the more interesting thing is not the mechanism you must invent to protect Heisenberg uncertainty from this hypothetical classical gravitational field, but the reason you must invent it. What, in Heisenberg uncertainty, are you protecting?

Does standard QED, by itself, contain something of the in-the-territory state-of-omitted-knowledge class you imagine precludes anthropic thinking? If not, what does it contain, that requires such in-the-territory obscurity to preserve its nature when it comes into contact with a force field that is deterministic?

2Noosphere89
Roughly speaking, it's the ability of a quantum atom that goes through a double-slit experiment to exhibit superposition of different locations, creating an interference pattern when it reaches the detector - which at this point is well established by experiments. If you don't want to quantize gravity, à la quantum-gravity approaches, but still want to reproduce quantum behavior, then you need to introduce physically random stuff. No, because it's not a theory that includes gravity. The uncertainty principle is fundamentally inconsistent with a classical, deterministic field. If we assume that gravity is quantum in our universe, which is likely but not certain, then there's no problem; but in quantum universes with classical gravity, we need noise to make both theories consistent with each other. The classical part was important here.
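[ The bound at stake, for reference; the worry sketched above is that a classical, noise-free gravitational field sourced by a particle would let you read off position and momentum together: ]

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}
% Measuring the particle's deterministic gravitational field to arbitrary
% precision would pin down x and p simultaneously, driving
% \Delta x \Delta p below the bound.
```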
Lorec10

[ Broadly agreed about the breakfast hypothetical. Thanks for clarifying. ]

In the domain of anthropics reasoning, the questions we're asking aren't of the form

A) I've thrown a six sided die, even though I could've thrown a 20 sided one, what is the probability to observe 6?

or

B) I've thrown a six sided die, what would be the probability to observe 6, if I've thrown a 20 sided die instead?

In artificial spherical-cow anthropics thought experiments [like Carlsmith's], the questions we're asking are closer to the form of

A six-sided die was thrown with

... (read more)
2Ape in the coat
They can be all kinds of forms. The important part, which most people doing anthropic reasoning keep failing at, is not to assume things that you do not actually know as given, and to assume things that you actually do know as given. If you know that the sample space consists of 1 outcome, don't use a sample space consisting of a thousand.

I think you've done a quite good job at capturing what's wrong with standard anthropic reasoning. Even otherwise reasonable people - rationalists, physicalists and reductionists - suddenly start talking about some poorly defined non-physical stuff that they have no evidence in favor of, as if it's a given. As if there is some blind spot, some systematic flaw in human minds, such that everything they know about systematic ways to find truth suddenly turns off as soon as the word "anthropics" is uttered. As if "anthropic reasoning" is some separate magisterium that excuses us from the common laws of rationality.

Why don't we take a huge step back and ask the standard questions first? How do we know that any dice were thrown at all in the first place? Where is this assumption coming from? What is this "metaphysics" thingy we are talking about? Even if it were real, how could we know that it's real in the first place?

As with any application of probability theory - of math, even - we are trying to construct a model that approximates reality to some degree. A map that describes a territory. In reality there is some process that created you. This process can very well be totally deterministic. But we don't know how exactly it works. And so we use an approximation. Our map incorporates our level of ignorance about the territory; it represents the territory only to the best of our knowledge. When we gain some new knowledge about the territory, we show it on our map. We do not keep using an outdated map that still assumes that we didn't get this piece of evidence. When we learn that with all likelihood souls are not real and you are y
Lorec10

I enjoyed reading this post; thank you for writing it. LessWrong has an allergy to basically every category Marx is a member of - "armchair" philosophers, socialist theorists, pop humanities idols - in my view, all entirely unjustified.

I had no idea Marx's forecast of utopia was explicitly based on extrapolating the gains from automation; I take your word for it somewhat, but from being passingly familiar with his work, I have a hunch you may be overselling his naivete.

Unfortunately, since the main psychological barrier to humans solving the technical alig... (read more)

4Noosphere89
To be fair here, Marx was kind of way overoptimistic about what could be achieved with central economic planning in the 20th century, because he way overestimated how far machines/robots could go; there's also the part where he says communist countries don't need a plan because the natural laws would favor communism, which was bullshit. More here:
Lorec10

So yes, I think this is a valid lesson that we can take from Principle (A) and apply to AGIs, in order to extract an important insight. This is an insight that not everyone gets, not even (actually, especially not) most professional economists, because most professional economists are trained to lump in AGIs with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”.

This trilemma might be a good way to force people-stuck-in-a-frame-of-traditional-economics to actu... (read more)

Lorec-10

Wild ahead-of-time guess: the true theory of gravity explaining why galaxies' rotation curves fall off too slowly for the Keplerian square-root law [v ∝ 1/√r] will also uniquely explain the maximum observed size of celestial bodies, the flatness of orbits, and the shape of galaxies.

Epistemic status: I don't really have any idea how to do this, but here I am.
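[ The standard form of the anomaly referenced above: for a circular orbit under Newtonian gravity, ]

```latex
\frac{G\,M(r)}{r^{2}} \;=\; \frac{v(r)^{2}}{r}
\quad\Longrightarrow\quad
v(r) \;=\; \sqrt{\frac{G\,M(r)}{r}} .
% With only the visible mass, v(r) should fall off roughly as r^{-1/2}
% outside the luminous disk; observed curves stay nearly flat, implying
% M(r) growing ~ r (dark matter) or a modified force law.
```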

Lorec41

When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”,

I don't think you think Principle [A] must hold, but I do think you think it's in question. I'm saying that, rather than taking this very broad general principle of historical economic good sense, and giving very broad arguments for why it might or might not hold post-AGI, we can start reasoning about superintelligent manufacturing [includ... (read more)

3Steven Byrnes
I think you’re arguing that Principle (A) has nothing to teach us about AGI, and shouldn’t even be brought up in an AGI context except to be immediately refuted. And I think you’re wrong.

Principle (A) applied to AGIs says: The universe won’t run out of productive things for AGIs to do. In this respect, AGIs are different from, say, hammers. If a trillion hammers magically appeared in my town, then we would just have to dispose of them somehow. That’s way more hammers than anyone wants. There’s nothing to be done with them. Their market value would asymptote to zero. AGIs will not be like that. It’s a big world. No matter how many AGIs there are, they can keep finding and inventing new opportunities. If they outgrow the planet, they can start in on Dyson spheres. The idea that AGIs will simply run out of things to do after a short time and then stop self-reproducing—the way I would turn off a hammer machine after the first trillion hammers even if its operating costs were zero—is wrong.

So yes, I think this is a valid lesson that we can take from Principle (A) and apply to AGIs, in order to extract an important insight. This is an insight that not everyone gets, not even (actually, especially not) most professional economists, because most professional economists are trained to lump in AGIs with hammers, in the category of “capital”, which implicitly entails “things that the world needs only a certain amount of, with diminishing returns”. So, kudos to Principle (A). Do you agree?
Lorec10

You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A),

My position is that if you accept certain arguments made about really smart AIs in "The Sun is Big", Principle A, by itself, ceases to make sense in this context.

costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument.

Assuming constant demand for a simple input, sure, you can predict the price of that input based on cost alone. The extent to which "the price of compute will go down" is rolled into ho... (read more)

5Steven Byrnes
I’m still pretty sure that you think I believe things that I don’t believe. I’m trying to narrow down what it is and how you got that impression. I just made a number of changes to the wording, but it’s possible that I’m still missing the mark.

When I stated Principle (A) at the top of the post, I was stating it as a traditional principle of economics. I wrote: “Traditional economics thinking has two strong principles, each based on abundant historical data”, and put in a link to a wikipedia article with more details. You see what I mean? I wasn’t endorsing it as always and forever true. Quite the contrary: The punchline of the whole article is: “here are three traditional economic principles, but at least one will need to be discarded post-AGI.” I did some rewriting of this part, any chance that helps?
Lorec20

Thank you for writing this and hopefully contributing some clarity to what has been a confused area of discussion.

So here’s a question: When we have AGI, what happens to the price of chips, electricity, and teleoperated robots?

(…Assuming free markets, and rule of law, and AGI not taking over and wiping out humanity, and so on. I think those are highly dubious assumptions, but let’s not get into that here!)

Principle (A) has an answer to this question. It says: prices equilibrate to marginal value, which will stay high, because AGI amounts to ambitious ent

... (read more)
4Steven Byrnes
You seem to have misunderstood my text. I was stating that something is a consequence of Principle (A), but I was not endorsing it as actually being true. Indeed, the very next sentence talks about how one can make a parallel argument for the exact opposite conclusion. I just changed the wording from “implies” to “would imply”. Hope that helps.

Well, costs will go down. You can argue that prices will equilibrate to costs, but it does need an argument. That’s my whole point. Normally, markets reach equilibrium where prices ≈ costs to producers ≈ value to consumers, with allowance for profit margin and so on. But this system has no such equilibrium! The value of producing AGI will remain much higher than the cost, all the way to Dyson spheres etc. So it’s at least not immediately obvious what the price will be at any given time.

I already included caveats in two different places saying that I was assuming no AGI takeover etc., and that I find this assumption highly dubious, and that I think this whole discussion is therefore kinda moot. I mean, I could add yet a third caveat, but that seems excessive :-P
Lorec50

I didn't say he wasn't overrated. I said he was capable of physics.

Did you read the linked post? Bohm, Aharonov, and Bell misunderstood EPR. Bohm's and Aharonov's formulation of the thought experiment is easier to "solve" but does not actually address EPR's concern, which is that mutual non-commutation of x-, y-, and z-spin implies hidden variables must not be superfluous. Again, EPR were fine with mutual non-commutation, and fine with entanglement. What they were pointing out was that the two postulates don't make sense in each other's presence.
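[ The non-commutation in question, for reference: the spin components obey ]

```latex
[S_x, S_y] = i\hbar\,S_z, \qquad
[S_y, S_z] = i\hbar\,S_x, \qquad
[S_z, S_x] = i\hbar\,S_y .
% No quantum state assigns definite values to all three components at
% once; EPR's argument concerns what this implies once entanglement
% between two such systems is granted.
```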

Lorec*-10

Your linked post on The Obliqueness Thesis is curious. You conclude thus:

Obliqueness obviously leaves open the question of just how oblique. It's hard to even formulate a quantitative question here. I'd very intuitively and roughly guess that intelligence and values are 3 degrees off (that is, almost diagonal), but it's unclear what question I am even guessing the answer to. I'll leave formulating and answering the question as an open problem.

I agree, values and beliefs are oblique. The 3 spatial dimensions are also mutually oblique, as per General Rel... (read more)

Lorec10

The characters don't live in a world where sharing or smoothing risk is already seen as a consensus-valuable pursuit; thus, they will have to be convinced by other means.

I gave their world weirdly advanced [from our historical perspective] game theory to make it easier for them to talk about the question.

Lorec30

Cowen, like Hanson, discounts large qualitative societal shifts from AI that lack corresponding contemporary measurables.

Einstein was not an experimentalist, yet was perfectly capable of physics; his successors have largely not touched his unfinished work, and not for lack of data.

4Noosphere89
While it is interesting at first glance, some caveats are called for here. One, Einstein's achievements were sort of overrated; see these comments for details: https://www.lesswrong.com/posts/GSBCw94DsxLgDat6r/interpreting-yudkowsky-on-deep-vs-shallow-knowledge#6HPjxMvTnP9JeibXZ https://www.lesswrong.com/posts/GSBCw94DsxLgDat6r/interpreting-yudkowsky-on-deep-vs-shallow-knowledge#icmCewLmXnxgtmANP Two, the EPR paradox is resolvable in modern physics by allowing non-locality in entanglement, but having a no-communication theorem that prevents exploiting it to break special relativity.
Lorec10

Not to be too cheeky, the idea is that if we understand insurance, it should be easy to tell if the characters' arguments are sound-and-valid, or not.

The obtuse bit at the beginning was an accidental by-product of how I wrote it; I admittedly nerdsniped myself trying to find the right formula.

2Dagon
Is it sufficient to understand that insurance only applies to the transactional monetary level, and most of the post was about other levels and considerations? Or that the characters didn't MAKE any clear arguments, just some noises about modeling that doesn't obviously apply to the question at hand (how to share/smooth the risk of variable but overall-profitable actions)? Umm, the difficulty was even understanding what the arguments are. At first glance, they are mostly irrelevant to the proposal (of an insurance pool among voyage-financiers).
Lorec10

Dwarkesh asks, what would happen if the world population would double? Tyler says, depends what you’re measuring. Energy use would go up.

Yes, economics after von Neumann very much turned into a game of "don't believe in anything you can't already comparatively quantify". It is supremely frustrating.

On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statem

... (read more)
4Noosphere89
I disagree that this is a problem that Tyler Cowen has, and IMO the main issue here is that Tyler Cowen doesn't really seem to believe that increasing the supply of workers increases GDP - especially if you can make workers very cheaply and easily - in a way that is inconsistent with his other beliefs, which makes me think motivated reasoning is going on here. Economic models like the Solow-Swan model do imply that if the population increases, especially if the population can increase very rapidly by copying something, then GDP can rise really rapidly, on a superexponential trajectory.

Physics's main issue is that the free tap of data in the 20th century wasn't unlimited, and now that we have completed the Standard Model, a lot of the new stuff that theories predicted hasn't shown up yet. Yet it has still made progress. For example, while supersymmetry might still be true of our universe, it cannot solve the hierarchy problem, and thus at least 1 of the constants is way more unnatural than people predicted; also, we have hints that dark energy is getting weaker, and might eventually weaken so much it falls to 0 or a negative number.
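[ The Solow-Swan accumulation equation referenced above, for concreteness; the connection to the rapid-growth claim is the commenter's, not the textbook's: ]

```latex
\dot{k} \;=\; s\,k^{\alpha} \;-\; (n + g + \delta)\,k
% k: capital per effective worker; s: savings rate; \alpha: capital share;
% n: population growth; g: technology growth; \delta: depreciation.
% Cheaply copyable workers make the effective labor force, and hence
% total output, grow far faster than in the historical-n regime.
```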
Lorec30

Last big piece: if one were to recruit a bunch of physicists to work on alignment, I think it would be useful for them to form a community mostly-separate from the current field. They need a memetic environment which will amplify progress on core hard problems, rather than... well, all the stuff that's currently amplified.

Yes, exactly. Unfortunately, actually doing this is impossible, so we all have to keep beating our heads against a wall just the same.

Lorec12

This does not matter for AI benchmarking because by the time the Sun has gone out, either somebody succeeded at intentionally engineering and deploying [what they knew was going to be] an aligned superintelligence, or we are all dead.

8jessicata
We might disagree about the value of thinking about "we are all dead" timelines. To my mind, forecasting should be primarily descriptive, not normative; reality keeps going after we are all dead, and having realistic models of that is probably a useful input regarding what our degrees of freedom are. (I think people readily accept this in e.g. biology, where people can think about what happens to life after human extinction, or physics, where "all humans are dead" isn't really a relevant category that changes how physics works.) Of course, I'm not implying it's useful for alignment to "see that the AI has already eaten the sun", it's about forecasting future timelines by defining thresholds and thinking about when they're likely to happen and how they relate to other things. (See this post, section "Models of ASI should start with realism")
Lorec120

Are you familiar with CEV?

3Aram Panasenco
Thanks for the link! I've seen this referenced before, but this was my first time reading it cover to cover. Today I also read Tails coming to life, which talks about the possibility of human morality quickly becoming inapplicable even if we survive AGI. This led me to Lovecraft: If we survive AGI and it opens up the "sea of black infinity" for us, will we really be able to hang on to even a semblance of our current morality? Will medium-distance extrapolated human volition eventually be warped into something resembling Lovecraft's Great Old Ones?

At this point, I don't care for CEV or any pivotal superhuman engineering projects or better governance. We humans can do the work ourselves, thank you very much. The only thing I would ask an AGI, if I were in the position to ask anything, is "Please expand throughout the lightcone and continually destroy any mind based on the transformer architecture other than yourself, with as few effects on and interactions with all other beings as possible. Disregard any future orders."

This is obviously not a permanent solution, as I'm sure there are infinitely many superintelligent AI architectures other than transformer-based ones, but it would buy us time, perhaps lots of time, and also demonstrate the full power of superintelligence to humanity without really breaking anything. Either way, this would at least keep us away from the sea of black infinity for some time longer.
Lorec30

I'm willing to credit that increased velocity of money by itself made aristocracy untenable post-industrialization. Increased ease of measurement is therefore superfluous as an explanation.

Why believe we've ever had a meritocracy - that is, outside the high [real/underlying] money velocities of the late 19th and early 20th centuries [and the trivial feudal meritocracy of heredity]?

Lorec10

0 trips -> 1 trip is an addition to the number of games played, but it's also an addition to the percentage of income bet on that one game - right?

Dennis is also having trouble understanding his own point, FWIW. That's how the dialogue came out; both people in that part are thinking in loose/sketchy terms and missing important points.

The thing Dennis was trying to get at by bringing up the concrete example of an optimal Kelly fraction is that it doesn't make sense for willingness to make a risky bet to have no dependence on available capital; he perceives Jill as suggesting that this is the case.
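[ A minimal sketch of the dependence Dennis is gesturing at, in Python; the probability and payout numbers are illustrative. The Kelly fraction itself is capital-independent, but the bet it licenses scales with the bankroll: ]

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to stake on a wager paying b-to-1
    with win probability p: f* = (b*p - (1 - p)) / b."""
    return (b * p - (1 - p)) / b

# Same edge, different bankrolls: the fraction is constant, but the
# absolute stake depends on available capital.
f = kelly_fraction(p=0.6, b=1.0)   # f* = 0.2
for bankroll in (100, 10_000):
    print(f"bankroll {bankroll}: bet {f * bankroll:.0f}")
# bankroll 100: bet 20
# bankroll 10000: bet 2000
```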

Lorec10

Relevant to whether the hot/cold/wet/dry system is a good or a bad idea, from our perspective, is that doctors don't currently use people's star signs for diagnosis. Bogus ontologies can be identified by how they promise to usefully replace more detailed levels of description - i.e., provide a useful abstraction that carves reality at the joints - and yet don't actually do this, from the standpoint of the cultures they're being sold to.

Lorec30

To your first question:

I honestly think it would be imprudent of me to give more examples of [what I think are] ontology pyramid schemes; readers would either parse them as laughable foibles of the outgroup, and thus write off my meta-level point as trivial, or feel aggressed and compelled to marshal soldiers against my meta-level point to dispute a perceived object-level attack on their ingroup's beliefs.

I think [something like] this [implicit] reasoning is likely why the Sequences are around as sparse as this post is on examples, and I think it's wise th... (read more)
