I think formalizing it in full will be a pretty nontrivial undertaking, but formalizing isolated components feels tractable, and is in fact where I’m currently directing a lot of my time and funding.

Great. Yes, I think that's the thing to do. Start small! I (and presumably others) would update a lot from a new piece of actual formal mathematics from Chris's work. Even if that work was, by itself, not very impressive.

(I would also want to check that that math had something to do with his earlier writings.)

My current understanding is that he believes that his current written work should be sufficient for modern mathematicians and scientists to understand his core ideas

Uh oh. The "formal grammar" that I checked used formal language, but was not even close to giving a precise definition. So Chris either (i) doesn't realize that you need to be precise to communicate with mathematicians, or (ii) doesn't understand how to be precise.

Please be prepared for the possibility that Chris is very smart and creative, and that he's had some interesting ideas (e.g. Syndiffeonesis), but that his framework is more of an interlocked collection of ideas than anything mathematical (despite using terms from mathematics). Litany of Tarski and all that.

"gesture at something formal" -- not in the way of the "grammar" it isn't. I've seen rough mathematics and proof sketches, especially around formal grammars. This isn't that, and it isn't trying to be. There isn't even an attempt at a rough definition for which things the grammar derives.

I think Chris’s work is most valuable to engage with for people who have independently explored philosophical directions similar to the ones Chris has explored

A big part of Chris’s preliminary setup is around how to sidestep the issues around making the sets well-ordered.

Nonsense! If Chris has an alternative to well-ordering, that's of general mathematical interest! He would make a splash simply writing that up formally on its own, without dragging the rest of his framework along with it.

Except, I can already predict you're going to say that no piece of his framework can be understood without the whole. Not even by making a different smaller framework that exists just to showcase the well-ordering alternative. It's a little suspicious.

because someone else I’d funded to review Chris’s work

If you're going to fund someone to do something, it should be to formalize Chris's work. That would not only serve as a BS check, it would also make the work vastly more approachable.

I’m confused why you’re asking about specific insights people have gotten when Jessica has included a number of insights she’s gotten in her post

I was hoping people other than Jessica would share some specific curated insights they got. Syndiffeonesis is in fact a good insight.

tl;dr: a spot check calls bullshit on this.

I know a bunch about formal languages (PhD in programming languages), so I did a spot check on the "grammar" described on page 45. It's described as a "generative grammar", though instead of words (sequences of symbols) it produces "L_O spacial relationships". Since he uses these phrases to describe his "grammar", and since he lists their standard definitions earlier in the section (so they presumably carry their standard meanings), he is pretty clearly claiming to be making something akin to a formal grammar.

My spot check is then: is the thing defined here more-or-less a grammar, in the following sense?

  1. There's a clearly defined thing called a grammar, and there can be more than one of them.
  2. Each grammar can be used to generate something (apparently an L_O) according to clearly defined derivation rules that depend only on the grammar itself.

If you don't have a thing plus a way to derive stuff from that thing, you don't have anything resembling a grammar.
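
To make those two criteria concrete, here's a minimal sketch of my own (illustrative only, not taken from the document under review) of a tiny generative grammar together with its derivation procedure:

```python
import random

# Criterion 1: a grammar is a clearly defined thing -- here, the usual
# four parts: nonterminals, terminals, productions, and a start symbol.
GRAMMAR = {
    "nonterminals": {"S", "NP", "VP"},
    "terminals": {"the", "cat", "dog", "sees", "sleeps"},
    "productions": {
        "S": [["NP", "VP"]],
        "NP": [["the", "cat"], ["the", "dog"]],
        "VP": [["sees", "NP"], ["sleeps"]],
    },
    "start": "S",
}

def derive(grammar, rng=random):
    """Criterion 2: generate a word of the language by repeatedly
    rewriting the leftmost nonterminal, where every rewrite step uses
    only the grammar's own productions."""
    sentence = [grammar["start"]]
    while any(sym in grammar["nonterminals"] for sym in sentence):
        i = next(i for i, sym in enumerate(sentence)
                 if sym in grammar["nonterminals"])
        expansion = rng.choice(grammar["productions"][sentence[i]])
        sentence[i:i + 1] = expansion
    return sentence

print(" ".join(derive(GRAMMAR)))  # e.g. "the dog sees the cat"
```

Criterion 1 is the four-tuple-like thing; criterion 2 is the `derive` procedure, and that second part is the one with no analogue in the section I checked.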

My spot check says:

  1. There's certainly a thing called a grammar. It's a four-tuple whose parts closely mimic those of a standard grammar, but using his constructs for all the basic parts.
  2. There's no definition of how to derive an "L_O spacial relationship" given a grammar. Just some vague references to using "telic recursion".

I'd categorize this section as "not even wrong"; it isn't doing anything formal enough to have a mistake in it.


Another fishy aspect of this section is how he makes a point of various things coinciding, and how that's very different from the standard definitions. But it's compatible with the standard definitions! E.g. the alphabet of a language is typically a finite set of symbols that have no additional structure, but there's no reason you couldn't define a language whose symbols were e.g. grammars over that very language. The definition of a language just says that its symbols form a set. (Perhaps you'd run into issues with making the sets well-ordered, but if so he's running headlong into the same issues.)
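
To illustrate that parenthetical point, here's a toy sketch of my own, assuming only that "symbol" means "element of some set": nothing in the standard definitions stops a grammar's alphabet from containing grammar objects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grammar:
    """A bare-bones grammar object. frozen=True makes instances
    hashable, so nothing stops them from being members of a set --
    including the alphabet of another grammar."""
    nonterminals: frozenset
    alphabet: frozenset   # the terminal symbols -- any hashable objects
    productions: tuple
    start: str

# An ordinary grammar over plain string symbols.
g0 = Grammar(frozenset({"S"}), frozenset({"a", "b"}),
             (("S", ("a", "S", "b")), ("S", ())), "S")

# A grammar whose terminal symbols include a grammar.
g1 = Grammar(frozenset({"S"}), frozenset({g0}),
             (("S", (g0, "S")), ("S", ())), "S")

assert g0 in g1.alphabet  # the standard definitions never forbade this
```

(Full self-reference -- a grammar whose alphabet contains grammars over the very language it generates -- takes more care, which is presumably where the well-ordering worry comes in.)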


I'm really not seeing any value in this guy's writing. Could someone who got something out of it share a couple of specific insights they got from it?

How did you find me? How do they always find me? No matter...

Have you tried applying your models to predict the day's weather, or what your teacher will be wearing that day? I bet not: they wouldn't work very well. Models have domains in which they're meant to be applied. More precise models tend to have more specific domains.

Making real predictions about something, like what the result of a classroom experiment will be even if the pendulum falls over, is usually outside the domain of any precise model. That's why your successful models are compound models, using Newtonian mechanics as a sub-model, and that's why they're so unsatisfyingly vague and cobbled together.

There is a skill to assembling models that make good predictions in messy domains, and it is a valuable skill. But it's not the goal of your physics class. That class is trying to teach you about precise models like Newtonian mechanics. Figuring out exactly how to apply Newtonian mechanics to a real physical experiment is often harder than solving the Newtonian math! But surely you've noticed by now that, in the domains where Newtonian mechanics seems to actually apply, it applies very accurately?
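
As a toy illustration of how sharp a precise model is inside its domain (my own example, not from the original exchange), here's the small-angle Newtonian prediction for a pendulum's period:

```python
import math

def newtonian_period(length_m, g=9.81):
    """Small-angle Newtonian prediction for a pendulum's period.
    Precise model, narrow domain: rigid support, small swings,
    negligible air resistance, pendulum actually swinging."""
    return 2 * math.pi * math.sqrt(length_m / g)

print(f"{newtonian_period(1.0):.3f} s")  # ~2.006 s for a 1 m pendulum
```

Getting a real classroom pendulum to actually sit inside that domain (rigid support, small swings, nobody knocking it over) is the hard, messy part.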

This civilization we live in tends to have two modes of thinking. The first is 'precise' thinking, where people use precise models but don't think about the mismatch between the model's domain and reality. The real world doesn't respect the model's domain, so people either inappropriately apply the model outside its domain, or carefully make statements only within the model's domain and hope that others will make the incorrect leap to reality on their own. The other mode of thinking is 'imprecise' thinking, where people ignore all models and rely on their gut feelings. We are, at the moment, extremely bad at the middle path: making and recognizing models for messy domains.

"There's no such thing as 'a Bayesian update against the Newtonian mechanics model'!" says a hooded figure from the back of the room. "Updates are relative: if one model loses, it must be because others have won. If all your models lose, it may hint that there's another model you haven't thought of that does better than all of them, or it may simply be that predicting things is hard."

"Try adding a couple more models to compare against. Here's one: pendulums never swing. And here's another: Newtonian mechanics is correct but experiments are hard to perform correctly, so there's a 80% probability that Newtonian mechanics gives the right answer and 20% probability spread over all possibilities including 5% on 'the pendulum fails to swing'. Continue to compare these models during your course, and see which one wins. I think you can predict it already, despite your feigned ignorance."

The hooded figure opens a window in the back of the room and awkwardly climbs through and walks off.
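
A minimal sketch of the comparison the hooded figure proposes (the likelihood numbers are illustrative placeholders of mine, apart from the 80%/5% figures quoted above): updates are relative, so each model's posterior weight is just its prior times its likelihood for the observed outcome, renormalized.

```python
# Three candidate models, each assigning a likelihood to what the
# classroom pendulum does. All numbers except the quoted 80%/5% are
# illustrative placeholders, not real data.
MODELS = {
    "newtonian (applied naively)": {"swings_as_predicted": 0.95, "fails_to_swing": 0.01, "other": 0.04},
    "pendulums never swing":       {"swings_as_predicted": 0.00, "fails_to_swing": 1.00, "other": 0.00},
    "newton + messy experiments":  {"swings_as_predicted": 0.80, "fails_to_swing": 0.05, "other": 0.15},
}

def update(priors, outcome):
    """Relative Bayesian update: multiply each model's prior by its
    likelihood for the observed outcome, then renormalize. Only the
    ratios between models matter -- there is no 'update against' a
    single model in isolation."""
    weights = {m: priors[m] * MODELS[m][outcome] for m in priors}
    total = sum(weights.values())
    return {m: w / total for m, w in weights.items()}

priors = {m: 1 / 3 for m in MODELS}
print(update(priors, "swings_as_predicted"))
print(update(priors, "fails_to_swing"))
```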

Are we assuming things are fair or something?

I would have modeled this as von Neumann getting 300 points and putting 260 of them into the maths and sciences and the remaining 40 into living life and being well adjusted.

Oh, excellent!

It's a little hard to tell from the lack of docs, but you're modelling dilemmas with Bayesian networks? I considered that, but wasn't sure how to express Sleeping Beauty nicely, whereas it's easy to express (and gives the right answers) in my tree-shaped dilemmas. Have you tried to express Sleeping Beauty?

And have you tried to express a dilemma like smoking lesion where the action that an agent takes is not the action their decision theory tells them to take? My guess is that this would be as easy as having a chain of two probabilistic events, where the first one is what the decision theory says to do and the second one is what the agent actually does, but I don't see any of this kind of dilemma in your test cases.
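
For concreteness, the kind of structure I have in mind for that second question is below (a hypothetical sketch of my own, not code from your repo): the decision theory's recommendation and the agent's actual action are two separate events, joined by a noisy step.

```python
import random

def smoking_lesion_style_trial(recommendation, p_follow=0.9, rng=random):
    """Chain of two probabilistic events: first the decision theory's
    recommended action, then the action the agent actually takes, which
    follows the recommendation only with probability p_follow."""
    actions = ("smoke", "abstain")
    if rng.random() < p_follow:
        return recommendation
    return next(a for a in actions if a != recommendation)

# The dilemma's payoff/lesion structure would then condition on the
# *actual* action, not on the recommendation.
print(smoking_lesion_style_trial("abstain"))
```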

I have a healthy fear of death; it's just that none of it stems from an "unobserved endless void". Some of the specific things I fear are:

  • Being stabbed is painful and scary (it's scary even if you know you're going to live)
  • Most forms of dying are painful, and often very slow
  • The people I love mourning my loss
  • My partner not having my support
  • Future life experiences, not happening
  • All of the things I want to accomplish, not happening

The point I was making in this thread was that "unobserved endless void" is not on this list, I don't know how to picture it, and I'm surprised that other people think it's a big deal.

Who knows, maybe if I come close to dying some time I'll suddenly gain a new ontological category of thing to be scared of.

What's the utility function of the predictor? Is there necessarily a utility function for the predictor such that the predictor's behavior (which is arbitrary) corresponds to maximizing its own utility? (Perhaps this is mentioned in the paper, which I'll look at.)

EDIT: do you mean to reduce a 2-player game to a single-agent decision problem, instead of vice-versa?

I was not aware of Everitt, Leike & Hutter 2015, thank you for the reference! I only delved into decision theory a few weeks ago, so I haven't read that much yet.

Would you say that this is similar to the connection that exists between fixed points and Nash equilibria?

Nash equilibria come from the fact that your action depends on your opponent's action, which depends on your action. When you assume that each player will greedily change their action if it improves their utility, the Nash equilibria are the fixpoints at which no player changes their action.
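
Concretely, for pure strategies in a small matrix game, that fixpoint condition looks like this (a sketch of my own, using the prisoner's dilemma as the example):

```python
import itertools

def pure_nash_equilibria(payoffs, actions_a, actions_b):
    """payoffs[(a, b)] = (utility_A, utility_B). A profile (a, b) is a
    fixpoint of greedy best-response -- i.e. a pure Nash equilibrium --
    exactly when neither player gains by deviating unilaterally."""
    equilibria = []
    for a, b in itertools.product(actions_a, actions_b):
        best_for_a = all(payoffs[(a, b)][0] >= payoffs[(a2, b)][0] for a2 in actions_a)
        best_for_b = all(payoffs[(a, b)][1] >= payoffs[(a, b2)][1] for b2 in actions_b)
        if best_for_a and best_for_b:
            equilibria.append((a, b))
    return equilibria

# Prisoner's dilemma: (Defect, Defect) is the unique pure fixpoint.
PD = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
print(pure_nash_equilibria(PD, ["C", "D"], ["C", "D"]))  # [('D', 'D')]
```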

In single-agent decision theory problems, your (best) action depends on the situation you're in, which depends on what someone predicted your action would be, which (effectively) depends on your action.

If there's a deeper connection than this, I don't know it. There's a fundamental difference between the two cases, I think, because a Nash equilibrium involves multiple agents that don't know each other's decision process (problem statement: maximize the outputs of two functions independently), while single-agent decision theory involves just one agent (problem statement: maximize the output of one function).
