All of jollybard's Comments + Replies

Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don't know how to formalize it.

It is, for example, clear that I can relate to a penguin, even though I am not a penguin, meaning that the penguin and I probably share some similar subsystems; and therefore, if I care about the anthropic measure of my subsystems, I should care about penguins, too.

 

Well, ask the question: should the bigger brain receive a million dollars, or do you not care?

I've always maintained that in order to solve this issue we must first answer the question: what does it even mean to say that a physical system is implementing a particular algorithm? Does it make sense to say that an algorithm is only approximately implemented? What if the algorithm is something very chaotic, such as prime-checking, where approximation is not possible?

An algorithm should be a box that you can feed any input into, but in the real, causal world, there is no such choice; any impression that you "could" input anything into your pocket... (read more)

5mako yass
Hmm. Are you getting at something like: How can there possibly be an objective way of associating an experiential reference class with a system of matter... when the reference class is an algorithm, and algorithms only exist as abstractions, and there are various reasons the multiverse can't be an abstraction-considerer, so anthropic binding couldn't be a real metaphysical effect and must just be a construct of agents? There are some accounts of anthropic binding that allow for it to just be a construct. I removed this from the post, because it was very speculative and conflicted with some other stuff and I wanted the post to be fairly evergreen, but it was kind of interesting, so here are some doubts I had about whether I should really dismiss the force theory:

Fantastic work!

How do we express the way that the world might be carved up into different agent-environment frames while still remaining "the same world"? The dual functor certainly works, but how about other ways to carve up the world? Suppose I notice a subagent of the environment: can I switch perspective to it?

Also, I am guessing that an "embedded" Cartesian frame might be one where $W = A \times E$, i.e. where the world is just the agent along with the environment. Or something. Then, since we can iterate the choice function, it could represent t... (read more)

There are two theorems. You're correct that the first theorem (that there is an unprovable truth) is generally proved by constructing a sort of liar's paradox, and then the second is proved by repeating the proof of the first internally.

However, I chose to take the reverse route for a more epistemological flavour.
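For concreteness, here are the two theorems in their standard form (my addition, not part of the original comment; $\mathrm{Con}(T)$ abbreviates the arithmetized consistency statement for $T$):

```latex
% Standard statements, for an effectively axiomatized theory T containing
% enough arithmetic (e.g. PA).

% First theorem: if T is consistent, there is a sentence G_T such that
%   T does not prove G_T (and, assuming omega-consistency or using Rosser's
%   variant, T does not prove its negation either).
T \nvdash G_T, \qquad T \nvdash \neg G_T

% Second theorem: if T is consistent, then T cannot prove its own consistency.
T \nvdash \mathrm{Con}(T)
```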

But we can totally prove it to be consistent, though, from the outside. Its sanity isn't necessarily suspect, only its own claim of sanity.

If someone tells you something, you don't take it at face value; you first verify that the thought process used to generate it was reliable.

You are correct. Maybe I should have made that clearer.

My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn't even believe it if it claimed to be consistent).

4Viliam
By the same logic, shouldn't we distrust it even if it proves "2 + 2 = 4"? I mean, its sanity is already suspect, and we know that insane systems could have a wrong opinion on all kinds of math problems, so there is no reason to trust this specific proof. And I feel pretty sure that Gödel didn't have "also, no formalized system can prove that 2+2=4" in mind. Related question: what about systems proving other systems to be correct? Is that simply equivalent to saying "I 100% believe this other guy to be sane (although ignorant about some facts of the world that are known to me, such as his sanity)"? Which for the external observer would simply mean "if A is sane, B is sane, too".

It's essentially my jumping off point, though I'm more interested in the human-specific parts than he is.

The relevance that I'm seeing is that of self-fulfilling prophecies.

My understanding of FEP/predictive processing is that you're looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that both ways are available to minimize prediction error: you can update your beliefs, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some ac... (read more)
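As a toy illustration of the "two ways to minimize prediction error" point above, here is a minimal sketch (my own construction, not Friston's formalism; the function name, step size, and update rules are illustrative assumptions):

```python
# Toy sketch: two ways to shrink prediction error (belief - observation).
# "Perception" updates the belief toward the world; "action" changes the
# world toward the belief. Both drive the same error to zero.

def minimize_prediction_error(belief, world, steps=50, lr=0.1, act=False):
    for _ in range(steps):
        error = belief - world          # prediction error
        if act:
            world += lr * error         # action: make the world match the belief
        else:
            belief -= lr * error        # perception: make the belief match the world
    return belief, world

print(minimize_prediction_error(belief=1.0, world=0.0, act=False))  # belief drifts toward the world
print(minimize_prediction_error(belief=1.0, world=0.0, act=True))   # world drifts toward the belief
```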

Excellent post; it echoes many of my current thoughts.

I just wanted to point out that this is very reminiscent of Karl Friston's free energy principle.

> The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.
7abramdemski
I agree that it's broadly relevant to the partial-agency sequence, but I'm curious what particular reminiscence you're seeing here. I would say that "if the free energy principle is a good way of looking at things" then it is the solution to at least one of the riddles I'm thinking about here. However, I haven't so far been very convinced. I haven't looked into the technical details of Friston's work very much myself. However, a very mathematically sophisticated friend has tried going through Friston papers on a couple of occasions and found them riddled with mathematical errors. This does not, of course, mean that every version of free-energy/predictive-processing ideas is wrong, but it does make me hesitant to take purported results at face value.
1dsatan
Yes. That is the logical induction I was talking about.
Mammals and birds tend to grow, reach maturity, and stop growing. Conversely, many reptile and fish species keep growing throughout their lives. As you get bigger, you can not only defend yourself better (reducing your extrinsic mortality), but also lay more eggs.

So, clearly, we must have the same for humans. If we became progressively larger, women could carry twins and n-tuplets more easily. Plus, our brains would get larger, too, which could allow for a gradual increase in intelligence during our whole lifetimes.

Ha ha, just kidding: presumably intelligence is proportional to the ratio of brain size to body size, which would remain constant, or might even decrease...

I'm not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether.

My feeling is that the arguments I give above are pretty decent reasons to think that they're not truth values! As I wrote: "The thesis of this post is that probabilities aren't (intuitionistic) truth values."

4MrMind
Yeah, my point is that they aren't truth values per se, not intuitionistic or linear or MVs or anything else

Indeed, and ∞-categories can provide semantics for homotopy type theory. But ∞-categories are ultimately based on sets. At some point, though, maybe we'll use HoTT to "provide semantics" for set theories, who knows.

In general, there's a close syntax-semantics relationship between category theory and type theory. I was expecting to touch on that in my next post, though!

EDIT: Just to be clear, type theory is a good alternate foundation, and type theory is the internal language of categories.

Yes, I have! Girard is very... opinionated; he is fun to read for that reason. That is, Jean-Yves has some spicy takes:

> Quantum logic is indeed a sort of punishment inflicted on nature, guilty of not yielding to the prejudices of logicians… just like Xerxes had the Hellespont – which had destroyed a boat bridge – whipped.

I enjoyed his book "Proofs and Types" as an introduction to type theory and the Curry-Howard correspondence. I've looked through "The Blind Spot" a bit and it also seemed like a fun read. Of cou... (read more)

That all makes more sense now :)

In our case the towel rack was right in front of the toilet, so it didn't have to be an ambient thing haha

I just want to point out that you should probably change your towel at least every week (preferably every three uses), especially if you leave it in a high humidity environment like a shared bathroom.

I can't even imagine the smell... Actually, yes I can, because I've had the same scenario happen to me at another rationalist sharehouse.

So, um, maybe every two months is a little bit too long.

A few obvious alternatives:

1. Everyone leave their towels in their room.
2. Guests leave their towels in their rooms. The common towels are put into a hamper ev... (read more)

3mingyuan
You know, that's an excellent point. I just bought my boyfriend a new towel and washed all the towels in the house (serendipitously, everyone's out of town for the holidays). I also want to note that we no longer really have this problem, and that the smell - at least the ambient smell - has never been very bad. Although, yeah, when I stick my face in some of the towels and smell them I... wish I hadn't. I'm also a female with good hygiene, moderate OCD, and an unusually good sense of smell, so. Yeah, I hear you.  Also, re: the two months thing - the guest towels would generally just remain hanging up after one or two uses, while housemates generally would each wash their own towels regularly like normal adults. So it's not quite as bad as it sounds, though it's still not exactly ideal.  Time to clean everything! Thank you for your input.

I've said it elsewhere, but wringing your hands and crying "it's because of my akrasia!" is definitely not rational behavior; if anything, rationalists should be better at dealing with akrasia. What good is a plan if you can't execute it? It is like a program without a compiler.

Your brain is part of the world. Failing to navigate around akrasia is epistemic failure.

While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").

Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":

  • if modeling and solving akrasia is, like diet, a hard problem that even "experts" barely have an edge on, and, importantly, the things that do work seem to be very individual-specific, making it quite hard to stand on the shoulders of giants
  • if a large percentage of people who've found a
... (read more)
0ChristianKl
That's irrelevant to the question of whether interventions such as reading the sequences or going to a CFAR workshop improve people's outcomes. It's useful for this discussion to see "rationalist self-improvement" as being about the current techniques instead of playing motte-and-bailey.

Maybe I ought to give a slightly more practical description.

Your akrasia is part of the world and failing to navigate around it is epistemic failure.

I see what you mean, but

> if I know exactly what a tic tac toe or chess program would do,

If you were this logically omniscient, then supposing that the program did something else would imply that your system is inconsistent, which means everything is provable.

There needs to be boundedness somewhere, either in the number of deductions you can make, or in the certainty of your logical beliefs. This is what I mean by uncertainty being necessary for logical counterfactuals.
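For the record, the "everything is provable" step is just the principle of explosion, stated here for reference (standard fact, not specific to this thread):

```latex
% Principle of explosion: an inconsistent theory proves every sentence.
\big(T \vdash P \;\text{ and }\; T \vdash \neg P\big)
  \;\Longrightarrow\;
  T \vdash Q \quad \text{for every sentence } Q
```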

Right, so that's not a decision-prediction fixed point; a correct LDT algorithm would, by its very definition, choose the optimal decision, so predicting its behavior would lead to the optimal decision.

2Pattern
Donald Hobson appears to believe that determinism implies you do not have a choice. Instead of a) Beliefs -> Reality, it's b) Reality -> Beliefs. B can be broken or fixed, but fixing A... How does a correct LDT algorithm turn 2 agents into 1?

I don't think that's right. If you know exactly what you are going to do, that leaves no room for counterfactuals, not if you're an LDT agent. Physically, there is no such thing as a counterfactual, especially not a logical one; so if your beliefs match the physical world perfectly, then the world looks deterministic, including your own behavior. I don't think counterfactual reasoning makes sense without uncertainty.

2Gurkenglas
As a human who has an intuitive understanding of counterfactuals, if I know exactly what a tic tac toe or chess program would do, I can still ask what would happen if it chose a particular action instead. The same goes if the agent of interest is myself.

Perhaps, but that's not quite how I see it. I'm saying akrasia is failure to predict yourself; that is, when there's a disconnect between your predictions and your actions.

2Donald Hobson
I'm modeling humans as two agents that share a skull. One of those agents wants to do stuff and writes blog posts; the other likes lying in bed and has at least partial control of your actions. The part of you that does the talking can really say that it wants to do X, but it isn't in control. Even if you can predict this whole thing, that still doesn't stop it happening.

Could convolution work?

EDIT: confused why I am downvoted. Don't we want to encourage giving obvious (and obviously wrong) solutions to short form posts?

"Metaphysical truth" here describes self-fulfilling truths as described by Abram Demski, whose existence is guaranteed by e.g. Löb's theorem. In other words, metaphysical truths are truths, and rationalists should be aware of them.
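For reference, the standard statement of Löb's theorem (my addition, not part of the original comment), reading $\Box P$ as "P is provable in the system": if the system proves that provability of P would imply P, then it already proves P, which is what licenses these self-fulfilling truths.

```latex
% Löb's theorem (standard form)
\Box(\Box P \rightarrow P) \;\rightarrow\; \Box P
```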

AIXI is relevant because it shows that world state is not the dominant view in AI research.

But world state is still well-defined even with ontological changes because there is no ontological change without a translation.

Perhaps I would say that "impact" isn't very important, then, except if you define it as a utility delta.

This is a misreading of traditional utility theory and of ontology.

When you change your ontology, concepts like "cat" or "vase" don't become meaningless, they just get translated.

Also, you know that AIXI's reward function is defined on its percepts and not on world states, right? It seems a bit tautological to say that its utility is local, then.

3TurnTrout
This seems like a misreading of my post. That’s a big part of my point. Wait, who’s talking about AIXI?

I like reading. I like reading prose, as if I were listening to someone talking.

I also read very fast and I'm very good at skimming prose.

That being said, I strongly dislike bullet points, in large part because they're not fun to read... But I also find them harder to skim. Indeed, they are usually much denser in terms of information, with much less redundancy, such that every word counts; in other words, no skimming allowed.

I don't understand why skimming natural text should be any more difficult.

> It's easier to skim, and build up ... (read more)

Just a quick, pedantic note.

> But there seems to be something very different about each of the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but we wouldn't say that "animal" is composed of humans, dogs and cats in the second.

Actually, from an extensional point of view, that is exactly how you would define "animal": as the set of all things that are animals. So it is in fact composed of humans, dogs and cats -- but only partly, as there are lots of other thing... (read more)

2cubefox
Moving from animals to another example: If you are a half-bald person you do not belong to the set of bald people with probability 0.5. Probability is an epistemic concept, but the vagueness (fuzziness) of the concept of baldness is not epistemic, but semantic. No amount of information makes you more or less bald. Therefore, for fuzzy concepts, there is no probability of membership of a set, but a degree of membership of a set. Which is again a number between 0 and 1, but it is not a probability. There is actually a very unpopular logic which is based on this notion of fuzzy sets: fuzzy logic. Its logical connectives behave differently from their equivalents in probability theory. E.g. commonly: A and B = MIN(A, B); A or B = MAX(A, B).
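Here is a minimal sketch of the contrast cubefox is drawing, in Python (my own illustration, not from the comment; the probabilistic connectives below assume the two events are independent):

```python
# Fuzzy-logic connectives vs. probabilistic ones for independent events.

def fuzzy_and(a, b):        # degree of membership in "A and B"
    return min(a, b)

def fuzzy_or(a, b):         # degree of membership in "A or B"
    return max(a, b)

def prob_and(a, b):         # P(A and B), assuming independence
    return a * b

def prob_or(a, b):          # P(A or B), assuming independence
    return a + b - a * b

a, b = 0.5, 0.5             # degrees of membership, or probabilities of two independent events
print(fuzzy_and(a, b), fuzzy_or(a, b))  # 0.5 0.5
print(prob_and(a, b), prob_or(a, b))    # 0.25 0.75
```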

This post has been very helpful for me, as I kept hearing about TAPs in rationalist circles without ever knowing what they meant. Even knowing what the acronym stood for didn't help at all (is that usually sufficient for people?).

This post, however, for all its faults (it gets to examples too quickly without first convincing me that I should care), serves as a good reference, if only for the fact that I never knew the concept already existed in mainstream science and was called "implementation intentions". I remember once searching for something of the so... (read more)

One thing I've never really seen mentioned in discussion of the planning fallacy is that there is something of a self-defeating prophecy at play.

Let's say I have a report to write, and I need to fit it in my schedule. Now, according to my plans, things should go fine if I take an hour to write it. Great! So, knowing this, I work hard at first, then become bored and dick around for a while, then realise that my self-imposed deadline is approaching, and -- whoosh, I miss it by 30 minutes.

Now, say I go back in time and redo the report, but now I assume it'll ... (read more)

2[anonymous]
Hey jollybard, I'm not 100% sure I understand the self-defeating prophecy point, but there have been a few studies that argue that your planned completion time actually affects reality like you say. Some psychologists also make a distinction between "time spent working on task" (which people seem to be good at sorta knowing) and "time when people are actually finished with a task" (which they often get wrong because they forget about unknown unknowns). I agree that counteracting poor planning also requires you to look at ways you failed: The techniques I cover, Murphyjitsu, RCF, and Back-planning all tackle slightly different things. Murphyjitsu helps you identify potential failure modes so you can patch them. RCF helps you rescale estimates, but can also identify past choke points. Back-planning, I will admit, is mainly for estimates.

Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.

For example, you mention "TAP"s and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!

I suppose the crux of my criticism isn't that there are techniques you haven't rel... (read more)

1kenzi
Here's a writeup on the Asana blog about Inner Simulator, based on a talk CFAR gave there a few years ago.

Decided to contribute a bit: here's a new article on TAPs! :)

7AnnaSalamon
TAPs = Trigger Action Planning; referred to in the scholarly literature as "Implementation intentions". The Inner Simulator unit is CFAR's way of referring to what you actually expect to see happen (as contrasted with, say, your verbally stated "beliefs".) Good point re: being careful about implied common knowledge.

This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?

4Elo
http://lesswrong.com/lw/nhi/geometric_bayesian_update/

That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the zero prior does have specific consequences, not that it's likely; but you're right that my example was a bit off, since obviously the person being interrogated should just lie about it.

I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever. (let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK)

However, let me s... (read more)
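The formal point underlying this (standard Bayes' rule, nothing new to the thread) is that a hypothesis assigned prior probability zero can never be revived by conditioning, which is why assigning it zero at all has lasting consequences:

```latex
% If P(H) = 0 and P(E) > 0, then
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; 0
```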

1Nebu
Okay, but what is the utility function Omega is trying to optimize? Let's say you walk up to Omega, tell it "was JFK murdered by Lee Harvey Oswald or not? And by the way, if you get this wrong, I am going to kill you/torture you/dust-spec you." Unless we've figured out how to build safe oracles, with very high probability, Omega is not a safe oracle. Via https://arbital.com/p/instrumental_convergence/, even though Omega may or may not care if it gets tortured/dust-speced, we can assume it doesn't want to get killed. So what is it going to do? Do you think it's going to tell you what it thinks is the true answer? Or do you think it's going to tell you the answer that will minimize the risk of it getting killed?

Oh, yes, good old potential UFAI #261: let the AI learn proper human values from the internet.

The point here being, it seems obvious to me that the vast majority of possible intelligent agents are unfriendly, and that it doesn't really matter what we might learn from specific error cases. In other words, we need to deliberately look into what makes an AI friendly, not what makes it unfriendly.

My point was that QM is probabilistic only at the smallest level, for example in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... up until the beginning of the universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.

[This comment is no longer endorsed by its author]
1qmotus
As turchin said, it's possible that the person in the plane accident exists in both a "real world" and a simulation, and will survive in the latter. Or they quantum tunnel to ground level before the plane crashes (as far as I know, this has an incredibly small but non-zero probability of occurring, although I'm not a physicist either). Or they're resurrected by somebody, perhaps trillions of years after the crash. And so forth.

There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?

And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense and in the quantum many-worlds sense.

[This comment is no longer endorsed by its author]
0turchin
In the Soviet Union, a woman survived a mid-air head-on collision between two planes - her seat rotated together with part of the wing and fell into a forest. But the main idea here is that the same "me" may exist in different worlds - in one I am in a plane, in the other I am in a plane simulator. I will survive in the second one.