Well, ask the question: should the bigger brain receive a million dollars, or do you not care?
I've always maintained that in order to solve this issue we must first answer the question: what does it even mean to say that a physical system is implementing a particular algorithm? Does it make sense to say that an algorithm is only approximately implemented? What if the algorithm is something very chaotic, such as prime-checking, where approximation is not possible?
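To make the prime-checking example concrete, here is a toy sketch (my own illustration, not from the original discussion): primality flips unpredictably between neighbouring inputs, so a physical system either gets each answer exactly right or it isn't implementing prime-checking in any useful sense.

```python
# Toy illustration: there is no natural notion of "approximately" computing
# primality, because the answer for n tells you nothing about n+1.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

print([n for n in range(88, 98) if is_prime(n)])  # [89, 97] -- everything in between is composite
```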
An algorithm should be a box that you can feed any input into, but in the real, causal world there is no such choice; any impression that you "could" input anything into your pocket...
Fantastic work!
How do we express the way that the world might be carved up into different agent-environment frames while still remaining "the same world"? The dual functor certainly works, but how about other ways to carve up the world? Suppose I notice a subagent of the environment: can I switch perspective to it?
Also, I am guessing that an "embedded" Cartesian frame might be one where the world is just the agent along with the environment. Or something. Then, since we can iterate the choice function, it could represent t...
There are two theorems. You're correct that the first theorem (that there is an unprovable truth) is generally proved by constructing a sort of liar's paradox, and then the second is proved by repeating the proof of the first internally.
However, I chose to take the reverse route, for a more epistemological flavour.
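For reference, the usual shape of both theorems, in standard notation (a sketch, for a consistent, sufficiently strong, recursively axiomatized theory $T$):

$$T \vdash G \leftrightarrow \neg\mathrm{Prov}_T(\ulcorner G \urcorner) \quad \text{(diagonal lemma: } G \text{ asserts its own unprovability)}$$

$$T \nvdash G \quad \text{(first theorem)}$$

$$T \vdash \mathrm{Con}(T) \to \neg\mathrm{Prov}_T(\ulcorner G \urcorner) \quad \text{(the proof of the first theorem, carried out inside } T\text{)}$$

$$T \nvdash \mathrm{Con}(T) \quad \text{(second theorem)}$$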
But we can totally prove it to be consistent, though, from the outside. Its sanity isn't necessarily suspect, only its own claim of sanity.
If someone tells you something, you don't take it at face value, you first verify that the thought process used to generate it was reliable.
You are correct. Maybe I should have made that clearer.
My interpretation of the impossibility is that the formal system is self-aware enough to recognize that no one would believe it anyway (it can make a model of itself, and recognizes that it wouldn't even believe it if it claimed to be consistent).
It's essentially my jumping off point, though I'm more interested in the human-specific parts than he is.
The relevance that I'm seeing is that of self-fulfilling prophecies.
My understanding of FEP/predictive processing is that you're looking at brains/agency as a sort of thermodynamic machine that reaches equilibrium when its predictions match its perceptions. The idea is that there are two ways to minimize prediction error: you can update your beliefs, or you can change the world to fit your beliefs. That means that there might not be much difference at all between belief, decision and action. If you want to do something, you just, by some ac...
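A toy sketch of those two routes (my own illustration with made-up variables, not Friston's actual formalism):

```python
# Toy illustration (not Friston's formalism): squared prediction error can be
# reduced along two routes -- perception (update the belief) or action
# (change the world to match the belief).

belief, world = 0.0, 1.0
lr = 0.1

for _ in range(50):
    belief -= lr * 2 * (belief - world)  # perception: move the belief toward the world
    world -= lr * 2 * (world - belief)   # action: move the world toward the belief

print(round((belief - world) ** 2, 6))  # ~0: belief and world meet in the middle
```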
Excellent post; it echoes much of my current thinking.
I just wanted to point out that this is very reminiscent of Karl Friston's free energy principle.
> The reward-based agent’s goal was to kill a monster inside the game, but the free-energy-driven agent only had to minimize surprise. [...] After a while it became clear that, even in the toy environment of the game, the reward-maximizing agent was “demonstrably less robust”; the free energy agent had learned its environment better.
Mammals and birds tend to grow, reach maturity, and stop growing. Conversely, many reptile and fish species keep growing throughout their lives. As you get bigger, you can not only defend yourself better (reducing your extrinsic mortality), but also lay more eggs.
So, clearly, we must have the same for humans. If we became progressively larger, women could carry twins and n-tuplets more easily. Plus, our brains would get larger, too, which could allow for a gradual increase in intelligence during our whole lifetimes.
Ha ha, just kidding: presumably intelligence is proportional to brain size/body size, which would remain constant, or might even decrease...
I'm not sure that probabilities should be understood as truth values. I cannot prove it, but my gut feeling is telling me that they are two different things altogether.
My feeling is that the arguments I give above are pretty decent reasons to think that they're not truth values! As I wrote: "The thesis of this post is that probabilities aren't (intuitionistic) truth values."
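One concrete way to see the mismatch (my example, not necessarily the post's argument): truth values compose truth-functionally, probabilities do not. If $P(A) = P(B) = \tfrac{1}{2}$, then $P(A \wedge B)$ can be anything from $0$ (take $B = \neg A$) to $\tfrac{1}{2}$ (take $B = A$), so the value assigned to $A \wedge B$ is not a function of the values assigned to $A$ and $B$.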
Indeed, and ∞-categories can provide semantics of homotopy type theory. But ∞-categories are ultimately based on sets. At some point though maybe we'll use HoTT to "provide semantics" to set theories, who knows.
In general, there's a close syntax-semantics relationship between category theory and type theory. I was expecting to touch on that in my next post, though!
EDIT: Just to be clear, type theory is a good alternate foundation, and type theory is the internal language of categories.
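The standard dictionary, sketched from memory (the usual statement is that simply typed lambda calculus is the internal language of cartesian closed categories): types correspond to objects, terms $x : A \vdash t : B$ to morphisms $A \to B$, product types $A \times B$ to categorical products, and function types $A \to B$ to exponentials $B^A$.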
Yes, I have! Girard is very... opinionated; he is fun to read for that reason. That is, Jean-Yves has some spicy takes:
> Quantum logic is indeed a sort of punishment inflicted on nature, guilty of not yielding to the prejudices of logicians… just like Xerxes had the Hellespont – which had destroyed a boat bridge – whipped.
I enjoyed his book "Proofs and Types" as an introduction to type theory and the Curry-Howard correspondence. I've looked through "The Blind Spot" a bit and it also seemed like a fun read. Of cou...
That all makes more sense now :)
In our case the towel rack was right in front of the toilet, so it didn't have to be an ambient thing haha
I just want to point out that you should probably change your towel at least every week (preferably every three uses), especially if you leave it in a high humidity environment like a shared bathroom.
I can't even imagine the smell... Actually, yes I can, because I've had the same scenario happen to me at another rationalist sharehouse.
So, um, maybe every two months is a little bit too long.
A few obvious alternatives:
1. Everyone leave their towels in their room.
2. Guests leave their towels in their rooms. The common towels are put into a hamper ev...
I've said it elsewhere, but wringing your hands and crying "it's because of my akrasia!" is definitely not rational behavior; if anything, rationalists should be better at dealing with akrasia. What good is a plan if you can't execute it? It is like a program without a compiler.
Your brain is part of the world. Failing to navigate around akrasia is epistemic failure.
While you're technically correct, I'd say it's still a little unfair (in the sense of connoting "haha you call yourself a rationalist how come you're failing at akrasia").
Two assumptions that can, I think you'll agree, take away from the force of "akrasia is epistemic failure":
Maybe I ought to give a slightly more practical description.
Your akrasia is part of the world and failing to navigate around it is epistemic failure.
I see what you mean, but
> if I know exactly what a tic tac toe or chess program would do,
if you were this logically omniscient, then supposing that the program did something else would imply that your system is inconsistent, which means everything is provable.
There needs to be boundedness somewhere, either in the number of deductions you can make, or in the certainty of your logical beliefs. This is what I mean by uncertainty being necessary for logical counterfactuals.
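To spell out the "everything is provable" step (standard ex falso quodlibet, my phrasing): if your beliefs entail both $\mathrm{prog}() = a$ and, under the counterfactual supposition, $\mathrm{prog}() = b$ with $a \neq b$, then you have derived a contradiction, and

$$P \wedge \neg P \vdash \bot, \qquad \bot \vdash Q \ \text{ for any } Q,$$

so every statement becomes provable and the counterfactual tells you nothing.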
Right, so that's not a decision-prediction fixed point; a correct LDT algorithm would, by its very definition, choose the optimal decision, so predicting its behavior would lead to the optimal decision.
I don't think that's right. If you know exactly what you are going to do, that leaves no room for counterfactuals, not if you're an LDT agent. Physically, there is no such thing as a counterfactual, especially not a logical one; so if your beliefs match the physical world perfectly, then the world looks deterministic, including your own behavior. I don't think counterfactual reasoning makes sense without uncertainty.
Perhaps, but that's not quite how I see it. I'm saying akrasia is failure to predict yourself, that is, when there's a disconnect between your predictions and your actions.
Could convolution work?
EDIT: confused why I am downvoted. Don't we want to encourage giving obvious (and obviously wrong) solutions to short form posts?
Metaphysical truth here describes self-fulfilling truths as described by Abram Demski, whose existence is guaranteed by e.g. Löb's theorem. In other words, metaphysical truths are truths, and rationalists should be aware of them.
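For reference, Löb's theorem in its standard form:

$$\text{if } T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \to P \text{, then } T \vdash P; \qquad \text{internally, } \Box(\Box P \to P) \to \Box P,$$

which is the formal engine behind such self-fulfilling truths.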
AIXI is relevant because it shows that world state is not the dominant view in AI research.
But world state is still well-defined even with ontological changes because there is no ontological change without a translation.
Perhaps I would say that "impact" isn't very important, then, except if you define it as a utility delta.
This is a misreading of traditional utility theory and of ontology.
When you change your ontology, concepts like "cat" or "vase" don't become meaningless, they just get translated.
Also, you know that AIXI's reward function is defined on its percepts and not on world states, right? It seems a bit tautological to say that its utility is local, then.
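For concreteness, roughly the shape of AIXI's value, sketched from memory of Hutter's formulation (simplified): the percept at step $t$ bundles an observation and a reward, $x_t = (o_t, r_t)$, and actions are chosen by expectimax over percept sequences weighted by the Solomonoff prior,

$$a_t = \arg\max_{a_t} \sum_{x_t} \cdots \max_{a_m} \sum_{x_m} \big(r_t + \cdots + r_m\big) \sum_{q \,:\, U(q, a_{1:m}) = x_{1:m}} 2^{-\ell(q)},$$

so the reward is read directly off the percept, not off any representation of world state.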
I like reading. I like reading prose, as if I were listening to someone talking.
I also read very fast and I'm very good at skimming prose.
That being said, I strongly dislike bullet points, in large part because they're not fun to read... But I also find them harder to skim. Indeed, they are usually much denser in terms of information, with much less redundancy, such that every word counts; in other words, no skimming allowed.
I don't understand why skimming natural text should be any more difficult.
> It's easier to skim, and build up ...
Just a quick, pedantic note.
But there seems to be something very different about each of the two situations. In the first, we would say that the "brush your teeth" abstraction is composed of the subtasks, but we wouldn't say that "animal" is composed of humans, dogs and cats in the second.
Actually, from an extensional point of view, that is exactly how you would define "animal": as the set of all things that are animals. So it is in fact composed of humans, dogs and cats -- but only partly, as there are lots of other thing...
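A toy sketch of that extensional reading (hypothetical names, my own illustration):

```python
# Toy illustration: intensional vs. extensional views of "animal".

universe = ["human", "dog", "cat", "vase", "chair"]

# Intensional: a membership test (a stand-in for the real criterion).
def is_animal(x):
    return x in ("human", "dog", "cat")

# Extensional: the concept as the set of all things satisfying the test.
animal = {x for x in universe if is_animal(x)}

print(animal)  # {'human', 'dog', 'cat'} -- partly "composed of" humans, dogs and cats
```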
This post has been very helpful for me, as I kept hearing about TAPs in rationalist circles without ever knowing what it meant. Even knowing what the acronym was didn't help at all (is it usually sufficient for people?).
This post, however, for all its faults (it gets too quickly at examples without first convincing me that I should care), serves as a good reference, if only for the fact that I never knew the concept already existed in mainstream science and was called "implementation intentions". I remember once searching for something of the so...
One thing I've never really seen mentioned in discussion of the planning fallacy is that there is something of a self-defeating prophecy at play.
Let's say I have a report to write, and I need to fit it in my schedule. Now, according to my plans, things should go fine if I take an hour to write it. Great! So, knowing this, I work hard at first, then become bored and dick around for a while, then realise that my self-imposed deadline is approaching, and -- whoosh, I miss it by 30 minutes.
Now, say I go back in time and redo the report, but now I assume it'll ...
Personally, I am still eagerly waiting for CFAR to release more of their methods and techniques. A lot of them seem to be already part of the rationalist diaspora's vocabulary -- however, I've been unable to find descriptions of them.
For example, you mention "TAP"s and the "Inner Simulator" at the beginning of this article, yet I haven't had any success googling those terms, and you offer no explanation of them. I would be very interested in what they are!
I suppose the crux of my criticism isn't that there are techniques you haven't rel...
This looks great and I can see that it should work, but I can't seem to find a formal proof. Can you explain a bit?
That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the 0 prior does have specific consequences (not that it's likely), but you're right that my example was a bit off, since obviously the person getting interrogated should just lie about it.
I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever. (let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK)
However, let me s...
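For the mechanics of why a zero prior is so rigid, in standard Bayesian terms:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H) \cdot 0}{P(E)} = 0 \quad \text{whenever } P(E) > 0,$$

so no finite amount of evidence can ever move it off zero.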
Oh, yes, good old potential UFAI #261: let the AI learn proper human values from the internet.
The point here being, it seems obvious to me that the vast majority of possible intelligent agents are unfriendly, and that it doesn't really matter what we might learn from specific error cases. In other words, we need to deliberately look into what makes an AI friendly, not what makes it unfriendly.
My point was that QM is probabilistic only at the smallest level, for example in the Schrödinger's cat thought experiment. I don't think surviving a plane crash is ontologically probabilistic, unless of course the crash depends on some sort of radioactive decay or something! You can't make it so that you survive the plane crash without completely changing the prior causal networks... up until the beginning of the universe. Maybe there could be a way to very slightly change one of the universal constants so that nothing changes except that you survive, but I seriously doubt it.
There might also be situations where surviving is not just ridiculously unlikely, but simply mathematically impossible. That is, I assume that not everything is possible through quantum effects? I'm not a physicist. I mean, what quantum effects would it take to have your body live forever? Are they really possible?
And I have serious doubts that surviving a plane crash or not could be due to quantum effects, but I suppose it could simply be incredibly unlikely. I fear that people might be confusing "possible worlds" in the subjective Bayesian sense and in the quantum many-worlds sense.
Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don't know how to formalize it.
It is for example clear that I can relate to a penguin, even though I am not a penguin. Meaning that the penguin and I probably share some similar subsystems, and therefore if I care about the anthropic measure of my subsystems then I should care about penguins, too.