
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Philosophical theory with an empirical prediction

-1 mgin 28 October 2016 04:14PM

I have a philosophical theory which implies some things empirically about quantum physics, and I was wondering if anyone knowledgeable on the subject could give me some insight.

It goes something like this:

Anathema to reductionists though it may be: quarks (and by "quarks" I just mean whatever the fundamental particles of the universe are) are not governed by simple rules à la Conway's Game of Life; rather, all of metaphysics goes into their behavior.

The reductionist basically reduces metaphysics to the simple rules that govern quarks. Fundamentally there is no other identity or causality; everything else is just emergent from that. Anything we want to call "real" in ordinary experience has no metaphysical identity or causal efficacy of its own; it's an illusion produced by vast numbers of atoms bouncing around. If the universe is akin to Conway's Game of Life, then I don't think the things we see around us are actually what we think they are. They have no real identity on a metaphysical level; they are just patterns of particles in motion, governed by mathematically simple rules.
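To make concrete what "mathematically simple rules" means here, a minimal sketch of the Conway's Game of Life update rule the post keeps referring to (the helper name and the set-of-coordinates representation are illustrative choices, not from the post):

```python
def life_step(live_cells):
    """One generation of Conway's Game of Life.

    live_cells: set of (x, y) coordinates of live cells.
    Returns the set of live cells in the next generation.
    """
    from collections import Counter

    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )

    # A cell is alive next step if it has exactly 3 live neighbors,
    # or has exactly 2 and is currently alive.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

The point of the example: the entire "physics" is that one local counting rule, and anything that looks like a persistent object (a blinker, a glider) is just a pattern the rule happens to produce.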

But suppose there actually is metaphysical identity and causal power in the things around us. The place I can see for that is in the unknown rules governing quarks: those rules are not mathematically simple, but are literally where all of metaphysics is contained. Quarks entangle together according to high-level concepts corresponding to the things we see around us, including a person's identity, and have not the mathematically simple causal powers of Conway's Game of Life, but the causal powers of the identity of the high-level agent.

The empirical question is this: do we observe the fundamental particles of the universe behaving according to mathematically simple rules, or do they seem to behave in complex, unpredictable ways depending on how they are entangled and what they are interacting with?

 

Adding an example to clarify:

The behavior of the quarks corresponds to the identity of the things we see around us. The things we see around us are constituted by quarks - but the question is: are these quarks behaving mindlessly, like billiard balls, or is their behavior the result of complex rules corresponding to the identity of the thing they form?

In other words, suppose we're talking about a living ant: are the quarks that constitute the ant behaving according to simple mathematical rules, like billiard balls, with the whole concept of there being an "ant" just an illusion produced by these particles bouncing around - or are the quarks constituting the ant actually behaving "ant-like"?

Is the causal behavior of the ant determined by the billiard-ball interactions of quarks bouncing around, or does the causal behavior actually originate in the identity of the ant, with the quark interactions being decided according to its nature?

What I'm saying is that there metaphysically is such a thing as an ant: when quarks "get together as an ant", they behave differently - they behave ant-like. Given how much is unknown about exactly why quarks behave the way they do, why is this ruled out: that when they "get together as an ant", they behave ant-like?

Basically the idea is, when it comes to the interactions of the quarks constituting the ant with the quarks constituting the things the ant interacts with, the behavior of those interactions is determined not by simple, universal rules of quark behavior, but by the rules of quark behavior that are in effect "when the quarks are an ant".

To further clarify this example:

This is framed in general terms, because I don't actually know any quantum physics, but I'm talking about the fundamental physical particles ("quarks", for lack of a better term) and their behavior at the quantum level - behavior we don't fully understand. So one could say, in general terms, that sometimes the quarks "swerve left" and other times they "swerve right", and we don't know exactly why they do that in any given case.

So the question is: suppose the behavior of quarks in general is not determined by simple, universal laws of quark behavior (e.g. "always swerve left 50% of the time"), but rather there are metaphysically real and physically meaningful "quark groups". If a bunch of quarks are entangled together in a group constituting what we'd observe to be an ant, then the quarks in that group behave differently. For example, the quarks in that "ant quark group" might always swerve left when they interact with a quark group of a different kind.
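The two hypotheses can be contrasted with a toy simulation (entirely hypothetical, toy rules rather than real physics; the `swerve` helper, the group labels, and the probabilities are all illustrative): under the universal rule a particle swerves left 50% of the time, while under the group-dependent rule a particle tagged as belonging to an "ant" group always swerves left.

```python
import random

def swerve(particle, group_rules, rng):
    """Decide one particle's swerve under group-dependent rules.

    particle: dict with a 'group' label (e.g. 'ant' or None).
    group_rules: maps a group label to its probability of swerving left.
    Ungrouped particles fall back to the universal 50/50 rule.
    """
    p_left = group_rules.get(particle["group"], 0.5)
    return "left" if rng.random() < p_left else "right"

rng = random.Random(0)
rules = {"ant": 1.0}  # quarks "in an ant" always swerve left

ant_quark = {"group": "ant"}
free_quark = {"group": None}

ant_swerves = [swerve(ant_quark, rules, rng) for _ in range(1000)]
free_swerves = [swerve(free_quark, rules, rng) for _ in range(1000)]

print(ant_swerves.count("left"))   # 1000: the group rule dominates
print(free_swerves.count("left"))  # roughly 500: the universal rule
```

The empirical question in the post then amounts to asking which of these two statistics real particle behavior resembles: identical distributions regardless of grouping, or distributions that depend on the group.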

Trying to find a short story

-1 mgin 25 October 2016 02:27AM

It's a story about a boy who is into science and transhumanism, and a girl he told about all the crazy things that were going to happen. He dies, and everything he said starts to happen. She ends up floating around Saturn, remembering him.

Either he or she was in a wheelchair. He was dying, and he was disappointed about it because of all the cool stuff that was going to happen that she would be around for - some of which had to do with fixing whatever problem she had.

Please help me find this story if you can.

[Link] Reasonable Requirements of any Moral Theory

-1 TheSurvivalMachine 10 October 2016 08:48PM

[Link] Viruses and DRACOs in the Valley of Death in medical research.

-1 morganism 08 October 2016 08:36PM

Risk Contracts: A Crackpot Idea to Save the World

-2 SquirrelInHell 30 September 2016 02:36PM

Time start: 18:17:30

I

This idea is probably going to sound pretty crazy. As far as seemingly crazy ideas go, it's high up there. But I think it is interesting enough to at least amuse you for a moment, and upon consideration your impression might change. (Maybe.) And as a benefit, it offers some insight into AI problems if you are into that.

(This insight into AI may or may not be new. I am not an expert on AI theory, so I wouldn't know. It's elementary, so probably not new.)

So here it is, in short form, which I will expand on in a moment:

To manage global risks to humanity, they can be captured in "risk contracts", freely tradeable on the market. Risk contracts would serve the same role as CO2 emissions permits, which can likewise be traded, and which ensure that the global cap is not exceeded as long as everyone plays by the rules.

So e.g. if I want to run a dangerous experiment that might destroy the world, it's totally OK as long as I can purchase enough of a risk budget. Pretty crazy, isn't it?

As an added bonus, a risk contract can take into account the risk of someone else breaking its terms. When you transfer your rights to global risk, the contract obliges you to diminish the amount you transfer by the uncertainty about whether the other party can fulfill all the obligations that come with such a contract. And if you don't have enough risk budget for this, you cannot transfer to that person.

II

Let's go a little bit more into detail about a risk contract. Note that this is supposed to illustrate the idea, not be a final say on the shape and terms of such a contract.

Just to give you some idea, here are some example rules (with lots of room to specify them more clearly etc., it's really just so that you have a clearer idea of what I mean by a "risk contract"):

  1. My initial risk budget is 5 * 10^-12 chance of destroying the world. I am going to track this budget and do everything in my power to make sure that it never goes below 0.
  2. For every action (or set of correlated actions) I take, I will subtract the probability that those actions destroy the world from my budget (using simple subtraction unless correlation between actions is very high).
  3. If I transfer my budget to an agent who is going to decide about its actions independently from me, I will first pay the cost from my budget for the probability that this agent might not keep the terms of the contract. I will use my best conservative estimates, and refuse the transaction if I cannot keep the risk within my budget.
  4. Any event in which a risk contract on world destruction is breached will count against my budget as if it were equivalent to actually destroying the world.
  5. Whenever I create a new intelligent agent, I will transfer some risk budget to that agent, according to the rules above.
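Rules 1-3 above can be sketched as a simple budget ledger (a hypothetical illustration of the contract's accounting, not a proposed implementation; the class name, method names, and numbers are all made up):

```python
class RiskBudget:
    """Tracks an agent's remaining world-destruction risk budget."""

    def __init__(self, budget):
        # Rule 1: start with an initial budget, e.g. 5e-12.
        self.budget = budget

    def spend(self, action_risk):
        """Rule 2: subtract an action's risk; refuse if it overdraws."""
        if action_risk > self.budget:
            raise ValueError("action exceeds remaining risk budget")
        self.budget -= action_risk

    def transfer(self, amount, p_noncompliance):
        """Rule 3: transferring `amount` also costs the estimated
        probability that the recipient breaks the contract."""
        cost = amount + p_noncompliance
        if cost > self.budget:
            raise ValueError("cannot cover transfer plus counterparty risk")
        self.budget -= cost
        return RiskBudget(amount)

parent = RiskBudget(5e-12)
parent.spend(1e-12)                    # run a risky experiment
child = parent.transfer(2e-12, 5e-13)  # spawn an agent; also pay for its risk of defection
print(parent.budget, child.budget)     # roughly 1.5e-12 and 2e-12
```

Note how rule 3 shows up in the arithmetic: the parent is left with less than (initial - spent - transferred), because the counterparty risk is deducted from the parent's own budget.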

III

Of course, the applications of this could be wider than just an AI which might recursively self-improve - some more "normal" human applications could be risk management in a company or government, or even using risk contracts as an internal currency to make better decisions.

I admit though, that the AI case is pretty special - it gives an opportunity to actually control the ability of another agent to keep a risk contract that we are giving to them.

It is an interesting calculation to see roughly what the costs of keeping a risk contract are in the recursive AI case, with a lot of simplifying assumptions. Assume the risk of a child AI going off the rails can be reduced by a constant factor (e.g. cut in half) for each additional unit of safety work. Also assume the chain of child AIs might continue indefinitely, and no later AI will assume it ends. Then if the chain has no branches, we are basically reduced to a power series: the risk budget of a child AI is always the same fraction of its parent's budget. That means we need a linearly increasing amount of safety work at each step, which in turn means the total amount of safety work is quadratic in the number of steps (child AIs).
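That arithmetic can be checked directly. Suppose each unit of work halves the baseline failure risk p0, and the k-th child's budget is the initial budget times r^k; then the work needed at step k is log2(p0 / (B·r^k)), which is linear in k, so the cumulative work is quadratic. A quick sketch (all numbers and the function name are illustrative):

```python
import math

def work_per_step(p0, initial_budget, r, steps):
    """Units of safety work needed at each step of the AI chain,
    assuming one unit of work halves the baseline failure risk p0
    and the k-th child's budget is initial_budget * r**k."""
    work = []
    for k in range(steps):
        budget_k = initial_budget * r**k
        # halvings needed: p0 / 2**w <= budget_k  =>  w >= log2(p0 / budget_k)
        work.append(max(0.0, math.log2(p0 / budget_k)))
    return work

w = work_per_step(p0=1e-3, initial_budget=1e-6, r=0.5, steps=5)
diffs = [b - a for a, b in zip(w, w[1:])]
print(w)      # per-step work grows linearly...
print(diffs)  # ...by a constant extra unit per step (since r = 0.5)
```

Since the per-step work is an arithmetic progression, summing it over n steps gives a total on the order of n^2, matching the quadratic claim above.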

Time end: 18:52:01

Writing stats: 21 wpm, 115 cpm (previous: 30/167, 33/183, 23/128)
