Comment author: Tem42 05 December 2015 08:46:07PM 1 point [-]

True, 'terminates' is probably the wrong word. There's no reason why the simulation would be wiped. It just couldn't continue.

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).

Comment author: Kyre 05 December 2015 11:49:45PM 0 points [-]

I was thinking more like a random power surge, programming error, or political coup within our simulation that happened to shut down the aspect of our program that was hogging resources. If the programmers want the program to continue, it can.

You're right - branch (2) should be "we don't keep running more than one". We can launch as many as we like.

The single actor is not going to experience every aspect of the simulation in full fidelity, so a low-res simulation is all that is needed. (The actor might think that it is a full simulation, and may have correctly programmed a full simulation, but there is simply no reason for it to actually replicate either the whole universe or the whole actor, as long as it gives output that looks valid).

That would buy you some time. If a single-agent simulation is say 10^60 times cheaper than a whole universe (roughly the number of elementary particles in the observable universe?), then that gives you about 200 doubling generations before those single-agent simulations cost as much as a universe.

Unless the space of all practically different possible lives of the agent is actually much smaller ... maybe your choices don't matter that much and you end up playing out a relatively small number of attractor scripts. You might be able to map out that space efficiently with some clever dynamic programming.

Comment author: Algernoq 05 December 2015 10:04:44AM 2 points [-]

The computer could just halve our clock speed every time we launch a new simulation. No matter how many simulations we launch, our clock speed never reaches zero, so everything continues as normal inside our simulation. Problem solved! Suggested reading: "Hotel Infinity" followed by "Permutation City".

If you wanted to launch a higher order of infinity of simulations from inside our simulation, that would be another story...
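The scheme above works because host load is a geometric series. A one-line check (my own sketch, assuming each nested level runs at half the clock speed of its parent and costs host compute in proportion):

```python
# Nested simulations at clock speeds 1, 1/2, 1/4, ...: total host
# load is a geometric series that stays below 2 units no matter how
# deeply the simulations are nested.
load = sum(0.5 ** n for n in range(60))
print(load)  # approaches 2.0
```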

Comment author: Kyre 05 December 2015 11:13:45PM 0 points [-]

That's the unbounded computation case.

Comment author: Tem42 04 December 2015 07:38:11PM 1 point [-]

It seems like there is a lot of room between "one simulation" and "unbounded computational resources". Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean... that is an extremely primitive response, and one that suggests that our simulation was pretty close to worthless (at least at the end of its run). It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given.

Comment author: Kyre 05 December 2015 06:33:22AM 1 point [-]

It seems like there is a lot of room between "one simulation" and "unbounded computational resources"

Well the point is that if we are running on bounded resources, then the time until it runs out depends very sensitively on how many simulations we (and simulations like us) launch on average. Say that our simulation has a million years allocated to it, and we launch simulations starting a year back from the time when we launch a simulation.

If we don't launch any, we get a million years.

If we launch one, but that one doesn't launch any, we get half a million.

If we launch one, and that one launches one etc, then we get on the order of a thousand years.

If we launch two, and that one launches two etc, then we get on the order of 20 years.
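The arithmetic behind these cases can be checked with a toy model (my own sketch, with assumed mechanics: every running world costs one world-year of the base universe's compute per year, and each world launches its children one year after it starts):

```python
def years_until_exhaustion(branching, budget=10**6):
    """Years until cumulative compute (in world-years) exceeds the
    budget, when every world launches `branching` child worlds one
    year after it starts."""
    new_starts = 1   # worlds starting this year (year 0: just us)
    active = 0       # worlds currently running
    used = 0         # cumulative world-years consumed so far
    for year in range(budget + 1):
        active += new_starts
        used += active
        if used > budget:
            return year
        new_starts *= branching  # next year's newly launched worlds
    return budget

for b in (0, 1, 2):
    print(b, years_until_exhaustion(b))
```

With a million-world-year budget this gives the full million years for no launches, on the order of a thousand years (active worlds grow linearly, so compute used grows quadratically) for a chain of single launches, and on the order of 20 years (exponential growth) for a binary tree of launches, matching the figures above.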

Also, it is a bit odd to think that when computational resources start running low the correct thing to do is wipe everything clean.

True, 'terminates' is probably the wrong word. There's no reason why the simulation would be wiped. It just couldn't continue.

It also assumes a full-world simulation, and not just a preferred-actors simulation, which is a possibility, and maybe a probability, but not a given.

I'm not sure. I think the trilemma applies to a simulation of a single actor, if that actor decides to launch simulations of their own life.

Comment author: Kyre 04 December 2015 04:55:42AM *  1 point [-]

Here is a second Simulation Trilemma.

If we are living in a simulation, at least one of the following is true:

1) we are running on a computer with unbounded computational resources, or

2) we will not launch more than one simulation similar to our world, or

3) the simulation we are in will terminate shortly after we launch our own simulations.

Here 'shortly' means a period on the order of the time between the era our own simulation starts at and the point when it reaches our current stage.

Comment author: Clarity 09 November 2015 08:10:15AM *  -2 points [-]

I heard strawberry jam can be made with just strawberries, water and sugar on a frying pan on the radio. Sounds simple. Sounds simple to exclude the sugar, too. I don't see any minimalist jam like that on the supermarket shelves though. Does it taste poor or have I found a nice little market (albeit, with incredibly low barriers to entry)? And, how could I format my last sentence so I get out of this terrible habit of ending sentences with brackets!

Comment author: Kyre 10 November 2015 04:44:58AM 7 points [-]

I heard strawberry jam can be made with just strawberries, water and sugar on a frying pan on the radio.

I'd use a stove.

Comment author: Kyre 03 November 2015 05:12:01AM 6 points [-]

A short hook headline like “avoiding existential risk is key to afterlife” can get a conversation going. I can imagine Salon, etc. taking another swipe at it, and in doing so, creating publicity which would help in finding more similarly minded folks to get involved in the work of MIRI, FHI, CEA etc. There are also some really interesting ideas about acausal trade ...

Assuming you get good feedback and think that you have an interesting, solid argument ... please think carefully about whether such publicity helps the existential risk movement more than it harms. On the plus side, you might get people thinking about existential risk that otherwise would not have. On the minus side, most people aren't going to understand what you write, and some of the ones that half-understand it are going to loudly proclaim it as more evidence that MIRI etc are full of insane apocalyptic cultists.

Comment author: CarlShulman 17 September 2015 03:02:48AM 2 points [-]

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?

Comment author: Kyre 17 September 2015 05:42:58AM 7 points [-]

It's a trade-off. The example is simple enough that the alignment problem is really easy to see, but it also means that it is easy to shrug it off and say "duh, just use the obvious correct utility function for B".

Perhaps you could follow it up with an example with more complex mechanics (and or more complex goal for A) where the bad strategy for B is not so obvious. You then invite the reader to contemplate the difficulty of the alignment problem as the complexity approaches that of the real world.
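A hypothetical block-world sketch of the trade-off being discussed (the reward functions and numbers here are my own illustration, not from the original post):

```python
def reward_per_block(blocks):
    # 1 point per block: B is incentivised to pile up blocks forever
    return blocks

def reward_one_block(blocks):
    # 1 point for the first block, minus a penalty for each extra one
    return (1 if blocks >= 1 else 0) - max(0, blocks - 1)

# What a reward-maximising B does under each function (0..10 blocks):
print(max(range(11), key=reward_per_block))   # grabs every block: 10
print(max(range(11), key=reward_one_block))   # stops at one block: 1
```

In a world this simple the "correct" second function is obvious; the difficulty the post is pointing at is that in richer worlds it stops being obvious.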

In response to Crazy Ideas Thread
Comment author: Viliam 08 July 2015 10:36:19AM *  13 points [-]

Use computers to discover the Theory of Everything.

(I am not a physicist, so what I say here is probably wrong or confused, but I am saying it anyway, so at least someone could explain to me where exactly I am wrong. Or maybe someone can improve the idea to make it workable.)

As far as I know, (1) we assume that the laws of the universe are simple, (2) we already have equations for relativity, and (3) we already have equations for quantum physics. However, we don't yet have equations for relativistic quantum physics. We also have (4) data about chemical properties of atoms, that is, about electron orbitals. I assume that for large enough atoms, relativistic effects influence the chemical properties of the atoms.

The plan is the following: Let the computer explore different sets of equations that are supposed to represent laws of physics. That is, take a set of equations, calculate what would be the chemical properties of atoms according to these equations, and compare with known data. Output those sets of equations that seem to fit. Create a smart generator for sets of equations, that would generate random simple equations, or iterate through the equation space starting with the simplest ones. Then apply a lot of computing power and see what happens.
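A minimal sketch of that search loop (my own illustration: the "equation space" here is just the hypothetical family E(n) = a / n**p, and the "known data" is the hydrogen energy-level ladder, about -13.6 eV / n^2; the real proposal would need vastly richer candidate laws and data):

```python
# Known data: hydrogen bound-state energies in eV, one per level n.
observed = {n: -13.605693 / n ** 2 for n in range(1, 6)}

def fit_error(a, p):
    # Squared error between a candidate law E(n) = a / n**p and the data.
    return sum((a / n ** p - e) ** 2 for n, e in observed.items())

# Iterate through a small "equation space" and keep the best fit.
candidates = [(x / 10, p) for x in range(-200, 0) for p in (1, 2, 3)]
best = min(candidates, key=lambda c: fit_error(*c))
print(best)  # the search recovers roughly a = -13.6, p = 2
```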

(Inspired by: Einstein's Speed, That Alien Message.)

In response to comment by Viliam on Crazy Ideas Thread
Comment author: Kyre 09 July 2015 06:23:18AM *  4 points [-]

Nitpick: we have equations for (special) relativistic quantum physics. Dirac was one of the pioneers, and the Standard Model for instance is a relativistic quantum field theory. I presume you mean that combining general relativity (gravity) with quantum mechanics is the problem.

(Douglas_Knight) Moreover, the predictions that QFT makes about chemistry are too hard. I don't think it is possible with current computers to compute the spectrum of helium, let alone lithium. A quantum computer could do this, though.

In the spirit of what Viliam suggested, maybe you could do computational searches for tractable approximations to QFT for chemistry i.e. automatically find things like density functional theory. A problem there might be that you do not get any insight from the result, and you might end up overfitting.

Comment author: ahbwramc 31 May 2015 04:06:28AM 10 points [-]

What contingencies should I be planning for in day to day life? HPMOR was big on the whole "be prepared" theme, and while I encounter very few dark wizards and ominous prophecies in my life, it still seems like a good lesson to take to heart. I'd bet there's some low-hanging fruit that I'm missing out on in terms of preparedness. Any suggestions? They don't have to be big things - people always seem to jump to emergencies when talking about being prepared, which I think is both good and bad. Obviously certain emergencies are common enough that the average person is likely to face one at some point in their life, and being prepared for it can have a very high payoff in that case. But there's also a failure mode that people fall into of focusing only on preparing for sexy-but-extremely-low-probability events (I recall a reddit thread that discussed how to survive in case an airplane that you're on breaks up, which...struck me as not the best use of one's planning time). So I'd be just as interested in mundane, everyday tips.

(Note: my motivation for this is almost exclusively "I want to look like a genius in front of my friends when some contingency I planned for comes to pass", which is maybe not the best motivation for doing this kind of thing. But when I find myself with a dumb-sounding motive for doing something I rationally endorse anyway, I try to take advantage of the motive, dumb-sounding or not.)

Comment author: Kyre 02 June 2015 04:54:05AM 1 point [-]

Things that are unsexy but I can actually verify as having been useful more than once:

In wallet, folded up tissue. For sudden attack of sniffles (especially on public transport), small cuts, emergency toilet paper.

In bag I carry every day: small pack of tissues, multitool, tiny torch, ibuprofen, pad and pencil, USB charging cable for phone, plastic spork, wet wipe thing from KFC (why do they always shovel multiples of those things in with my order?).

Comment author: Lumifer 21 May 2015 01:14:05AM 5 points [-]

I assume you're familiar with the Hofstadter's law as it seems to describe your situation.

If you updated your expectations and they turned out to be wrong again then your update was incorrect. If you have a pattern of incorrect updates, you should go meta and figure out why this pattern exists.

All in all, if you still believe the cost/benefit ratio is favorable, you should continue. Or is the problem that you don't believe your estimates any more?

Comment author: Kyre 21 May 2015 05:55:04AM 3 points [-]

Very rough toy example.

Say I've started a project in which I can definitely see 5 days' worth of work. I estimate there'll be some unexpected work in there somewhere, maybe another day, so I estimate 6 days.

I complete day one but have found another day's work. When should I estimate completion now? Taking the outside view, finishing in 6 days (on day 7) is too optimistic.

Implicit in my original estimate was a "rate of finding new work" of about 0.2 days per day. But, now I have more data on that, so I should update the 0.2 figure. Let's see, 0.2 is my prior, I should build a model for "rate of finding new work" and figure out what the correct Bayesian update is ... screw it, let's assume I won't find any more work today and estimate the rate by Laplace's rule of succession. My updated rate of finding new work is 0.5. Hmmm that's pretty high, the new work I find is itself going to generate new work, better sum the geometric series ... 5 known days work plus 5 more unknown, so I should finish in 10 days (ie day 11).

I complete day 2 and find another day's work ! Crank the handle around, should finish in 15 days (ie day 17).

... etc ...

If this state of affairs continues, my expected total amount of work grows really fast, and it won't be very long before it becomes clear that it is not profitable.
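The updating step above can be sketched as follows. The smoothed rate `new_work_found / (days_elapsed + 1)` is my guess at the rule-of-succession-style estimate being used; it reproduces the numbers in the example (0.5 after day 1, 2/3 after day 2), though the initial 6-day estimate came from the additive 0.2 prior instead:

```python
def projected_total(known_days, new_work_found, days_elapsed):
    """Projected total duration: the known work inflated by the
    geometric series of further work we expect to keep discovering
    at the estimated rate."""
    rate = new_work_found / (days_elapsed + 1)  # smoothed estimate
    return known_days / (1 - rate)              # sum the geometric series

print(projected_total(5, 1, 1))  # 10.0 -> finish on day 11
print(projected_total(5, 2, 2))  # ~15  -> finish on day 17
```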

Contrast this with: I can see 5 days of work, but experience tells me that the total work is about 15 days. The first couple of days I turn up additional work, but I don't start to get worried until around day 3.
