lorepieri

Knowledge Seeker https://lorenzopieri.com/


Hi Clement, I do not have much to add to the previous critiques. I also think that what needs to be simulated is just a consistent-enough simulation, so the concept of CI doesn't seem to rule it out.

You may be interested in a related approach that rules out the simulation argument on computational grounds: simple simulations should be more likely than complex ones, but we are pretty complex. See "The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization" (https://philarchive.org/rec/PIETSA-6).

Cheers!

Answer by lorepieri

Yes voter, if you can read this: why? It would be great to get an explanation (anon).

Damn, we did not even last 24 hours...

Thanks for the alternative poll.  One would think that with rules 2 and 5 out of the way it should be harder to say Yes. 

How confident are you that someone is going to press it? If it's pressed, what's the frequency of someone pressing it? What can we learn from it? Do any of rules 2-5 play a crucial role in the decision to press it?

(we are still alive so far!)

This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me the approach that minimises risk, by avoiding large capability jumps and improving the "immune system" of society.

Thanks for the insightful comment. Ultimately the difference in attitude comes down to the perceived existential risk posed by the technology, and the risks of acting to accelerate AI vs. not acting.

And yes I was expecting not to find much agreement here, but that's what makes it interesting :) 

A somewhat similar statistical argument can be made: the abundance of optional complexity (things could have been similar but simpler) is evidence against the simulation hypothesis.

See https://philpapers.org/rec/PIETSA-6  (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)

This is based on the general principle that computational resources are finite for any civilisation (assuming infinities are not physical), and are therefore minimised when possible by the simulators. In particular, one can use the Simplicity Assumption: if we randomly select a simulation of a civilization from the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated with the computational complexity of the simulation.
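To make the Simplicity Assumption concrete, here is a minimal toy sketch. The specific weighting rule (probability inversely proportional to computational complexity) and the numeric complexity values are illustrative assumptions, not the paper's exact formalism:

```python
def simplicity_prior(complexities):
    """Assign each simulation a probability inversely proportional to its
    computational complexity, then normalize so the weights sum to 1."""
    weights = [1.0 / c for c in complexities]
    total = sum(weights)
    return [w / total for w in weights]

# Three hypothetical simulations: cheap, medium, and expensive to run
# (arbitrary cost units). Under this prior the cheapest one dominates.
probs = simplicity_prior([1, 10, 100])
print(probs)
```

Under any such inverse-cost weighting, most of the probability mass concentrates on the computationally cheapest simulations, which is what drives the "we are pretty complex" tension above.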

It is hard to argue that a similar general principle can be found for something being "mundane", since the definition of mundane seems to depend on the simulators' point of view. Can you perhaps modify this reasoning to make it more general?

Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.

Even if we assume this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) "success". The environment is always changing (tech, knowledge base, tools), so many lessons will not transfer. Moreover, biographies tend to construct a narrative after the fact, emphasizing the message the writer wants to convey.

I prefer the strategy of mastering the basics from previous works and then figuring out for yourself how to innovate and improve the state of the art.

Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that base reality has no intelligent simulators, as it fits our expectation that a randomly generated simulator is very likely to be concise. But for human-generated (or any agent-generated) simulations, a more natural prior is how cheap the simulation is to run (the Simplicity Assumption), since agent-simulators face concrete tradeoffs in using computational resources, while they face no pressing tradeoffs on the length of the program.
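The contrast between the two priors can be sketched numerically. Here the Universal-Distribution-style weighting (2^-length over program length in bits) is standard; the inverse-runtime-cost weighting and all numbers are illustrative assumptions:

```python
def universal_prior(lengths):
    """Weight each simulation as 2^-L, with L its program length in bits
    (a Universal-Distribution-style prior), then normalize."""
    weights = [2.0 ** -l for l in lengths]
    total = sum(weights)
    return [w / total for w in weights]

def runtime_prior(costs):
    """Weight each simulation as 1/cost, with cost its computational
    resources to run (Simplicity-Assumption-style prior), then normalize."""
    weights = [1.0 / c for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# A short program can still be expensive to run (e.g. brute-force physics),
# so the two priors can rank the same pair of simulations oppositely.
lengths = [100, 500]   # bits of code (hypothetical)
costs = [1e9, 1e3]     # runtime cost units (hypothetical)
print(universal_prior(lengths))  # favors the shorter program
print(runtime_prior(costs))      # favors the cheaper-to-run simulation
```

The point of the sketch is just that program length and runtime cost are independent axes, so which simulations dominate depends on which tradeoff the simulators actually face.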

See here for more info on the latter assumption.
