Like, the idea that an entity simulating our universe wouldn't be able to do that, because they'd run out of energy doesn't pass even the basic sniff test.
I'm convinced you are not actually reading what I'm writing. I said that if the universe ours is simulated in is supposed to be like our own, i.e., we are an ancestral simulation, then the universe simulating ours should be like ours, and we can apply our laws of physics to it; and our laws of physics say there's entropy, a limit to the amount of order.
I also believe that if we're a simulatio...
First, I'm not "resisting a conversion". I'm disagreeing with your position that a hidden variable is even more likely to be a mind than something else.
you are the one basically adding souls
I absolutely am not adding souls. This makes me think you didn't really read my argument. I'll present this a different way: human brains are incredibly complex. So complex, in fact, we still don't fully understand them. With a background in computer science, I know that you can't simulate something accurately without at least having a very accurate model....
You can call it 'something missing', or 'god'.
I disagree. Something missing is different from a god. A god is often not well defined, but it is generally assumed to be some kind of intelligence, that is, something that can know and manipulate information, with infinite agency or near to it. Something missing could be a simple physical process. One is infinitely complex (a god); the other is feasibly simple enough for a human to fully understand.
...The koopas are both pointing to the weirdness of their world, and the atheists are talking about randomness and the t
Sufficiently improbable stuff is evidence that there's a hidden variable you aren't seeing.
Sure, but you aren't showing what that hidden variable is. You're just concluding what you think it should be. So evidence that there's something missing isn't an opportunity to inject god, it's a new point to investigate. That, and sufficiently improbable stuff becomes probable when enough of it happens. Take a real example, like someone getting pregnant. While the probability of any given sperm reaching the egg and fertilizing it is low, the sheer number of sper...
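The "improbable becomes probable at scale" point can be sketched numerically. A minimal sketch, with invented numbers (not biological measurements):

```python
# Probability that at least one of n independent low-probability
# events occurs: P = 1 - (1 - p)**n.
# p and n below are illustrative stand-ins, not real measurements.
p = 1e-8          # chance any single sperm fertilizes the egg
n = 200_000_000   # rough count of sperm per attempt

p_at_least_one = 1 - (1 - p) ** n
# Even with p tiny, enough trials push the overall probability
# toward certainty (here roughly 1 - e^-2).
```

Each event is nearly impossible on its own, yet the aggregate outcome is likely; improbability alone doesn't point at a hidden mind.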
I find the 'where are all the aliens/simulation?' argument to be pretty persuasive in terms of atheism being a bust
Why does this imply atheism is a bust? The only thing I can think of that would make atheism "a bust" would be direct evidence of a god(s).
you can fault them for not properly updating but you can't fault them for inconsistency.
They're still being inconsistent with respect to the reality they observe. Why is self-consistency alone more important than consistency with observation?
Many Worlds Interpretation of Quantum Mechanics, a benevolent God is more likely than not going to exist somewhere.
I would urge you to learn more about QM. I'm not going to assume what you do or don't know, but from what I've learned about QM, there is no argument for or against any god.
were you aware that the ratio of sizes between the Sun and the Moon just happen to be exactly right for there to be total solar eclipses?
This also has to do with the distances between the Moon and the Earth, and between the Earth and the Sun. Either or both could be different...
I think you wrote some interesting stuff. As for your question on a meta-epistemy, I think what you said about general approaches mostly holds in this case. Maybe there's a specific way to classify sub-epistemies, but it's probably better to have some general rules of thumb that weed out the definitely wrong candidates, and let other ideas get debated on. To save community time, if that's really a concern, a group could employ a back-off scheme where ideas that have solid rebuttals get less and less time in the debate space.
I don't know that defining sub-e...
Even changing "do" to "did", my counterexample holds.
Event A: At 1pm I get a cookie and I'm happy. At 10pm, I reflect on my day and am happy for the cookie I ate.
Event (not) A: At 1pm I do not get a cookie. I am not sad, because I did not expect a cookie. At 10pm, I reflect on my day and I'm happy for having eaten so healthy the entire day.
In either case, I end up happy. Not getting a cookie doesn't make me unhappy. Happiness is not a zero sum game.
If I get a cookie, then I'm happy because I got a cookie. The negation of this event is that I do not get a cookie. However, I am still happy, because now I feel healthier for not having eaten a cookie today. So both the event and its negation give me positive utility.
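The counterexample can be stated as a toy model, with made-up utility numbers:

```python
# Toy model of the cookie counterexample: the event and its negation
# both map to positive utility. Values are invented for illustration.
def utility(got_cookie: bool) -> float:
    if got_cookie:
        return 1.0   # happy: I ate a cookie
    return 0.5       # still happy: I ate healthy all day

both_positive = utility(True) > 0 and utility(False) > 0
```

Nothing forces the negation of a positive-utility event to carry negative utility; happiness isn't zero-sum.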
The term you're looking for is "apologist".
If you have a universe of a certain complexity, then to fully simulate another universe of equal complexity, it would have to be that universe. To simulate a universe, you have to be sufficiently more complex than it and have sufficiently more expendable energy.
"That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). "
Evidence? I 100% disagree with your claim. Looking at governments or businesses, the people with more power tend to have a lot of responsibility, both to other people in the gov't/company and to the gov't/company itself. The only kind of power I can think of that doesn't come with some responsibility is gun ownership. Even Facebook's power of content distribution comes with a responsibility to monetize, which then carries downstream responsibilities.
Not quite what I meant about identifying content but fair point.
As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified. Basically, there should be at least some sort of verifiable info in the article, or else it's just narrative. While one side's take may be "real" to half the world, the other side's take can be "real" to the other half of the world, but there should be some piece of actual information that both sides look at and agree is real.
I'm actually very familiar with freedom of speech and I'm getting more familiar with your dismissive and elitist tone.
Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn't apply to the relationship between Facebook and users, as exemplified by their terms of use.
I'm not confusing Facebook and Google, Facebook also has a search feature and quite a lot of content can be found within Facebook itself.
But otherwise thanks for your reply; its stunning lack of detail gave me no insight whatsoever.
Maybe this has been discussed ad nauseam, but what do people generally think about Facebook being an arbiter of truth?
Right now, Facebook does very little to identify content, only to provide it. It has faced criticism for allowing fake news to spread on the site, it doesn't push articles that have retractions, and it has only just added a "contested" flag that's less informative than Wikipedia's.
So the questions are: does Facebook have any responsibility to label/monitor content given that it can provide so much? If so, how? If not, why doesn't t...
Reality check: most liberal people? trust fund kids at expensive colleges. most conservative people? working class.
Really disagree there. Plenty of trust fund kids are conservative, plenty of scholarship students are liberal... even at the same university. I think if you want to generalize, the more apt generalization is city vs. rural areas. There are tons of "working class" liberals, they work in service industries instead of coal mines. The big difference is the proximity to actual diversity, when you work with and live with and see diverse pe...
"liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia."
I think this is a drastic overgeneralization taken in bad faith.
Yes I think that's exactly right. Scott Alexander's idea on it from the point of view of living in a zombie world makes this point really clear: do we risk becoming zombies to save someone, or no?
Seems to me both liberals and conservatives are social farmers, it's a matter of what crop is grown. Conservatives want their one crop, say potatoes, not because it's the most nutritional, but it's been around for forever and it's allowed their ancestors to survive. (If we assume like you do about Christianity, then we also have that God Himself Commanded They Grow Potatoes.) Liberals see the potatoes, recognize that some people still die even when they eat potatoes like their ancestor, and decide they need more crops. Maybe they grow fewer potatoes, and m...
I thought OpenAI was more about open sourcing deep learning algorithms and ensuring that a couple of rich companies/individuals weren't the only ones with access to the most current techniques. I could be wrong, but from what I understand OpenAI was never about AI safety issues as much as balancing power. Like, instead of building Jurassic Park safely, it let anyone grow a dinosaur in their own home.
Everyone has different ideas of what a "perfectly" or "near perfectly" simulated universe would look like; I was trying to go off of Douglas's idea of it, where I think the boundary errors would have an effect.
I still don't see how rewinding would be interference. I imagine interference would be some part of the "above ours" universe getting inside this one, say a particle with quantum entanglement spanning the two universes (although it would really just be in the "above ours" universe, since that universe has to be a superset of ours; it's just a particle that we can also observe).
I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that our programmer(s) are gods by another name in the case of simulation.
My comment is really about your second paragraph, how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us seeing an error eventually. This is assuming that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being ...
An idea I keep coming back to that would imply we reject the idea of being in a simulation is the fact that the laws of physics remain the same no matter your reference frame or place in the universe.
You give the example of a conscious observer recognizing an anomaly, and the simulation runner rewinds time to fix this problem. By only re-running the simulation within that observer's time cone, the simulation may have strange new behavior at the edge of that time cone, propagating an error. I don't think that the error can be recovered so much as moved wh...
Go read a textbook on AI. You clearly do not understand utility functions.
My definition of utility function is the one commonly used in AI. It is a mapping of states to real numbers, u : E -> R, where E is the set of all possible states and R is the reals in one dimension.
What definition are you using? I don't think we can have a productive conversation until we both understand each other's definitions.
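To make my definition concrete, here is a minimal sketch in code; the states and their values are invented purely for illustration:

```python
# A utility function in the AI-textbook sense: a map u : E -> R
# from states to real numbers. The state space here is made up.
states = ["hungry", "fed", "sleeping"]

def u(state: str) -> float:
    # Arbitrary example values; any real-valued assignment qualifies.
    values = {"hungry": -1.0, "fed": 2.0, "sleeping": 0.5}
    return values[state]

# An agent driven by u simply prefers states with higher utility.
best = max(states, key=u)
```

Nothing about this definition requires a single monolithic function; it only requires that states be comparable on a real-valued scale.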
I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:
http://rationalwiki.org/wiki/Simulated_reality#Feasibility
I think the feasibility argument described here better encapsulates what I'm trying to get at, and I'll defer to this argument until I can better (more mathematically) state mine.
"Yet the number of interactions required to make such a "perfect" simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way...
Let me be a little more clear. Let's assume that we're in a simulation, and that the parent universe hosting ours is the top level (for whatever reason; this is just to avoid turtles all the way down). We know that the Sun's energy can be harnessed, because not only do plants use it to metabolize, but we can also convert it into electricity; energy can transfer.
Some machine that we're being simulated on must take into account these kinds of interactions and make them happen in some way. The machine must represent the sun i...
Yes, then I'm arguing that case 1 cannot happen. Although I find it a little tediously tautological (and even more so reductive) to define technological maturity as being solely the technology that makes this disjunction make sense....
"(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation."
Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?
Energy. If we are to run an ancestral simulation that even remotely wants to co...
You should look up the phrase "planned obsolescence". It's a concept taught in many engineering schools. Apple employs it in its products. The basic idea is similar to your thoughts under "Greater Global Wealth": the machine is designed to have a lifetime that is significantly shorter than what is possible, specifically to get users to keep buying machines. This is essentially subscription-izing products; subscriptions are, especially today in the startup world, generally a better business model than selling one product one time (or ...
Is there an article that presents multiple models of UF-driven humans and demonstrates that what you criticize as contrived actually shows there is no territory to correspond to the map? Right now your statement doesn't have enough detail for me to be convinced that UF-driven humans are a bad model.
And you didn't answer my question: is there another way, besides UFs, to guide an agent towards a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or human programmed.
Why would I give up the whole idea? I think you're correct in that you could model a human with multiple, varying UFs. Is there another way you know of to guide an intelligence toward a goal?
I think you're getting stuck on the idea of one utility function. I like to think humans have many, many utility functions. Some we outgrow, some we "restart" from time to time. For the former, think of a baby learning to walk. There is a utility function, or something very much like it, that gets the baby from sitting to crawling to walking. Once the baby learns how to walk, though, the utility function is no longer useful; the goal has been met. Now this action moves from being modeled by a utility function to a known action that can be used as...
I disagree with your interpretation of how human thoughts resolve into action. My biggest point of contention is the random pick of actions. Perhaps there is some Monte-Carlo algorithm that has a statistical guarantee that after some thousands or so tries, there is a very high probability that one of them is close to the best answer. Such algorithms exist, but it makes more sense to me that we take action based not only on context, but our memory of what has happened before. So instead of a probabilistic algorithm, you may have a structure more like a hash...
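A rough sketch of the contrast I mean, with invented actions and scores (this is an illustration of the two selection strategies, not a claim about how brains actually work):

```python
import random

# Invented action space and scoring function, purely for illustration.
actions = ["wave", "speak", "walk away", "wait"]

def score(action: str, context: str) -> float:
    # Stand-in for evaluating how well an action fits a context.
    return len(action) if context == "greeting" else -len(action)

# Monte-Carlo style: sample many candidates, keep the best seen.
# With enough tries, one is very likely close to the best answer.
def pick_by_sampling(context: str, tries: int = 1000) -> str:
    return max((random.choice(actions) for _ in range(tries)),
               key=lambda a: score(a, context))

# Memory style: a hash from past contexts to actions that worked,
# falling back to search only for contexts we haven't seen before.
memory: dict[str, str] = {}

def pick_by_memory(context: str) -> str:
    if context not in memory:
        memory[context] = pick_by_sampling(context)  # search once, then cache
    return memory[context]
```

After the first encounter with a context, the memory version is a constant-time lookup rather than a fresh probabilistic search, which is closer to my intuition about learned action.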
This looks like a good method for deriving lower-level beliefs from higher-level beliefs. The main thing to consider when taking a complex statement of belief from another person is that there is likely more than one lower-level belief feeding into that higher-level belief.
In doxastic logic, a belief is really an operator on some information. At the most base level, we are believing, or operating on, sensory experience. More complex beliefs rest on the belief operation on knowledge or understanding; where I define knowledge as belief of some in...
Could you give an actual criticism of the energy argument? "It doesn't pass the smell test" is a poor excuse for an argument.
When I assume that the external universe is similar to ours, this is because Bostrom's argument is specifically about ancestral simulations. An ancestral simulation directly implies that there is a universe trying to simulate itself. I posit this is impossible because of the laws of thermodynamics, the necessity to not allow your simulations to realize what they are, and keeping consistency in the complexity of the universe... (read more)