All of denimalpaca's Comments + Replies

Could you give an actual criticism of the energy argument? "It doesn't pass the smell test" is a poor excuse for an argument.

When I assume that the external universe is similar to ours, this is because Bostrom's argument is specifically about ancestral simulations. An ancestral simulation directly implies that there is a universe trying to simulate itself. I posit this is impossible because of the laws of thermodynamics, the necessity to not allow your simulations to realize what they are, and keeping consistency in the complexity of the universe...

0entirelyuseless
I agree that no one is ever going to run any ancestor simulations in our world. When Bostrom made his argument he accepted that this was one possible conclusion from it. I think it is the right one. That does not mean there are no simulations at all. As one example Walter mentioned, novels are simulations of worlds, but very different worlds. And likewise, there is no proof that we are not contained in another very different world that differs from our world as much as our world differs from novels.

Like, the idea that an entity simulating our universe wouldn't be able to do that, because they'd run out of energy doesn't pass even the basic sniff test.

I'm convinced you are not actually reading what I'm writing. I said that if the universe ours is simulated in is supposed to be like our own (i.e., we are an ancestral simulation), then the universe simulating ours should be like ours, and we can apply our laws of physics to it; our laws of physics say there's entropy, a limit on the amount of order.

I also believe that if we're a simulatio...

0WalterL
It's super aggravating that if you already understood that we can't know anything about the (universe simulating ours / the mighty God who created us / the inscrutable machinations of the Spaghetti Monster who cooked the broth of our creation) you go on these long tangents about what they could and could not do. You do it again with regard to simulating humans. Yes, it would be tough to do now. Easier in the future. By definition a cakewalk for the unknowable entity responsible for doing it right now with me and you. Since you understand that we have no knowledge of the mind responsible for our creation, why do you go on about how tough it must be for it?

Look, in regards to evidence, you get this in your day to day life. You must, you are a living being. If there are no muddy footprints in your hallway then your toddlers didn't run down it. Earth's complexity in a vastly more simple universe is the same as the coin that flips heads a googolplex times. Earth is weird (should I steal your cool habit of using italics on important words for weird?). The absence of other things like it in our light cone is evidence that there is a hidden variable. (In the same way that me guessing your card more than 1/52 of the time is evidence that you are missing the trick.)

In every other case we can set up or find where this situation is roughly analogous (watchmaker is the classic), the answer is that the experimenter is to blame. He put the watch in the desert; the other rocks are less complicated not because of chance, but because they weren't put there by a civilization that can make watches.

If you are still hung up on how hard it will be to simulate our minds, then just imagine that our simulations are simpler than us, ok? They can only hear, and time goes slower. There are only ten people in their whole universe, whatever. Point is, that when they ask this question, the 'right' answer for them to come to is that they have a creator. That's also the right answer for us to

First, I'm not "resisting a conversion". I'm disagreeing with your position that a hidden variable is even more likely to be a mind than something else.

you are the one basically adding souls

I absolutely am not adding souls. This makes me think you didn't really read my argument. I'll present this a different way: human brains are incredibly complex. So complex, in fact, we still don't fully understand them. With a background in computer science, I know that you can't simulate something accurately without at least having a very accurate model...

3WalterL
It feels like you are answering the old question of whether God could create a rock he couldn't lift by explaining that lifting is hard. Like, the idea that an entity simulating our universe wouldn't be able to do that, because they'd run out of energy doesn't pass even the basic sniff test.

Say 'our' universe, the one that you and I observe, is a billionth the size of a real one. Say they are running it so that we get a second every time they go through a trillion years. Say that all of us but me are p-zombies, and they only simulate the rest of y'all every time you interact with my experiences. The idea that there aren't enough resources to simulate the universe isn't even really wrong, it just seems like you haven't thought through the postulation of us being a simulation.

Even the invocation of our laws of physics as applying to the simulation is bonkers. Why would the reality where our simulation is run in have anything resembling ours? Our creations don't have physics that resemble ours. Video game characters don't fall as per gravity, characters in movies and books don't have real magnetism. Why would you imagine that our simulator is recreating their home conditions?

Lastly, you need to either fish or cut bait on the humans being possible to simulate. If you aren't postulating a soul, then we are nothing but complicated lightning and meat, meaning that we are entirely feasible to simulate. If you do think there's something about the human mind going on that God's computers or whatever can't replicate, then I'll certainly cede the argument, but you don't get to call yourself an atheist.

It's even more bizarre to see you say that the claim of simulation makes no predictions, in response to me pointing out that its prediction (just us in the observable universe) is the reason to believe it.

Walter if no aliens: Simulation/creator of some kind
Walter if aliens: Real
Denim if aliens: Real
Denim if no aliens: looks really hard away from how monstrously unl

You can call it 'something missing', or 'god'.

I disagree. Something missing is different from a god. A god is often not well-defined, but generally it is assumed to be some kind of intelligence, that is, it can know and manipulate information, and it has infinite agency or near to it. Something missing could be a simple physical process. One is infinitely complex (god); the other is feasibly simple enough for a human to fully understand.

The koopas are both pointing to the weirdness of their world, and the atheists are talking about randomness and the t

...
1WalterL
It feels like you've gone from asking for an explanation to resisting a conversion effort that I'm not making here.

Minds simulatable: It's kind of weird that I'm the one saying that there is nothing magic about a mind, and you are the one basically adding souls, yet we are theist and atheist respectively, but surely if minds are unable to be simulated that would be a blow 'against' your beliefs, yeah?

Doing that being hard: That doesn't seem knowable. Like, yeah, it seems like faking five senses would be hard, but that's just from our perspective. We can make much simpler minds than ours easily; presumably whatever made us is far enough beyond us that this isn't a big deal.

Rollback: Same as above? Like, maybe this is happening constantly, maybe not. No way for us to know.

We might find aliens one day!: Sure, I'll change my mind when that happens. For now the evidence is that we are a special case, which implies a hidden variable. That hidden variable is most likely a mind, since that's what it would be if any simulated intelligence asked what it was. The earth is unlike the rest of everything we've observed. That's weird. One potential reason is that a mind we cannot observe arranged things that way. That will be the right answer for any of our fictions that we give the ability to ask this question in the future, so it is probably the right answer for us.

Sufficiently improbable stuff is evidence that there's a hidden variable you aren't seeing.

Sure, but you aren't showing what that hidden variable is. You're just concluding what you think it should be. So evidence that there's something missing isn't an opportunity to inject god, it's a new point to investigate. That, and sufficiently improbable stuff becomes probable when enough of it happens. Take a real example, like someone getting pregnant. While the probability of any given sperm reaching the egg and fertilizing it is low, the sheer number of sper...
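The "improbable stuff becomes probable in aggregate" point is just the complement rule for independent trials; a minimal sketch with illustrative numbers (the per-trial probability and trial count below are made up for the example, not biological data):

```python
def p_at_least_one(p, n):
    """Probability that at least one of n independent trials,
    each with success probability p, succeeds."""
    return 1 - (1 - p) ** n

# A tiny per-trial probability becomes a near-certainty over enough trials.
print(p_at_least_one(1e-8, 200_000_000))  # roughly 0.86
print(p_at_least_one(1e-8, 1))            # roughly 0.00000001
```

The individual event stays rare; it is the number of attempts that makes the aggregate outcome unsurprising.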

0WalterL
You can call it 'something missing', or 'god'. The thing that put life on one planet and not on anything else that can be observed is the thing we are gesturing at here. It feels like you see how the Mario argument works. The koopas are both pointing to the weirdness of their world, and the atheists are talking about randomness and the theists are talking about maybe it is a Sky Koopa. Turn it around another way. Before too long we'll be able to write software that does basically what our brains do (citation needed, but LW so I'll guess you agree). Some of this software will be in simulated worlds. That software may well divide into atheist and theist movements, and speculate about whether they are in a simulation. The theists will be right. There will be a lot more minds in simulations than have ever existed inside of human bodies (citation needed, but I feel pretty safe here), so the general answer, posed at large to the universe of all minds ever, to whether your observable universe has a 'god' or 'hidden factor' is 'Yes'. Seems super arrogant for us to presume that we are the exception. Much more likely, there is sentience behind the arrangement of the observable universe. The idea that one planet alone would have life is just too much of a score counter, too much of a giveaway.

I find the 'where are all the aliens/simulation?" argument to be pretty persuasive in terms of atheism being a bust

Why does this imply atheism is a bust? The only thing I can think of that would make atheism "a bust" would be direct evidence of a god(s).

2WalterL
If I flip a coin twice and get heads, and you ask me what the odds are that it'll be heads next time, it's 50/50. If you flip heads a million times and ask me what it is next time then it's 100%, because the coin doesn't have a tails side. Sufficiently improbable stuff is evidence that there's a hidden variable you aren't seeing.

Imagine the turtles from Mario getting together and talking their world over. Koopa atheists would point out that they have no proof that their world is a simulation. Koopa theists would point to the equivalent of the no-aliens data point (the score counter, the time limit). None of them can get evidence of 'God', but the ones that are smart aren't the ones that reserve judgement and say maybe it's just really double random chance that their world is set up like an entertainment game.

you can fault them for not properly updating but you can't fault them for inconsistency.

They're still being inconsistent with respect to the reality they observe. Why is the self-consistency alone more important than a consistency with observation?

0Screwtape
You are correct that both sorts of things could be called inconsistency, and as soon as I come up with a better way to phrase that difference I'll edit. I think being consistent with observations + priors is better than being consistently wrong. I also think being wrong in known ways is better than being wrong in unknown ways. Imagine driving a car with a speedometer that's always ten miles an hour under what you're actually going, or using a clock that's twenty minutes fast. You know you're getting wrong answers, but you can do an adjustment in your head to correct for it. If your speedometer is off by a random amount that changes at random times, it's both inconsistent with observation and inconsistent with itself, and therefore useless. You can't adjust or compensate, you just have to ignore it. (Or get used to getting pulled over :p)

Many Worlds Interpretation of Quantum Mechanics, a benevolent God is more likely than not going to exist somewhere.

I would urge you to go learn about QM more. I'm not going to assume what you do/don't know, but from what I've learned about QM there is no argument for or against any god.

were you aware that the ratio of sizes between the Sun and the Moon just happen to be exactly right for there to be total solar eclipses?

This also has to do with the distance between the moon and the earth, and between the earth and the sun. Either or both could be different...

2Darklight
Strictly speaking it's not something that is explicitly stated, but I like to think that the implication flows from a logical consideration of what MWI actually entails. Obviously MWI is just one of many possible alternatives in QM as well, and the Copenhagen Interpretation obviously doesn't suggest anything.

The point is that they are a particular ratio that makes them ideal for these conditions, when they could have easily been otherwise, and that these are exceptionally convenient coincidences for humanity. The stars also make it possible for us to use telescopes to identify which planets are in the habitable zone. It remains much more convenient than if all star systems were obscured by a cloud of dust, which I can easily imagine being the norm in some alternate universe.

Again, the point is that these are very notable coincidences that would be more likely to occur in a universe with some kind of advanced ordering. When I call this evidence, I am using it in the probabilistic sense, that the probability of the evidence given the hypothesis is higher than the probability of the evidence by itself. Even though these things could be coincidences, they are more likely to occur in a controlled universe meant for habitation by sentient beings. In that sense I consider this evidence.

I don't know why you bring up the argument from ignorance. I haven't proclaimed that this evidence conclusively proves anything. Evidence is not proof.

Why though? Why isn't the universe simply chaos without order? Why is it consistent such that the spacetime metric is meaningful? The structure and order of reality itself strikes me as peculiar given all the possible configurations that one can imagine. Why don't things simply burst into and out of existence? Why do cause and effect dominate reality as they do? Why does the universe have a beginning and such uneven complexity rather than just existing forever as a uniform Bose-Einstein condensate of near zero state, low entropy part

I think you wrote some interesting stuff. As for your question on a meta-epistemy, I think what you said about general approaches mostly holds in this case. Maybe there's a specific way to classify sub-epistemies, but it's probably better to have some general rules of thumb that weed out the definitely wrong candidates, and let other ideas get debated on. To save community time, if that's really a concern, a group could employ a back-off scheme where ideas that have solid rebuttals get less and less time in the debate space.

I don't know that defining sub-e...

0Onemorenickname
Thanks, I agree. I don't expect a full-fledged meta-epistemy. Again, "That epistemy can be as simple as some sanity checks". I agree. I picked that distinction because I assumed many rationalists are in CS or have strong mathematical foundations. It might have been a less precise example. But there are two answers to your remark:

* That people who aren't in math or theoretical CS and thus can't distinguish them should not post their related ideas is not a bug, it's a feature. I have seen tCS or math aberrations on LW that made the community lose time.
* That we shouldn't lose time defining epistemies on new ideas. I agree; that's what the "pre-epistemy phase", and the phase status more generally, are meant to convey. But if a group of related ideas gets enough traction (Rationalism, Utilitarianism), defining an epistemy becomes more and more important.

Even changing "do" to "did", my counter example holds.

Event A: At 1pm I get a cookie and I'm happy. At 10pm, I reflect on my day and am happy for the cookie I ate.

Event (not) A: At 1pm I do not get a cookie. I am not sad, because I did not expect a cookie. At 10pm, I reflect on my day and I'm happy for having eaten so healthy the entire day.

In either case, I end up happy. Not getting a cookie doesn't make me unhappy. Happiness is not a zero sum game.

If I get a cookie, then I'm happy because I got a cookie. The negation of this event is that I do not get a cookie. However, I am still happy because now I feel healthier, having not eaten a cookie today. So both the event and its negation cause me positive utility.

0DragonGod
The negation of the event is that you did not get a cookie, not that you do not get a cookie. The negation of an event is that it did not happen. Either an event occurs or does not—it goes without saying that both an event and its negation cannot occur.

If you have a universe of a certain complexity, then to fully simulate another universe of equal complexity it would have to be that universe. To simulate a universe, you have to be sufficiently more complex and have sufficiently more expendable energy.

"That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). "

Evidence? I 100% disagree with your claim. Looking at governments or business, the people with more power tend to have a lot of responsibility both to other people in the gov't/company and to the gov't/company itself. The only kind of power I can think of that doesn't come with some responsibility is gun ownership. Even Facebook's power of content distribution comes with a responsibility to monetize, which then has downstream responsibilities.

1MrMind
You're looking only at the walled garden of institutions inside a democracy. But if you look at past history, authoritarian governments or muddled legal situations (say some global corporations), you'll find out that as long as the structure of power is kept intact, people in power can do pretty much as they please with little or no backlash.

Not quite what I meant about identifying content but fair point.

As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and if that fact is verified. Basically, there should be at least some sort of verifiable info in the article, or else it's just narrative. While one side's take may be "real" to half the world, the other side's take can be "real" to the other half of the world, but there should be some piece of actual information that both sides look at and agree is real.

0ChristianKl
That means if you have an investigative reporter with non-public sources, that's fake news because the other side has no access to his non-public sources?
1lmn
Verified by whom? There is a long history of "facts verified by official sources" turning out to be false.

I'm actually very familiar with freedom of speech and I'm getting more familiar with your dismissive and elitist tone.

Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn't apply to the relationship between Facebook and users, as exemplified by their terms of use.

I'm not confusing Facebook and Google, Facebook also has a search feature and quite a lot of content can be found within Facebook itself.

But otherwise thanks for your reply; its stunning lack of detail gave me no insight whatsoever.

8Lumifer
You seem to be mistaken about your familiarity with the freedom of speech. In particular, you're confusing it with the 1st Amendment to the US Constitution. That's a category error. LOL. Would you assert that you represent the masses? A stunning example of narcissism :-P Hint: it's not all about you and your lack of insight.

Maybe this has been discussed ad absurdum, but what do people generally think about Facebook being an arbiter of truth?

Right now, Facebook does very little to identify content, only provide it. They faced criticism for allowing fake news to spread on the site, they don't push articles that have retractions, and they just now have added a "contested" flag that's less informative than Wikipedia's.

So the questions are: does Facebook have any responsibility to label/monitor content given that it can provide so much? If so, how? If not, why doesn't t...

0Viliam
Let's try to frame this with as little politics as possible... You build a medium where people can exchange content. Your original goal is to make money, so you want to make it as popular as possible -- in perfect case, the Schelling point for anyone debating anything. But you notice that certain messages, optimized for virality, make a disproportional fraction of your content. You don't like this... either because you realize you actually have values beyond "making money"... or because you realize that in long term this could have a negative impact on your medium if people start to associate it with low-quality viral messages -- you aim to be a king of all content, not only yellow journalism. There is a risk your competitor would make a competing medium that is more pleasant to read, at least at the beginning, and gradually take over your readers.

Some quick ideas:

a) censor specific ideas

a.1) completely, e.g. all kitten videos get deleted

a.2) penalize kitten videos in content aggregation

Problem: This will get noticed, and people who love kitten videos will move to your competitors.

b) target virality itself

b.1) make it more difficult to share content

This goes too strongly against your goal of being an addictive website for simpletons.

b.2) penalize mindless sharing

For example, you have one-click-sharing functionality, but you can optionally add your own comment. Shares with hand-written comments will get much higher priority than shares without ones. The easier to share, the faster to disappear.

b.3) penalize articles with too many shares (globally)

Your advantage, as a huge website, is that you know which articles are popular worldwide. Unfortunately, soon there will be SEO techniques to circumvent any action you take, such as showing the same content to different users under different URLs (or whatever will make your system believe it is different content.)

c) distributed "censorship"

You could make functionality of voluntary "content rating" o
1ChristianKl
Half of the US voted for Trump. If Facebook made a move that would censor a lot of pro-Trump lies, it risks losing a significant portion of its audience. I'm not sure whether the function of verifying the quality of news articles is best fulfilled by a traditional social network. If I cared to solve the problem I would build a browser plugin that provides quality ratings of articles and websites. Users can vote and there's a machine learning algorithm that translates the user votes into a good quality metric.
0lmn
A better question is why should we trust Facebook to do so honestly, rather than abusing that power to declare lies that benefit Mark Zuckerberg to be "facts". Given the amount of ethics, or rather lack thereof, his actions have shown so far, I see very little reason to trust him.
6skeptical_lurker
Facebook is full of bullshit because it is far quicker to share something than to fact-check it, not that anyone cares about facts anyway. A viral alarmist meme with no basis in truth will be shared more than a boring, balanced view that doesn't go all out to fight the other tribe. But Facebook has always been full of bullshit and no-one cared until after the US election when everyone decided to pin Trump's victory on fake news. So it's pretty clear that good epistemology is not the genuine concern here. Not that I'm saying that Facebook is worse than any other social media - the problem isn't Facebook, the problem is human nature.
0DryHeap
They certainly do identify content, and indeed alter the way that certain messages are promoted. Example. Who decides what is and is not fake news?
2MrMind
"Arbiter of truth" is too big of a word. People easily forget two important things:

1. Facebook is a social media, emphasis on media: it allows the dissemination of content, it does not produce it;
2. Facebook is a private, for-profit enterprise: it exists to generate revenue, not to provide a service to citizens.

Force 1 obviously acts against any censoring or control besides what is strictly illegal, but force 2 pushes for the creation of an environment that is customer friendly. That is the only reason why there is some form of control on the content published: because doing otherwise would lose customers. People are silly if they delegate the responsibility of verifying the truth of a content to the transport layer, and the only reason that a flag button is present is because doing otherwise would lose customers.

That said, to answer your question: No, Facebook does not have any responsibility beyond what is strictly illegal. That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). As a general rule of life, do not acquire your facts from comics.
9Lumifer
It's a horrible idea. No. You're confusing FB and Google (and a library, etc.) I wouldn't. I recommend acquiring some familiarity with the concept of the freedom of speech.

Reality check: most liberal people? trust fund kids at expensive colleges. most conservative people? working class.

Really disagree there. Plenty of trust fund kids are conservative, plenty of scholarship students are liberal... even at the same university. I think if you want to generalize, the more apt generalization is city vs. rural areas. There are tons of "working class" liberals, they work in service industries instead of coal mines. The big difference is the proximity to actual diversity, when you work with and live with and see diverse pe...

0lmn
Kind of like how the mayor of London said people must now accept a certain level of terrorism as 'Part & Parcel' of living in a big city?

"liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia."

I think this is a drastic overgeneralization taken in bad faith.

0lmn
Actually this is more-or-less my experience with liberals. However, I'm curious whether your objection is going to be "not all liberals are like that, a few are actually willing to entertain the possibility that there are rational reasons for opposing kudzu", or "this analogy is flawed because there are no rational reasons to oppose the things kudzu corresponds to".

Yes I think that's exactly right. Scott Alexander's idea on it from the point of view of living in a zombie world makes this point really clear: do we risk becoming zombies to save someone, or no?

0Viliam
That is pretty much it. Except, describing it as zombies makes it seems like the dangers are all fictional, and therefore the people who worry about them are silly. But real world contains real dangers, so I would expect that people who got hurt in the past will be more likely to adopt the "conservative" mindset, while people who lived relatively sheltered lives will be more likely to adopt the "liberal" mindset. (Reality check: most liberal people? trust fund kids at expensive colleges. most conservative people? working class.)

Seems to me both liberals and conservatives are social farmers; it's a matter of what crop is grown. Conservatives want their one crop, say potatoes, not because it's the most nutritional, but because it's been around forever and it's allowed their ancestors to survive. (If we assume like you do about Christianity, then we also have that God Himself Commanded They Grow Potatoes.) Liberals see the potatoes, recognize that some people still die even when they eat potatoes like their ancestors, and decide they need more crops. Maybe they grow fewer potatoes, and m...

5lmn
Like, say, kudzu to enhance the soil and help prevent erosion. However, unlike the people who actually introduced kudzu, liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia.
0Viliam
Any difference becomes a similarity if we go sufficiently meta. On a sufficiently high level, liberals and conservatives are the same in that they both want to do different things. But one step below this too-high level, it's -- liberals: "the system is bad, let's change this and this to improve it"; conservatives: "if you tinker with the system, it is likely to fall apart and kill everyone, let's keep it as it is".

Of course, both sides can be expressed by various degrees of sophistication. You can have stupid representatives of both sides, saying things like "the world is perfect as it is, those unsuccessful people just need to stop whining and start working harder" and "hey, let's abolish money and private property, so there will be no more poverty", and you can have educated representatives talking about "progress" or "black swans". But at the core there seems to be the... feeling(?)... that if you start changing the settings on the social machine for nicer values, the situation will, obviously... a) ...improve. b) ...go horribly wrong.

I thought OpenAI was more about open sourcing deep learning algorithms and ensuring that a couple of rich companies/individuals weren't the only ones with access to the most current techniques. I could be wrong, but from what I understand OpenAI was never about AI safety issues as much as balancing power. Like, instead of building Jurassic Park safely, it let anyone grow a dinosaur in their own home.

7der
You're right.

Everyone has different ideas of what a "perfectly" or "near perfectly" simulated universe would look like, I was trying to go off of Douglas's idea of it, where I think the boundary errors would have effect.

I still don't see how rewinding would be interference; I imagine interference would be that some part of the "above ours" universe gets inside this one, say if you had some particle with quantum entanglement spanning across the universes (although it would really also just be in the "above ours" universe because it would have to be a superset of our universe, it's just also a particle that we can observe).

I 100% agree that a "perfect simulation" and a non-simulation are essentially the same, noting Lumifer's comment that our programmer(s) are gods by another name in the case of simulation.

My comment is really about your second paragraph, how likely are we to see an imperfection? My reasoning about error propagation in an imperfect simulation would imply a fairly high probability of us seeing an error eventually. This is assuming that we are a near-perfect simulation of the universe "above" ours, with "perfect" simulation being ...

0dogiv
I guess where we disagree is in our view of how a simulation would be imperfect. You're envisioning something much closer to a perfect simulation, where slightly incorrect boundary conditions would cause errors to propagate into the region that is perfectly simulated. I consider it more likely that if a simulation has any interference at all (such as rewinding to fix noticeable problems) it will be filled with approximations everywhere. In that case the boundary condition errors aren't so relevant. Whether we see an error would depend mainly on whether there are any (which, like I said, is equivalent to asking whether we are "in" a simulation) and whether we have any mechanism by which to detect them.

An idea I keep coming back to, one that would imply we reject the idea of being in a simulation, is the fact that the laws of physics remain the same regardless of your reference frame or place in the universe.

You give the example of a conscious observer recognizing an anomaly, and the simulation runner rewinding time to fix the problem. By re-running the simulation only within that observer's light cone, the simulation may exhibit strange new behavior at the edge of that cone, propagating an error. I don't think that the error can be recovered so much as moved wh... (read more)

1Douglas_Reay
Companies writing programs to model and display large 3D environments in real time face a similar problem, since they only have limited resources. One work-around they commonly use is "imposters". A solar-system-sized simulation of a civilisation that has not made observable changes to anything outside our own solar system could take a lot of shortcuts when generating the photons that arrive from outside. In particular, until a telescope or camera of a particular resolution has been invented, would they need to bother generating thousands of years of such photons in more detail than could be captured by the devices yet present?
0dogiv
If it is the case that we are in a "perfect" simulation, I would consider that no different than being in a non-simulation. The concept of being "in a simulation" is useful only insofar as it predicts some future observation. Given the various multiverses that are likely to exist, any perfect simulation an agent might run is probably just duplicating a naturally-occurring mathematical object which, depending on your definitions, already "exists" in baseline reality. The key question, then, is not whether some simulation of us exists (nearly guaranteed) but how likely we are to encounter an imperfection or interference that would differentiate the simulation from the stand-alone "perfect" universe. Once that happens, we are tied in to the world one level up and should be able to interact with it. There's not much evidence about the likelihood of a simulation being imperfect. Maybe imperfect simulations are more common than perfect ones because they're more computationally tractable, but that's not a lot to go on.

Go read a textbook on AI. You clearly do not understand utility functions.

0TheAncientGeek
AI researchers, a group of people fairly disjoint from LessWrongians, may have a rigorous and stable definition of UF, but that is not relevant. The point is that writings on MIRI and LessWrong use, and in fact depend on, shifting and ambiguous definitions.

My definition of utility function is one commonly used in AI. It is a mapping of states to a real number: u : E -> R, where E is the set of all possible states and R is the reals in one dimension.

What definition are you using? I don't think we can have a productive conversation until we both understand each other's definitions.
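As a minimal sketch of the definition above, a utility function is just a mapping u : E -> R from states to real numbers; the state space and values below are made up purely for illustration:

```python
# A toy utility function u : E -> R. E is the set of all possible
# states; u maps each state to a real number. Names and values are
# illustrative, not from any AI library.

E = ["hungry", "fed", "sleeping"]                     # the state space E
utilities = {"hungry": -1.0, "fed": 2.0, "sleeping": 0.5}

def u(state):
    """u : E -> R, given here as an explicit lookup table."""
    return utilities[state]

# An agent guided by u simply prefers states with higher utility:
best_state = max(E, key=u)
print(best_state)  # -> fed
```

In this picture "having a UF" just means the agent's preferences over states can be ranked by some such real-valued map.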

0TheAncientGeek
I'm not using a definition, I'm pointing out that standard arguments about UFs depend on ambiguities. Your definition is abstract and doesn't capture anything that an actual AI could "have" -- for one thing, you can't compute the reals. It also fails to capture what UFs are "for".

I tried to see if anyone else had previously made my argument (but better); instead I found these arguments:

http://rationalwiki.org/wiki/Simulated_reality#Feasibility

I think the feasibility argument described here better encapsulates what I'm trying to get at, and I'll defer to this argument until I can better (more mathematically) state mine.

"Yet the number of interactions required to make such a "perfect" simulation are vast, and in some cases require an infinite number of functions operating on each other to describe. Perhaps the only way... (read more)

Let me be a little more clear. Let's assume that we're in a simulation, and that the parent universe hosting ours is the top level (for whatever reason; this is just to avoid turtles all the way down). We know that we can harness the energy of the sun: not only do plants use that energy to metabolize, but we can also capture it and use it as electricity; energy can transfer.

Some machine that we're being simulated on must take into account these kinds of interactions and make them happen in some way. The machine must represent the sun i... (read more)

2gjm
I still don't understand. (Less tactfully, I think what you're saying is simply wrong; but I may be missing something.)

Suppose we have one simulated photon with 1000 units of energy and another with 2000 units of energy. Here is the binary representation of the number 1000: 1111101000. And here is the binary representation of the number 2000: 11111010000. The second number is longer -- by one bit -- and therefore may take a little more energy to do things with; but it's only 10% bigger than the first number.

Now, if we imagine that eventually each of those photons gets turned into lots of little blobs carrying one unit of energy each, or in some other way has a bunch of interactions whose number is proportional to its energy, then indeed you end up with an amount of simulation effort proportional to the energy. But it's not clear to me that that must be so. And if most interactions inside the simulation involve the exchange of a quantity of energy that's larger than the amount of energy required to simulate one interaction -- which seems kinda unlikely, which is one reason why I am sympathetic to your argument overall, but again I see no obvious way to rule it out -- then even if simulation effort is proportional to energy the relevant constant of proportionality could be smaller than 1.
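gjm's point about representation cost can be checked directly: the number of bits needed to store an energy value grows logarithmically, so doubling the quantity adds roughly one bit rather than doubling the storage (a quick arithmetic illustration, not a claim about any particular simulation architecture):

```python
# Storage cost of a number grows with log2 of its size: doubling the
# energy value adds about one bit, not twice the bits.
for energy in [1000, 2000, 4000, 10**6]:
    print(energy, bin(energy)[2:], energy.bit_length())
# 1000 needs 10 bits, 2000 needs 11, and even a million needs only 20.
```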

Yes, then I'm arguing that case 1 cannot happen. Although I find it a little tediously tautological (and even more so, reductive) to define technological maturity as solely the technology that makes this disjunction make sense...

"(1) civilizations like ours tend to self-destruct before reaching technological maturity, (2) civilizations like ours tend to reach technological maturity but refrain from running a large number of ancestral simulations, or (3) we are almost certainly in a simulation."

Case 2 seems far, far more likely than case 3, and without a much more specific definition of "technological maturity", I can't make any statement on 1. Why does case 2 seem more likely than 3?

Energy. If we are to run an ancestral simulation that even remotely wants to co... (read more)

2gjm
I don't understand this argument. If it's appealing to a general principle that "simulating something with energy E requires energy at least E" then I don't see any reason why that should be true. Why should it take twice as much energy to simulate a blue photon as a red photon, for instance? (I am sympathetic to the overall pattern of your argument; I also do not expect civilizations like ours to run a lot of ancestral simulations and have never understood why they should be expected to, and I suspect that one reason why not is that the resources to do it well would be very large and even if it were possible there ought to be more useful things to do with those resources.)
0g_pepper
"Technological maturity" as used in the first disjunct means "capable of running high-fidelity ancestor simulations". So, it sounds like you are arguing for the 1st disjunct (or something very close to it) rather than the second, since you are arguing that, due to energy constraints, a civilization like ours would be incapable of reaching technological maturity.

You should look up the phrase "planned obsolescence". It's a concept taught in many engineering schools. Apple employs it in its products. The basic idea is similar to your thoughts under "Greater Global Wealth": the machine is designed to have a lifetime that is significantly shorter than what is possible, specifically to get users to keep buying a machine. This is essentially subscription-izing products; subscriptions are, especially today in the start-up world, generally a better business model than selling one product one time (or ... (read more)

3bogus
That's not what "planned obsolescence" means. Planned obsolescence means "if this machine is going to fail in X years anyway (because of one or more critical parts, or because technical progress means that replacing it in that timeframe makes more sense anyway), it makes no sense to design it to be longer-lived than that. So let's improve efficiency and cut costs by redesigning everything in it under the assumption that we're allowed to fail in X years, and any resources spent in extra lifetime are just wasted". What maximum lifetime is theoretically possible for any given component has very little to do with what's most efficient. One can certainly criticize this sort of 'planned obsolescence' too, for instance by countering that modularity, repairability and lack of any single point of failure should be stressed instead. But let's at least get our facts straight here.
4Tyrrell_McAllister
Planned obsolescence alone doesn't explain the change over time of this phenomenon. It's a static explanation, one which applies equally well to every era, unless something more is said. So the question becomes, Why are manufacturers planning for sooner obsolescence now than they did in the past? Likewise, "worse materials cost less" is always true. It's a static fact, so it can't explain the observed dynamic phenomenon by itself. Or, at least, you need to add some additional data, like, "materials are available now that are worse than what used to be available". That might explain something. It would be another example of things being globally better in a perverse sense (more options = better).

Is there an article that presents multiple models of UF-driven humans and demonstrates that what you criticize as contrived actually shows there is no territory to correspond to the map? Right now your statement doesn't have enough detail for me to be convinced that UF-driven humans are a bad model.

And you didn't answer my question: is there another way, besides UFs, to guide an agent towards a goal? It seems to me that the idea of moving toward a goal implies a utility function, be it hunger or human programmed.

0TheAncientGeek
Rather than trying to prove the negative, it is more a question of whether these models are known to be useful. The idea of multiple or changing UFs suffers from a problem of falsifiability, as well. Whenever a human changes their apparent goals, is that a switch to another UF, or a change in UF? Reminiscent of Ptolemaic epicycles, as Ben Goertzel says.

Implies what kind of UF? If you are arguing tautologously that having a UF just is having goal-directed behaviour, then you are not going to be able to draw interesting conclusions. If you are going to define "having a UF" broadly, then you are going to have similar problems, and in particular the problem that "the problem of making an AI safe simplifies to the problem of making its UF safe" only works for certain, relatively narrow, definitions of UF.

In the context of a biological organism, or an artificial neural net or deep learning AI, the only thing "UF" could mean is some aspect of its functioning that is entangled with all the others. Neither a biological organism, nor an artificial neural net or deep learning AI, is going to have a UF that can be conveniently separated out and reprogrammed. That definition of UF only belongs in the context of GOFAI or symbolic programming. There is no point in defining a term broadly to make one claim come out true, if it is only an intermediate step towards some other claim which doesn't come out as true under the broad definition.

Why would I give up the whole idea? I think you're correct in that you could model a human with multiple, varying UFs. Is there another way you know of to guide an intelligence toward a goal?

3TheAncientGeek
The basic problem is the endemic confusion between the map, the UF as a way of modelling an entity, and the territory. the UF as an architectural feature that makes certain things happen. The fact that there are multiple ways of modelling humans as UF-driven, and the fact that they are all a bit contrived, should be a hint that there may be no territory corresponding to the map.

I think you're getting stuck on the idea of one utility function. I like to think humans have many, many utility functions. Some we outgrow, some we "restart" from time to time. For the former, think of a baby learning to walk. There is a utility function, or something very much like it, that gets the baby from sitting to crawling to walking. Once the baby learns how to walk, though, the utility function is no longer useful; the goal has been met. Now this action moves from being modeled by a utility function to a known action that can be used as... (read more)
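The "many utility functions" picture above, where a UF is retired once its goal is met (as with the baby learning to walk), could be sketched as follows; this is entirely a toy model with made-up names, not any standard agent architecture:

```python
# Toy model: an agent with several goal-specific utility functions.
# Once a goal's utility crosses a threshold, its UF is retired and the
# behavior becomes a fixed, reusable skill.

class Goal:
    def __init__(self, name, utility):
        self.name = name
        self.utility = utility  # maps a state (dict) to a real number

agent_goals = [
    Goal("walk", lambda s: s.get("mobility", 0.0)),
    Goal("talk", lambda s: s.get("vocabulary", 0.0)),
]
learned_skills = []

def step(state, threshold=1.0):
    """Retire any goal whose utility has reached its threshold."""
    global agent_goals
    done = [g for g in agent_goals if g.utility(state) >= threshold]
    for g in done:
        learned_skills.append(g.name)
    agent_goals = [g for g in agent_goals if g not in done]

step({"mobility": 1.2, "vocabulary": 0.3})
print(learned_skills)                  # -> ['walk']  (walking is now a known action)
print([g.name for g in agent_goals])   # -> ['talk']  (this UF is still active)
```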

0TheAncientGeek
You could model humans as having varying UFs, or having multiple UFs...or you could give up on the whole idea.

I disagree with your interpretation of how human thoughts resolve into action. My biggest point of contention is the random pick of actions. Perhaps there is some Monte-Carlo algorithm with a statistical guarantee that after some thousands of tries, there is a very high probability that one of them is close to the best answer. Such algorithms exist, but it makes more sense to me that we take action based not only on context, but also on our memory of what has happened before. So instead of a probabilistic algorithm, you may have a structure more like a hash... (read more)
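The two mechanisms contrasted above can be sketched side by side: a Monte-Carlo sampler that scores many random candidate actions, versus a memory lookup that reuses what worked before. This is a toy illustration with invented names, not a model of actual cognition:

```python
import random

def score(action, context):
    # Stand-in for evaluating how well an action fits the context.
    return -abs(action - context)

def monte_carlo_pick(context, n_samples=1000, seed=0):
    """Sample many random actions; with enough tries, one is near-best
    with high probability."""
    rng = random.Random(seed)
    candidates = [rng.uniform(0, 10) for _ in range(n_samples)]
    return max(candidates, key=lambda a: score(a, context))

memory = {}  # context -> remembered best action (the "hash" alternative)

def remembered_pick(context):
    """Prefer a stored action from past experience; sample only if none."""
    if context not in memory:
        memory[context] = monte_carlo_pick(context)
    return memory[context]

a = remembered_pick(5.0)   # first call: samples 1000 candidates
b = remembered_pick(5.0)   # second call: instant lookup, same action
print(round(a, 2), a == b)
```

The memory-based version does the expensive sampling once per context and then behaves deterministically, which is closer to the "known action" idea in the comment above.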

This looks like a good method to derive lower-level beliefs from higher-level beliefs. The main thing to consider when taking a complex statement of belief from another person is that there is likely more than one lower-level belief going into this higher-level belief.

In doxastic logic, a belief is really an operator on some information. At the most base level, we are believing, or operating on, sensory experience. More complex beliefs rest on the belief operation on knowledge or understanding; where I define knowledge as belief of some in... (read more)