All of MackGopherSena's Comments + Replies

1JBlack
Initially, yes. In the long term, no. Though even initially, the risk of interacting with humans in any way that reveals capabilities (aligned or not!) that could even potentially be perceived as dangerous may be too high to be worth the resources gained by doing so.
4Yitz
That’s true until the point at which the purposes we serve can be replaced by a higher-efficiency design, at which point we become redundant and a waste of energy. I suspect almost all unaligned AGIs would work with us in the beginning, but may defect later on.

[edited]

1JBlack
What is "auto-desynchronization"? What does it mean to "look achieve maximum synchronization"? What connection do either of these have to do with veganism, if any?
8Dagon
This is gibberish to me.  Thanks for this moment of confusion, and for the reminder that LW comprises a number of distinct communities with varying overlap and knowledge of each other.
1David Udell
I don't get the relevance of the scenario. Is the idea that there might be many such other rooms with people like me, and that I want to coordinate with them (to what end?) using the Schelling points in the night sky? I might identify Schelling points using what celestial objects seem to jump out to me on first glance, and see which door of the two that suggests -- reasoning that others will reason similarly. I don't get what we'd be coordinating to do here, though.
2avturchin
There is no physical eternity, so there is a small probability of a fork at each moment. Therefore, there will eventually be a next observer-moment sufficiently different to be recognised as different. In internal experience, it will come almost immediately.
3eigen
Good comments, thanks for sharing both. I'd love to hear more about practical insights on how to get better at recalling + problem-solving.
1Ulisse Mini
= finding shortest paths on a weighted directed graph, where the shortest path cost must be below some threshold :)
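A minimal sketch of that reduction as I read it -- the graph, weights, and threshold below are hypothetical placeholders, since the comment being replied to was edited away:

import heapq

def shortest_path_cost(graph, source, target):
    # Dijkstra on a weighted directed graph given as {node: [(neighbor, cost), ...]}.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

# Hypothetical example: accept only if the best path stays below the threshold.
graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)]}
threshold = 4
print(shortest_path_cost(graph, "a", "c") < threshold)  # True: a -> b -> c costs 3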
1Chantiel
Interesting. When you say "fake" versions of myself, do you mean simulations? If so, I'm having a hard time seeing how that could be true. Specifically, what's wrong about me thinking I might not be "real"? I mean, if I thought I was in a simulation, I think I'd do pretty much the same things I would do if I thought I wasn't in a simulation. So I'm not sure what the moral harm is. Do you have any links to previous discussions about this?
1Chantiel
By "real", do you mean non-simulated? Are you saying that even if 99% of Chantiels in the universe are in simulations, then I should still believe I'm not in one? I don't know how I could convince myself of being "real" if 99% of Chantiels aren't. Do you perhaps mean I should act as if I were non-simulated, rather than literally being non-simulated?

[edited]

2TLW
That's not the same roll distribution as rolling two dice[1]. For instance, rolling a 14 (pulling 2 kings) has a probability of (4*3)/(52*51) ≈ 0.0045249, not 1/(7*7) ≈ 0.020408[2]. (The actual distribution is weird. It's not even symmetrical, due to the division (and associated floor). Rounding to even/odd would help this, but would cause other issues.) This also supposes you shuffle every draw. If you don't, things get worse (e.g. you can't 'roll' a 14 at all if at least 3 kings have already been drawn).

====

Fundamentally: you're pulling out 2 cards from the deck. There are 52 possible choices for the first card, and 51 for the second card. This means that you have 52*51 possibilities. Without rejection sampling this means that you're necessarily limited to probabilities that are a multiple of 1/(52*51). Meanwhile, rolling N S-sided dice and getting exactly e.g. N occurs with a probability of 1/S^N. As N and S are both integers, and 52=2*2*13, and 51=3*17, the only combinations of dice you can handle without rejection sampling are:

1. Nd1[3]
2. 1d2, 1d3, 1d4, 1d13, 1d17, 1d26, ..., 1d(52*51)
3. 2d2

...and even then many of these don't actually involve both cards. For instance, to get 2d2 with 2 pulled cards ignore the second card and just look at the suit of the first card.

Wait, do you mean:

1. Decide to cheat or not cheat, then if not cheating do a random roll, or
2. Do a random roll, and then decide to cheat or not?

I was assuming 1, but your argument is more suited for 2...

1. ^ Aside from rolling a strange combination of a strangely-labelled d52 and a strangely-labelled d51, or somesuch.

2. ^
import itertools
import fractions
import collections

cards = list(range(1, 14)) * 4

# Distribution of 2d7: sum of two seven-sided dice.
dice_results = collections.Counter(a + b for a in range(1, 8) for b in range(1, 8))
dice_denom = sum(dice_results.values())

# Distribution of the two-card scheme: draw two cards without replacement,
# then take (a + b) // 2 + 1.
card_results = collections.Counter((a + b) // 2 + 1 for a, b in itertools.permutations(cards, r=2))
card_denom = sum(card_results.values())

# Probability of 'rolling' a 14 under each scheme.
print(fractions.Fraction(dice_results[14], dice_denom))  # 1/49
print(fractions.Fraction(card_results[14], card_denom))  # 1/221

[edited]

2TLW
Interesting. Could you elaborate here? Alice cheats and says she got a 6. Bob calls her on it. Is it now Bob's turn, and hence effectively a result of 0? Or is it still Alice's turn? If the latter, what happens if Alice cheats again? I'm not sure how you avoid the stalemate of both players 'always' cheating and both players 'always' calling out the other player. How do you go from a d52 and a d51 to a single potentially-loaded d2? I don't see what to do with said cards.

[edited]

3Dagon
Nope, I don't follow at all.  One second per second is how I experience time.  I can understand compression or variance in rate of memory formation or retention (how much the experience of one second has an impact a week or a decade later).  And I'd expect there's more data in variance over time (where the second in question has different experiences than the previous second did) than in variance at a point in time (where there are many slightly different sensations).

[edited]

1TLW
What's the drawback to always accusing here?
3Dagon
I didn't catch that post.  It would be interesting to solve for optimum strategy here.  Almost certainly a mixed strategy, both for selecting and for accusing.  The specifics of length of game, cost/benefit of false/true accusation, and the final evaluation (do points matter or is it just win/lose) all go into it.   I suspect there would be times when one would pick a low roll so as not to get accused but also prevent a worst-case roll.  I also suspect those are times when one should always accuse, regardless of roll.
3Tristan Cook
I'm not sure what you're saying here. Are you saying that in general, a [total][average] utilitarian wagers for [large][small] populations? Yep! (only if we become grabby though)   What's your reference for the 500 million year lifespan remaining? I followed Hanson et al. in using the end of the oxygenated atmosphere as the end of the lifespan.  Yep, I agree. I don't do the SSA update with reference class of observers-on-planets-of-total-habitability-X-Gy but agree that if I did, this 500 My difference would make a difference.

[edited]

4Dagon
Do you have a reference or definition for this?  It sounds like "subjective time" or "density/sparseness of experience over time" or possibly "rate of memory formation".  If so, I get it for attention span or mindfulness, but vegan eating and clothes fabric type seem a stretch.
2avturchin
It is like Laplace's sunrise problem: every day the sun has risen is a small bit of evidence that it is more likely to rise again. In the same way, if the world didn't end today, that is a small piece of evidence that allows us to extend our expected doomsday date.
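For reference, a sketch of the rule-of-succession arithmetic being invoked here, under the usual uniform-prior assumption (the day counts are just illustrative):

from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace's rule of succession with a uniform prior: P(next trial succeeds).
    return Fraction(successes + 1, trials + 2)

# Each additional day without doom slightly raises the estimated chance
# that the next day is also doom-free.
for days in (10, 100, 1000):
    print(days, rule_of_succession(days, days))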
5SebastianG
Hi Mack,  You seem somewhat new. I just want to let you know that community standards of discourse avoid appeals to authority, especially appeals to authority without commentary. A comment like this provides little value, even if in jest. Downvoted because just running in and dropping a scripture quote without commentary degrades LW conversational norms. This is not Wednesday night bible study and people don't nod their heads smilingly because you found a related scripture quote. Even if the audience were 90% believers, I doubt they would interpret scripture the same way you do. You should explain why you chose this quote and what bearing it has on turchin's admittedly glib point. Besides switching from protestantism to at least something with a bit more harumph, like catholicism or orthodoxy, I encourage you to wrestle with the sequences, if you haven't already. Regards!
3avturchin
It is a variant of Roko's Basilisk: people who cared about the superintelligent entity will be rewarded more.

[edited]

[edited]

2JBlack
When you start with a tiny number of stakeholders with tiny stakes, large deviations will be likely. The share-of-ownership sequence for a given stakeholder is essentially a random walk with step size decreasing like 1/n. Since variance of such a walk is the sum of individual variances, the variance for such a walk scales like Sum (1/n)^2, which remains bounded. If you have N initial coins, the standard deviation in the limiting distribution (as number of allocations goes to infinity) is approximately 1/sqrt(N). In your largest starting allocation you have only N=30, and so the standard deviation in the limit is ~0.18. Since you have only 3 participants the 50% threshold is only 1 standard deviation from the starting values. So basically what you're seeing is an artifact of starting with tiny toy values. More realistic starting conditions (such as a thousand participants with up to a thousand coins each) will yield trivial deviations even over quadrillions of steps. The probability that any one participant would ever reach 50% via minting (or even a coalition of a hundred of them) is utterly negligible.
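A rough simulation sketch of this argument (assuming, as in Viliam's simulation below, that each newly minted coin goes to a holder with probability proportional to current holdings; the even [10, 10, 10] split and the run counts are illustrative, not from the original post):

import random

def final_share(start, rounds):
    # Proportional minting: each new coin goes to holder i with probability
    # proportional to holder i's current coins.
    holdings = list(start)
    for _ in range(rounds):
        r = random.randrange(sum(holdings))
        for i, h in enumerate(holdings):
            if r < h:
                holdings[i] += 1
                break
            r -= h
    return holdings[0] / sum(holdings)

# Empirical spread of one participant's share after many mintings,
# starting from 3 participants with 10 coins each (N = 30).
shares = [final_share([10, 10, 10], 10_000) for _ in range(200)]
mean = sum(shares) / len(shares)
std = (sum((s - mean) ** 2 for s in shares) / len(shares)) ** 0.5
print(mean, std)

Run a few times, the printed values give an empirical handle on how far a single participant's share typically drifts from its starting value.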

[edited]

1JBlack
You're stopping as soon as any holder gets at least 50%, or 2^20 iterations, whichever comes first? Obviously this is more likely with few starting coins and few initial participants. It's absolutely certain with 3 participants holding 1 coin each. Even if coins were distributed by initial stake (i.e. uniformly between holders) these conditions would still often be reached with such small initial numbers. Even with these tiny starting conditions biased toward dominance, a fair few runs never resulted in 50% holdings in the first 2^20 coins: 11 out of the 27 that weren't mathematically impossible.

[edited]

2Viliam
The chances of any holder increase when they get the new coin, not just the biggest holder. And although the biggest holder is most likely to get the new coin, it will provide them the smallest relative growth. I wrote a simulation where players start as [60, 30, 10] and get 100 more coins. The results were: [111, 71, 18], [121, 60, 19], [127, 54, 19], [128, 55, 17], [119, 50, 31], [120, 57, 23], [113, 56, 31], [112, 64, 24], [127, 54, 19], [125, 58, 17], [99, 78, 23], [114, 69, 17], [127, 55, 18], [128, 46, 26], [114, 67, 19], [118, 67, 15], [110, 71, 19], [113, 60, 27], [107, 70, 23], [113, 71, 16].

import random

def next(arr):
    r = random.randrange(sum(arr))
    n = []
    for i in arr:
        m = 1 if (r >= 0) and (r < i) else 0
        r -= i
        n.append(i + m)
    return n

def iterate(arr, count):
    for i in range(count):
        arr = next(arr)
    return arr

def main():
    for i in range(20):
        print(iterate([60, 30, 10], 100))

main()
2AprilSR
That's fair, my brain might be doing great at maximizing something which isn't especially correlated with what I actually want / what actually makes me happy.
2dadadarren
Not sure what you mean. Doomsday argument is about how to think about the information that the first-person "I" is a particular physical person. It suggests treating it the same way as if a random sampling process has selected said physical person. SIA agrees with using sampling process, but disagrees with the range it is sampled from.

[edited]

2mako yass
Good, and cheap, is the thing. If we didn't have silicon computing, we would still have vacuum tubes, we'd still have computers. But as I understand it, vacuum tubes sucked, so I wouldn't expect that machine learning would be moving so quickly at this point. I think you're imagining the decay running in the wrong direction. I suppose you could define it that way. It seems less natural. But you can ask a similar question... should I expect to 'find myself in the previous year' in some sense. Well I could. If there were some "I" hopping between every observer-moment in existence (this is a fairly common form of super-utilitarianism), it wouldn't be perceptible, I wouldn't remember ever having been elsewhere, our memories are all just properties of whatever vessel we currently occupy. I'd phrase it more as... if you observe that you're a human, there's a prior on finding that you're in the earliest year (or the earliest cosmological reproductive cycle) in which a lot of humans exist. You could be in a later year, but until you can confirm that with evidence, you consider it less likely. But that has to trade off against the fact that the number of universes (and so the number of humans) keeps ballooning over time (or even outside of time), and I don't really know how to navigate that, could be that you should expect to be in the latest possible universe, because the measure increases from branching outweigh the measure losses from time discounting.
3Stephen Bennett
That line of inquiry would, presumably, lead to the discovery of subatomic particles. Indeed, that seems to me the point: it's a single detail that leads to the right-minded thinking about the world, even if it is not precisely correct.
2avturchin
It could be, but Bostrom suggested stronger explanation: you spent more time in slower lanes.
1Johannes C. Mayer
We were talking about maximizing positive and minimizing negative conscious experiences. I guess with the implicit assumption that we could find some specification of this objective that we would find satisfactory (one that would not have unintended consequences when implemented).
1superads91
Still don't know what you meant by that other sentence. What's being "the state", and what does a bearable life have to do with it? And what's the "e" in (100/e)%?
1superads91
"The fact that you're living a bearable life right now suggests that this is already the state." Interesting remark... Could you elaborate?
2avturchin
I don't get how you come to 10^51. If we want to save from the past 10 billion people and for each we need to run 10^5 simulations, it is only 10^15, which one Dyson sphere will do. However, there is a way to acausally distribute computations between many superintelligences in different universes, and in that case we can simulate all possible observers.
2avturchin
Why? If there are 60 000 futures where I escaped a bad outcome, I can bet on it at 1 to 60 000.
2avturchin
I don't see a problem here, I will win in another branch of MWI. Or am I missing something?
2avturchin
Extremely large number, if we do not use some simplification methods. I discuss these methods in the article, and after them the task becomes computable.  Without such tricks, it will be something like 100 life histories for every second of suffering. But as we care only about preventing very strong suffering, for normal people living normal lives there are not that many such seconds.  For example, if a person is dying in a fire, it is about 10 minutes of agony, that is 600 seconds and 60 000 life histories which need to be simulated. That is a doable task for a future superintelligent AI.

[edited]

5Dagon
What are the axes on these curves?  Are there multiple commitment and multiple reneg curves in play for a given commitment?  Are distinct commitments (with their own curves) actually correlated for an agent, such that there really should be a unified set of curves that describe all commitments? Personally, I'm much simpler in my modeling.  A commitment is just a credence that the future behavior is what I'll believe is best to do (including my valuation of delivering on commitments).  Evidence or model changes which alter the credence perforce changes my commitment level.   I currently do and expect to continue valuing pretty highly the fact that I generally keep promises.

[edited]

1Algon
People like Scott Garrabrant care about you less? https://www.lesswrong.com/posts/NvwJMQvfu9hbBdG6d/preferences-without-existence I think this question is poorly specified, though it sounds like it could lead somewhere fun. For one thing, the more of your actions are random, the harder it is for an adversary to anticipate your actions. But also, it seems likely that you have less power in the long run, so the less important it is overall. It washes out. Increasing the randomness of your actions slightly, for actions which don't sacrifice many resources in expectation, might make you marginally happier by breaking you out of the status quo. And you feel like life goes on longer because your memory becomes less compressible.  Oh, and you become more unbelievable as an individual and, depending on how you inserted randomness, more interesting, but you may become more complex and hence harder to specify. So simulating you may become more costly, but also more worthwhile.
1Yitz
Would that look something like a reverse Pascal’s mugging? Under what circumstances would that be to a “mugger’s” advantage?
3avturchin
The answer here is obvious, but let's look at another example: Should I eat an apple? The apple promises pleasure and I want it, but after I have eaten it, I don't want to eat anything, as I am full. So the expected pleasure source has shifted. In other words, we have in some sense a bicameral mind: a conscious part which always follows pleasure, and an unconscious part which constantly changes rewards depending on the person's needs.  If we want to learn a person's preferences, we want to learn the rules for why rewards are given to some things and not to others. One person likes reading and another likes skiing.  And it is not a complete model of the mind, just an illustration of why reward is not enough to represent human values.

[edited]

1JBlack
Ah I see, the bit you remove is freely chosen. Though I am still confused. The problem I have is that given a 24 million bit raw image format consisting of a million RGB pixels, there are at least 2^million different images that look essentially identical to a human (let's say a picture of my room with 1 bit of sensor noise per pixel in the blue channel). During the compression process the smaller files must remain distinct, otherwise we lose the property of being able to tell which one is correct on expansion. So the process must end when the compressed file reaches a million bits, because every compression of the 2^million possible pictures of my room must have a different sequence of bits assigned to it, and there is no room left for being able to encode anything else. But I can equally well apply the same argument to gigapixel images, concluding that this compression method can't compress anything to less than a billion bits. This argument doesn't have an upper limit, so I'm not sure how it can ever compress anything at all.
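A minimal counting sketch of the pigeonhole argument above (nothing here is specific to images; it just tallies bit-strings): there are 2^n distinct n-bit files, but only 2^n - 1 bit-strings strictly shorter than n bits, so no lossless scheme can shrink every possible input.

# Pigeonhole check: count bit-strings shorter than n bits (lengths 0 .. n-1).
def shorter_strings(n):
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in (8, 20, 30):
    print(n, 2 ** n, shorter_strings(n))  # distinct inputs vs. available shorter outputs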

[edited]

1JBlack
I'm a little confused. Removing one bit means that two possible images map onto the file that is 1 bit smaller, not 1024 possible images. I'm also confused about what happens when you remove a million bits from the image. When you go to restore 1 bit, there are two possible files that are 1 bit larger. But which one is the one you want? They are both meaningless strings of bits until you get to the final image, and there are 2^million possible final images.
2Dagon
I'd expect that if the differences are enough that a human can decide, they're even easier for a computer to decide.

[edited]

4Pattern
How?
4Viliam
What do you mean by "exact opposite reasons"? To me, it seems like continuation of the same trend of humiliating the human ego: * you are not going to live forever * yes, you are mere atoms * your planet is not the center of the universe * even your sun is not special * your species is related to the other species that you consider inferior * instead of being logical, your mind is a set of short-sighted agents fighting each other Followed by: * even your reality is not special * your civilization is too stupid to stop doing the thing(s) that will predictably kill all of you

[edited]

3Dagon
The concept of cost requires alternatives.   What do you cost, compared to the same universe with someone else in your place? Very little.  What do you cost, compared to no universe at all? You cost the universe.

You'll find it helpful to ignore that aspect for now.

1JBlack
It only implies that you can have no moral imperative to change the past. It has no consequences whatsoever for morally evaluating the past.
3Richard_Kennaway
"Ought implies can" in that linked article is about the present and future, not the past. There is nothing in that principle to disallow having a preference that the past had not been as it was, and to have regret for former actions. The past cannot be changed, but one can learn from one's past errors, and strive to become someone who would not have made that error, and so will not in the future.

[edited]

2JBlack
The motivation remains the same regardless of whether your first 'if' is just an if, but at least it would answer part of the question. My motivation is to elicit further communication about the potential interesting chains of reasoning behind it, since I'm more interested in those than in the original question itself. If it turns out that it's just an 'if' without further interesting reasoning behind it, then at least I'll know that.

[edited]

1Measure
Is this just postulating that whatever did happen (historically) should have happened (morally)?
1JBlack
Mostly that it's a very big "if". What motivates this hypothesis?

[edited]

2Measure
I'm confused. What does anthropics have to do with morality?
2avturchin
Who "we" ? :)  Saying a "king" I just illustrated the difference between interesting character who are more likely to be simulated in a game or in a research simulation, and "qualified observer" selected by anthropics. But these two sets clearly intersects, especially of we live in a game about "saving the world". 
2avturchin
Anthropics implies that I should be special, as I should be a "qualified observer", capable of thinking about anthropics. Simulation also requires that I should be special, as I should find myself living in interesting times. These specialities are similar, but not identical. The simulation speciality requires that I be a "king" in some sense, while the anthropic speciality is satisfied by my merely understanding anthropics.  I am not a very special person (as of now), therefore the anthropic speciality seems more likely than the simulation speciality.

[edited]

1JBlack
I suspect that I don't understand your last sentence at all. Do you mean in this hypothetical universe? I imagine that it would diverge from our own very quickly if feats like your examples were possible for even a small fraction of people. I don't think intentional actions in such a universe would split nicely into secular and mana actions, they would probably combine and synergize.
3Viliam
What about legality of mana usage? My life could be dramatically changed by adding a few zeroes to my bank account, which would only require changing a few bits, which are probably not even that difficult to find. That is, this task is probably cheaper than either sterilizing or boiling the glass of water. There is an issue with "difficulty" of finding some bits. Like, difficulty for whom? A hypothetical agent who has what knowledge exactly? I was thinking about self-improvement, which (within human boundaries) has a nice upper limit -- in the worst case, you need to change all qbits in your body; but in most reasonable cases a fraction of them would probably suffice. The question is how to find the relatively few qbits in my brain which would increase my intelligence, or willpower. Alternatively, modifying other people could be quite profitable. Any person willing to help you is a multiplier for your abilities. The secular alternative to this is social skills (or manipulation).