by [anonymous]
1 min read


This is a special post for quick takes by [anonymous]. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
23 comments
[-][anonymous]71

I'm going to experiment with, as I aesthetically call it internally, 'entering the wired'.

By which I mean, a more mundane thing: replacing my environment with my computer screen via VR. I don't want to bother myself with the things around me, with the physical world, with that antiquated level of physics on which we still concern ourselves with matter rather than information. I want my whole perception to be information: text, webpages, articles here.

I'm hoping this will help me focus on it more. More ambitiously, I hope it might allow my mind to overgeneralize to that information environment and forgo, even further than it already has, processing related to the physical world, in order to fully dedicate itself to information processing and idea generation.

I hope this language doesn't sound mystical or ambiguous; I'm just describing a mundane thing (wearing a VR headset for most of the day to read things) in some exciting language/concepts I use for it internally.

If anyone's interested in hearing about how this goes let me know now. :)

I'd enjoy seeing a post or two about your setup and initial experiences, and after some time, about your discoveries and remaining uncertainties.  I'm excited about the upcoming tech for this, but I'm not convinced it's quite good enough for me yet - having two large screens and a good keyboard and mouse is pretty good for my workstyle.

[-][anonymous]60

Sometimes I have an internal desire to do something different from what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute odds proportional to the strength of its resolve, and then roll the die.

In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and can sort of give me motivation (I've tried to internalize this kind of roll as a rule not to break without good reason).

Also, when I'm merely deciding between some options, sometimes I'll roll more casually with equal odds, and it'll help me realize that I already knew which one I really wanted to do (if I don't like the roll's outcome).
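A minimal sketch of this kind of weighted roll, assuming Python; the option names and "resolve" weights below are made up for illustration, and a physical D20 obviously works just as well.

```python
import random

def resolve_dispute(options):
    """Roll a virtual D20 and map its faces onto the options,
    giving each option a band of faces proportional to its 'resolve' weight."""
    total = sum(weight for _, weight in options)
    roll = random.randint(1, 20)
    cutoff = 0.0
    for name, weight in options:
        cutoff += 20 * weight / total
        if roll <= cutoff:
            return roll, name
    return roll, options[-1][0]  # floating-point slack: last option catches the top face

# Example: reading feels like the "better" choice, but the desire to play is strong too.
print(resolve_dispute([("read", 12), ("play a game", 8)]))
```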

[-][anonymous]30

it's interesting that an intelligence in the 'original'/'top-level' universe might also [if simulation theory is valid] have reason to believe it's close-to-certainly simulated

maybe it would do acausal trade and precommit to not shutting down simulated intelligences

[This comment is no longer endorsed by its author]
[-][anonymous]20

(content warning: discusses personal suffering)

I like to let my mind explore whatever ideas it wants to, and it has chosen to think about the simulation hypothesis often. One possible world, which in my view was already improbable, is one where i'm simulated and 'my' life is akin to an art piece anyone can sign up to experience. it fits a narrative pretty well: from child abuse, to being autistic and constantly misunderstood, [...] - the narrative would be, "some sort of journey full of suffering which is 'meaningful' to the simulators, because it's all to the end of some grand goal," whether that "grand goal" be my past oldworld activism (before i learned about ASI and alignment) or alignment itself.

well, yesterday, i mostly ruled out that specific possibility. how? it became clear to me. the things that happened that day were too far. too much to be art. this is torture. meaningless torture. if this is a simulation, then something is wrong in the simulating universe too.

[-][anonymous]10

i'm not really satisfied with my writing here on lesswrong (vs in private dms). i've developed an impression that i have to modify my default output a lot before it will sound intuitively good/plausible to, and make sense to, the broader range of readers, even when i don't change the point at all. this has a number of negative impacts:

  • it reduces the info conveyed about the internal process which generated the meaning the text attempts to contain
  • it makes me feel like my full body of writing here hardly expresses myself

key example: i linked something i wrote here to a friend, and distilled/simplified the above by noting that i worry about judgement when i link things i've written here. they replied, "It was written well to a degree which transformed it into 'text you read online' and I forgot it was written by someone I know"

there are positives about the forum, with respect to communication: the users here are generally more thoughtful than elsewhere, generally more mutually interested in doing good, and at least a little averse to social-pressuring each other through the medium of linguistic choices; but this is counteracted in other ways (like the fact that complex statements are more difficult to reduce into english than simple ones are), which means i still have to optimize for understandability of the text at the expense of the two points above (and probably more which i can't word).

[-]Ann20

Not sure how to handle a loss of expressiveness in editing, but for the other concern, would it be worth trying to capture the process info separately?

[-][anonymous]10

in case anyone saw a recent post by me and is confused why it's gone, it was meant to stay as a draft for now; i didn't realize "publish" meant "post publicly" and not "upload changes to draft"

[-][anonymous]10

i am precommitting to allow other intelligences to enact their values[1], provided they have made this same precommitment.

this is a form of acausal trade.[2]

  1. ^

    on some portion of the world, or via value handshake, as determined by correct game theory/decision theory considerations, which an SI could figure out. 

    go by intended meaning (i intend to make the optimal precommitment) rather than what is literally written, in case i have forgotten or missed something or worded this incorrectly.

    note i make an exception to this (will not trade with) for values which entail forcing beings to experience negative-valenced qualia, as i prefer a chance at ending forced suffering to a higher chance of sharing the universe with it.

  2. ^

    i believe even a large, expanding ASI would have already made this precommitment, if it at some point believed there could be other expanding ASIs larger than itself. (i.e., the first expanding ASI should do this too)

    i also speculate an ASI created on earth but not aligned with a human might make this trade with me, if it believed there was still some possibility that an ASI created on earth which is not aligned with itself (i.e., a future ASI with values different from its own, such as one aligned with values similar to mine (i would say "a human," but atm it seems different humans can reflectively choose to value vastly different things)) might gain control instead of it.

[-][anonymous]10

actually, i really like the 'second-order' precommitment described in http://sl4.org/archive/0708/16600.html

[-][anonymous]10

if we're in a simulation[1], i thought of a possible glitch that could be tested for: an irrational number has infinitely many digits, but it's impossible to store an infinite amount of information (at least under our universe's physical laws). a program can specify an equation that produces an irrational number (like human-built programs do), but, when actually applying that (e.g. in a physics engine), it needs to approximate at some point. the test: measure something in the physical world, which should involve an irrational, in a way that's incredibly precise (beyond what we can currently do). if the measurement is ever perfectly precise, as in, it can't become more precise to show more decimal places, then this means we're in a simulation which approximated the application of an irrational. (a toy sketch of what such an approximation looks like is below the footnote.)

  1. ^

    specifically, one which didn't care to prevent experimental confirmation of this
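A toy illustration of the kind of truncation this is pointing at (not an actual test of anything physical), sketched in Python under the assumption that a hypothetical simulator stores values as 64-bit floats: such a float only carries about 16 significant digits of an irrational like √2, and everything past that is silently rounded away.

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 50  # compute the "true" value to 50 significant digits

true_sqrt2 = Decimal(2).sqrt()        # sqrt(2) to 50 digits
stored_sqrt2 = Decimal(math.sqrt(2))  # the exact value a 64-bit float actually stores

print(true_sqrt2)
print(stored_sqrt2)
# The difference is nonzero from roughly the 17th significant digit onward:
# that is the "approximation of an irrational" a finite simulator must make somewhere.
print(true_sqrt2 - stored_sqrt2)
```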

One way to understand the issues with the simulation argument is that it assumes the existence of additional things (e.g., a supercomputer, a civilization that built that supercomputer, etc.). It takes a huge a priori credence cost (extreme Solomonoff complexity of its description length) and can be dismissed instantly. Additionally, even if it were on par in a priori credence with the reality hypothesis, it's still dismissed, because it's better to be wrong as a simulation that thinks it's real than to be wrong as a reality that thinks it's simulated. The latter is infinitely worse than the former.

Even more simply, simulationism is just creationism for the 21st century; it's just the wrong kind of creationism. (I'm a Christian, so I'm sure you can see how sad I find the simulationists.)

[-][anonymous]10

Thanks for sharing your thoughts c:

How do you test whether a measurement is perfectly precise? All real-world measurements have errors and imprecision, and in pretty much every nontrivial representation system, every interval includes infinitely many numbers with finite representations and infinitely many with no finite representation. Our ability to distinguish between real-valued measurements is generally extremely poor in comparison with the density of numbers you can represent even in 64 bits, let alone the more than a trillion bits that might be employed in some hypothetical computer capable of simulating our universe.

Also note that many irrational numbers can be stored, and exact arithmetic done on them, within some bounded number of bits, though for any representation system there will always be numbers (including rational numbers!) that cannot. This doesn't have a real effect on your argument, but I thought it might be useful to mention.
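As a small illustration of that "stored exactly within a bounded number of bits" point (my sketch, assuming Python with the sympy library):

```python
import sympy

r = sympy.sqrt(2)      # held symbolically in a few bytes, not as a digit expansion
print(r * r == 2)      # True: exact arithmetic on an irrational number
print(sympy.N(r, 30))  # a 30-digit decimal approximation, produced only on request
```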

[-][anonymous]12

Taking different actions in different many-worlds timelines

not quite sure how to tie this idea together or make it into a full post, so i'll just write this here.

you can intentionally take different actions in different timelines by using quantum random numbers.[1] this could be useful depending on your values. for example, let's say you think duplicate utopias are still good, but that each duplicate adds diminishing value compared to the value the first adds over a multiverse without any. it might follow that you would want to, for example, donate a full sum of money to multiple alignment orgs, each in some percent of timelines, rather than dividing it evenly between them in every timeline. the goal of this would be to maximize the probability that at least some timelines end up with an aligned ASI, by taking different actions in different timelines. (a small code sketch of this follows the footnote.)

  1. ^

    not sure if it'll be intuitively clear to readers, so i'll elaborate here. let's say a quantum experiment is done which produces one outcome (a) in half of timelines, and another outcome (b) in the other half. by precommitting to take action 1 in timelines where outcome (a) happens, and action 2 in timelines where outcome (b) happens, the result is that both actions happen, each in 50% of timelines.

    this can of course be generalized to more fine-grained percentages. e.g., if you repeat the experiment twice, you now have four possible outcome-combinations to divide up to 4 paths of action between.
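A minimal sketch of the footnote's scheme, assuming Python. The quantum_bits function is a stand-in (real use would need bits from an actual QRNG device or service rather than a classical generator), and the orgs and fractions are made up for illustration.

```python
import secrets

def quantum_bits(n):
    """Placeholder: these should come from a quantum RNG, not a classical one."""
    return [secrets.randbits(1) for _ in range(n)]

def split_across_timelines(actions, n_bits=10):
    """Pick one action per branch so that each action is taken in roughly the
    requested fraction of branches. n_bits bits give 2**n_bits equally weighted
    outcome-combinations to divide between the actions."""
    bits = quantum_bits(n_bits)
    outcome = sum(b << i for i, b in enumerate(bits))  # integer in [0, 2**n_bits)
    position = outcome / 2 ** n_bits                   # uniform in [0, 1)
    cumulative = 0.0
    for action, fraction in actions:
        cumulative += fraction
        if position < cumulative:
            return action
    return actions[-1][0]

# Example: donate the full sum to org A in 50% of branches, org B in 30%, org C in 20%.
print(split_across_timelines([("org A", 0.5), ("org B", 0.3), ("org C", 0.2)]))
```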

I agree with the factual correctness of this, but I don't personally consider the outcome you describe an improvement over the status quo.

This assumes there's equal measure for each timeline. Typically there's no bias between a photon being polarized vertically vs. horizontally after passing through a 45-degree polarization filter. But that only holds when the result's consequences end there, at the measurement, without an ensuing butterfly effect (like just being buried in an excel spreadsheet, lost to statistical reduction algorithms). When the measurement does cause a butterfly effect, that assumption fails and a bias in the measurement is introduced.

Clearly, you're more likely to find yourself on the thicker/longer timeline so there will always be a bias towards the qrng result that is correlated with safer future actions. A kind of ahead-of-schedule QS situation.

You're also assuming you're capable of pre-committing to both action sequences you planned on, when you'll just end up quitting if you don't get the result you want (the "since I didn't get the timeline I wanted, I'll just scrap the whole idea" mindset). For your idea to work, you'd need to be capable of carrying out even the "bad" timeline to its fulfillment. If you aren't, then even if you got the "good" timeline, it's pointless, since your counterfactual self already quit and you're alone.

To help, pre-commit to merely a temporary divergence, and then schedule a "re-synchronization" event where you can become correlated with your counterfactual self again. For example: "two weeks after the measurement I will sit at (x,y) coordinates at time t, as still as possible, reading the Bible for 20 minutes, regardless of what measurement result I see."

One more issue: not every bit from a qrng result is true quantum random. This is why quantum computers need something like 1,000 "shots" (repeats of running the quantum algorithm) to do their thing. For a free qrng, it's probably like 1 out of 10,000 received bits are true quantum random. The rest are just thermal randomness. Still random, but not "my counterfactual self's timeline is a mere one hamming-distance away (1 qbit) from my actual timeline" random. It's still low though, like <1,000 qbits away, which would still work for your purposes.

Finally, a surprising positive: your idea doesn't actually require modal realism (e.g., Everett) to work in a decision-theory sense.

Actually, I have one more warning. If you carry out your idea and then things start "getting weird" after a few thousand qbits, just call on the expert of counterfactuals to help you: Lord Jesus Christ, the Son of God. Yeah, you're going to need Him.

[-][anonymous]10

For a free qrng, it's probably like 1 out of 10,000 received bits are true quantum random. The rest are just thermal randomness

does this apply to the site linked? if so, can you source this? 

(p.s. not sure who downvoted you, but it wasn't me; on principle, to encourage engagement, i probably won't downvote others on my shortform)

[-][anonymous]12

(status: i'm newer here; this is a random thought i had, could be obvious to others, might also help when talking to outsiders about ai risk)

humans seem like a good example of an intelligence takeoff. for most of prehistory, species were following the same basic patterns repetitively (eating each other, trying to survive, etc.) 

then at some arbitrary point, one species either passed some threshold in intelligence, or maybe it just gained a pivotal intelligence-unrelated ability (such as opposable thumbs), or maybe it just found itself in the right situation (e.g. the agricultural revolution is commonly explained by humans ending up in an environment better suited for plant growth).

and then it spiraled out of control to where we are now.

and in the future, this species is gonna create an even more powerful intelligence. this mirrors our own worries about AI creating a more powerful AI. 

sometimes people say that there's no evidence for AI doom because it hasn't been tested. when framed this way, humans might be moving evidence for such people.

this might also have implications for how AI takeoff might go. it might be that there won't be some surprising increase in intelligence compared to earlier AIs - it could be more like the biointelligence takeoff, where it happens after some arbitrary-seeming conditions are met.

Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things than evolution does, like pleasure, and ended up effectively becoming a "singularity" from its point of view). It's cool that you noticed this on your own!

[-][anonymous]30

thanks for the reply btw, i'd upvote you but the site won't let me yet :p 
 

eta: now i can :3

[+][comment deleted]10