
Dagon

Just this guy, you know?

Comments (sorted by newest)
Vladimir_Nesov's Shortform
Dagon · 2d

> Superintelligence that both lets humans survive (or revives cryonauts) and doesn't enable indefinite lifespans is a very contrived package.

I don't disagree, but I think we might not agree on the reason.  Superintelligence that lets humanity survive (with enough power/value to last for more than a few thousand years, whether or not individuals extend beyond 150 or so years) is pretty contrived.   

There's just no reason to keep significant amounts of biological sub-intelligence around.

Sleeping Beauty and the Forever Muffin
Dagon · 3d

I don't think I agree with #3 (and I'd frame #2 as "localities of space-time gain the ability to sense and model things", but I'm not sure if that's important to our miscommunication).  I think each of the observers happens to exist, and observes what it can independently of the others.  Each of them experiences "you-ness", and none is privileged over the others, as far as any third observer can tell.

So I think I'd say:

  1. The universe exists.
  2. Some parts of the universe have the ability to observe, model, and experience their corner of space-time.
  3. It turns out you are one of those.

I don't think active verbs are justified here - nothing was necessarily "created", "placed", or "assigned".

I don't know for sure whether there is a god's eye view or "outside" observation point, but I suspect not, or at least I suspect that I can never get access to it or any effects of it, and can't think of what evidence I could find one way or the other.

Sleeping Beauty and the Forever Muffin
Dagon · 3d

I think it goes to our main point of agreement: there is ambiguity in what question is being asked.  For Sleeping Beauty, the ambiguity is the probability of WHAT future experience, for WHOM, she is calculating.  I was curious whether you can answer that for your universe question: whose future experience will be used to resolve the truth of the matter, and so to judge what probability was appropriate to use for the prediction?

Sleeping Beauty and the Forever Muffin
Dagon · 4d

> Math is math, and at the end of the day the SB problem is just a math problem.

No, it's also an identity/assumption problem.  Probability is subjective - it's an agent's estimate of future experience.  In the Sleeping Beauty case, there is an undefined and out-of-domain intuition about "will it be one or two individuals having this future experience?"  We just don't have any identity-quantification experience for the split/merge in this memory-wipe setup.

The unstated disagreement is whether it's one or two experiences that resolve the probability.  This ambiguity is made clear by the fact that simplifications into clearly-distinct people don't trigger the same confusions.  The memory-wipe is the defining element of this problem.

And to tie this to the universe question - how will the probability be resolved?  What future experience are you predicting with either interpretation?
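As a concrete sketch of that resolution ambiguity (a minimal Python simulation, with names and structure my own, not anything from the thread): the same protocol yields ~1/2 if you score one prediction per coin flip, and ~1/3 if you score one prediction per awakening.

```python
import random

def sleeping_beauty(trials=100_000):
    """Simulate the Sleeping Beauty protocol: Heads -> one awakening,
    Tails -> two awakenings, indistinguishable after the memory wipe."""
    heads_flips = 0        # experiments whose coin came up Heads
    heads_awakenings = 0   # awakenings that occur after Heads
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        total_awakenings += 1 if heads else 2
        if heads:
            heads_flips += 1
            heads_awakenings += 1  # Heads produces exactly one awakening
    # One experience per experiment resolves the bet: ~1/2.
    print("scored per flip:     ", heads_flips / trials)
    # Every awakening resolves a bet of its own: ~1/3.
    print("scored per awakening:", heads_awakenings / total_awakenings)

sleeping_beauty()
```

Both outputs are correct answers to different questions; the ambiguity above is which resolution procedure the word "probability" is supposed to refer to.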

Alexander Gietelink Oldenziel's Shortform
Dagon · 4d

The Efficient Markets Hypothesis has plenty of exceptions, but this is too coarse-grained and distant to be one of them.  Don't ask "what will happen, so I can bet based on that"; ask "what do I believe that differs widely from my counterparties?"  This possibility is almost certainly "priced in" to the obvious bets (TSMC).

That said, you may be more correct than the sellers of long-term puts, so maybe it'll work out.  Having a theory and then examining the details and modeling the specific probabilities is exactly what you should be doing.  Have you looked at prices and premia for those specific investments?  A quick spreadsheet of win/loss in various future paths with as close to real numbers as possible goes a long way.
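For instance, a minimal sketch of that spreadsheet in Python, where every number (premium, strike, scenario prices, probabilities) is a made-up placeholder rather than a real quote:

```python
# Hypothetical win/loss table for a long-dated put: all figures are
# placeholders for illustration, not real market data.
premium = 12.0   # assumed cost per share of the put
strike = 100.0   # assumed strike price

scenarios = {    # assumed future price and subjective probability
    "boom":        (150.0, 0.30),
    "base case":   (110.0, 0.45),
    "disruption":  ( 60.0, 0.20),
    "catastrophe": ( 10.0, 0.05),
}

expected = 0.0
for name, (price, prob) in scenarios.items():
    pnl = max(strike - price, 0.0) - premium  # put P&L at expiry
    expected += prob * pnl
    print(f"{name:12s} price={price:6.1f} prob={prob:.2f} P&L={pnl:+7.1f}")
print(f"expected P&L per share: {expected:+.2f}")
```

The point isn't the bottom line; it's forcing the scenario prices and probabilities to be explicit, so they can be argued with and compared against what the option is actually selling for.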

tailcalled's Shortform
Dagon · 4d

I'm not a rationalist, and I don't think I hit all your best posts and comments, just some of the mediocre ones (though now that I think about it, that COULD BE all your best, by sheer luck).  Do I still get a Boo?

I can't tell if my ideas are good anymore because I talked to robots too much
Dagon · 10d

I don't know how long you've been talking to real people, but the vast majority are not particularly good at feedback: less consistent than AI, but that doesn't make them more correct or helpful.  They're less positive on average, but still pretty uncorrelated with "good ideas".  They shit on many good ideas, support a lot of bad ideas, and are a lot less easy to query for reasons than AI is.

I think there's an error in thinking that talk can ever be sufficient: you can do some light filtering, and it's way better if you talk to more sources, but eventually you have to actually try stuff.

Roman Malov's Shortform
Dagon · 10d

Hmm.  What about the claim "physicality -> no free will"?  This is the more common assertion I see, and the one I find compelling.

The simplicity/complexity point is one I more often see attributed to "consciousness" (and I agree: complexity does not imply consciousness, but simplicity denies it), but that's at least partly orthogonal to free will.

Roman Malov's Shortform
Dagon · 10d

> They can overgeneralize that feeling over all physical systems (like humans), missing out on the fact that this feeling should only be felt

I don't follow why this is "overgeneralize" rather than just "generalize".  Are you saying it's NOT TRUE for complex systems, or just that we can't fit it in our heads?  I can't compute the Mandelbrot set in my head, and I can't measure initial conditions well enough to predict a multi-arm pendulum beyond a few seconds.  But there's no illusion of will for those things, just a simple acknowledgement of complexity.
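To illustrate that last point, here is a minimal sketch (using the logistic map as a stand-in for the pendulum, since it shows the same sensitivity in a few lines):

```python
# The logistic map at r=3.9 (chaotic regime) is fully deterministic,
# yet two starting points differing by one part in a billion diverge
# completely within a few dozen iterations - the same reason tiny
# measurement errors make a multi-arm pendulum unpredictable.
def logistic(x, steps, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.2, 0.2 + 1e-9
for steps in (10, 30, 50):
    print(f"step {steps:2d}: {logistic(a, steps):.6f} vs {logistic(b, steps):.6f}")
```

Fully deterministic, no will anywhere; prediction fails only because unmeasurably small differences in initial conditions compound.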

Posts (sorted by new)

2 · Dagon's Shortform (6y, 92 comments)
14 · What epsilon do you subtract from "certainty" in your own probability estimates? (Q, 7mo, 6 comments)
3 · Should LW suggest standard metaprompts? (Q, 11mo, 6 comments)
8 · What causes a decision theory to be used? (Q, 2y, 2 comments)
2 · Adversarial (SEO) GPT training data? (Q, 2y, 0 comments)
24 · {M|Im|Am}oral Mazes - any large-scale counterexamples? (Q, 3y, 4 comments)
17 · Does a LLM have a utility function? (Q, 3y, 11 comments)
8 · Is there a worked example of Georgian taxes? (Q, 3y, 12 comments)
9 · Believable near-term AI disaster (3y, 3 comments)
2 · Laurie Anderson talks (4y, 0 comments)
76 · For moderately well-resourced people in the US, how quickly will things go from "uncomfortable" to "too late to exit"? (Q, 5y, 11 comments)