KristenBurke

Comments

Whether we have a specific story of the future or not, we shouldn't assume a good outcome. But perhaps you're saying that we should at least have a vision of a good outcome in mind to steer toward.

Yes.

I think of absolutes as a fallacy (when in the realm of utility as opposed to truth): it means you're not admitting trade-offs.

I may just not know of any principled way of forming a set of outcomes in the first place, such that it could then be treated as a lottery and so forth.
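
For concreteness, a minimal sketch of what "treated as a lottery" would mean here, in standard expected-utility terms (the symbols are only illustrative, not anything fixed): given outcomes $o_1, \dots, o_n$ with credences $p_1, \dots, p_n$ summing to 1 and a utility function $u$, the lottery is valued as

$$\mathbb{E}[u] = \sum_{i=1}^{n} p_i \, u(o_i),$$

and holding some outcome as an absolute amounts to refusing this trade-off, e.g. ranking it lexicographically above the rest no matter how small its $p_i$.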

But it would seem that aesthetics or axiology must still have some role in that formation: precise and certain truths about the future aren't known, and yet at least some structure seems subjectively (if not objectively) required in constructing a firm but mutable set of highest outcomes.

So far my best attempts have involved not much more than basic automata concepts for personal identity and future configurations.
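
As a toy illustration of the automata framing I have in mind (hypothetical scaffolding only, not a worked-out model): treat configurations of a person as states, admissible changes as transitions, and identity over time as reachability.

```python
# Toy sketch only: the states and transitions are hypothetical labels,
# not a real model of personal identity. Configurations are states,
# admissible changes are transitions, and "same person over time" is
# read as reachability in the transition graph.
from collections import deque

TRANSITIONS = {
    "baseline": {"augmented"},
    "augmented": {"augmented", "merged"},
    "merged": {"merged"},
}

def reachable(start):
    """Breadth-first search over the transition relation: every future
    configuration this toy model counts as a continuation of `start`."""
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in TRANSITIONS.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

print(reachable("baseline"))  # {'baseline', 'augmented', 'merged'}
```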

So at any level, you'd better get used to asking stupid questions.

It's probably just me, but the Stack Exchange community seems to make this hard.

I think it would be nice if someone wrote a post on "visceral comparative advantage" giving tips on how to intuitively connect "the best thing I could be doing" with comparative advantage rather than absolute notions.

Yes, that would be nice. Personally speaking, it would be most dignifying if it could also address (and maybe dissolve) the probably less informed intuition that there is nothing wrong in principle with indulging all-or-nothing dispositions, save for the contingent residual pain. Actually, the first paragraph of your response seems to have almost done that, if not entirely.

I don't think many people on the "front lines," as you put it, have concrete predictions concerning merging with superintelligent AIs and so on. We don't know what the future will look like; if things go well, the options at the time will tend to be solutions we wouldn't think of now.

It may not be completely the same, but this does feel uncomfortably close to requiring an ignoble form of faith. I keep hoping there can still be very general yet very informative features of advanced states of the supposedly relevant kind.

I like how the book The Compound Effect makes you feel like anything is possible as long as you're consistent and get rid of bad habits.

yul, I think my worry is more about whether my past is a strong indication of my maximum human potential, and not so much about whether I'll repeat the same poor decisions.

This does help, thank you. I'd come to similar judgments and maybe couldn't sustain them for long because I didn't know of anyone else who held them.

I think this also happens to help me ask my question better. What I'd also like to know:

What are the intended trajectories of people on the front lines? Is it merging with super AIs to remain on the front lines, or is it "gaming" in lower-intelligence reservations structured by yet more social hierarchies and popularity contests? Is this a false dichotomy?

Neither is ultimately repugnant to me or anything; probably nothing future pharmaceuticals couldn't fix. I just truly don't know what they think they can expect. If I did, maybe I could have a better idea of what I can personally expect, so that I don't unnecessarily choose some trajectory in vain.

I guess what I was trying to communicate above, if there's anything there to communicate at all, is a kind of appreciation, as someone with analogous first-hand experience, for how not-fun it may be to have no choice but to live in a lower-intelligence reservation. So if all of us ultimately have no choice in such a matter, what are some things we might see in value journals while living in a reservation? (Assuming the values wouldn't tend to be fundamentally derived from any kind of idolatry.)

This sounds like something I could maybe benefit from. But I may still need some prompting beforehand, something that would lead to me actually doing it. I'm not yet sure...

Due to a long series of poor decisions from my early teens to my mid-twenties, I hadn't been able to muster enough motivation, self-esteem, or whatever else I needed to get back on the track I should have been on to begin with and get into a profession I could feel good about. But it could also just be a matter of too-low intelligence.

This experience seems to have forever colored my ultimate outlook in a dolorous way.

Maybe due to long periods of not feeling good about being nowhere near the front lines of anything important, I deeply wonder how people in communities like this perceive future scenarios where "things go relatively very well" and we're not wiped out. In particular, if super AIs are running things, do you see yourselves having merged with them so that you're still on the front lines doing important things, or do you see yourselves more as entities like gamers, solving problems, but relatively very unimportant ones, analogous to what people like me do today (virtually out of necessity)?

And personally, finding some small niche and indirectly bolstering the front lines in some relatively small way, whether now or in the future, would not be valuable, satisfying, or something to particularly look forward to. That's also why I'm asking.