Tenoke

https://svilentodorov.xyz/

Comments

Tenoke

They are not full explanations, but they're as far as I, at least, can get.

>tells you more about what exists

It's still more satisfying, because a state of ~everything existing is more 'stable' than a state of a specific something existing, in exactly the same way that nothing makes more sense to me as a default state than something, which is why I'm asking the question at all. Nothing existing and everything existing just require less explanation than a specific something existing. That doesn't mean they necessarily require zero explanation.

And if everything mathematically describable and consistent/computable exists, I can wrap my head around it not requiring an origin more easily, in a similar way to how I don't require an origin for actual mathematical objects, and without it necessarily seeming like a type error (though that's the counterargument I most consider here), as it does with most explanations.

>because how can you have a "fluctuation" without something already existing, which does the fluctuating

That's at least somewhat more satisfying to me because we already know about virtual particles and fluctuations from quantum mechanics, so there's at least a recognized low-level mechanism that causes something to exist even while the state is zero energy (nothing).

It still leaves us with nothing existing over something overall in at least one sense (zero total energy), and it's already demonstrable with fields, which are at the lowest level of what we currently know of how the universe works, and which can be examined and thought about further.

Tenoke

The only answers to why there is something instead of nothing that I currently find appealing are:

1. MUH is true, and all universes that can be defined mathematically exist. It's not a specific something that exists but all internally consistent somethings. 
or
2. The default state is nothing, but there are small positive and negative fluctuations (either literally quantum fluctuations, or something similar at a lower level), and over infinite time those fluctuations eventually result in a huge something like ours and other universes.

Also, even if 2 happens only at the level of regular quantum fluctuations, there's a non-zero chance of a new universe emerging from fluctuations after heat death, which over infinite time means it's bound to happen: a new universe, or a rebirth of ours from scratch, will eventually emerge.

Also, 1 can happen via 2 if the fluctuations are at such a low level that every possible mathematical structure eventually emerges over infinite time.
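To spell out the "bound to happen over infinite time" step (a minimal formalization of my own, assuming each post-heat-death epoch independently has some fixed probability $p > 0$ of a fluctuation spawning a universe):

$$P(\text{at least one universe in } n \text{ epochs}) = 1 - (1 - p)^n \to 1 \quad \text{as } n \to \infty.$$

Any strictly positive per-epoch chance, however small, is enough for the limit to hold.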

Tenoke

I am dominated by it, then; okay, I see what you are saying. Whichever scenario results in a higher chance of humans controlling the light cone is the one I prefer, and these considerations are relevant only where we don't control it.

Tenoke

Sure, but 1. I only put 80% or so on MWI/MUH etc., and 2. I'm talking about optimizing for more positive-human-lived-seconds, not just for a binary 'I want some humans to keep living'.

Tenoke

I have a preference for minds as close to mine as possible continuing to exist, assuming their lives are worth living. If it's misaligned enough that the remaining humans don't have good lives, then yes, it doesn't matter, but I'd lead with that rather than just with the deaths.

And if they do have lives worth living and don't end up being the last humans, then that leaves us with a lot more positive-human-lived-seconds in the 2B-deaths case.

Tenoke

Okay, then what are your actual probabilities? I'm guessing they're not sub-20%, otherwise you wouldn't just say "<50%", because for me preventing, say, a 10% chance of extinction is much more important than even a 99% chance of 2B people dying. And your comment was specifically dismissing focus on full extinction due to the <50% chance.
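To make that comparison concrete (my own numbers, not from the thread: roughly 8B people alive today, and extinction also foreclosing all future lives):

$$\underbrace{0.99 \times 2\text{B} \approx 1.98\text{B}}_{\text{expected near-term deaths, non-extinction case}} \quad > \quad \underbrace{0.10 \times 8\text{B} = 0.8\text{B}}_{\text{expected near-term deaths, extinction case}},$$

so in near-term deaths alone the 2B case actually looks worse; extinction dominates only once foreclosed future positive-human-lived-seconds are counted, since $0.10 \times N_{\text{future}}$ exceeds both for any sufficiently large number of potential future people $N_{\text{future}}$.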

Tenoke

>unlikely (<50% likely).

That's a bizarre bar to me! 50%?! I'd be worried if it were 5%.

Tenoke

It's a potentially useful data point, but probably only slightly comparable. Big, older, well-established companies face stronger and different pressures than small ones, and they do have more to lose. For humans, that's much less the case after a point.

Tenoke

>"The problem is when people get old, they don't change their minds, they just die. So, if you want to have progress in society, you got to make sure that, you know, people need to die, because they get old, they don't change their mind." 

That's valid today, but I'm willing to bet a big reason why old people change their minds less is biological: less neuroplasticity, accumulated damage, mental fatigue, etc. If we're fixing aging, and we fix those as well, it should be less of an issue.

Additionally, if we are in some post-death utopia, I have to assume we have useful, benevolent AI solving our problems, and that ideally it doesn't matter all that much who held a lot of wealth or power before. 

Tenoke

>He does not have a good plan for alignment, but he is far less confused about this fact than most others in similar positions.

Yes, he seems like a great guy, but he doesn't just come across as not having a good plan; he comes across as them being completely disconnected from having a plan or doing much of anything:

>JS: If AGI came way sooner than expected we would definitely want to be careful about it.

>DP: What would being careful mean? Presumably you're already careful, right?

And yes, aren't they being careful? Well, it sounds like no:

>JS: Maybe it means not training the even smarter version or being really careful when you do train it. You can make sure it's properly sandboxed and everything. Maybe it means not deploying it at scale or being careful about what scale you deploy it at.

"Maybe"? That's a lot of maybes for just potentially doing the basics. Their whole approximation of a plan is 'maybe not deploying it at scale' or 'maybe' stopping training after that and only theoretically considering sandboxing it?. That seems like kind of a bare minimum and it's like he is guessing based on having been around, not based on any real plans they have.

He then goes on to mollify: it probably won't happen in a year... it might be a whole two or three years, and this is where they are at:

>First of all, I don't think this is going to happen next year but it's still useful to have the conversation. It could be two or three years instead.

It comes off as if all their talk of safety is complete lip service, even if he agrees with the need for safety in theory. If you were 'pleasantly surprised and impressed', I shudder to imagine what the responses would have had to be to leave you disappointed.
