> I can't really get why one would need to know which configuration gave rise to our universe.
This was with respect to the feasibility of locating our specific universe so it could be simulated at full fidelity. It's unclear whether that's feasible, but if it were, it could give a way to get at an entire future state of our universe.
> I can't see why we would need to "distinguish our world from others"
This was only a point about useful macroscopic predictions any significant distance into the future; prediction relies on information that distinguishes which world we're in.
...For n
> reevaluate how you're defining all the terms that you're using
Always a good idea. As for why I'm pointing to EV: epistemic justification and expected value both entail scoring rules for ways to adopt beliefs. Combining both into the same model makes it easier to discuss epistemic justification in situations with reasoners with arbitrary utility functions and states of awareness.
Knowledge as mutual information between two models induced by some unspecified causal pathway allows me to talk about knowledge in situations where beliefs could follow from arbitra...
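A minimal sketch of the mutual-information framing above (my own illustration; the joint distribution is made up). The more a belief state tracks a world state, the higher the mutual information, regardless of which causal pathway produced the correlation:

```python
from math import log2

# Hypothetical joint distribution P(world_state, belief_state) for a binary feature.
joint = {
    ("on", "thinks_on"): 0.45, ("on", "thinks_off"): 0.05,
    ("off", "thinks_on"): 0.05, ("off", "thinks_off"): 0.45,
}

def mutual_information(joint):
    # Marginals for each variable, then the standard discrete MI sum.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

print(mutual_information(joint))  # ~0.53 bits: the belief carries knowledge of the world
```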
> Serious thinkers argue for both trying to slow down (PauseAI), and for defensive acceleration (Buterin, Aschenbrenner, etc)
Yeah, I'm in both camps. We should do our absolute best to slow down how quickly we approach building agents, and one way is leveraging AI that doesn't rely on being agentic. It offers us a way to do something like global compute monitoring, and it could also relieve the short-term incentives that building agents would satisfy, by offering a safer avenue. Insofar as a global moratorium stopping all large model research is feasible, we s...
> The problem with that and many arguments for caution is that people usually barely care about possibilities even twenty years out.
It seems better to ask what people would do if they had more tangible options, such that they could reach a reflective equilibrium which explicitly endorses particular tradeoffs. People mostly end up not caring about possibilities twenty years out because they don't see how their options constrain what happens in twenty years. This points to not treating their surface preferences as central insofar as they are not following from ...
> He was talking about academic philosophers.
This was a joke referencing academic philosophers rarely being motivated to pick satisfying answers in a time-dependent manner.
> Are you saying that the mechanism of correspondence is an "isomorphism"? Can you please describe what the isomorphism is?
An isomorphism between two systems indicates those two systems implement a common mathematical structure -- a light switch and one's mental model of the light switch are both constrained by having implemented this mathematical structure such that their central beh...
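A minimal sketch of that claim (my illustration, with made-up names): a physical light switch and a mental model of it both implement the same two-state structure, and a structure-preserving bijection between them commutes with "toggle".

```python
# System 1: the physical switch.
SWITCH_STATES = ["up", "down"]
def flip_switch(state):
    return "down" if state == "up" else "up"

# System 2: the mental model (belief about whether the light is on).
MODEL_STATES = [True, False]
def update_model(belief):
    return not belief

# The candidate isomorphism: map each physical state to the corresponding belief.
iso = {"up": True, "down": False}

# Structure preservation: mapping then updating equals updating then mapping.
for s in SWITCH_STATES:
    assert iso[flip_switch(s)] == update_model(iso[s])

# Because the check passes, predictions made inside the mental model
# ("if I flip it, the light goes off") track the switch's actual behavior.
```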
> While many computations admit shortcuts that allow them to be performed more rapidly, others cannot be sped up.
In your Game of Life example, one could store grids larger than 3x3 and get the complete mapping from states to next states, reusing it to produce more efficient computations. The full state -> next state table permits compression, bottoming out in a minimal generating set for next states. One can run the rules in reverse and generate all of the possible initial states that lead to any state without having to compute bottom-up for eve...
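To make the memoization point concrete, here's a minimal sketch (my own illustration, assuming standard Conway rules): precompute the next state of a cell for every possible 3x3 neighborhood (2^9 = 512 entries), then step a grid by table lookup instead of re-deriving the rule each time. Larger blocks (e.g. mapping a 4x4 block to its inner 2x2 successor) trade more memory for fewer lookups per cell.

```python
from itertools import product

def next_center(neigh):
    """neigh: 9 cells of a 3x3 block in row-major order; returns next state of the center."""
    center = neigh[4]
    alive = sum(neigh) - center
    return 1 if alive == 3 or (center == 1 and alive == 2) else 0

# Lookup table keyed by the 3x3 neighborhood packed into 9 bits.
TABLE = {}
for bits in product((0, 1), repeat=9):
    key = sum(b << i for i, b in enumerate(bits))
    TABLE[key] = next_center(bits)

def step(grid):
    """Advance a 2D list of 0/1 cells one generation (dead boundary)."""
    h, w = len(grid), len(grid[0])
    def cell(r, c):
        return grid[r][c] if 0 <= r < h and 0 <= c < w else 0
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            bits = [cell(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            key = sum(b << i for i, b in enumerate(bits))
            new[r][c] = TABLE[key]
    return new

# Example: a blinker oscillates with period 2.
blinker = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
assert step(step(blinker)) == blinker
```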
>They leave those questions "to the philosophers"
Those rascals. Never leave a question to philosophers unless you're trying to drive up the next century's employment statistics.
> But why would there exist something outside a brain that has the same form as an idea? And even if such facts existed, how would ideas in the mind correspond to them? What is the mechanism of correspondence?
The missing abstraction here is isomorphism. Isomorphisms describe things that can be true in multiple systems simultaneously. How would the behavior of a light switch corresp...
> You are Elon Musk instead of whoever you actually are.
This combines descriptions that are each only locally accurate in two different worlds, and it isn't coherent as a thought experiment asking about the one world that fits both descriptions.
Conditional prediction markets could resolve to the available options weighted by the calibration, on similar subjects, of the holders in the unconditional markets, rather than resolving N/A. Such markets might end up looking like predicting what well-calibrated people will pick, or following on after they bet (implying not expecting significant well-calibrated disagreement). Well-calibrated people could then expect to earn a profit by betting in conditional markets if they bet closer to the consensus than the market does, partly weighted in their favor for being better calibrated relative to the whole market.
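A rough sketch of one way the resolution rule described above could work; the trader names, calibration scores, and weighting scheme are all made up for illustration:

```python
def calibration_weighted_resolution(positions, calibration):
    """
    positions:   {trader: probability implied by that trader's bets on the outcome}
    calibration: {trader: calibration score in [0, 1] on similar past questions}
    Returns the probability the conditional market resolves to.
    """
    total_weight = sum(calibration[t] for t in positions)
    if total_weight == 0:
        return None  # fall back to N/A if no calibrated traders are present
    return sum(calibration[t] * p for t, p in positions.items()) / total_weight

# Example: two well-calibrated traders mostly agree; a poorly calibrated trader
# disagrees but contributes little to the resolution value.
positions = {"alice": 0.70, "bob": 0.65, "carol": 0.10}
calibration = {"alice": 0.9, "bob": 0.8, "carol": 0.2}
print(calibration_weighted_resolution(positions, calibration))  # ~0.62
```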
I'm glad you wrote this; it adds some interesting context, previously unfamiliar to me, for this market I opened around a week ago: https://manifold.markets/dogway/which-is-the-earliest-year-well-hav#wji33pv4fcj
I was entertaining the possibility of powder- or fluid-based metal as an input to 3D printing, which works today for fabricating metal components and seems likely to improve significantly with time. I was considering this avenue to be the most likely way that the threshold of full fidelity-preserving self-reproduction is passed, but I have no expertise...
> I think it's pretty clear that any foundations are also subject to justificatory work
EV is the boss turtle at the bottom of the turtle stack. Dereferencing justification involves a boss battle.
> there's some work to be done to make them seem obvious
There's work to show how justification for further things follows once EV is among the starting assumptions, but not to take on EV as an assumption in the first place, since people already have EV-calculatingness built into their behaviors, in a way that can be pointed out to them.
> ...Sometimes—unavoidably, as far as I can tell—those
> Some beliefs do not normatively require justification;
Beliefs have to be justified on the basis of EV, such that they fit in a particular way into that calculation, and justification comes from EV of trusting the assumptions. Justification could be taken to mean having a higher EV for believing something, and one could be justified in believing things that are false. Any uses of justification to mean something not about EV should end up dissolving; I don't think justification remains meaningful if separated.
> Some justifications do not rest on beliefs
Justifi...
Multiple argument chains without repetition can demonstrate anything a circular argument can. A circular argument constrains no beliefs beyond what the form disallowing repetition already constrains (and that form avoids costly epicycles). The initial givens imply the conclusion, and they carry through to every point in the argument, implying the whole.
One trusts a proof contextually, as a product of the trust in the assumptions that led to it in the relevant context. Insofar as Bayesianism requires justification, it can be justified as a dependency in EV calculations.
> We're not going to find a set of axioms which just seem obvious to all humans once articulated.
People understand EV intuitively as a justification for believing things, so this doesn't ring true to me.
> The premise A can be contingently true rather than tautologically.
True, I should have indicated I was rejecting it on the basis of repe...
> I think it's fair to say that the most relevant objection to circular arguments is that they are not very good at convincing someone who does not already accept the conclusion.
All circular reasoning which is sound is tautological and cannot justify shifting expectation.
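In Bayesian terms (my framing, not from the original comment): a derivation whose premises already include the conclusion is guaranteed to exist, so observing it cannot move the probability of the conclusion:

$$P\left(C \mid C \vdash C\right) \;=\; \frac{P\left(C \vdash C \mid C\right)\, P(C)}{P\left(C \vdash C\right)} \;=\; \frac{1 \cdot P(C)}{1} \;=\; P(C).$$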
> The point is, you have to live with at least one of:
No branch of this disjunction applies. Justifications for assumptions bottom out in the EV of the reasoning, and so are justified when the EV calculation is accurate. A reasoner can accept less than perfect...
'Self' is "when the other agent is out of range" and 'Other' is "when the other agent is out of range and you see it teleport to a random space". It's unclear to me what reducing the distance between these representations would be doing other than degrading knowledge of the other agent's position. The naming scheme seems to suggest that the agent's distinction of self and other is what is degrading, but that doesn't sound like it's the case. I doubt this sort of loss generalizes to stable non-deceptive behavior in the way that defining the agent's loss more directly in terms of a coalition value for multiple agents, one that is lower when agents are deceived, would.
I appreciate the speculation about this.
> redesigning and going through the effort of replacing it isn't the most valuable course of action on the margin.
Such effort would most likely be a trivial expenditure compared to the resources those actions are about acquiring, and it would be less likely to entail significant opportunity costs than in the case of humans taking those actions, since AIs could parallelize their efforts when needed.
The number of von Neumann probes one can produce should go up with the amount of planetary material used, so I'm not sure the adequ...
Human Intelligence Enhancement via Learning:
Intelligence enhancement could entail cognitive enhancements which increase the rate / throughput of cognition or increase memory, or the use of BCIs or AI harnesses which offload work / agency or complement existing skills and awareness.
In the vein of strategies which could eventually lead to ASI alignment by leveraging human enhancement, there is an alternative to biological / direct enhancements that try to influence cognitive hardware: instead, externalize one's world model and some of the agency necessa...
'Alignment' has been used to refer to both aligning a single AI model and the harder problem of aligning all AIs. This difference in how the word alignment is used has led to some confusion. Alignment is not solved by aligning a single AI model, but by using a strategy which prevents catastrophic misalignment/misuse from any AI.
> The original alignment thinking held that explaining human values to AGI would be really hard.
The difficulty was suggested to be in getting an optimizer to care about what those values are pointing to, not to understand them[1]. If in some instances the values mapped to doing something unwise, an optimizer that understood those values might still fail to be constrained away from doing that unwise thing. Getting a system to use extrapolated preferences as behavioral constraints is a deeper problem than getting a system to reflect surface preferences. The high p(d...
Yudkowsky + Wolfram Debate
Some language to simplify some of the places where the debate got stuck.
Is-Ought
Analyzing how to preserve or act on preferences is a coherent thing to do, and it's possible to do so without assuming a one true universal morality. Assume a preference ordering, and now you're in the land of is, not ought, where there can be a correct answer (highest expected value).
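Put a bit more formally (my notation): once a preference ordering is fixed and can be represented by a utility function $u$, which action is best is an "is" question with a determinate answer,

$$a^{*} \;=\; \arg\max_{a}\; \mathbb{E}\left[u \mid a\right] \;=\; \arg\max_{a} \sum_{o} P(o \mid a)\, u(o),$$

while whether $u$ is the right utility function to have is the separate "ought" question.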
Is There One Reality?
Let existence be defined to mean everything, all the math, all the indexical facts. "Ah, but you left out-" Nope, throw that in too. Everything. Exis...