Comments

hwold2mo20

although I'm not sure if he claims that every ruliad must have observers

 

Of course yes, since there's only one ruliad by definition, and we’re observers living inside it.

In Wolfram's terms, I think the question would be more like: "does every slice in rulial space (or every rulial reference frame) have an observer?"

Possibly of interest: https://writings.stephenwolfram.com/2023/12/observer-theory/

One part that I don’t see sufficiently emphasized is the "as a time-persistent pattern" part. It seems to me that this part brings with it a lot of constraints on which partition languages yield time-persistent patterns.

hwold2mo21

Do we have the value of the sum as a function of x, before going to the limit as x goes to 0? If so, it would help (bonus points if it can be proven in a few lines).

hwold2mo10

To me, the correct way to do this is to compute the (implied) rate of return of the solar investment over the lifetime of the panels:

(x - x^25)/(1 - x) = 20/3.2 => x ~ 0.85

x = 1/(1+r) => r ~ 0.17

So yes, a 17% rate of return is insanely good (if the two assumptions, "25-year lifetime" and "3.2k/year", stand) and will beat pretty much every other investment.

(which should make you suspicious of the assumptions)
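
For what it’s worth, a quick numerical cross-check of that implied rate (a sketch in Python, assuming the 20k upfront cost, 3.2k/year of savings, a 25-year lifetime, and cash flows at the end of each year):

```python
def npv(r, cost=20.0, cashflow=3.2, years=25):
    """Net present value of the solar investment at discount rate r."""
    return -cost + sum(cashflow / (1 + r) ** t for t in range(1, years + 1))

# Bisect for the internal rate of return (the r where the NPV crosses zero).
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        lo = mid
    else:
        hi = mid

print(round(lo, 3))  # ~0.156, the same ballpark as the ~17% back-of-the-envelope figure
```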

hwold2mo811

A person stating which entities they admit into their hypotheses, that others may not (“I believe in atoms”; “I believe in God”).

This one does not look like the others to me.

hwold3mo1-60

I am not a community organizer, but if I were, I would simply decree the MAD rule: if any dispute can’t be reasonably settled (in a way that would otherwise escalate to a panel), both the plaintiff and the defendant are kicked out of the community. The ratio of assholes to victims being (hopefully) low, frame it as a noble sacrifice by the victim to keep the rest of the community safe (whoever the actual victim here is). "Some of you may die, but it’s a sacrifice I’m willing to make."

True, you’re slightly disincentivizing the reporting of true abuses (but not by much, since you’re saving a lot on the trauma of investigation). But you’re also heavily disincentivizing both false accusations and "I’m a good manipulator, so I’m confident I can get away with it". Add to that the man-hours saved, and I’m pretty confident it is a net gain, as long as you can enforce it.

As a bonus, if a dispute ends in community bans, you can be reasonably sure that the plaintiff was the victim. But of course, that only works if you don’t go from that observation to concluding "therefore the victim is un-banned". A noble sacrifice, really, in service of both community and truth. A sacrifice nonetheless.

hwold4mo186

category boundaries should be drawn for epistemic and not instrumental reasons

 

Sounds very wrong to me. In my view, computationally unbounded agents don’t need categories at all; categories are a way for computationally bounded agents to approximate perfect Bayesian reasoning, and how to judge the quality of the approximation will depend on the agent’s goals: different agents with different goals will care differently about the same error.

(It's actually somewhat interesting; the logarithmic score doesn't work as a measure of category-system goodness because it can only reward you for the probability you assign to the exact answer, but we want "partial credit" for almost-right answers, so the expected squared error is actually better here, contrary to what you said in the "Technical Explanation" about what Bayesian statisticians do)

Yes, exactly. When you’re at the point of deciding between log-loss and MSE, you’re no longer doing pure epistemics; you’re entering the realm of decision theory. You’re crafting a measure of how good your approximation is, a measure that can and should be tailored to your specific goals as a rational agent. Log-loss and MSE are only two possibilities in a vast universe of such measures, ones that are quite generic and therefore not optimal for any given agent’s goals.
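
As a toy illustration of the "partial credit" point in the quote above (a sketch in Python; the distributions and values are made up for illustration):

```python
import numpy as np

values = np.arange(10)   # possible answers 0..9
true_value = 5

# Prediction A: most of its mass exactly on the true answer.
p_a = np.full(10, 0.05)
p_a[true_value] = 0.55

# Prediction B: mass spread over almost-right answers (4, 5, 6).
p_b = np.zeros(10)
p_b[[4, 5, 6]] = 1 / 3

for name, p in [("A", p_a), ("B", p_b)]:
    log_score = np.log(p[true_value])                       # rewards only the exact answer
    exp_sq_error = np.sum(p * (values - true_value) ** 2)   # gives partial credit to nearby mass
    print(name, round(log_score, 3), round(exp_sq_error, 3))
```

B scores worse on the log score but far better on expected squared error, precisely because its mass sits on almost-right answers.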

hwold4mo30

Naive question about immunogenicity: what are the problems with the obvious strategy to counter it (target the thymus first to "whitelist" the delivery method)?

hwold5mo13-1

A prime example of what (I believe) Yudkowsky is talking about in this bullet point is Social Desirability Bias. 

"What is the highest cost we are willing to spend in order to save a single child dying from leukemia ?". Obviously the correct answer is not infinite. Obviously teaching an AI that the answer  to this class of questions is "infinite" is lethal. Also, incidentally, most humans will reply "infinite" to this question.

hwold3y20

In general Perpetuals trade above the price of the underlying coins

I’m confused by this. Doesn’t this mean that long positions almost always pay short positions, even if the index is increasing? If so, why would anyone go long on the future?

What’s the point of buying bitcoins in your scheme?
