Benaya Koren

I don't think that this solution gives you everything that you want from semantic categories. Assume for example that you have a multidimensional cluster with heavy tails (for simplicity, assume symmetry under rotation). You measure some of the features, and determine that the given example belongs to the cluster almost surely. You want to use this fact to predict the other features. Knowing the deviation of the known features is still relevant to your uncertainty about the other features. You may think of this extra property as measuring "typicality", or as measuring "how much it really belongs in the cluster".
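A minimal sketch of this effect (my own illustration, not from the original comment; the dimension, degrees of freedom, and bin edges are arbitrary choices): in a rotationally symmetric heavy-tailed cluster such as a multivariate Student-t, the spread of an unmeasured feature grows with the deviation of the measured ones, whereas in a Gaussian it would not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, df = 200_000, 4, 3  # df=3 gives heavy tails

# Multivariate Student-t via a Gaussian scaled by an inverse-chi-square mixer
z = rng.standard_normal((n, dim))
scale = np.sqrt(df / rng.chisquare(df, size=n))[:, None]
x = z * scale

obs, hidden = x[:, :3], x[:, 3]     # measure 3 features, predict the 4th
r = np.linalg.norm(obs, axis=1)     # deviation of the measured features

for lo, hi in [(0.0, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, np.inf)]:
    mask = (r >= lo) & (r < hi)
    print(f"||obs|| in [{lo}, {hi}): std of hidden feature = "
          f"{hidden[mask].std():.2f}  (n={mask.sum()})")

# For a standard Gaussian the printed stds would all be ~1; here they grow
# with ||obs||, so the "typicality" of the known features carries real
# information about the unknown ones.
```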

Grammatically, the most obvious interpretation is a universal quantification

Here I mostly agree.

I think it's best to put such qualified language into your statements from the start.

Here I don't, for the same reason that I don't ask about "water in the refrigerator outside eggplant cells": pragmatics are, for better or worse, part of the language.

I would very much like to read such a post. I have the basic intuition that it is a soft form of "witness" (as in complexity/cryptography), but it is not very developed.

I think it would be helpful, when dealing with such foundational topics, to taboo "justification", "validity", "reason", and some related terms. It is too easy to stop the reduction there, and forget to check what their cause and function are in our self-reflecting epistemic algorithm.

The question shouldn't be whether circular arguments are "valid" or give me "good reason to believe", but whether I may edit the parts of my algorithm that handle circular arguments and, as a result, expect (according to my current algorithm) to end up with stronger conviction in more true things.

Your Bayesian argument, that if the claim were false the circle would likely end in contradiction, I find convincing, because I am already convinced to endorse this form of Bayesian reasoning. As a norm it has properties that make sense according to earlier heuristics that were hopefully good, including the heuristic that my heuristics are sometimes bad and that I want to be reasonably robust to that fact. Also, this principle may not be implementable absolutely without sacrificing other things that I care about more.
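A toy numeric sketch of that Bayesian argument (my own illustration; the prior and likelihoods are made-up numbers): if contradictions usually surface when the claim is false, then traversing the circle without hitting one should raise our credence.

```python
prior = 0.5               # initial credence in the claim
p_clean_if_true = 0.95    # chance the circle closes cleanly if the claim is true
p_clean_if_false = 0.30   # assumed: contradictions usually show up if it is false

# Bayes' rule on the observation "the circle closed without contradiction"
posterior = (p_clean_if_true * prior) / (
    p_clean_if_true * prior + p_clean_if_false * (1 - prior)
)
print(f"credence after a clean circle: {posterior:.2f}")  # ~0.76 > 0.5
```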

5 disagrees and no dislikes on a rare political position - if only the rest of the world were that sane.

Hi, just saw the old thread. Anyway, as an Israeli my answer is strongly 2, though it depends on what you mean by ideology. The maximum that most Israelis would be willing to give, due to national security considerations, is less than the minimum that Palestinians would be willing to accept, due to national pride and ethos - in terms of land, degree of autonomy, and mostly the solution for the descendants of the 1948-9 refugees inside Israel.

From the US perspective far easier to just deliver an ultimatum on settlement building full stop

The question is different: is such an ultimatum more likely to be accepted?

the fewer settlers, the fewer troublemakers

It is not my impression that the troublemakers come from Ariel.

Also that provides an incentive for those who live in the settlements to come to an agreement on a two state solution since that will free up their land for further building.

Here our perceptions of people from Ariel may differ in the other direction: do you see them supporting any two-state solution that the Palestinians would agree to, under any realistic circumstances?

End settlement construction. Full stop

I think some nuance is missing here. I agree that the settlements were a bad idea to begin with, and that expanding to new areas is bad. But Israeli cities in the West Bank like Ariel are not going anywhere, nor are places like Oranit. Given that you and I know that, it should be made very visible to the Palestinians and other stakeholders - maybe even by building those places even denser, while keeping other areas visibly empty and ready for land swaps. Nothing is worse for peace than unrealistic expectations.

the "policymaker prior" is usually to think "if there is a dangerous, tech the most important thing to do is to make the US gets it first."

This sadly seems to be the case, and it makes the dynamics around AGI extremely dangerous even if the technology itself were as safe as a sponge. What does the second most powerful country do when it sees its more powerful rival that close to decisive victory? Might it start taking careless risks to prevent it? Initiate a war while it can still imaginably survive one, just to make the race stop?

find institutional designs and governance mechanisms that would appeal to both the US and China

I'm not a fan of China, but I actually expect the US to be harder here. From China's point of view, the race means losing, or WWIII and then losing. Anything that slows AI down gives them time to become stronger in the normal way. For the US, it interacts with politics and free-market norms, and with the fantasy of getting China to play by the rules and lose.

My understanding is that under Georgism the state is supposed to be paid for value increases, not for usage, and that it can't just kick you out and put someone else in as long as you pay its honest estimate of the value increase. So it is still not the same as the state owning the land and being able to sell it to the next user.
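A toy numeric reading of that interpretation (my own illustration; all figures are made up): the occupant owes the assessed increase in unimproved land value, independent of what they do with the land, and keeps tenure as long as they pay it.

```python
land_value_t0 = 100_000   # assessed unimproved land value when occupied
land_value_t1 = 130_000   # assessed value later (e.g., the area developed)
capture_rate = 1.0        # full capture of the increase, per this reading

tax_owed = capture_rate * (land_value_t1 - land_value_t0)
print(f"tax owed on the value increase: {tax_owed}")  # 30000

# Nothing here depends on the occupant's usage, and the state cannot evict
# a paying occupant to resell, so this differs from outright state ownership.
```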
