ProgramCrafter

I've started writing a small research paper on this using a mathematical framework, and realized that I had long conflated Shapley values with ROSE values. Here's what I found after correcting that error.

ROSE bargaining satisfies Efficiency, Pareto Optimality, Symmetry*, Maximin Dominance and Linearity - a bunch of important desiderata. Shapley values, on the other hand, don't satisfy Maximin Dominance, so someone might unilaterally reject cooperation; I'll explore the ROSE equilibrium below.

  1. Subjects: people, and services for finding partners.
  2. By Proposition 8.2, the ROSE value remains the same if moves that merely transfer money within the game are discarded. Thus, we can assume no money transfers.
  3. By Proposition 11.3, the ROSE value for a dating service is greater than or equal to its maximin.
  4. By Proposition 12.2, the ROSE value for a dating service is less than or equal to its maximum attainable value.
  5. A person generally has one move to maximize their utility: use the available dating service with the highest probability of success (or expected relationship quality).
  6. A service generally has two moves: to launch or not to launch. The first yields some intrinsic motivation and feeling of goodness minus running costs; the second has a value of exactly zero.
  7. For a large service, running costs (including moderation) exceed any realistic intrinsic motivation. Therefore, its maximum and maximin values are both zero.
  8. From (7), (3) and (4), the ROSE value for a large dating service is zero.
  9. Therefore, total money transfers to a large dating service equal its total costs.
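Steps (3)-(8) are an interval-pinching argument; a minimal numeric sketch (with entirely made-up magnitudes for motivation and costs, not figures from any real service):

```python
# Hypothetical numbers for a large dating service (step 7): running
# costs dwarf intrinsic motivation, so launching has negative value.
# With money transfers discarded (step 2), the service's payoff depends
# only on its own move: launch, or don't (exactly 0).
intrinsic_motivation = 100_000.0
running_costs = 5_000_000.0

launch = intrinsic_motivation - running_costs   # negative
do_not_launch = 0.0

# Both the maximin and the maximum over the service's moves equal 0,
# so Propositions 11.3 and 12.2 pin the ROSE value between 0 and 0.
maximin_value = max(launch, do_not_launch)
maximum_value = max(launch, do_not_launch)
rose_lower, rose_upper = maximin_value, maximum_value
print(rose_lower, rose_upper)   # 0.0 0.0
```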

So, why yes or why no?


By the way, Shapley values suggest paying a significant sum! Given a relationship value of $10K (which can be scaled) and four options for finding partners ( -- self-search,  -- friend's help,  -- dating sites,  -- the specialized project suggested up the comments), the Shapley-fair prices per success would be $550, $650 and $4400 respectively.
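As a sketch of how such Shapley-fair prices could be computed: the snippet below uses the standard Shapley formula with a made-up two-player coalition game (the success probabilities are placeholders, not the numbers behind the $550/$650/$4400 figures above).

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value per player, given coalition value function v(frozenset)."""
    n = len(players)
    result = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # weight = |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))
        result[p] = total
    return result

# Hypothetical success probabilities, scaled by a $10K relationship value.
success_prob = {
    frozenset(): 0.0,
    frozenset({"self"}): 0.30,
    frozenset({"service"}): 0.50,
    frozenset({"self", "service"}): 0.60,
}
v = lambda s: 10_000 * success_prob[s]

print(shapley_values(["self", "service"], v))
# {'self': 2000.0, 'service': 4000.0} -- and they sum to v(grand coalition)
```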

P.S. I'm explicitly not open to discussing what price I'd be cheerful to pay to a service that would help me build relationships. In this thread, I'm more interested in whether there are new decision theory developments which would find maximin-satisfying equilibria closer to the Shapley one.

at a $100k valuation of a marriage

I don't think one can coherently value a marriage 20 times as much as a saved life ($5k, as GiveWell estimates)? There is indeed more emotional attachment to a person who's your partner than to a random human in the world, but surely not that much?

And if a marriage is valued at $10k, then the 1%/10% credit assignment would make the allocation $100/$1000 - and it seems that people really want to round the former towards zero.

When rereading [0 and 1 Are Not Probabilities], I thought: can we ever quantify our amount of information in infinite domains, perhaps with something resembling hyperreals?

  1. A uniformly random rational number from  is taken. There's an infinite number of options, meaning the prior probabilities are all zero, so we need an infinite amount of evidence to single out any number.
    (It's worth noting that we have codes that can encode any specific rational number as a finite word - for instance, first apply a bijection from the rationals to the natural numbers, then use Fibonacci coding; but in expectation we need to receive infinitely many bits to learn an arbitrary number.)

    Since the ∞ symbol doesn't have nice properties with regard to addition and subtraction, we might define a symbol  meaning "we need some information to single out one natural number out of their full set". Then, the uniform prior over  would have the form  (the prefix and suffix standing for values outside the  segment), while a communication "the number is " would carry  bits of evidence on average, making the posterior .
  2. The previous approach suffers from a problem, though. What if two uniformly random rationals  are taken, forming a square on the coordinate grid?
    If we've been communicated  information about , we clearly have learned nothing about  and thus cannot pinpoint the specific point, requiring  more bits.

    However, there's a bijection between  and , so we can assign a unique natural number to any point in the square, and can therefore communicate it in  bits in expectation, without any coefficient .
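The two ingredients mentioned above can be made concrete: a pairing bijection (Cantor's pairing function is one standard choice, used here as an illustration) and the Fibonacci code, which is a finite, self-delimiting code word for any natural number.

```python
def cantor_pair(x: int, y: int) -> int:
    """Bijection N x N -> N: every pair gets a unique natural number."""
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z: int) -> tuple:
    """Inverse of cantor_pair: recover (x, y) from the diagonal index."""
    w = int(((8 * z + 1) ** 0.5 - 1) / 2)
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    y = z - w * (w + 1) // 2
    return w - y, y

def fibonacci_code(n: int) -> str:
    """Fibonacci (Zeckendorf-based) code for n >= 1; self-delimiting, ends in '11'."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                      # largest Fibonacci number <= n is now last
    bits, rem = [], n
    for f in reversed(fibs):
        if f <= rem:
            bits.append("1")
            rem -= f
        else:
            bits.append("0")
    return "".join(reversed(bits)) + "1"

print(cantor_pair(3, 5))        # 41: each (x, y) maps to a distinct natural
print(fibonacci_code(11))       # 001011: a finite word for a specific number
```

Each code word is finite, yet the expected length over a uniform "prior" on all naturals diverges - which is exactly the tension the parenthetical above points at.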

When I explored some more, I verified that a greater uncertainty (, the communication of one real number) makes smaller ones () negligible, and that evidence for a natural number can presumably be squeezed into the communication of a real value. That also makes the direction look unpromising.

 

However, there may still be a continuation: are there books/articles on how information is quantified given a distribution function?

Never say 'nothing' :-)

  1. the world might be in such a state that attempts to do good instead bring it into some failure, while doing the opposite is prevented by society
    (the AI rise, and the blame/credit the rationality movement takes for it, perhaps?)
  2. what if, for some numerical scale, the world gave you the option "with 50%, double your goodness score; otherwise, lose almost everything"? Maximizing EV on this is very dangerous...
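The danger in (2) can be seen numerically: the per-round expected value exceeds 1, yet the typical outcome collapses. A small simulation sketch, modelling "lose almost everything" as keeping 1% (an arbitrary assumption):

```python
import random

def play(rounds: int, start: float = 1.0, keep: float = 0.01) -> float:
    """Repeatedly take the gamble: 50% double the score, 50% keep only 1%."""
    score = start
    for _ in range(rounds):
        score = score * 2 if random.random() < 0.5 else score * keep
    return score

random.seed(0)
runs = [play(rounds=20) for _ in range(10_000)]
mean = sum(runs) / len(runs)
median = sorted(runs)[len(runs) // 2]
# Per round, the EV multiplier is 0.5*2 + 0.5*0.01 = 1.005 > 1, so the mean
# grows; but the median per-round multiplier is sqrt(2 * 0.01) ≈ 0.14, so
# the typical outcome shrinks towards zero.
print(mean, median)
```

The EV-maximizer keeps taking the bet forever, while almost every trajectory ends up with nearly nothing - the classic argument for log-utility/Kelly-style caution.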

I guess I'm looking for questions of this family:

  1. Do you sometimes say things that are not literally true but help the person you're talking to understand?
  2. On average, do you believe statements by members of the rationalist community significantly more (+1.0 bit of evidence or more) than the exact same words from non-rationalists?
  3. What is the biggest barrier you face when trying to communicate rational ideas to others? [a) Emotional resistance b) Lack of shared vocabulary c) Time constraints d) Preexisting strong beliefs e) Complexity of ideas f) People disengaging randomly]
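The "+1.0 bit of evidence" threshold in (2) can be made concrete: each bit of evidence doubles the odds ratio. A small sketch:

```python
def update_odds(prior_prob: float, bits: float) -> float:
    """Posterior probability after `bits` bits of evidence
    (each bit multiplies the odds ratio by 2)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * 2 ** bits
    return post_odds / (1 + post_odds)

print(update_odds(0.5, 1.0))   # 1:1 odds become 2:1, i.e. probability 2/3
```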

Also,

  • Have you ever intervened on someone's behalf when the person was failing and would have preferred to succeed?
  • How many people can a [brainstorming] conversation hold on average, so that everyone stays active?

I feel like an important question is: how far does this generalize? We can estimate the IQ gap between the dumbest person who successfully uses the internet (probably in the 80s) and the smartest malware author (has got to be at least 150+). Is that the limit somehow, or does this knack extend across even more orders of magnitude?

If we imagine a world where 100-IQ humans are using an internet that contains malware written by a 1000-IQ AGI, do the humans just "avoid the bad parts"?

For reactive threats, the upper bound is probably at most "people capable of introspection, who can detect that they are not sure some action will be a net benefit and therefore refuse to take it". For active threats, it's an arms race (>=40% that this race does not go to infinity - basically, if more-cooperative decision-theory strategies are any good).

Maybe this subject is researched more in biology? An example topic: eating unknown food (berries, nuts) in a forest, and the balance of lifetime adaptation vs evolutionary adaptation (which requires generations to pass).

It would be nice to see at least three questions demonstrating how a person extracts evidence from others' words, how much time and emotion they could spend if they needed to communicate a point precisely, etc.

I'll have to sleep on that, actually. Will return tomorrow, presumably with more concrete ideas)

99.9% of all cryptocurrency projects are complete scams (conservative estimate).

On first skim, I agree with the estimate as stated and would post a limit order for either side. I'd also note that it should be "crypto in general is terrible" rather than "all crypto is terrible", as there are applications that do not let you lose all funds without explicit acknowledgement.

Similarly, Cyber Security is terrible.  Basically every computer on the internet is infected with multiple types of malware.

It is presumably terrible (or, with 30% probability, that impression is availability bias), and I've observed bugs happen because a functionality upgrade did not consider its interaction with all the other code. However, I disagree that every computer is infected; you probably meant that every computer is under a constant stream of attack attempts?

 

The insecure domains mainly work because people have charted known paths and shown that, if you follow those paths, your loss probability is non-zero but small. As a matter of IT, it would be really nice to have systems which don't logically fail at all, but that requires good education and pressure-resistance skills for software developers.

I think TMS doesn't rewrite anything, instead activating neural circuits in another pattern? Then the new pattern is not depressed, the brain can notice that (on either a conscious or subconscious level) and make appropriate changes to neural connections.

Basically, I believe that whatever resulting patterns you dis-endorse (including "other parts of you changed into something non-native and alien") are "committed" with significantly lower probability.

P(Bitcoin) What is the probability that the price of one bitcoin will be higher on July 1st, 2025, than it was on December 1st, 2024? ($???)

Probably best to also include "what price of one bitcoin do you expect on July 1st, 2025, given that it was $??? on December 1st, 2024?".
You could also include P(weak EMH) instead of P(GPT-5 Release) if there's not enough space.

Overall, the questions seemed to me to test social skills insufficiently, preferring instead to probe large, "impactful" beliefs.
