samshap

samshap

There are, but what does having a length below 10^90 have to do with the Solomonoff prior? There's no upper bound on the length of programs.

samshap

Yes, you are missing something.

Any DEADCODE that can be added to a 1kb program can also be added to a 2kb program. The net effect is a wash, and you will end up with the same ratio over the priors.
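To spell out why it washes out (a sketch in my own notation: let $c_k$ be the number of distinct DEADCODE paddings of length $k$, and take 1kb = 8000 bits for round numbers), sum over all padded variants of a program $p$ of length $L_p$:

$$P(p) \;\propto\; \sum_{k \ge 0} c_k \, 2^{-(L_p + k)} \;=\; 2^{-L_p} \sum_{k \ge 0} c_k \, 2^{-k}$$

The sum on the right is the same constant for every program, so padding multiplies the prior of the 1kb program and the 2kb program by the same factor, leaving their ratio at $2^{-8000}/2^{-16000} = 2^{8000}$.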

Answer by samshap

Thirder here (with the acknowledgement that the real answer is to taboo 'probability' and figure out why we actually care).

The subjective indistinguishability of the two Tails wakeups is not a counterargument  - it's part of the basic premise of the problem. If the two wakeups were distinguishable, being a halfer would be the right answer (for the first wakeup).

Your simplified examples/analogies really depend on that fact of distinguishability. Since you didn't specify whether or not the wakeups are distinguishable in your examples, the payoff structure is underdetermined.

I'll also note you are being a little loose with your notion of 'payoff'. You are calculating the payoff for the entire experiment, whereas I define the 'payoff' as the odds offered at each wakeup (since there's no rule saying that Beauty has to bet the same way each time!).

To be concise, here's my overall rationale:

Upon each (indistinguishable) wakeup, you are given the following offer:

  • If you bet H and win, you get $N$ dollars.
  • If you bet T and win, you get $1$ dollar.

If you believe T yields a higher EV, then you have a credence $P(T) > \frac{N}{N+1}$.

Betting T does yield the higher EV for all $N$ up to 2, so $P(T) \ge \frac{2}{3}$. Thus you should be a thirder.
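A minimal simulation of that payoff structure (my own sketch; the payoffs are the ones from the offer above, and the function name is mine):

```python
import random

def ev_per_experiment(bet, n_dollars, trials=100_000):
    """EV for an agent that makes the same bet at every wakeup.

    Heads -> one wakeup; Tails -> two wakeups.
    A winning H bet pays n_dollars per wakeup; a winning T bet pays $1 per wakeup.
    """
    total = 0.0
    for _ in range(trials):
        coin = random.choice('HT')
        wakeups = 1 if coin == 'H' else 2
        if bet == coin:
            total += wakeups * (n_dollars if bet == 'H' else 1.0)
    return total / trials

for n in (1.5, 2.0, 2.5):
    print(n, ev_per_experiment('H', n), ev_per_experiment('T', n))
# Betting T wins for N < 2, ties at N = 2, and loses for N > 2 -
# exactly the indifference point P(T) = N/(N+1) = 2/3.
```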

Here's a clarifying example where this interpretation becomes more useful than yours:

The experimenter flips a second coin. If the second coin is Heads (H2), then N= 1.50 on Monday and 2.50 on Tuesday. If the second coin is Tails, then the order is reversed.

I'll maximize my EV if I bet T when $N = 1.50$, and H when $N = 2.50$. Both of these fall cleanly out of 'thirder' logic.

What's the 'halfer' story here? Your earlier logic doesn't allow for separate bets on each awakening.
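To make that concrete, here's a small simulation of the second-coin variant (again my own sketch, reusing the payoff convention above):

```python
import random

def ev(strategy, trials=100_000):
    """Per-experiment EV. strategy maps the offered N to a bet, 'H' or 'T'.

    Second coin H2: N = 1.50 on Monday and 2.50 on Tuesday; T2 reverses the order.
    First coin Heads -> wake on Monday only; Tails -> wake Monday and Tuesday.
    """
    total = 0.0
    for _ in range(trials):
        coin1, coin2 = random.choice('HT'), random.choice('HT')
        offers = [1.50, 2.50] if coin2 == 'H' else [2.50, 1.50]
        wakeups = offers[:1] if coin1 == 'H' else offers
        for n in wakeups:
            bet = strategy(n)
            if bet == coin1:
                total += n if bet == 'H' else 1.0
    return total / trials

print(ev(lambda n: 'T'))                    # always bet T:   ~1.000
print(ev(lambda n: 'H'))                    # always bet H:   ~1.000
print(ev(lambda n: 'T' if n < 2 else 'H'))  # per-wakeup rule: ~1.125
```

The per-wakeup rule nets about $1.125 per experiment versus $1.00 for either fixed bet, which is the point: the bet has to be evaluated at each awakening.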

samshap

Thanks for sharing that study. It looks like your team is already well-versed in this subject!

You wouldn't want something that's too hard to extract, but I think restricting yourself to a single encoder layer is too conservative - LLMs don't have to be able to fully extract the information from a layer in a single step.

I'd be curious to see how much closer a two-layer encoder would get to the ITO results.
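For concreteness, here's roughly the shape I have in mind (a sketch only; the sizes and layer names are my assumptions, not anything from your paper):

```python
import torch
import torch.nn as nn

class TwoLayerEncoderSAE(nn.Module):
    """Sparse autoencoder with a two-layer encoder (sketch).

    The extra hidden layer gives the encoder more capacity to approximate
    the solution of the sparse-coding inner problem; the decoder stays
    linear so the features remain an (overcomplete) dictionary.
    """
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_model, d_dict),
            nn.ReLU(),
            nn.Linear(d_dict, d_dict),
            nn.ReLU(),
        )
        self.decoder = nn.Linear(d_dict, d_model, bias=True)

    def forward(self, x: torch.Tensor):
        f = self.encoder(x)      # sparse codes
        x_hat = self.decoder(f)  # linear reconstruction from the dictionary
        return x_hat, f
```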

samshap

Here's my longer reply.

I'm extremely excited by the work on SAEs and their potential for interpretability. However, I think there is a subtle misalignment between the SAE architecture and loss function and the actual desired objective.

The SAE loss function is:

$$\mathcal{L}_{SAE} = \|x - \hat{x}\|_2^2 + \lambda \|f(x)\|_1$$

where $\|\cdot\|_1$ is the $L_1$-norm, or, writing the decoder dictionary as $D$ (so that $\hat{x} = D f(x)$):

$$\mathcal{L}_{SAE} = \|x - D f(x)\|_2^2 + \lambda \|f(x)\|_1$$

I would argue, however, that what you are actually trying to solve is the sparse coding problem:

$$\min_D \; \mathbb{E}_x \left[ \min_f \; \|x - D f\|_2^2 + \lambda \|f\|_1 \right]$$

where, importantly, the inner optimization over $f$ is solved separately for each input (including at runtime).

Since $D$ is an overcomplete basis, finding the $f^*(x)$ that minimizes the inner loop (also known as basis pursuit denoising[1]) is a notoriously challenging problem, one which a single-layer encoder is underpowered to compute. The SAE's encoder thus introduces a significant error $\epsilon(x) = f(x) - f^*(x)$, which means that your actual loss function is:

$$\mathcal{L} = \|x - D\,(f^*(x) + \epsilon(x))\|_2^2 + \lambda \|f^*(x) + \epsilon(x)\|_1$$

The magnitude of the errors would have to be determined empirically, but I suspect it is enough to be a significant source of error.
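To illustrate the gap, here's a minimal sketch (mine, not from the post) comparing a single linear-plus-ReLU pass against ISTA, a standard iterative solver for the basis pursuit denoising inner problem:

```python
import numpy as np

def ista(x, D, lam, n_iters=200):
    """ISTA for min_f ||x - D f||_2^2 + lam * ||f||_1 (basis pursuit denoising)."""
    L = 2 * np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth term's gradient
    f = np.zeros(D.shape[1])
    for _ in range(n_iters):
        z = f - 2 * D.T @ (D @ f - x) / L                       # gradient step
        f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold
    return f

rng = np.random.default_rng(0)
d_model, d_dict = 64, 256
D = rng.normal(size=(d_model, d_dict)) / np.sqrt(d_model)  # overcomplete dictionary
f_true = np.zeros(d_dict)
f_true[rng.choice(d_dict, 5, replace=False)] = 1.0
x = D @ f_true

f_star = ista(x, D, lam=0.1)          # iterative inner solve: recovers a sparse code
f_single = np.maximum(D.T @ x, 0.0)   # one-step 'encoder': dense and biased
print(np.sum(f_star > 1e-3), np.sum(f_single > 1e-3))
```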

There are a few things you could do to reduce the error:

  1. Ensuring that $D$ obeys the restricted isometry property[2] (i.e. a cap on the cosine similarity of decoder weights), or barring that, adding a term to your loss function that at least minimizes the cosine similarities (see the sketch after this list).
  2. Adding extra layers to your encoder, so it's better at solving for $f^*$.
  3. Empirical studies to see how large the feature error is / how much reconstruction error it is adding.
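A sketch of what the penalty in (1) could look like (my construction; I'm assuming the decoder directions live in the columns of a `W_dec` matrix):

```python
import torch

def decoder_coherence_penalty(W_dec: torch.Tensor) -> torch.Tensor:
    """Mean squared off-diagonal cosine similarity of decoder features.

    W_dec: (d_model, d_dict) matrix whose columns are decoder directions.
    In compressed-sensing terms, this pushes down the dictionary's mutual coherence.
    """
    W = torch.nn.functional.normalize(W_dec, dim=0)  # unit-norm columns
    gram = W.T @ W                                   # pairwise cosine similarities
    off_diag = gram - torch.eye(gram.shape[0], device=gram.device)
    return (off_diag ** 2).mean()

# total_loss = recon_loss + lam_l1 * l1_term + lam_coh * decoder_coherence_penalty(W_dec)
```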


  1. ^ https://epubs.siam.org/doi/abs/10.1137/S003614450037906X
  2. ^ http://www.numdam.org/item/10.1016/j.crma.2008.03.014.pdf

samshap

This is great work. My recommendation: add a term in your loss function that penalizes features with high cosine similarity.

I think there is a strong theoretical underpinning for the results you are seeing.

I might try to reach out directly - some of my own academic work is directly relevant here.

samshap

This is one of those cases where it might be useful to list out all the pros and cons of taking the 8 courses in question, and then thinking hard about which benefits could be achieved by other means.

Key benefits of taking a course (vs. independent study) beyond the signaling effect might include:

  • precommitting to learning a certain body of knowledge
  • curation of that body of knowledge by an experienced third party
  • additional learning and insight from partnerships / teamwork / office hours

But these depend on the courses and your personality. The precommitment might be unnecessary given your personal work habits, the curation might be misaligned with what you are interested in learning, and the other students or TAs may not have useful insights that you can't figure out on your own.

Hope that helps.

samshap

Instead of demanding orthogonal representations, just have them obey the restricted isometry property.

Basically, instead of requiring $f_i \cdot f_j = 0$ for every pair of distinct features, we just require $|f_i \cdot f_j| < \epsilon$.

This would allow a polynomial number of sparse shards while still allowing full recovery.
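For reference, the standard compressed-sensing statement I'm leaning on (Candès' formulation, not anything from the post): a dictionary $D$ satisfies the restricted isometry property of order $s$ with constant $\delta_s$ if

$$(1-\delta_s)\|z\|_2^2 \;\le\; \|Dz\|_2^2 \;\le\; (1+\delta_s)\|z\|_2^2$$

for every $s$-sparse $z$, and $\delta_{2s} < \sqrt{2}-1$ already guarantees exact recovery of $s$-sparse codes by $L_1$ minimization.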

samshap

I think the success or failure of this model really depends on the nature and number of the factions. If interfactional competition gets too zero-sum (this might help us, but it helps them more, so we'll oppose it), then this just turns into stasis.

During ordinary times, vetocracy might be tolerable, but it will slowly degrade state capacity. During a crisis it can be fatal.

Even in America, we only see this factional veto in play in a subset of scenarios - legislation under divided government. Plenty of actions at the executive level or in state governments don't have to worry about this.

samshap

You switch positions throughout the essay, sometimes in the same sentence!

"Completely remove efficacy testing requirements" (Motte) "... making the FDA a non-binding consumer protection and labeling agency" (Bailey)

"Restrict the FDA's mandatory authority to labeling" logically implies they can't regulate drug safety, and can't order recalls of dangerous products. Bailey! "... and make their efficacy testing completely non-binding" back to Motte again.

"Pharmaceutical manufactures can go through the FDA testing process and get the official “approved’ label if insurers, doctors, or patients demand it, but its not necessary to sell their treatment." Again implies the FDA has no safety regulatory powers.

"Scott’s proposal is reasonable and would be an improvement over the status quo, but it’s not better than the more hardline proposal to strip the FDA of its regulatory powers." Bailey again!
