I was confused until I realized that the "sparsity" this post refers to is activation sparsity, not the more common weight sparsity that you get from L1 penalization of weights.
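To make the distinction concrete, here is a toy numpy sketch (my own illustration, not from the post): soft-thresholding stands in for the effect of an L1 penalty on weights, and a shifted ReLU for a layer whose activations are mostly zero on any given input.

import numpy as np

rng = np.random.default_rng(0)

# Weight sparsity: an L1 penalty drives many *weights* to exactly zero.
# Soft-thresholding is used here as a stand-in for that effect.
weights = rng.normal(size=100)
l1_shrunk = np.sign(weights) * np.maximum(np.abs(weights) - 0.8, 0)
print("fraction of zero weights:", np.mean(l1_shrunk == 0))

# Activation sparsity: the weights are fully dense, but on any given input most
# *activations* are zero (e.g. after a ReLU with a negative bias).
W = rng.normal(size=(100, 100))
x = rng.normal(size=100)
activations = np.maximum(W @ x - 15.0, 0)
print("fraction of zero activations:", np.mean(activations == 0))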

Wait, why do you think inmates escaping is extremely rare? Are you referring only to escapes where guards assisted? I work in a hospital system and can remember receiving two security alerts where a prisoner receiving medical treatment ditched their escort and escaped; at least one of those was on the loose for several days. I can also think of multiple escapes from prisons themselves, for example: https://abcnews.go.com/amp/US/danelo-cavalcante-murderer-escaped-pennsylvania-prison-weeks-facing/story?id=104856784 is notable since the prisoner was an accused murderer and likely to be dangerous and armed. But there was also another escape from that same jail earlier that year: https://www.dailylocal.com/2024/01/08/case-of-chester-county-inmate-whose-escape-showed-cavalcante-the-way-out-continued/amp/

I have some reservations about the practicality of reporting likelihood functions and have never done this before, but here are some (sloppy) examples in Python, primarily addressing questions 1 and 3.
 

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib
import pylab

np.random.seed(100)

## Generate some data for a simple case vs control example
# 10 vs 10 replicates with a 1 SD effect size
controls = np.random.normal(size=10)
cases = np.random.normal(size=10) + 1
data = pd.DataFrame(
    {
        "group": ["control"] * 10 + ["case"] * 10,
        "value": np.concatenate((controls, cases)),
    }
)

## Perform a standard t-test as comparison
# Using OLS (ordinary least squares) to model the data
results = smf.ols("value ~ group", data=data).fit()
print(f"The p-value is {results.pvalues['group[T.control]']}")

## Report the (log)-likelihood function
# likelihood at the fit value (which is the maximum likelihood)
likelihood = results.llf
# or equivalently
likelihood = results.model.loglike(results.params)

## Results at a range of parameter values:
# we evaluate at 100 points between -2 and 2
control_case_differences = np.linspace(-2, 2, 100)
likelihoods = []
for cc_diff in control_case_differences:
    params = results.params.copy()
    params["group[T.control]"] = cc_diff
    likelihoods.append(results.model.loglike(params))

## Plot the likelihood function
fig, ax = pylab.subplots()
ax.plot(
    control_case_differences,
    likelihoods,
)
ax.set_xlabel("control - case")
ax.set_ylabel("log likelihood")


## Our model actually has two parameters: the intercept and the control-case difference
# We only varied the difference parameter without changing the intercept, which here is
# the mean of the reference ("case") group under the default treatment coding
# Now let's vary both parameters, trying all combinations from -2 to 2 for each
mean_values = np.linspace(-2, 2, 100)
mv, ccd = np.meshgrid(mean_values, control_case_differences)
likelihoods = []
for m, c in zip(mv.flatten(), ccd.flatten()):
    likelihoods.append(
        results.model.loglike(
            pd.Series(
                {
                    "Intercept": m,
                    "group[T.control": c,
                }
            )
        )
    )
likelihoods = np.array(likelihoods).reshape(mv.shape)

# Plot it as a 2d grid
fig, ax = pylab.subplots()
h = ax.pcolormesh(
    mean_values,
    control_case_differences,
    likelihoods,
)
ax.set_ylabel("case - control")
ax.set_xlabel("mean")
fig.colorbar(h, label="log likelihood")

The two figures are: [the log-likelihood curve over the control - case difference, and the 2D log-likelihood grid over the (intercept, difference) plane].

I think this code will extend to any other likelihood-based model in statsmodels, not just OLS, but I haven't tested.
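As a (similarly untested) sketch of the same pattern with a non-OLS model, here is the analogous loop for a logistic regression, reusing the simulated data above; the variable names are mine:

## Same pattern for a logistic regression (sketch)
logit_data = pd.DataFrame(
    {
        "x": np.concatenate((controls, cases)),
        "y": [0] * 10 + [1] * 10,
    }
)
logit_results = smf.logit("y ~ x", data=logit_data).fit()

# Evaluate the log-likelihood over a range of slope values, holding the intercept fixed
slopes = np.linspace(-3, 3, 100)
logit_likelihoods = []
for slope in slopes:
    params = logit_results.params.copy()
    params["x"] = slope
    logit_likelihoods.append(logit_results.model.loglike(params))

fig, ax = pylab.subplots()
ax.plot(slopes, logit_likelihoods)
ax.set_xlabel("slope (log odds per unit x)")
ax.set_ylabel("log likelihood")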

It's also worth familiarizing yourself with how the likelihoods are actually defined. For OLS we assume that the residuals are normally distributed. For data points $y_i$ at $X_i$, the log-likelihood for a linear model with independent, normal residuals is:

$$\ell(\beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - X_i\beta)^2$$

where $\beta$ is the vector of model parameters, $\sigma^2$ is the variance of the residuals, and $n$ is the number of datapoints. So the likelihood function here is this value as a function of $\beta$ (and maybe also $\sigma^2$, see below).

So if we want to tell someone else our full likelihood function, and not just evaluate it at a grid of points, it's enough to tell them $X$ and $y$. But that's the entire dataset! To get a smaller set of summary statistics that captures the same information, you look for 'sufficient statistics'. Generally for OLS those are just $X^T X$ and $X^T y$. I think that's also enough to recreate the likelihood function up to a constant? (See the sketch below.)
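Here is a minimal numpy sketch checking that claim on the toy dataset above, at a fixed residual variance; the helper names are mine and this is only meant as a sanity check, not a general recipe:

## Sanity check: the beta-dependence of the log-likelihood only needs X'X and X'y
X = results.model.exog   # design matrix: intercept and group[T.control] dummy
y = results.model.endog
n = len(y)
XtX, Xty, yty = X.T @ X, X.T @ y, y @ y
sigma2 = np.sum(results.resid ** 2) / n   # MLE of the residual variance

def loglike_from_residuals(beta):
    resid = y - X @ beta
    return -n / 2 * np.log(2 * np.pi * sigma2) - resid @ resid / (2 * sigma2)

def loglike_from_suff_stats(beta):
    # y'y only contributes an additive constant that does not depend on beta
    rss = yty - 2 * beta @ Xty + beta @ XtX @ beta
    return -n / 2 * np.log(2 * np.pi * sigma2) - rss / (2 * sigma2)

beta = np.array([0.5, -1.0])   # arbitrary (intercept, control - case) test point
print(loglike_from_residuals(beta), loglike_from_suff_stats(beta))  # identical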

Note that $\sigma^2$ matters for reporting the likelihood but doesn't matter for traditional frequentist approaches like MLE and OLS, since it ends up cancelling out when you're finding the maximum or reporting likelihood ratios. This is inconvenient for reporting likelihood functions, and I think the code I provided is just using the estimated $\sigma^2$ from the MLE fit. However, at the end of the day, someone using your likelihood function would really only be using it to extract likelihood ratios, so the $\sigma^2$ probably doesn't matter here either?

But yes, working out is mostly unpleasant and boring as hell as we conceive of it and we need to stop pretending otherwise. Once we agree that most exercise mostly bores most people who try it out of their minds, we can work on not doing that.

 

I'm of the nearly opposite opinion: we pretend that exercise ought to be unpleasant. We equate exercise with elite or professional athletes and the vision of needing to push yourself to the limit, etc. In reality, exercise does include that, but for most people it should look more like "going for a walk" than "doing hill sprints until my legs collapse".

On boredom specifically, I think strenuousness affects it more than monotony. When I started exercising, I would watch a TV show on the treadmill and kept feeling bored, but the moment I slowed down to a walking pace to cool off, suddenly the show was engaging and I'd find myself staying on just to watch it. Why wasn't it engaging while I was running? The show didn't change. Monotony wasn't the deciding factor; the exertion was.

Later, I switched to running outside, and now I don't get bored despite having no TV, podcast, or music. And it requires no willpower! If you're two miles from home, you can't quit. Quitting just means running two miles back, which isn't really quitting, so you might as well keep going. But on a treadmill, you can hop off at any moment, so there's a constant drain on willpower. So again, I think the 'boredom' here isn't actually about the task being monotonous, and finding ways to make it less monotonous won't fix the perceived boredom.

I do agree with the comment about playing tag for heart health. But that already exists and is socially acceptable in the form of pickup basketball/soccer/flag football/ultimate. Lastly, many people do literally find weightlifting fun, and it can be quite social.

The American Heart Association (AHA) Get with the Guidelines–Heart Failure Risk Score predicts the risk of death in patients admitted to the hospital.9 It assigns three additional points to any patient identified as “nonblack,” thereby categorizing all black patients as being at lower risk. The AHA does not provide a rationale for this adjustment. Clinicians are advised to use this risk score to guide decisions about referral to cardiology and allocation of health care resources. Since “black” is equated with lower risk, following the guidelines could direct care away from black patients.

From the NEJM article. This is the exact opposite of Zvi's conclusions ("Not factoring this in means [blacks] will get less care").

I confirmed the NEJM's account by using an online calculator for that score: https://www.mdcalc.com/calc/3829/gwtg-heart-failure-risk-score Setting a patient to black=No gives a higher risk than black=Yes. The same holds for a risk score from the AHA: https://static.heart.org/riskcalc/app/index.html#!/baseline-risk

Is Zvi/NYT referring to a different risk calculator? There are a lot of them out there. The NEJM also discusses a surgical risk score that has the opposite directionality, so maybe that one? Though there the conclusion is also about less care for blacks: "When used preoperatively to assess risk, these calculations could steer minority patients, deemed to be at higher risk, away from surgery." Of course, less care could be a good thing here!

I agree that this looks complicated.

Wegovy (a GLP-1 antagonist)

Wegovy/Ozempic/Semaglutide are GLP-1 receptor agonists, not GLP-1 antagonists. This means they activate the GLP-1 receptor, which GLP-1 also does. So it's more accurate to say that they are GLP-1 analogs, which makes calling them "GLP-1s" reasonable even though that's not really accurate either.

Broccoli is higher in protein per calorie than either beans or pasta and is a very central example of a vegetable, though you'd also want to mix it with beans or something for better protein quality. 3500 calories of broccoli is about 294 g of protein, if Google's nutrition facts are to be trusted. Spinach, kale, and cauliflower also all have substantially better protein per calorie than potatoes, and better PDCAAS scores than I expected (though I'm not certain I trust them - does spinach actually get a 1?). I think potatoes are a poor example (and also not one vegetarians turn to for protein).

Though I tend to drench my vegetables in olive oil, so these calories per gram numbers don't mean much to me in practice, and good luck eating such a large volume of any of these.
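For what it's worth, the back-of-the-envelope version of that arithmetic looks roughly like this; the nutrition values are figures I'm assuming (ballpark Google/USDA-style numbers), so don't trust the second digit:

## Rough protein-per-calorie arithmetic (nutrition values are approximate and assumed)
foods = {
    # food: (kcal per 100 g, g protein per 100 g)
    "broccoli": (34, 2.8),
    "potato": (77, 2.0),
    "black beans (cooked)": (132, 8.9),
    "pasta (cooked)": (158, 5.8),
}
for food, (kcal, protein) in foods.items():
    grams_of_food = 3500 / kcal * 100
    grams_of_protein = grams_of_food * protein / 100
    print(f"{food}: {grams_of_protein:.0f} g protein from {grams_of_food / 1000:.1f} kg of food per 3500 kcal")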

In my view, a significant philosophical difference between SLT and your post is that your post talks only about choosing macrostates while SLT talks about choosing microstates. I'm much less qualified to know (let alone explain) the benefits of SLT, though I can speculate. If we stop training after a finite number of steps, then I think it's helpful to know where training is converging to. In my example, if you think it's converging to , then stopping close to that will get you a function that doesn't generalize too well. If you know it's converging to  instead, then stopping close to that will get you a much better function - possibly exactly as good, as you pointed out, due to discretization.

Now this logic is basically exactly what you're saying in these comments! But I think if someone read your post without prior knowledge of SLT, they wouldn't figure out that it's more likely to converge to a point near  than near . If they read an SLT post instead, they would figure that out. In that sense, SLT is more useful.

I am not confident that that is the intended benefit of SLT according to its proponents, though. And I wouldn't be surprised if you could write a simpler explanation of this in your framework than SLT gives, I just think that this post wasn't it.

Everything I wrote in steps 1-4 was done in a discrete setting (otherwise  is not finite and the whole thing falls apart). I was intending  to be pairs of floating point numbers and  to be functions from floats to floats.

However, using that, I think I see what you're trying to say: that  will equal zero in some cases where  and  are both non-zero but very small, and multiply down to zero due to the limits of floating point numbers. Therefore the pre-image of  is actually larger than I claimed, and specifically contains a small neighborhood of .
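I don't know the exact functions involved here (the symbols above didn't survive formatting), but a generic numpy illustration of the underflow point, which may or may not match the original setup, is:

import numpy as np

## Generic float-underflow illustration (not necessarily the exact setup above)
# The product of two small but non-zero 32-bit floats can round to exactly zero,
# so the pre-image of 0 under multiplication contains points where neither factor is 0.
a = np.float32(1e-30)
b = np.float32(1e-30)
print(a != 0, b != 0, a * b == 0)  # True True True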

That doesn't invalidate my calculation showing that  is just as likely as , though: they still have the same loss and -complexity (since they have the same macrostate). On the other hand, you're saying that there are points in parameter space very close to  that are also in this same pre-image and also equally likely. Therefore, even if  is just as likely as , being near to  is more likely than being near to . I think it's fair to say that that is at least qualitatively the same as what SLT gives in the continuous version of this.

However, I do think this result "happened" due to factors that weren't discussed in your original post, which makes it sound like it is "due to" -complexity. -complexity is a function of the macrostate, which is the same at all of these points and so does not distinguish between  and  at all. In other words, your post tells me which  is likely while SLT tells me which  is likely - these are not the same thing. But you clearly have additional ideas not stated in the post that also help you figure out which  is likely. Until that is clarified, I think you have a mental theory of this which is very different from what you wrote.

the worse a singularity is, the lower the -complexity of the corresponding discrete function will turn out to be

This is where we diverge. Please let me know where you think my error is in the following. Returning to my explicit example (though I wrote  originally, I will instead use  here since that matches your definitions):

1. Let   be the constant zero function and  

2. Observe that  is the minimal loss set under our loss function and also  is the set of parameters  where  or .

3. Let  . Then  by definition of . Therefore, 

4. SLT says that  is a singularity of  but that  is not a singularity.

5. Therefore, there exists a singularity (according to SLT) with -complexity (and loss) identical to that of a non-singular point, contradicting the statement I quoted.
