tgb20

Again, why wouldn't you want to read things addressed to other sorts of audiences if you thought altering public opinion on that topic was important? Maybe you don't care about altering public opinion, but a large number of people here say they do care.

tgb20

He's influential and it's worth knowing what his opinion is because it will become the opinion of many of his readers. He's also representative of what a lot of other people are (independently) thinking.

What's Scott Alexander qualified to comment on? Should we not care about the opinion of Joe Biden because he has no particular knowledge about AI? Sure, I doubt we'd learn anything from rebutting his arguments, but once upon a time LW cared about changing public opinion on this matter, and so it should absolutely care about reading that public opinion.

Honestly, I'm embarrassed for us that this needs to be said.

tgb42

But you don’t need grades to separate yourself academically. You take harder classes to do that. And incentivizing GPA again will only punish people for taking actual classes instead of sticking to easier ones they can get an A in.

Concretely, everyone in my math department that was there to actually get an econ job took the basic undergrad sequences and everyone looking to actually do math started with the honors (“throw you in the deep end until you can actually write a proof”) course and rapidly started taking graduate-level courses. The difference on their transcript was obvious but not necessarily on their GPA.

What system would turn that into a highly legible number akin to GPA? I'm not sure; some sort of Elo system?
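
To gesture at what I mean (a toy sketch of my own, not a worked-out proposal): treat two students who took the same course as having played a "match" decided by who earned the higher grade. Beating students with high ratings, who tend to cluster in the harder courses, would then move your rating more than acing an easy class.

def elo_update(rating_a, rating_b, score_a, k=32):
    # score_a is 1 if A outscored B, 0 if B outscored A, 0.5 for a tie
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    return rating_a + k * (score_a - expected_a)

alice, bob = 1500.0, 1500.0
# Alice earns the higher grade in a shared course; update both from the old ratings
alice, bob = elo_update(alice, bob, 1.0), elo_update(bob, alice, 0.0)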

tgb10

I was confused until I realized that the "sparsity" this post is referring to is activation sparsity, not the more common weight sparsity that you get from L1 penalization of the weights.
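
In case the distinction helps anyone else, here it is roughly in code (a toy PyTorch sketch of my own, with arbitrary penalty weights): a weight-sparsity penalty is applied to the parameters themselves, while an activation-sparsity penalty is applied to the hidden representations produced for a batch of inputs.

import torch

model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(8, 10)
y = torch.randn(8, 1)

hidden = model[1](model[0](x))   # hidden-layer activations for this batch
output = model[2](hidden)
mse = ((output - y) ** 2).mean()

weight_l1 = sum(p.abs().sum() for p in model.parameters())  # encourages sparse weights
activation_l1 = hidden.abs().sum()                          # encourages sparse activations

loss_weight_sparse = mse + 1e-3 * weight_l1
loss_activation_sparse = mse + 1e-3 * activation_l1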

tgb40

Wait, why do you think inmates escaping is extremely rare? Are you just referring to escapes where guards assisted the escape? I work in a hospital system and can remember receiving two security alerts where a prisoner receiving medical treatment ditched their escort and escaped. At least one of those was on the loose for several days. I can also think of multiple escapes from prisons themselves. One notable example, since the prisoner was an accused murderer and likely to be dangerous and armed: https://abcnews.go.com/amp/US/danelo-cavalcante-murderer-escaped-pennsylvania-prison-weeks-facing/story?id=104856784 But there was also another escape from that same jail earlier that year: https://www.dailylocal.com/2024/01/08/case-of-chester-county-inmate-whose-escape-showed-cavalcante-the-way-out-continued/amp/

tgb130

I have some reservations about the practicality of reporting likelihood functions and have never done this before, but here are some (sloppy) examples in Python, primarily answering numbers 1 and 3.
 

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

np.random.seed(100)

## Generate some data for a simple case vs control example
# 10 vs 10 replicates with a 1 SD effect size
controls = np.random.normal(size=10)
cases = np.random.normal(size=10) + 1
data = pd.DataFrame(
    {
        "group": ["control"] * 10 + ["case"] * 10,
        "value": np.concatenate((controls, cases)),
    }
)

## Perform a standard t-test as comparison
# Using OLS (ordinary least squares) to model the data
results = smf.ols("value ~ group", data=data).fit()
print(f"The p-value is {results.pvalues['group[T.control]']}")

## Report the (log)-likelihood function
# likelihood at the fit value (which is the maximum likelihood)
likelihood = results.llf
# or equivalently
likelihood = results.model.loglike(results.params)

## Results at a range of parameter values:
# we evaluate at 100 points between -2 and 2
control_case_differences = np.linspace(-2, 2, 100)
likelihoods = []
for cc_diff in control_case_differences:
    params = results.params.copy()
    params["group[T.control]"] = cc_diff
    likelihoods.append(results.model.loglike(params))

## Plot the likelihood function
fig, ax = plt.subplots()
ax.plot(
    control_case_differences,
    likelihoods,
)
ax.set_xlabel("control - case")
ax.set_ylabel("log likelihood")


## Our model actually has two parameters, the intercept and the control-case difference
# We only varied the difference parameter without changing the intercept, which here
# denotes the mean of the reference group (the cases, under the default treatment coding)
# Now let's vary both parameters, trying all combinations from -2 to 2 in both values
mean_values = np.linspace(-2, 2, 100)
mv, ccd = np.meshgrid(mean_values, control_case_differences)
likelihoods = []
for m, c in zip(mv.flatten(), ccd.flatten()):
    likelihoods.append(
        results.model.loglike(
            pd.Series(
                {
                    "Intercept": m,
                    "group[T.control": c,
                }
            )
        )
    )
likelihoods = np.array(likelihoods).reshape(mv.shape)

# Plot it as a 2d grid
fig, ax = plt.subplots()
h = ax.pcolormesh(
    mean_values,
    control_case_differences,
    likelihoods,
)
ax.set_ylabel("control - case")
ax.set_xlabel("mean")
fig.colorbar(h, label="log likelihood")

The two figures are:

[Figure: log likelihood as a function of the control - case difference]

[Figure: heatmap of log likelihood over the mean (intercept) and the control - case difference]

I think this code will extend to any other likelihood-based model in statsmodels, not just OLS, but I haven't tested.
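
For example (a sketch of my own, untested just like the claim above), the same pattern with a logistic regression instead of OLS would look something like this, using a made-up binary outcome purely for illustration:

## Hypothetical example: the same approach with smf.logit instead of smf.ols
data["high"] = (data["value"] > data["value"].median()).astype(int)
logit_results = smf.logit("high ~ group", data=data).fit()

logit_diffs = np.linspace(-4, 4, 100)
logit_likelihoods = []
for d in logit_diffs:
    params = logit_results.params.copy()
    params["group[T.control]"] = d
    logit_likelihoods.append(logit_results.model.loglike(params))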

It's also worth familiarizing yourself with how the likelihoods are actually defined. For OLS we assume that the residuals are normally distributed. For data points $y_i$ at $X_i$, the likelihood for a linear model with independent, normal residuals is:

$$\mathcal{L}(\beta, \sigma^2) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(y_i - X_i \beta)^2}{2\sigma^2}\right)$$

or, on the log scale that statsmodels reports:

$$\log \mathcal{L}(\beta, \sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - X_i \beta)^2$$

where $\beta$ is the parameters of the model, $\sigma^2$ is the variance of the residuals, and $n$ is the number of datapoints. So the likelihood function here is this value as a function of $\beta$ (and maybe also $\sigma^2$, see below).
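
As a quick check that this formula lines up with what the code above reports (plugging in the maximum-likelihood estimate of $\sigma^2$, i.e. the residual sum of squares divided by $n$):

## Sanity check: the log-likelihood formula above vs. statsmodels' results.llf
resid = data["value"] - results.fittedvalues
n = len(resid)
sigma2 = (resid ** 2).sum() / n     # MLE of the residual variance
loglik = -n / 2 * np.log(2 * np.pi * sigma2) - (resid ** 2).sum() / (2 * sigma2)
print(loglik, results.llf)          # these should agree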

So if we want to tell someone else our full likelihood function and not just evaluate it at a grid of points, it's enough to tell them $X$ and $y$. But that's the entire dataset! To get a smaller set of summary statistics that capture the entire information, you look for 'sufficient statistics'. Generally for OLS those are just $X^\top X$ and $X^\top y$. I think that's also enough to recreate the likelihood function up to a constant?

Note that $\sigma^2$ matters for reporting the likelihood but doesn't matter for traditional frequentist approaches like MLE and OLS, since it ends up cancelling out when you're finding the maximum or reporting likelihood ratios. This is inconvenient for reporting likelihood functions, and I think the code I provided is just using the estimated $\sigma^2$ from the MLE fit. However, at the end of the day, someone using your likelihood function would really only be using it to extract likelihood ratios, and therefore the $\sigma^2$ probably doesn't matter here either?
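
To make the sufficient-statistics point concrete, something like this should rebuild the same log-likelihood from $X^\top X$ and $X^\top y$ (plus $y^\top y$, $n$, and a choice of $\sigma^2$ for the additive constant) without touching the raw data again. This is a sketch of my own, continuing from the code above:

## Rebuild the OLS log-likelihood from sufficient statistics
X = results.model.exog            # design matrix (intercept column plus group dummy)
y = results.model.endog
n = len(y)

XtX = X.T @ X                     # the sufficient statistics
Xty = X.T @ y
yty = y @ y                       # only needed for the additive constant
sigma2 = results.ssr / n          # MLE of the residual variance

def loglike_from_suffstats(beta):
    rss = yty - 2 * beta @ Xty + beta @ XtX @ beta
    return -0.5 * n * np.log(2 * np.pi * sigma2) - rss / (2 * sigma2)

# Should match statsmodels' reported log-likelihood at the fitted parameters
print(loglike_from_suffstats(results.params.values), results.llf)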

tgb50

But yes, working out is mostly unpleasant and boring as hell as we conceive of it and we need to stop pretending otherwise. Once we agree that most exercise mostly bores most people who try it out of their minds, we can work on not doing that.

 

I'm of the nearly opposite opinion: we pretend that exercise ought to be unpleasant. We equate exercise with elite or professional athletes and the vision of needing to push yourself to the limit, etc. In reality, exercise does include that, but for most people it should look more like "going for a walk" than "doing hill sprints until my legs collapse".

On boredom specifically, I think strenuousness affects that more than monotony. When I started exercising, I would watch a TV show on the treadmill and kept feeling bored, but the moment I slowed down to a walking speed to cool off, suddenly the show was engaging and I'd find myself overstaying just to watch it. Why wasn't it engaging while I was running? The show didn't change. Monotony wasn't the deciding factor, but rather the exertion.

Later, I switched to running outside and now I don't get bored despite using no TV or podcast or music. And it requires no willpower! If you're two miles from home, you can't quit. Quitting just means running two miles back, which isn't really quitting, so you might as well keep going. But on a treadmill, you can hop off at any moment, so there's a constant drain on willpower. So again, I think the 'boredom' here isn't actually about the task being monotonous, and finding ways to make it less monotonous won't fix the perceived boredom.

I do agree with the comment about playing tag for heart health. But that already exists and is socially acceptable in the form of pickup basketball/soccer/flag-football/ultimate. Lastly, many people do literally find weightlifting fun, and it can be quite social.

tgb60

The American Heart Association (AHA) Get with the Guidelines–Heart Failure Risk Score predicts the risk of death in patients admitted to the hospital.9 It assigns three additional points to any patient identified as “nonblack,” thereby categorizing all black patients as being at lower risk. The AHA does not provide a rationale for this adjustment. Clinicians are advised to use this risk score to guide decisions about referral to cardiology and allocation of health care resources. Since “black” is equated with lower risk, following the guidelines could direct care away from black patients.

From the NEJM article. This is the exact opposite of Zvi's conclusions ("Not factoring this in means [blacks] will get less care").

I confirmed the NEJM's account by using an online calculator for that score: https://www.mdcalc.com/calc/3829/gwtg-heart-failure-risk-score Setting a patient to black=No gives a higher risk than black=Yes. Similarly for a risk score from the AHA: https://static.heart.org/riskcalc/app/index.html#!/baseline-risk

Is Zvi/NYT referring to a different risk calculator? There are a lot of them out there. The NEJM also discusses a surgical risk score that has the opposite directionality, so maybe that one? Though there the conclusion is also about less care for blacks: "When used preoperatively to assess risk, these calculations could steer minority patients, deemed to be at higher risk, away from surgery." Of course, less care could be a good thing here!

I agree that this looks complicated.

tgb22

Wegovy (a GLP-1 antagonist)

Wegovy/Ozempic/Semaglutide are GLP-1 receptor agonists, not GLP-1 antagonists. This means they activate the GLP-1 receptor, which GLP-1 also does. So it's more accurate to say that they are GLP-1 analogs, which makes calling them "GLP-1s" reasonable even though that's not really accurate either.

tgb20

Broccoli is higher in protein per calorie than either beans or pasta and is a very central example of a vegetable, though you'd also want to mix it with beans or something for better protein quality. 3500 calories of broccoli is 294g of protein, if Google's nutrition facts are to be trusted. Spinach, kale, and cauliflower also all have substantially better protein per calorie than potatoes and better PDCAAS scores than I expected (though I'm not certain I trust them - does spinach actually get a 1?). I think potatoes are a poor example (and also not one vegetarians turn to for protein).
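
Spelling out the per-calorie arithmetic with those Google numbers: $294\,\mathrm{g} / 3500\,\mathrm{kcal} \approx 0.084\,\mathrm{g/kcal}$, or roughly 8.4 g of protein per 100 kcal of broccoli.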

Though I tend to drench my vegetables in olive oil, so these protein-per-calorie numbers don't mean much to me in practice, and good luck eating such a large volume of any of these.
