Nick_Tarleton

Neither the mortality-rate nor the energy-use map lines up that closely with the US geopolitical sphere of influence. (E.g. Russia and China on the one hand, Latin America on the other.)

I'm not saying the US government isn't partially responsible for unequal distribution, but your previous comment sounds like it's treating that as the only or primary significant factor.

(I'm also not sure what point you're trying to make at all with the energy-use map, given how similar it looks to the mortality-rate map.)

"to try to shun or shame people for saying things that are outside the Overton window."

(emphasis mine) Is that what the OP is doing? Certainly not overtly. I fear that this is a fallacy I see all the time in politicized conversations:

  1. X is outside the Overton window
  2. A disapproves of B saying [some particular instance of X]
  3. Therefore A's disapproval must be motivated by X being outside the Overton window

That article is sloppily written enough to say "Early testers report that the AI [i.e. o3 and/or o4-mini] can generate original research ideas in fields like nuclear fusion, drug discovery, and materials science; tasks usually reserved for PhD-level experts" while linking, as its citation, to OpenAI's January release announcement of o3-mini.

TechCrunch attributes the rumor to a paywalled article in The Information (and attributes the price to specialized agents, not o3 or o4-mini themselves).

(I have successfully done Unbendable Arm after Valentine showed it to me in person, without explaining any of the biomechanics. My experience of it didn't involve visualization, but felt like placing my fingertips on the wall across the room and resolving that they'd stay there. Contra jimmy's comment, IIRC I initially held my arm wrong without any cueing.)

Strongly related: Believing In. From that post:

My guess is that for lack of good concepts for distinguishing “believing in” from deception, LessWrongers, EAs, and “nerds” in general are often both too harsh on folks doing positive-sum “believing in,” and too lax on folks doing deception. (The “too lax” happens because many can tell there’s a “believing in”-shaped gap in their notions of e.g. “don’t say better things about your start-up than a reasonable outside observer would,” but they can’t tell its exact shape, so they loosen their “don’t deceive” in general.)

I feel like this post is similarly too lax, not on deception, but on propositional-and-false religious beliefs.

Not sure what Richard would say, but off the cuff, I'd distinguish 'imparting information that happens to induce guilt' from 'guilting' based on whether the intent is to cooperatively inform or to psychologically attack.

My read of the post is that some degree of "being virtuously willing to contend with guilt as a fair emergent consequence of hearing carefully considered and selected information" is required for being well-founded or part of a well-founded system (receiving criticism without generating internal conflict, etc.).

I don't feel a different term is needed/important, but (n=1) because of some uses I've seen of 'lens' as a technical metaphor, it strongly makes me think 'different mechanically generated view of the same data/artifact', not 'different artifact that's (supposed to be) about the same subject matter', so I found the usage here a bit disorienting at first.

The Y-axis seemed to me like roughly 'populist'.

"The impressive performance we have obtained is because supervised (in this case technically 'self-supervised') learning is much easier than e.g. reinforcement learning and other paradigms that naturally learn planning policies. We do not actually know how to overcome this barrier."

What about current reasoning models trained using RL? (Do you think something like: we don't know, and won't easily figure out, how to make that work well outside a narrow class of tasks that doesn't include 'anything important'?)

Few people who take radical veganism and left-anarchism seriously either ever kill anyone or are as weird as the Zizians, so that can't be the primary explanation. Unless you set a bar for 'take seriously' that almost only they pass, but then, it seems relevant that (a) their actions have been grossly imprudent and predictably ineffective by any normal standard, and (b) the charitable[1] explanations I've seen offered for why they'd do imprudent and ineffective things all involve their esoteric beliefs.

I do think 'they take [uncommon, but not esoteric, moral views like veganism and anarchism] seriously' shouldn't be underrated as a factor, and modeling them without putting weight on it is wrong.

[1] to their rationality, not necessarily their ethics

I don't think it's an outright meaningless comparison, but I think it's bad enough that it feels misleading or net-negative-for-discourse to describe it the way your comment did. Not sure how to unpack that feeling further.
