Nick_Tarleton

That article is sloppily written enough to say "Early testers report that the AI [i.e. o3 and/or o4-mini] can generate original research ideas in fields like nuclear fusion, drug discovery, and materials science; tasks usually reserved for PhD-level experts" while linking, as a citation, to OpenAI's January release announcement of o3-mini.

TechCrunch attributes the rumor to a paywalled article in The Information (and attributes the price to specialized agents, not o3 or o4-mini themselves).

(I have successfully done Unbendable Arm after Valentine showed me in person, without explaining any of the biomechanics. My experience of it didn't involve visualization, but felt like placing my fingertips on the wall across the room and resolving that they'd stay there. Contra jimmy's comment, IIRC I initially held my arm wrong without any cueing.)

Strongly related: Believing In. From that post:

My guess is that for lack of good concepts for distinguishing “believing in” from deception, LessWrongers, EAs, and “nerds” in general are often both too harsh on folks doing positive-sum “believing in,” and too lax on folks doing deception. (The “too lax” happens because many can tell there’s a “believing in”-shaped gap in their notions of e.g. “don’t say better things about your start-up than a reasonable outside observer would,” but they can’t tell its exact shape, so they loosen their “don’t deceive” in general.)

I feel like this post is similarly too lax on, not deception, but propositional-and-false religious beliefs.

Not sure what Richard would say, but off the cuff, I'd distinguish 'imparting information that happens to induce guilt' from 'guilting', based on intent to cooperatively inform vs. psychologically attack.

My read of the post is that some degree of "being virtuously willing to contend with guilt as a fair emergent consequence of hearing carefully considered and selected information" is required for being well-founded or part of a well-founded system (receiving criticism without generating internal conflict, etc.).

I don't feel a different term is needed/important, but (n=1) due to some uses I've seen of 'lens' as a technical metaphor, it strongly makes me think 'different mechanically-generated view of the same data/artifact', not 'different artifact that's (supposed to be) about the same subject matter', so I found the usage here a bit disorienting at first.

The Y-axis seemed to me like roughly 'populist'.

The impressive performance we have obtained is because supervised (in this case technically "self-supervised") learning is much easier than e.g. reinforcement learning and other paradigms that naturally learn planning policies. We do not actually know how to overcome this barrier.

What about current reasoning models trained using RL? (Do you think something like, we don't know, and won't easily figure out, how to make that work well outside a narrow class of tasks that doesn't include 'anything important'?)

Few people who take radical veganism and left-anarchism seriously either ever kill anyone, or are as weird as the Zizians, so that can't be the primary explanation. Unless you set a bar for 'take seriously' that almost only they pass, but then, it seems relevant that (a) their actions have been grossly imprudent and predictably ineffective by any normal standard + (b) the charitable[1] explanations I've seen offered for why they'd do imprudent and ineffective things all involve their esoteric beliefs.

I do think 'they take [uncommon, but not esoteric, moral views like veganism and anarchism] seriously' shouldn't be underrated as a factor, and modeling them without putting weight on it is wrong.

  1. to their rationality, not necessarily their ethics

I don't think it's an outright meaningless comparison, but I think it's bad enough that it feels misleading or net-negative-for-discourse to describe it the way your comment did. Not sure how to unpack that feeling further.

https://artificialanalysis.ai/leaderboards/providers claims that Cerebras achieves that OOM performance, for a single prompt, for 70B-parameter models. So nothing as smart as R1 is currently that fast, but some smart things come close.

I don't see how it's possible to make a useful comparison this way; human and LLM ability profiles, and just the nature of what they're doing, are too different. An LLM can one-shot tasks that a human would need non-typing time to think about, so in that sense this underestimates the difference, but on a task that's easy for a human yet doable by the LLM only with a long chain of thought, it overestimates the difference.

Put differently: the things that LLMs can do with one shot and no CoT imply that they can do a whole lot of cognitive work in a single forward pass, maybe a lot more than a human can ever do in the time it takes to type one word. But that cognitive work doesn't compound like a human's; it has to pass through the bottleneck of a single token, and be substantially repeated on each future token (at least without modifications like Coconut).

(Edit: The last sentence isn't quite right; KV caching means the work doesn't all have to be recomputed, though I would still say it doesn't compound.)
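To make the KV-caching point concrete, here's a minimal, purely illustrative sketch in numpy (a toy single-head attention step; none of the names, shapes, or weights correspond to any real model): with a cache, each decoding step computes attention only for the newest token against stored keys/values, so earlier per-token work is reused rather than redone, but the only state that flows from one step to the next is that cache plus the sampled token.

```python
import numpy as np

d = 16                                         # toy model width (illustrative)
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

def attend(q, K, V):
    """Attention of one query vector against all cached keys/values."""
    scores = K @ q / np.sqrt(d)                # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                         # (d,)

K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
x = rng.normal(size=d)                         # embedding of the first token

for step in range(5):
    q, k, v = W_q @ x, W_k @ x, W_v @ x        # forward pass for this token only
    K_cache = np.vstack([K_cache, k])          # cached work from earlier steps
    V_cache = np.vstack([V_cache, v])          #   is reused, not recomputed
    h = attend(q, K_cache, V_cache)
    # The per-step computation in h doesn't accumulate anywhere except through
    # this bottleneck: the next step sees only the cache and the next token
    # (stood in for here by renormalizing h into a fake "next embedding").
    x = h / np.linalg.norm(h)
```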
