cata

Programmer, rationalist, chess player, father, altruist.

Comments

cata

Why is it cheaper for individuals to install some amount of cheap solar power for themselves than for the grid to install it and deliver it to them, given the grid's economies of scale in construction and maintenance? Is it transmission cost?

cata

I was going to email but I assume others will want to know also so I'll just ask here. What is the best way to donate an amount big enough that it's stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?

cata

> But as a secondary point, I think today's models can already use bash tools reasonably well.

Perhaps that's true; I haven't seen many examples of them trying. I did see Buck's anecdote, which was a good illustration of competently doing a simple task (finding the IP address of an unknown machine on the local network).

I don't work in AI, so maybe I don't know which parts of R&D might be most difficult for current SOTA models. But given that large-scale LLMs are a fairly new field that hasn't had much labor applied to it yet, I would have guessed that a model which could basically just do mundane stuff and read research papers could spend a shitload of money and FLOPS to run a lot of obviously informative experiments that nobody else has properly run, and polish a bunch of stuff that nobody else has properly polished.

cata

I'm not confident, but I am avoiding working on these tools because I think that the "scaffolding overhang" in this field may well be most of the gap towards superintelligent autonomous agents.

If you imagine an o1-level entity with "perfect scaffolding" (it can get any information on a computer into its context whenever it wants, it can invoke any computer functionality that a human could invoke, it can store and retrieve knowledge for itself at will, and its training includes the use of those functionalities), it's not completely clear to me that it couldn't already manage a slow self-improvement takeoff by itself, although the cost might currently be practically prohibitive.

I don't think building that scaffolding is a trivial task at all, though.
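To make the idea concrete, the kind of scaffolding loop I have in mind could be sketched roughly like this (a toy sketch only; `query_model` is a hypothetical stand-in for whatever LLM call you use, not any real API):

```python
# Toy sketch of an agent scaffolding loop: the model can read its stored
# memory, invoke arbitrary shell commands, and persist what it learned.
import json
import subprocess

MEMORY_FILE = "agent_memory.json"

def load_memory():
    # Knowledge the agent has stored for itself, retrievable at will.
    try:
        with open(MEMORY_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save_memory(memory):
    with open(MEMORY_FILE, "w") as f:
        json.dump(memory, f)

def run_tool(action):
    # "Invoke any computer functionality a human could invoke" --
    # here, simply a shell command with captured output.
    result = subprocess.run(
        action, shell=True, capture_output=True, text=True, timeout=60
    )
    return result.stdout + result.stderr

def agent_step(goal, query_model):
    memory = load_memory()
    # Everything relevant gets placed into the model's context.
    prompt = {"goal": goal, "memory": memory}
    action = query_model(prompt)      # model returns a command to run
    observation = run_tool(action)
    memory.append({"action": action, "observation": observation})
    save_memory(memory)
    return observation
```

The hard parts that this sketch glosses over are exactly the ones that matter: deciding what goes into context, when to store versus discard, and training the model to use these affordances well.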

cata

I don't have a bunch of citations, but I spend time in multiple rationalist social spaces, and it seems to me that I would in fact be excluded from many of them if I stuck to sex-based pronouns, because, as stated above, there are many trans people in the community, many of whom hold to the consensus progressive norms on this. The EA Forum policy is not unrepresentative of the typical sentiment.

So I don't agree that the statements are misleading.

(I note that my typical habit is to use singular "they" for visibly NB/trans people, and I am not excluded for that. So it's not precisely a kind of compelled speech.)

cata

I've been playing this bot lately myself, and one thing it made me wonder is: how much better would it be at beating me if it were trained against a model of me in particular, rather than how it was actually trained? I feel like I have no idea.

cata

Two data points: I have 15-20 years of experience at a variety of companies, but no college and no FANG, and I'm currently semi-retired. Recruiters still spam me with many offers, and my professional network wants to hire me at their small companies.

A friend of mine has ~2 years of experience as a web dev, some experience as a mechanical engineer, and random personal projects, but no college. He worked hard looking for a software job and found absolutely nothing, with most companies never contacting him after an application.

cata

One and a half years later, it seems like AI tools can sort of help humans with very rote programming work (e.g., changing or writing code to accomplish a simple goal, implementing things well known to the AI like a textbook algorithm or a browser form for entering data, answering documentation-like questions about a system), but they aren't much help yet on the more skilled-labor parts of software engineering.

cata

It seems like Musk in 2018 dramatically underestimated the ability of OpenAI to compete with Google in the medium term.

cata

Thanks not only for doing this but for noting the accuracy of the unchecked transcript; it's always hard work to build a mental model of how good LLM tools are at what stuff.
