kave

Hello! I work at Lightcone and like LessWrong :-). I have made some confidentiality agreements I can't leak much metadata about (like who they are with). I have made no non-disparagement agreements.

Comments
Thane Ruthenis's Shortform
kave · 2d

In general, I felt like the beginning was a bit weak, with the informal-technical discussion the weakest part, and then it got substantially stronger from there.

I worry that I particularly enjoy the kind of writing they do, but we've already tapped the market of folks like me. Like, I worked at MIRI and now moderate LessWrong because I was convinced by the Sequences. So that's a pretty strong selection filter for liking their writing. Of course we should caveat my experience quite a bit given that.

But, for what it's worth, I thought Part 2 was great. Stories make things seem real, and my reader-model was relatively able to grant the plot beats as possible. I thought they did a good job of explaining that, even though there were many options the AI could take, and they, the authors, might well not understand why a given approach would or wouldn't work out, it wasn't obvious that this would generalise to all of the AI's plans failing.

The other thing I really liked: they would occasionally explain some science to expand on their point (nuclear physics is the example they expounded on at length, but IIRC they mentioned a bunch of other bits of science in passing). I'm not sure why I liked this so much. Perhaps it was because it was grounding, or reminded me not to throw my mind away, or made me trust them a little more. Again, I'm really not sure how well this generalises to people for whom their previous writing hasn't worked.

LessWrong is migrating hosting providers (report bugs!)
kave · 5d

Here are some other reasons, though I think they're a bit less central than the ones in Habryka's comment.

1. I think current AI systems find it much easier to help with NextJS web apps than they did with our sui generis palimpsest of frameworks and approaches. It's a bit unclear if this is on a trajectory to fix itself, but for now it seems like a relatively big difference. I think partly they're just way more familiar with this newer stuff, and partly serverless stuff is a bit more architecturally suited to LLMs making narrow changes.

2. Another reason is that we had a lot of technical debt that we wanted to pay down. The project that became the hosting transfer was originally known as the "debungle"[1].

The codebase had a bunch of very particular ways of doing things (e.g. you weren't supposed to just write and export new React components, but to call a registration function on them; and you weren't supposed to write queries directly against our GraphQL server, but to use a system of helpers).
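
To give a flavour of what I mean, here's a minimal sketch of that kind of convention. The names and signatures (registerComponent, useStandardQuery) are illustrative assumptions, not the actual codebase API:

```tsx
import React from "react";

// A registry of UI components, keyed by name. This whole sketch is an
// illustrative assumption about the kind of convention described above,
// not the real LessWrong codebase API.
const componentRegistry = new Map<string, React.ComponentType<any>>();

// Rather than exporting components directly, you register them, which lets
// the framework layer in concerns like theming, overrides, and lazy loading.
function registerComponent<P extends object>(
  name: string,
  component: React.ComponentType<P>
): React.ComponentType<P> {
  componentRegistry.set(name, component);
  return component;
}

const CommentItem = ({ body }: { body: string }) => <p>{body}</p>;
registerComponent("CommentItem", CommentItem);

// Rather than writing queries directly against the GraphQL server, you use
// a blessed helper that standardises fragments, caching, and error handling.
function useStandardQuery(collection: string, args: { documentId: string }) {
  // Stub: a real helper would build and execute a GraphQL query here.
  return { document: { _id: args.documentId }, loading: false };
}
```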

I don't think this stuff is necessarily bad. But because Lightcone is largely composed of generalists, onboarding costs are a bit higher. If you have a blessed way to make a query, and that blessed way is itself changing (as it needs to shift for performance or feature reasons), someone who is working on LessWrong one month in three is paying more cost for keeping up with the internal, undocumented framework magic.

There have been several times when I've asked a distracted Habryka what the Standard Way to do something in our codebase is, implemented his quick answer, and then got a PR review from Robert asking why I was doing stuff in a semi-deprecated way.

3. Habryka mentioned wanting to use newer React features. I think possibly a bigger issue was the transitive out-of-date dependencies you get if you stick on an old React: you can't update Material UI, you can't update some other library, some of them have security holes, so you vendor the old version and patch it by hand... That stuff starts to grow as a maintenance and jank burden over time.


In general, I'm pro things being crufty and janky and not spending too much time "rewriting things to be nice", and a lot of the stuff I listed above can be worked around. I think probably my list alone wouldn't be worth the effort of the shift. To be clear, I'm unsure if the combination of my list, Habryka's, the other arguments I'm aware of, and the expected strength of the arguments I'm not aware of, overall makes this a worthwhile shift. I'm guessing yes, but it's too soon to say.

[1] We had our eyes on a NextJS switch early on. But we thought it was valuable to do even without that.

Open Thread - Summer 2025
kave · 6d

Welcome to LessWrong! You didn't violate any norms or customs. Hope you have some interesting discussions :-)

Musings from a Lawyer turned AI Safety researcher (ShortForm)
kave · 8d

Oh, hm. That's not the sort of thing users follow through on, in my experience. Not saying that this makes Classified a bad idea, but I think it needs a different UI solution (e.g. appearing in the sidebar).

Musings from a Lawyer turned AI Safety researcher (ShortForm)
kave · 8d

Classified does seem kind of cool! Do you expect you would upweight "classified" higher than "personal" in your tag filters?

Yes, AI Continues To Make Rapid Progress, Including Towards AGI
kave · 9d

My current best read is that insiders at AI companies are overall more biased (towards pronouncing shorter timelines) than they are well-informed.

Musings from a Lawyer turned AI Safety researcher (ShortForm)
kave · 9d · Moderator Comment

This seems like a pretty cool event and I'm excited it's happening.

That said, I've removed this Quick Take from the frontpage. Advertising, whether for events or for role openings or similar, is generally not something we want on the frontpage of LessWrong.

In this case, now that it's off the front page, this shortform might be insufficiently visible. I'd encourage you to make a top-level post / event about it, which will get put on personal, but might still be a bit more visible.

Slack Has Positive Externalities For Groups
kave · 9d

I struggle to follow the connection between the universal scalability law and sync vs async. Is the idea that you either run things relying on synchronicity (only doing few enough things that you can handle the contention, driving the α and β coefficients down through high-bandwidth communication), or you build things so that they can work asynchronously (devolving more decision-making to others)?
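
For reference, the standard statement of the law (this is the usual Gunther form, not something taken from your post) is

$$C(N) = \frac{N}{1 + \alpha\,(N - 1) + \beta\, N (N - 1)}$$

where $C(N)$ is throughput with $N$ workers, $\alpha$ is the contention penalty, and $\beta$ is the coherency (crosstalk) penalty.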

Nathan Young's Shortform
kave · 12d

I try pretty hard (and I think most of the team does) to at least moderate AI x-risk criticism more leniently. But of course, it's tricky to know if you're doing a good job. Am I undercorrecting or overcorrecting for my bias? If you ever notice some examples that seem like moderation bias, please lmk!

Of course, moderation is only a small part of what drives the site culture/reward dynamics.

Benito's Shortform Feed
kave · 19d

Divergence uses a nabla (∇), not a delta (Δ).

Posts

31 · What are the best standardised, repeatable bets? (Q) · 5mo · 10 comments
78 · Gwern: Why So Few Matt Levines? · 11mo · 10 comments
62 · Linkpost: Surely you can be serious · 1y · 8 comments
151 · Daniel Dennett has died (1942-2024) · 1y · 5 comments
577 · LessWrong's (first) album: I Have Been A Good Bing · 1y · 182 comments
5 · kave's Shortform (Ω) · 2y · 12 comments
162 · If you weren't such an idiot... · 2y · 76 comments
105 · New LessWrong review winner UI ("The LeastWrong" section and full-art post pages) · 2y · 64 comments
41 · On plans for a functional society · 2y · 8 comments
24 · A bet on critical periods in neural networks · 2y · 1 comment

Wikitag Contributions

Bayes' rule · 18 days ago · (+12/-35)
Vote Strength · a year ago · (-35)