Comments

Quinn · 22h

Sure -- I agree; that's why I said "something adjacent to", because it had enough overlap in properties. I think my comment completely stands with a different word choice; I'm just not sure what word choice would do a better job.

Quinn · 2d

I eventually decided that human chauvinism approximately works most of the time because good successor criteria are very brittle. I'd prefer to avoid lock-in to my or anyone's values at t=2024, but such a lock-in might be "good enough" if I'm threatened with what I think are the counterfactual alternatives. If I did not think good successor criteria were very brittle, I'd accept something adjacent to E/Acc that focuses on designing minds which prosper more effectively than human minds. (the current comment will not address defining prosperity at different timesteps).

In other words, I can't beat the old fragility-of-value stuff (but I haven't tried in a while).

I wrote down my full thoughts on good successor criteria in 2021: https://www.lesswrong.com/posts/c4B45PGxCgY7CEMXr/what-am-i-fighting-for

AI welfare: it matters, but when I started reading lesswrong I literally thought that disenfranchising AIs from the definition of prosperity was equivalent to subjecting them to suffering, and I don't think this anymore.

Quinn · 7d

Thinking about a top-level post on FOMO and research taste

  • Fear of missing out, defined as the inability to execute on a project cuz there's a cooler project if you pivot
  • but it also gestures at more of a strict negative, where you think your project sucks before you finish it, so you never execute
  • was discussing this with a friend: "yeah I mean lesswrong is pretty egregious cuz it sorta promotes this idea of research taste as the ability to tear things down, which can be done armchair"
  • I've developed strategies to beat this FOMO and gain more depth and detail with projects (too recent to see returns yet, but getting there), but I also suspect it was nutritious of me to develop discernment about what projects are or aren't valuable for various threat models and theories of change (in such a way that being a PhD student, rather than coming up off of lesswrong, wouldn't have been as good in crucial ways, tho way better in other ways).
    • but I think the point is you have to turn off this discernment sometimes, unless you want to specialize in telling people why their plans won't work, which I'm more dubious about the value of than I used to be

Idk, maybe this shortform is most of the value of the top-level post.

Quinn · 10d

He had become so caught up in building sentences that he had almost forgotten the barbaric days when thinking was like a splash of color landing on a page.

Edward St Aubyn

Quinn · 10d

A $\tau$-valued quantifier is any function $(\alpha \to \tau) \to \tau$, so when $\tau$ is Bool, quantifiers are the functions that take predicates as input and return Bool as output (same for Prop). The standard max and min functions on arrays count as real-valued quantifiers $(I \to \mathbb{R}) \to \mathbb{R}$ for some index set $I$.

I thought I had seen $\forall$ as the max of the Prop-valued quantifiers, and $\exists$ as the min, somewhere, which has a nice mindfeel since forall has this "big" feeling (if you determined $\forall_A\, P$ (of which $\forall x : A,\, P(x)$ is just syntax sugar, since the variable name is irrelevant) by exhaustive checking, it would cost $O(|A|)$, whereas $\exists x : A,\, P(x)$ would cost $O(1)$ unless the derivation of the witness was dependent on the size of the domain somehow).
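To make the typing concrete, here's a minimal sketch in plain Lean 4 (no Mathlib); the names `Quantifier`, `forallB`, `existsB`, and `maxQ` are mine for illustration, not standard library names:

```lean
-- A τ-valued quantifier over a domain α, as defined above.
def Quantifier (α τ : Type) : Type := (α → τ) → τ

-- Bool-valued quantifiers over a finite domain given as a list, decided by
-- exhaustive checking: the sweep costs O(|dom|) either way, but a proof of
-- ∃ only needs one witness while a proof of ∀ needs the whole sweep.
def forallB {α : Type} (dom : List α) : Quantifier α Bool :=
  fun p => dom.all p

def existsB {α : Type} (dom : List α) : Quantifier α Bool :=
  fun p => dom.any p

-- Max over an array is a Float-valued quantifier in exactly the same sense.
def maxQ {α : Type} (dom : List α) : Quantifier α Float :=
  fun f => (dom.map f).foldl (fun a b => if a < b then b else a) (-1.0 / 0.0)

#eval forallB [1, 2, 3] (fun n => decide (n > 0))  -- true
#eval existsB [1, 2, 3] (fun n => decide (n > 2))  -- true
#eval maxQ [1, 2, 3] (fun n => Float.ofNat n)      -- 3.000000
```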

Incidentally, however, in differentiable logic it seems forall is the "minimal expectation" and exists is the "maximal expectation": see page 10 of the LDL paper, where a minimal expectation of $f$ is the limit as $\gamma \to 0$ of the integral of $f$ with respect to a $\gamma$-ball about the min of $f$, rather than about the entire domain of $f$. So in this sense, the interpretation of a universally quantified prop is a minimal expectation, dually the interpretation of an existentially quantified prop is a maximal expectation.

I didn't like the way this felt aesthetically, since, as I said, forall feels "big", which mood-affiliates toward a max. But that's notes from barely-remembered category theory I saw once. Anyway, I asked a language model, and it said that forall is minimal because it imposes the strictest or most conservative requirement, so the "max" in "exists is interpreted as maximal expectation" refers to maximal freedom.

I suppose this is fine.

Quinn · 11d

I'm excited for language model interpretability to teach us about the difference between compilers and simulations of compilers. In the sense that ChatGPT and I can both predict what a compiler of a suitably popular programming language will do on some input, what's going on there -- surely we're not reimplementing the compiler on our substrate, even in the limit of perfect prediction? This will be an opportunity for a programming language theorist in another year or two of interp progress.

Quinn · 21d

Does anyone use a vim-style / mouse-minimal browser? I like Tridactyl better than the other one I tried, but it's not great when there's a vim mode in the browser window itself: everything starts to step on each other (like in Jupyter, Colab, LeetCode, CodeSignal).

Quinn · 21d

Claude and ChatGPT are pretty good at ingesting textbooks and papers and making org-drill cards.

Here's my system prompt: https://chat.openai.com/g/g-rgeaNP1lO-org-drill-card-creator (though I usually tune it a little further per session).
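If you'd rather script it than use the GPT link, here's a minimal sketch of the same workflow (assuming the `openai` Python client; the system prompt, model choice, and filename below are stand-ins, not my actual setup):

```python
# A minimal sketch: feed a chunk of a paper or textbook to a model and ask
# for org-drill cards. The system prompt here is a placeholder for the one
# linked above, and "chapter3.txt" is a hypothetical input file.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You make org-drill cards. For each key concept in the
input, emit an org headline tagged :drill:, a question in the body, and
the answer under a subheading. Output plain org-mode text only."""

def make_cards(chunk: str) -> str:
    """Return org-drill cards for one chunk of source text."""
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": chunk},
        ],
    )
    return response.choices[0].message.content

with open("chapter3.txt") as f:
    print(make_cards(f.read()))
```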

Here are some takes on the idea from the Anki ecosystem.

I tried AnkiGPT a little and it was fine; I haven't tried the direct plugin from AnkiWeb. I'm opting for org-drill here cuz I really like plaintext.

Quinn · 23d

One exercise remark: swimming! You might have to pick up a little technique at first, but the way it combines a ton of muscle groups and cardio is unparalleled.

(If anyone reading this is in the Bay, I will 100% teach you the 0->1 on freestyle and breaststroke technique, just DM.)
