Dr. Birdbrain

A perhaps more interesting interaction is with wills that are managed by trusts. My understanding is that you can put conditions on how the money in a trust will be disbursed to your heirs, for example "as long as they maintain a minimum GPA in college". I have heard lawyers joke about adding outrageous clauses like "as long as they don't marry that person".

It's quite reasonable to expect that some will add a clause to their trust that says "this only pays out if they have placed themselves on the lifetime no-gambling list".

A Google search suggests Desoxyn might just be a brand of pharmaceutical-grade meth.

Would you mind publishing the protocol?

It has been three months; is there an update?

I like (and recommend) creatine. It has a long record in the research literature, and its effects on exercise performance are well documented. More recent research is finding cognitive benefits as well; anecdotally, I can report that I am smarter on creatine. It also blunts the effects of sleep deprivation and improves blood sugar control.

I strongly recommend creatine over some of the wilder substances recommended in this post.

Actually, I think the explicit content of the training data is a lot more important than whatever spurious artifacts may or may not hypothetically arise as a result of training. I think most of the AI doom scenarios that say “the AI might be learning to like curly wire shapes, even if these shapes are not explicitly in the training data or the loss function” are the type of scenario you just described: “something that technically makes a difference but in practice the marginal gain is so negligible you are wasting time to even consider it.”

The “accidental taste for curly wires” is a steelman of the paperclip maximizer, as I understand it. Eliezer doesn’t actually think anybody will be stupid enough to say “make as many paper clips as possible”; he worries somebody will set up the training process in some subtly incompetent way, and that the resulting AI will aggressively lie about the fact that it likes curly wires until it is released, having learned to hide from interpretability techniques.

I definitely believe alignment research is important, and I am heartened when I see high-quality, thoughtful papers on interpretability, RLHF, etc. But then I hear Eliezer worrying about absurdly convoluted scenarios of minimal probability, and I think: wow, that is “something that technically makes a difference but in practice the marginal gain is so negligible you are wasting time to even consider it”. And it’s not just a waste of time: he wants to shut down the GPU clusters and cancel the greatest invention humanity has ever built, all over “salt in the pasta water”.

I wear my backpack on my front rather than my back, and hug it as I run.

I started doing this after a trip to Tokyo, where it was brought to my attention that it was rude to get on the subway with my backpack on my back, since it becomes a hazard to the people around me when I cannot see what it is doing behind me.
