vV_Vv · 180

No offense, but the article you linked is quite terrible because it compares total deaths while completely disregarding the base rates of use. By the same logic, cycling is more dangerous than base jumping.

This said, yes, some drugs are more dangerous than others, but good policies need to be simple, unambiguous and easy to enforce. A policy of "no illegal drugs" satisfies these criteria, while a policy of "do your own research and use your own judgment" in practice means "junkies welcome".

vV_Vv · 100

My interpretation of the Cactus Person post is that it was a fictionalized account of personal experiences and an expression of frustration at not being able to gather any real knowledge from them, which implies that gathering such knowledge is entertained as a reasonable hypothesis in the first place. If I'm mistaken then I apologize to Scott; however, the post is ambiguous enough that I'm likely not the only person to have interpreted it this way.

He also wrote a post about the early psychedelicists that ends with "There seems to me at least a moderate chance that [psychedelics] will make you more interesting without your consent – whether that is a good or a bad thing depends on exactly how interesting you want to be.", and he linked to Aella describing her massive LSD use with the comment "what happens when you take LSD once a week for a year?" (it should have been "what happens when this person takes LSD once a week for a year; don't try this at home, or you might end up in a padded cell or a coffin").

I've never interacted with the rationalist community IRL, and in fact for the last 5 or so years my exposure to them was mostly through SSC/ACX plus the occasional tweet from rat-adjacent accounts that I follow, but my impression is that psychedelic drug use was rampant in the community, with leading figures, including Scott, either partaking themselves or at least knowing about it and condoning it as nothing more than an interesting quirk. Therefore, blaming it all on a single person sounds like scapegoating, which I thought was worth noting, even if I did so in a snarky way.

As you say, psychedelics might be just a Bay Area thing, and maybe Vassar and his Vassarites were taking them to a different level compared to the rat/Bay Area baseline. I don't know them, so that could be the case, in which case the finger pointing would make more sense. Still, whenever you have a formal or informal norm, you're going to have excesses at the tails of the distribution. If your norm is "no illegal drugs, only alcohol in moderation", the excesses will be some people who binge drink or smoke joints; if your norm is "psychedelics in moderation", the excesses will be people who fry their brains with LSD.

 

As for the cultish aspects, I get the impression that while not overall a cult, the IRL rat community tends to naturally coalesce into very tightly-knit subcommunities of highly non-neurotypical (and possibly "mentally fragile") people who hang out together with few boundaries between workplace, cohabitation, friendship, dating and "spiritual" mentorship, with a prevalence of questionable therapy/bonding practices ("debugging", "circling") and isolation from outsiders ("normies"). These subcommunities gravitate around charismatic individuals (e.g. Eliezer, Anna, Geoff, Vassar, Ziz) with very strong opinions that they argue forcefully, and who are regarded as infallible leaders by their followers. I don't know to what extent these leaders encourage this idolatry deliberately and to what extent they just find themselves in the eye of the storm, so to speak, but in any case, looking from the outside, whether you call it cultish or not, it doesn't look like a healthy social dynamic.

vV_Vv · 70

Dusting off this old account of mine just to say I told you so.

 

Now, some snark:

"Leverage is a cult!"

"No, MIRI/CFAR is a cult!"

"No, the Vassarites are a cult!"

"No, the Zizians are a cult!"

Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.

Also Scott: beware this charismatic Vassar guy, he can give you psychosis!

Scott 2015: Universal love, said the cactus person

Scott 2016: uncritically signal-boosts Aella talking about her inordinate drug use.

Scott 2018: promotes a scamcoin by Aella and Vinay Gupta, a differently sane tech entrepreneur-cum-spiritual guru, who apparently burned his brain during a “collaborative celebration” session.

Scott 2021: why do rationalists take so many psychedelic drugs? Must be Vassar's bad influence.

 

Btw, I hate to pick on Scott, since he's likely the sanest person in the whole community, but he's also one of the most influential people there, possibly even more so than Eliezer, which is why I find his lack of self-awareness disturbing.

That's all folks

vV_Vv · 60

Do you think that solving StarCraft (by self-play) will require some major insight, or will it just be a matter of incremental improvements to existing methods?

vV_Vv · 40

There is an algorithm called Evolution Strategies, popularized by OpenAI (although I believe it already existed in some form), that can train neural networks without backpropagation and without storing multiple sets of parameters. You can view it as a population-1 genetic algorithm, but it really is a stochastic finite-differences gradient estimator.
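To make the "stochastic finite-differences gradient estimator" reading concrete, here is a minimal NumPy sketch of an ES update; the quadratic toy reward and the hyperparameters (sigma, number of perturbation pairs, learning rate) are just illustrative choices of mine, not anything taken from the OpenAI paper:

```python
import numpy as np

def es_gradient(f, theta, rng, sigma=0.1, n_pairs=100):
    """ES / stochastic finite-differences estimate of the gradient of f at theta.

    Each mirrored pair of perturbations (theta +/- sigma*eps) gives a noisy
    directional-derivative estimate; averaging many pairs approximates the
    gradient of the Gaussian-smoothed objective, with no backpropagation.
    """
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.standard_normal(theta.shape)
        grad += (f(theta + sigma * eps) - f(theta - sigma * eps)) * eps
    return grad / (2.0 * sigma * n_pairs)

# Toy "reward": negative squared distance to a target parameter vector.
target = np.array([1.0, -2.0, 3.0])
reward = lambda theta: -np.sum((theta - target) ** 2)

rng = np.random.default_rng(0)
theta = np.zeros(3)
for _ in range(200):
    theta += 0.1 * es_gradient(reward, theta, rng)  # gradient ascent on the estimate

print(theta)  # approaches [1, -2, 3] without ever differentiating `reward`
```

If I remember the OpenAI setup correctly, the distributed version only communicates random seeds and scalar returns between workers, which is why it doesn't need to store or ship multiple parameter sets.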

On supervised learning tasks it is not competitive with backpropagation, but on reinforcement learning tasks (where you can't analytically differentiate the reward signal so you have to estimate the gradient one way or the other) it is competitive. Some follow-up works combined it with backpropagation.

I wouldn't be surprised if the brain does something similar, since the brain never really does supervised learning; it's either unsupervised or reinforcement learning. The brain could combine local reconstruction and auto-regression learning rules (similar to layerwise-trained autoencoders, but also trying to predict future inputs rather than just reconstructing the current ones) with finite-differences gradient estimation on reward signals propagated by the dopaminergic pathways.
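Purely as a toy illustration of that hypothesis (and emphatically not a model of any real neural circuit), here is a sketch in which a hidden layer learns only from a local reconstruction error while a readout learns only from an ES-style finite-differences estimate of a scalar reward; the dimensions, learning rates and the reward definition are all made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a hidden layer trained only by a *local* reconstruction rule,
# and a readout trained only by a finite-differences estimate of a scalar
# reward. No error signal ever flows end-to-end through the network.
n_in, n_hidden = 8, 4
W_enc = 0.1 * rng.standard_normal((n_hidden, n_in))
W_dec = 0.1 * rng.standard_normal((n_in, n_hidden))
w_out = np.zeros(n_hidden)

def reward(w, h, target):
    # Hypothetical scalar "dopaminergic" signal: how well the readout matches a target.
    return -(np.tanh(w @ h) - target) ** 2

for _ in range(5000):
    x = rng.standard_normal(n_in)
    target = np.sign(x[0])          # arbitrary task: report the sign of the first input
    h = np.tanh(W_enc @ x)

    # (1) Local unsupervised rule: descend the reconstruction error of this layer only.
    err = W_dec @ h - x
    W_enc -= 0.01 * np.outer((W_dec.T @ err) * (1.0 - h ** 2), x)
    W_dec -= 0.01 * np.outer(err, h)

    # (2) ES-style finite differences on the scalar reward, for the readout weights only.
    sigma = 0.1
    eps = rng.standard_normal(n_hidden)
    g = (reward(w_out + sigma * eps, h, target)
         - reward(w_out - sigma * eps, h, target)) * eps / (2.0 * sigma)
    w_out += 0.05 * g
```

The point of the sketch is only the information flow: the hidden layer never sees the reward, and the readout never sees anything but a scalar.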

vV_Vv · 80
The utility function U(w) corresponds to the distribution P(w)∝exp(U(w)).

Not so fast.

Keep in mind that the utility function is defined up to an arbitrary positive affine transformation, while the softmax distribution is invariant only up to shifts: P_β(w) ∝ exp(βU(w)) will be a different distribution depending on the inverse temperature β (the higher β, the more peaked the distribution will be on its mode), while in the von Neumann–Morgenstern theory of utility, U(w) and βU(w) represent the same preferences for any positive β.
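A quick numeric illustration of this asymmetry (the utility values are arbitrary): shifting U leaves the softmax unchanged, while rescaling it does not, even though all three utility functions encode the same vNM preferences.

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())   # subtract the max for numerical stability
    return e / e.sum()

U = np.array([1.0, 0.5, 0.0])

print(softmax(U))         # ≈ [0.506, 0.307, 0.186]
print(softmax(U + 10.0))  # identical: shifting U leaves the distribution unchanged
print(softmax(2.0 * U))   # more peaked: rescaling (inverse temperature) changes it,
                          # even though U, U + 10 and 2U encode the same vNM preferences
```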

Maximizing expected log probability under this distribution is exactly the same as maximizing the expectation of U.

It's not exactly the same.

Let's assume that there are two possible world states, 0 and 1, and two available actions: action A puts the world in state 0 with 99% probability (Q_A(0) = 0.99, Q_A(1) = 0.01), while action B puts the world in state 0 with 50% probability (Q_B(0) = Q_B(1) = 0.5).

Let U(0) = 1 and U(1) = 0.

Under expected utility maximization, action A is clearly optimal: E_{Q_A}[U] = 0.99 > E_{Q_B}[U] = 0.5.

Now define P(w) ∝ exp(U(w)), i.e. P(0) = e/(1+e) ≈ 0.731 and P(1) = 1/(1+e) ≈ 0.269.

The expected log-probability (the negative cross-entropy) E_P[log Q_A(w)] ≈ −1.25 nats, while E_P[log Q_B(w)] ≈ −0.69 nats, hence action B is optimal.

You do get action A as optimal if you reverse the distributions in the negative cross-entropies (E_{Q_A}[log P(w)] ≈ −0.32 nats vs E_{Q_B}[log P(w)] ≈ −0.81 nats), but this does not correspond to how inference is normally done.
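For concreteness, here is a short NumPy check of the numbers in this example (P, Q_A and Q_B as defined above):

```python
import numpy as np

U = np.array([1.0, 0.0])                 # utilities of states 0 and 1
P = np.exp(U) / np.exp(U).sum()          # P(w) ∝ exp(U(w))  →  ≈ [0.731, 0.269]
Q_A = np.array([0.99, 0.01])             # state distribution induced by action A
Q_B = np.array([0.50, 0.50])             # state distribution induced by action B

print(Q_A @ U, Q_B @ U)                  # expected utility: 0.99 vs 0.5   → A wins
print(P @ np.log(Q_A), P @ np.log(Q_B))  # E_P[log Q]: ≈ −1.25 vs ≈ −0.69  → B wins
print(Q_A @ np.log(P), Q_B @ np.log(P))  # E_Q[log P]: ≈ −0.32 vs ≈ −0.81  → A wins again
```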