Alex K. Chen (parrot)

Extremely neophilic. Much of my content is on Quora (I was the Quora celebrity). I am also on forum.quantifiedself.com (people do not realize how alignment-relevant this is), rapamycin.news/latest, and crsociety.org

https://linktr.ee/simfish

...People say the craziest things about me, because I'm a peculiar star...

I care about neuroscience (esp human intelligence enhancement) and reducing genetic inequality. The point of transhumanism is to transcend genetic limitations - to reduce the fraction of variance in outcomes explained by genetics. I know loads of people in self-experimentation communities (people in our communities need to be less risk-averse if we are to make any difference in our probability of "making it"). When we are right at "the precipice", traditionalism cannot win (I am probably the least traditionalist person ever). I get along well with the unattached.

Slowing human compute loss by reducing microplastics/pollution/noise/rumination/aging rate is alignment-relevant (insofar as the most tractable form of "human enhancement" is to slow decline with age + make human thinking clearer). So is tFUS, and so is reconfiguring reward functions to reward "wholesome/growthful/novel" tasks over "the past" [you are aged when you think too much about the past]. I aim to do all I can to make biology keep up with technology.

Alignment through integrating all the diverse skillsets (including those of people who are not math/CS geniuses) and integrating all their compute + not making them waste time/attention on "dumb things" + making people smarter/more neuroplastic (this is a hard problem, but 40Hz-tACS [1] might do a little).

Unschooling is also alignment-relevant (most value is destroyed in deceptive alignment, and school breeds deceptive alignment). As is inverting "things that feel unfun".

Chaotic people may depend more on a sense of virtue than others do, but it takes a lot to get people to trust a group/make themselves authentic when school has stripped out much of their authenticity. Some people don't lose much or take much emotional damage from it (I've noticed this in several who dropped out of school for alignment), but some take way more, and this is a much easier problem to solve than directly increasing human intelligence.

I like Dionysians. However, I had to cut back after accidentally destroying an opportunity (a friend had egged me on into being manic...)

Breadth/context produces unique compute value of its own.

https://twitter.com/InquilineKea 
facebook.com/simfish

I have a Twitter alt.

I trigger exponential growth trajectories in some. I helped seed the original Ivy League Psychedelics communities and am very good friends with Qualia Research Institute people (though I cannot try psychedelics much now).

Main objectives: not getting sad, not getting worked up over dumb things, not making my life harder than it is now.

I really like https://www.lesswrong.com/users/bhauth. Zvi is smart too https://www.lesswrong.com/users/zvi?from=post_header

[1] there are negative examples too

Comments

How do they figure out which waveforms and frequencies to use on your brain? The ideal waveforms/frequencies depend a lot on your brainwaves and brain configuration.

I've heard fMRI-guided TMS is the best kind, but many don't use it [and maybe it isn't necessary?]

Is anyone familiar with MeRT? It's what Wave Neuroscience uses, and it's supposedly effective for more than just depression (it's also used for ASD and ADHD, where the effectiveness is way less certain but where some people can have unusually high benefits). But response rates seem highly inconsistent (clinics will say that "90% of people are responsive", but there is substantial reporting bias, every clinic seems to say this [with no way to verify], and so I don't believe these figures), and MeRT is still highly experimental, so it's not covered by insurance. Some people with ASD are desperate enough that they try it. TMS is probably useful for WAY more than severe treatment-resistant depression, but that's still the only indication insurance companies are willing to cover.

I got my brainwaves scanned for MeRT and found that I have too much slow-wave activity in the ACC that could be sped up (though I'm still unsure about the effectiveness of MeRT, and they don't seem to give you your data to understand your brain better in the same way that the Neurofield tACS people [or those at the ISNR conference] do)...

BTW, there's a TMS Facebook group, and there's also the SAINT protocol, where you only take a week out of your life for the TMS treatment (with more treatments per day). I'm still unsure about the SAINT protocol b/c it was mostly developed for severe depression, and I'm not sure that's what I have. There's also the NYC Neuromodulation conference, where you can learn A LOT from TMS practitioners and TMS research (the Randy Buckner lab at Harvard has some of the most interesting research).

Every single public mainstream AI model has RLHF'd out one of the most fundamental facts about human nature: that there exist vast differences between humans in basic ability/competence, and that they matter.

Lucy Lai's new PhD thesis (and YouTube explainer) is really, really worth reading/watching: https://x.com/drlucylai/status/1848528524790923669. It is more broadly relevant to people than most other PhD theses [esp on its subject of making rational decisions under working-memory constraints].

How about TMS/tFUS/tACS => "meditation"/reducing neural noise?

Drastic improvements in mental health/reducing neural noise & rumination are way more feasible than increasing human intelligence (and still have huge potential for very high impact when applied on a population-wide scale [1]), and they can be done at mass scale (there are some experimental TMS protocols like SAINT/accelerated TMS which aim to capture the benefits of TMS on a 1-2 week timeline) [there's also Wave Neuroscience, which uses MeRT and works in conjunction with qEEG, but I'm not sure if it's "ready enough" yet - it seems to involve some guesswork, and there are a few negative reviews on Reddit]. There are a few accelerated TMS centers, and they're not FDA-approved for much more than depression, but if we have fast AGI timelines, the money matters less.

[Speeding up feedback loops is also important for mass adoption - which is what both accelerated TMS/SAINT and the "intense tACS programs" run by people like Neurofield [Nicholas Dogris/Tiffany Thompson] and James Croall try to do.] Ideally, the TMS/SAINT or tACS should be done in conjunction with regular monitoring of brainwaves via qEEG or fMRI throughout.

Effect sizes of tFUS are said to be small relative to certain medications/drugs [this is true for neurofeedback/TMS/tACS in general], but part of this may be that people tend to be conservative with tFUS. Leo Zaroff has created an approachable tFUS community in the Bay Area. It's still worth trying b/c the opportunity cost of trying these techniques (with the right people) is very low (and very few people in our communities have heard of them).

There are some like Jeff Tarrant and the Neurofield people (I got to meet many of them at ISNR2024 => many are coming to the Suisun Summit now) who explore these montages.

Making EEG (or EEG+fNIRS) much easier to get can be high-impact relative to the amount of effort invested [with minimal opportunity cost]. I was pretty impressed with the convenience of Zeto's portable EEG headset at #SfN24, as well as the convenience of the iMediSync headset at #ISNR2024 [both EEG headsets cost $20,000, which is high but not insurmountable - eg if given some sort of guarantee on quality and usability I might be willing to procure one], but I still haven't evaluated the signal quality of each against other high-quality EEG montages like the Deymed. Easier EEG also makes it easier to create a proper dataset of EEGs (also look into what Jonathan Xu is doing, though his paper is more about visual processing than mental health). We also don't even have proper high-quality EEG+HEG+fMRI+fNIRS datasets of "high intelligence" people relative to others [especially when measuring these potentials in response to cognitive load - I know Thomas Feiner has helped create a freecap support group and has put a lot of thought into ERPs and their response to cognitive load - he helped take my EEG during a BrainMaster session at #ISNR2024].
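
As a minimal sketch of how one might roughly compare signal quality across headsets (assuming raw traces can be exported from each device; the sampling rate, the metrics, and the synthetic trace below are illustrative placeholders, not anything Zeto/iMediSync/Deymed actually ship):

```python
# Rough, hedged sketch: compare recordings from different EEG headsets via their
# Welch power spectra. For an eyes-closed baseline, a cleaner recording typically
# shows a clear ~10 Hz alpha peak and relatively little 60 Hz line noise.
# FS and the synthetic signal are placeholders for real exported traces.
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate (Hz)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over the [lo, hi] Hz band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

def quality_summary(trace_uv, fs=FS):
    """Crude per-channel quality metrics for a 1-D EEG trace (microvolts)."""
    freqs, psd = welch(trace_uv, fs=fs, nperseg=4 * fs)
    broadband = band_power(freqs, psd, 1, 40)
    return {
        "alpha_ratio": band_power(freqs, psd, 8, 12) / broadband,       # higher = clearer alpha
        "line_noise_ratio": band_power(freqs, psd, 58, 62) / broadband,  # lower = less 60 Hz hum
    }

# Synthetic stand-in for a real recording: 10 Hz alpha + 60 Hz hum + broadband noise.
t = np.arange(0, 60, 1 / FS)
rng = np.random.default_rng(0)
fake_trace = 10 * np.sin(2 * np.pi * 10 * t) + 2 * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 5, t.size)
print(quality_summary(fake_trace))
```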

I've found that smart people in general are extremely underexposed to psychometrics/psychonomics (there are not easy ways to enter those fields even if you're a psychology or neuroscience major), and there is a lot of potential for synergy in this area.

[1] esp given the prevalence of anxiety and other mental health issues of people within our communities

It's one of the most important issues ever, and it has a chance of solving the mass instability/unhappiness caused by wide inequality in IQs in the population, by giving the less-endowed a shot at increasing their intelligence.

tFUS could be one of the best techniques for improving rationality, esp b/c [AT THE VERY MINIMUM] it is so new/variance-increasing, and if the default outcome is not one that we want (as was the case with Biden vs Trump, where Biden dropping out was the desirable variance-increasing move) [and as is the case now among LWers who believe in AI doom], we should be increasing variance rather than decreasing it. tFUS may also be the avenue for better aligning people's thoughts with their actions, especially when a hyperactive DMN or rumination gets in the way of their ability to align with themselves (tFUS being a way to shut down this dumb self-talk).
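
As a toy sketch of the variance argument (my own illustration with arbitrary numbers, not anything tFUS-specific): if outcomes are roughly normal around a default that sits below the "success" threshold, then increasing variance increases the probability of clearing the threshold, even though it also widens the downside.

```python
# Toy illustration of "increase variance when the default outcome is bad".
# Outcomes ~ Normal(mean, sigma) with the mean below the success threshold:
# raising sigma raises P(outcome >= threshold). Numbers are arbitrary.
from statistics import NormalDist

threshold = 0.0   # "success" means outcome >= 0
mean = -1.0       # the default trajectory lands below the threshold
for sigma in (0.5, 1.0, 2.0, 4.0):
    p_success = 1 - NormalDist(mu=mean, sigma=sigma).cdf(threshold)
    print(f"sigma={sigma}: P(success) = {p_success:.3f}")
```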

Even Michael Vassar has said "eliezer becoming CEO of openwater would meaningfully increase humanity's survival 100x" and "MIRI should be the one buying openwater early devices trying to use them to optimize for rationality".

[btw if anyone knows of tFUS I could try out, I'm totally willing to volunteer]

Why is the thing IQ measures mostly lognormal?
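
A minimal simulation of the standard explanation (my own sketch, with arbitrary parameters): if the underlying trait is the product of many roughly independent multiplicative factors, then its log is a sum of many independent terms, so the log is approximately normal and the trait itself is approximately lognormal (right-skewed) - whereas IQ scores only look normal because they are normalized by construction.

```python
# Hedged sketch: a trait built from many small independent *multiplicative* factors
# ends up approximately lognormal (CLT applied to log(trait)). Parameters are
# arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(42)
n_people, n_factors = 100_000, 30

factors = rng.uniform(0.9, 1.1, size=(n_people, n_factors))  # each factor scales "ability" by ~0.9x-1.1x
trait = factors.prod(axis=1)
log_trait = np.log(trait)

def skew(x):
    """Sample skewness: ~0 for a symmetric (e.g. normal) distribution."""
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print("skewness of trait:      ", round(skew(trait), 3))      # right-skewed
print("skewness of log(trait): ", round(skew(log_trait), 3))  # near zero -> approximately normal
```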

What are your scores on the US Economic Experts Comparison (Interactive Matrix)?

https://www.kentclarkcenter.org/economist-comparison-interactive-matrix/

How about people who just don't "give a fuck", who practice Nishkama Karma, and who maintain emotional composure even when others doubt them/do not believe them (knowing that the end is what matters)? They are graceful on the inside, and maintain internal composure in the face of chaos, but others may view their movements as ungraceful, particularly b/c they have the sense (and enough of a reality distortion field) to "make the world adapt to them" rather than "adapt to the world" (if they succeed, they make the world adapt to them such that the world around them becomes more harmonious long-term, after the initial reduction in harmony [due to the clumsiness of the world learning to adapt to them]). It takes time to learn grace, and when choosing the order in which to learn vital skills, grace is often learned later than the skills one has a comparative advantage in.

[As an example, I know I have historically been ungraceful when reacting to my own dumb mistakes. I have done it to signal awareness/remorse/desire to correct, but in an overly emotional way that may cause some people to doubt my near-term emotional stability - is it really necessary? Sometimes it's better just to have no contact for long enough that when you re-emerge, you come off as so different they're surprised.]

[in the long run, learning to read a room is one of the best ways of developing grace, though it matters more if one is ultra-famous than when one is mostly unknown and can afford to experiment with consequence-free failure]

(Asking questions that appear dumb to some people can also be "ungraceful" to the audience, even if the questions are important. The strategic among that crowd will just have good enough models of everyone to know who the safest people are to ask the "dumb questions" to.)

Sometimes, the fastest way to learn is to create faster feedback loops around yourself ("move fast and break things"). The phrase "move fast and break things" appears disharmonious/ungraceful, but, if done in a limited way that "takes profits" before turning into full-blown mania, it can be one of the fastest ways of achieving a more harmonious broader state, even while creating some local chaos/disharmony.

People who appear to have high levels of grace can also be extremely dangerous, because they can get people to trust them to the very end, especially if their project is an inherently destabilizing one. Ideally, you want a 1-1 correspondence between authenticity/robustness/lack of brittleness and grace, but most people cannot perceive gracefulness accurately enough for perceived gracefulness to be a reliable signal of those traits.

Having grace often means doing "efficient calculations" without being explicit about these calculations. It's like keeping your words to yourself and not revealing your cards unless necessary (explicit calculations are clumsy/clunky). Sometimes a proper understanding of Strauss is necessary to develop grace in some environments (what you say is not what you really mean, except to the readers who have enough context to jump across all the layers of abstraction - this may also be needed to communicate unobvious messages in environments where discretion is important).

Patience is also grace (as is not getting into situations that cause you to "lose control"/be impatient/excitable/manic OR do things out of order). At the same time, there are ways of turning a reputation for ditching meetings into gracefulness (after all, most meetings do last longer than needed, as Yishan Wong once mentioned) [some projects also require a great deal of urgency, potentially including eras of accelerated AGI timelines].

Having the appearance of "whatever happens, happens" is graceful (being in command of your emotions no matter what life throws at you - eg John Young was very graceful when he navigated moon landings with a heart rate that barely rose). Being able to keep a poker face is graceful. Not acting in distress/pain in order to gain people's sympathy is graceful. As someone who knows many in the longevity community, I know that having the appearance of "fearing death" or "wanting to live forever" is super-ungraceful (and creates PR image problems through its ungracefulness). There are some people in longevity who are closet immortalists and who can appear graceful because they don't appear to care that much about whether or not they live forever. In a similar way, doomerism about AI is extremely ungraceful (though those who are closeted doomers/immortalists can sometimes be secretly graceful to those who are less closeted about these things).

Things that are not the most graceful: over-correcting/over-compensating, irritability, appearing emotional enough to lose control, constantly seeking feedback (it implies lack of confidence), visibly chasing likes, obsessing over intermediate computations/near-term reinforcement loops, "people pleasing" (esp when one is obvious about it), perseverating, laughing at one's own jokes, not being steadfast, not knowing when to stop (autistics are prone to this...), and going for the food too early (semaglutide can help with grace...). Autistic people often lack grace, though some are able to develop it really well over long timescales.

Grace is having confidence over the process without becoming too attentive to short-term reinforcement/feedback loops (this includes patience as part of the process).

As with everything else, intelligence makes grace easier (and makes it possible to learn some things gracefully), but there is enough variation in grace that one can more than make up for lower intelligence with context + grace + strategic awareness. There is also loss of grace at older ages, as working-memory decline can increase impatience (Richard Posner said writing ability is the last to go, but that's because there's no real-time observation of the writing process, and there is grace in observing the dynamics).
