Extremely neophilic. Much of my content is on Quora (I was a Quora celebrity). I am also on forum.quantifiedself.com (people do not realize how alignment-relevant this is), rapamycin.news/latest, and crsociety.org.
...People say the craziest things about me, because I'm a peculiar star...
I care about neuroscience (esp human intelligence enhancement) and reducing genetic inequality. The point of transhumanism is to transcend genetic limitations - to reduce the fraction of variance in outcomes explained by genetics. I know loads of people in self-experimentation communities (people in our communities need to be less risk-averse if we are to make any difference in our probability of "making it"). When we are right at "the precipice", traditionalism cannot win (I am probably the least traditionalist person ever). I get along well with the unattached.
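To pin down the "fraction of variance explained by genetics" phrase: that's just heritability in the standard quantitative-genetics sense (my gloss, nothing beyond the textbook decomposition):

```latex
% Phenotypic variance splits into genetic and environmental components;
% "fraction of variance explained by genetics" is broad-sense heritability H^2.
\[
\mathrm{Var}(P) = \mathrm{Var}(G) + \mathrm{Var}(E),
\qquad
H^2 = \frac{\mathrm{Var}(G)}{\mathrm{Var}(P)}
\]
% Interventions whose effects don't track genotype add variance to Var(E)
% (or flatten G-linked differences), shrinking H^2.
```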
Slowing human compute loss - by reducing microplastics/pollution/noise/rumination/the rate of aging - is alignment-relevant (insofar as the most tractable form of "human enhancement" is to slow decline with age + make human thinking clearer). As is tFUS. I aim to do all I can to make biology keep up with technology. Reconfiguring reward functions to reward "wholesome/growthful/novel" tasks over "the past" [you are aged when you think too much about the past].
Alignment through integrating people with diverse skillsets (including those who are not math/CS geniuses) and integrating all their compute + not making them waste time/attention on "dumb things" + making people smarter/more neuroplastic (this is a hard problem, but 40Hz-tACS [1] might help a little).
Unschooling is also alignment-relevant (most value is destroyed in deceptive alignment, and school breeds deceptive alignment). As is inverting "things that feel unfun".
Chaotic people may depend more on a sense of virtue than others, but it takes a lot to get people to trust a group/make themselves authentic when school has stripped out much of their authenticity. Some people don't take much emotional damage from it (I've noticed this in several who dropped out of school for alignment), but others take far more, and this is a much easier problem to solve than directly increasing human intelligence.
I like Dionysians. However, I had to cut back after accidentally destroying an opportunity (a friend having egged me on into being manic...)
Breadth/context produces unique compute value of its own
https://twitter.com/InquilineKea
facebook.com/simfish
I have a Twitter alt.
I trigger exponential growth trajectories in some. I helped seed the original Ivy League psychedelics communities and am very good friends with Qualia Research Institute people (though I cannot try psychedelics much now).
Main objectives: not getting sad, not getting worked up over dumb things, not making my life harder than it is now.
I really like https://www.lesswrong.com/users/bhauth. Zvi is smart too https://www.lesswrong.com/users/zvi?from=post_header
[1] there are negative examples too
Can't you theoretically use both CellPainting assays and light-sheet microscopy?
I mean, I did look at CellPainting assays a short while ago and I was still struck by how little control one had over the process, and how it isn't great for many kinds of mechanistic interpretability. I know there's a Brazilian team looking at using CellPainting for sphere-based silver-particle nanoplastics, but there are still many concrete variables, like intrinsic oxidative stress, that you can't necessarily get from CellPainting alone.
CellPainting can be used for toxicological predictions of organophosphate toxicity (predicting that they're more toxic than many other classes of compounds), but the toxicological assays used didn't capture much nuance, especially the kind that's relevant to the physiological concentrations people are normally exposed to. I remember ketoconazole scored very highly on toxicity, but what does that say about physiological doses much smaller than the ones used for CellPainting?
Also, the cell lines were all cancer cell lines (osteosarcoma lines), which gives little predictive power for neurotoxicity or a compound's ability to disrupt neuronal signalling.
Still, the CellPainting support ecosystem is extremely impressive, even though it doesn't produce the Janelia-standard petabyte-scale datasets used for light-sheet microscopy... [cf https://www.cytodata.org/symposia/2024/ ]
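For concreteness, here's a minimal sketch of the kind of pipeline I mean by "toxicological predictions from CellPainting": train a classifier on well-level morphological profiles against a toxicity label. The file name, column prefixes, and label column are hypothetical placeholders, not any particular published dataset.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical CSV of CellProfiler-style well-level profiles:
# one row per well, morphology feature columns, plus a binary
# "toxic" label derived from a separate cytotoxicity assay.
profiles = pd.read_csv("cellpainting_profiles.csv")

feature_cols = [c for c in profiles.columns
                if c.startswith(("Cells_", "Nuclei_", "Cytoplasm_"))]
X = profiles[feature_cols].values
y = profiles["toxic"].values

# A simple baseline classifier; note this says nothing about physiological
# dose, and any cell-line-specific confounds (e.g. an osteosarcoma-only
# panel) are baked into the labels.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUROC:", scores.mean())
```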
https://markovbio.github.io/biomedical-progress/
FWIW, some of the most impressive near-term work might be whatever the https://www.abugootlab.org/ lab is going to do soon (large-scale perturb-seq combined with optical pooling to do readouts of genetic perturbations...)
How do they figure out what waveforms to use and at what frequencies to apply to your brain? The ideal waveforms/frequencies depend a lot on your brainwaves and brain configuration.
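My rough understanding is that qEEG-guided protocols pick the stimulation frequency from something like your individual alpha peak; here's a minimal sketch of how that peak could be estimated from resting EEG (the sampling rate and data array are hypothetical placeholders, not any clinic's actual pipeline):

```python
import numpy as np
from scipy.signal import welch

def individual_alpha_frequency(eeg, fs, band=(8.0, 13.0)):
    """Estimate the peak alpha frequency from one resting-state EEG channel.

    eeg: 1-D array of samples (e.g. a posterior electrode)
    fs:  sampling rate in Hz
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))  # ~0.25 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(psd[mask])]

# Hypothetical usage: 5 minutes of eyes-closed EEG at 256 Hz
fs = 256
eeg = np.random.randn(5 * 60 * fs)  # placeholder for real data
print("estimated alpha peak:", individual_alpha_frequency(eeg, fs), "Hz")
```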
I've heard fMRI-guided TMS is the best kind, but many don't use it [and maybe it isn't necessary?]
Is anyone familiar with MeRT? It's what Wave Neuroscience uses, and it's supposedly effective for more than just depression (there's also ASD and ADHD, where the effectiveness is far less certain, but where some people can see unusually large benefits). But response rates seem highly inconsistent (a clinic will say that "90% of people are responsive", but there is substantial reporting bias and every clinic seems to say this [with no way to verify], so I don't believe these figures), and MeRT is still highly experimental, so it's not covered by insurance. Some people with ASD are desperate enough that they try it. TMS is probably useful for WAY more than severe treatment-resistant depression, but that's still the only indication insurance companies are willing to cover.
I got my brainwaves scanned for MeRT and found that I have too much slow-wave activity in the ACC that could be sped up (though I'm still unsure about MeRT's effectiveness; they don't seem to give you your data to help you understand your brain better in the way the NeuroField tACS people [or those at the ISNR conference] do)...
BTW there's a TMS Facebook group, and there's also the SAINT protocol, where you only take a week out of your life for TMS treatment by doing more treatments per day. I'm still unsure about the SAINT protocol b/c it was developed mostly for severe depression and I'm not sure that's what I have. There's also the NYC Neuromodulation conference, where you can learn A LOT from TMS practitioners and TMS research (the Randy Buckner lab at Harvard has some of the most interesting research).
Lucy Lai's new PhD thesis (and YouTube explainer) is really, really worth reading/watching: https://x.com/drlucylai/status/1848528524790923669 - it's more broadly relevant to people than most other PhD theses [esp its core subject: making rational decisions under constraints of working memory].
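As I understand the thesis's framing (policy compression under a working-memory capacity limit; this is my paraphrase, not her exact notation): the agent maximizes reward subject to a bound on how much the action depends on the state, measured as mutual information:

```latex
% My paraphrase of the resource-rational / policy-compression objective:
% maximize expected reward subject to a capacity limit C on state-action
% mutual information; the Lagrangian form (for a matching beta >= 0)
% trades the two off directly.
\[
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[r(s,a)\right]
\quad \text{s.t.} \quad I(S;A) \le C
\qquad \Longleftrightarrow \qquad
\max_{\pi}\ \mathbb{E}_{\pi}\!\left[r(s,a)\right] - \tfrac{1}{\beta}\, I(S;A)
\]
```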
How about TMS/tFUS/tACS => "meditation"/reducing neural noise?
Drastic improvements in mental health/reductions in neural noise & rumination are far more feasible than increasing human intelligence (and still have huge potential for very high impact when applied at population scale [1]), and they can be done at mass scale (there are experimental TMS protocols like SAINT/accelerated TMS which aim to capture the benefits of TMS on a 1-2 week timeline) [there's also Wave Neuroscience, which uses MeRT and works in conjunction with qEEG, but I'm not sure it's "ready enough" yet - it seems to involve some guesswork and there are a few negative reviews on reddit]. There are a few accelerated-TMS centers, and TMS isn't FDA-approved for much more than depression, but if we have fast AGI timelines, the money matters less.
[Speeding up feedback loops also matters for mass adoption - that's what both accelerated TMS/SAINT and the "intense tACS programs" run by people like the NeuroField folks [Nicholas Dogris/Tiffany Thompson] and James Croall try to do]. Ideally, the TMS/SAINT or tACS should be done in conjunction with regular monitoring of brainwaves via qEEG or fMRI throughout.
Effect sizes of tFUS are said to be small relative to certain medications/drugs [this is true of neurofeedback/TMS/tACS in general], but part of this may be that people tend to be conservative with tFUS. Leo Zaroff has created an approachable tFUS community in the Bay Area. It's still worth trying b/c the opportunity cost of trying it (with the right people) is very low (and very few people in our communities have heard of it).
There are some, like Jeff Tarrant and the NeuroField people (I got to meet many of them at ISNR2024 => many are coming to the Suisun Summit now), who explore these montages.
Making EEG (or EEG+fNIRS) much easier to get could be high-impact relative to the effort invested [with minimal opportunity cost]. I was pretty impressed with the convenience of Zeto's portable EEG headset at #SfN24, as well as the convenience of the iMediSync at #ISNR2024 [both EEG headsets cost $20,000, which is high but not insurmountable - eg given some sort of guarantee on quality and usability I might be willing to procure one], but I still haven't evaluated their signal quality against other high-quality EEG montages like the Deymed. It also makes it easier to build a proper large-scale dataset of EEGs (also look into what Jonathan Xu is doing, though his paper is more about visual processing than mental health). We also don't even have proper high-quality EEG+HEG+fMRI+fNIRS datasets of "high intelligence" people relative to others [especially when measuring these potentials in response to cognitive load - I know Thomas Feiner has helped create a freecap support group and has thought a lot about ERPs and their response to cognitive load - he helped take my EEG during a BrainMaster session at #ISNR2024].
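On the ERP/cognitive-load point: the basic computation is just epoching around event markers and averaging. A minimal sketch with MNE-Python (the file name and condition codes are hypothetical placeholders, not any specific dataset):

```python
import mne

# Hypothetical task EEG recording with event triggers.
raw = mne.io.read_raw_fif("subject01_task_raw.fif", preload=True)
raw.filter(l_freq=0.1, h_freq=40.0)  # basic band-pass for ERP work

events = mne.find_events(raw)  # assumes a stimulus trigger channel exists
event_id = {"low_load": 1, "high_load": 2}  # hypothetical condition codes

epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Average within condition to get ERPs; one could then compare e.g.
# late positive component amplitude between low- and high-load trials.
erp_low = epochs["low_load"].average()
erp_high = epochs["high_load"].average()
print(erp_low, erp_high)
```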
I've found that smart people in general are extremely underexposed to psychometrics/psychonomics (there aren't easy ways to enter those fields even if you're a psychology or neuroscience major), and there is a lot of potential for synergy in this area.
[1] esp given the prevalence of anxiety and other mental health issues among people in our communities
It's one of the most important issues ever, and it has a chance of solving the mass instability/unhappiness caused by wide inequality in IQ across the population, by giving the less-endowed a shot at increasing their intelligence.
tFUS could be one of the best techniques for improving rationality, esp b/c [AT THE VERY MINIMUM] it is so new/variance-increasing, and if the default outcome is not one that we want (as was the case with Biden vs Trump, where Biden dropping out was the desirable variance-increasing move) [and as is the case now among LWers who believe in AI doom], we should be increasing variance rather than decreasing it. tFUS may be the avenue for better aligning people's thoughts with their actions, especially when a hyperactive DMN or rumination gets in the way of their ability to align with themselves (tFUS being a way to shut down this dumb self-talk).
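The variance argument in miniature (my formalization, not from any of the sources above): if outcomes need to clear some bar T and the expected outcome sits below it, a mean-preserving increase in spread raises the chance of clearing it:

```latex
% For a normal outcome X ~ N(mu, sigma^2) and a success threshold T > mu:
% the success probability is increasing in sigma, so when the default
% (mean) outcome is bad, adding variance raises P(good outcome).
\[
\Pr(X > T) = 1 - \Phi\!\left(\frac{T-\mu}{\sigma}\right),
\qquad
\frac{\partial}{\partial \sigma}\Pr(X > T) > 0 \quad \text{when } T > \mu .
\]
```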
Even Michael Vassar has said "eliezer becoming CEO of openwater would meaningfully increase humanity's survival 100x" and "MIRI should be the one buying openwater early devices trying to use them to optimize for rationality"
[btw if anyone knows of tFUS I could try out, I'm totally willing to volunteer]
Why is the thing IQ measures mostly lognormal?
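One standard answer is that outcomes built from many multiplicative factors end up lognormal (CLT on the logs). A quick simulation of that intuition (purely illustrative numbers, not a model of IQ itself):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Each simulated "ability" is a product of many small independent
# positive factors (genes, developmental events, practice multipliers, ...).
n_people, n_factors = 100_000, 50
factors = rng.uniform(0.9, 1.1, size=(n_people, n_factors))
ability = factors.prod(axis=1)

# The log of a product is a sum of logs, so log(ability) is ~normal by the
# CLT, which means ability itself is ~lognormal: right-skewed, heavy tail.
print("skew of ability:     ", skew(ability))          # clearly positive
print("skew of log(ability):", skew(np.log(ability)))  # near zero
```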
Could it be a good idea to add a file-uploading feature to LessWrong (eg for PDFs or certain images/media)? It could help against link rot, for example (and make posts from long ago last longer - I say this as someone who edits old posts to make them timeless).