Very interesting of you to think of it that way. It turns out it's very much in line with recent results from computational psychiatry. Basically, in depression we can study and distinguish how much the lack of activity is due to a "lack of resources to act" vs. an "increased cost of action". Both look clinically about the same, but the underlying biochemical pathways differ, so it's (IMHO) a promising approach to shortening the time it takes a doctor to find the appropriate treatment for a given patient.
If that's something you already knew, sorry; I'm short on time and wanted to get this out :)
Just a detail: weren't retinoids discovered while looking for cancer treatments? I thought that was the origin story of isotretinoin.
My personal solution to this is to mostly use Anki for everything and anything.
my knowledge is limited to a couple of related keywords ("self-attention", "encoder"/"decoder") with no real gears-level understanding of anything.
feeling, making it all the easier to get back to reading. In fact I hate the number 2 feeling so much that it was a huge motivation to really master Anki (a 1300-day streak or so, with no sign of regret whatsoever).
I think very cheap carabiners are extremely fragile, especially for repeated use. I've seen a failure mode where the moving gate just opens the wrong way by going around the fixed part. Keep that in mind when choosing which carabiner to use.
It might be better to keep using ring keyholders but have one decently strong carabiner to hold the rings together, instead of what you did (tiny carabiners that each hold onto a ring), no?
I don't really like any of those ideas. I think it's really interesting that "aware" is so related, though. I think the best bet would be based on "software". So something like deepsoftware, nextsoftware, nextgenerationsoftware, enhancedsoftware, etc.
For anyone trying to keep up with AI for filmmaking, I recommend the YouTube channel Curious Refuge: https://www.youtube.com/channel/UClnFtyUEaxQOCd1s5NKYGFA
Also, velcro comes in many strengths and sizes. I find heavy-duty velcro to be frequently underused in such DIY projects.
Medical student here. I get that a lot; it's called interference, at least in the SuperMemo sphere.
My personal solution to this is to add more cards. For example: "Is friendA born before or after friendB?", "What are the birthdays of friendA and friendB?"
The latter question is a crucial example, actually: it makes you practice recalling the distinction between the interfering items instead of the raw recall of each datum.
Also, as others suggested here, mnemonics help a ton: for example, is there an intuitive reason you can link to friendB having an odd birthday ...
Based on?
The Wikipedia page explicitly states that they don't have the same binding profile, and also Ockham's razor: it seems unlikely that two different drugs with two different binding profiles would perform similarly on ADHD.
"stronger per weight impact on dopamine"? That's not how drugs or biology work. Every neurotransmitter and hormone has several different receptors that different drugs affect in different ways.
I'm aware. I know my sentence did not sound professional, but that was on purpose. I think it's true nonetheless: using a more specific sent...
Sure!
To me the only relevant passage seems to be this one:
Ethylphenidate is more selective to the dopamine transporter (DAT) than methylphenidate, having approximately the same efficacy as the parent compound,[6] but has significantly less activity on the norepinephrine transporter (NET).[8] Its dopaminergic pharmacodynamic profile is nearly identical to methylphenidate, and is primarily responsible for its euphoric and reinforcing effects.
You said:
...methylphenidate is obligatory for some kids while ethylphenidate is illegal, and they're basically the
For the curious: famous researcher David Nutt is working on alcohol replacements too. He's part of Alcarelle, now called GABA Labs, and IIRC they're betting on benzodiazepine derivatives.
for example, methylphenidate is obligatory for some kids while ethylphenidate is illegal, and they're basically the same but ethylphenidate is probably slightly better.
This sounds surprising to me. Can you elaborate on the source and thought process leading you to this?
There's a great YouTuber called The Thought Emporium who did genetic engineering on himself. I highly recommend checking him out:
https://www.youtube.com/watch?v=J3FcbFqSoQY
And the 2-year follow-up: https://www.youtube.com/watch?v=aoczYXJeMY4
The tl;dr is that he created a virus and then ate it to make his digestive system carry more of the gene that makes lactase, as he was very lactose intolerant. Two years later the effects are starting to wear off as cells get replaced, but it seems to have had a very high ROI.
I bought a cheap watch, the TWatch 2020, which has wifi and a microphone. The goal is to have an easily accessible langchain agent connected to my LocalAI.
I'm a bit stuck for now because of a driver written in C, while I mostly know Python, but I'm getting there.
You meant speech-to-text instead of text-to-speech. They just added the latter recently, but we don't know the model behind it AFAIK.
Nah. Although you can see patients having a "binge" that you then realize was just one Big Mac, indicating something closer to anorexia.
The suicide rate is about 2% per 10 years, which is insanely high. Also, it is not uncommon for people with bulimia to have (sometimes severe) deficiencies regardless of their weight.
To add some perspective: I suspect some people don't really understand how large the caloric intake can be in bulimia. I routinely see patients eating upwards of 50,000 calories per day (I've even seen 100,000 a few times) when crises occur. Things like eating several large peanut butter jars in a row, etc.
- The only difference between encoder and decoder transformers is the attention mask. In an encoder, tokens can attend to all positions, including future ones (acausal), while in a decoder, tokens cannot attend to future tokens (causal attention). The term "decoder" is used because decoders can be used to generate text, while encoders cannot (since you can only run an encoder if you know the full input already).
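The mask distinction above can be sketched in a few lines of numpy (a toy illustration with random scores, not a real transformer):

```python
import numpy as np

T = 4  # sequence length
scores = np.random.randn(T, T)  # raw attention scores, row i = query token i

# Encoder ("acausal"): every token may attend to every position.
encoder_mask = np.ones((T, T), dtype=bool)

# Decoder ("causal"): token i may only attend to positions j <= i.
decoder_mask = np.tril(np.ones((T, T), dtype=bool))

def masked_softmax(scores, mask):
    # Forbidden positions get -inf so they receive exactly zero weight.
    s = np.where(mask, scores, -np.inf)
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

enc = masked_softmax(scores, encoder_mask)
dec = masked_softmax(scores, decoder_mask)

# In the decoder, all attention weights above the diagonal are zero.
assert np.allclose(np.triu(dec, k=1), 0.0)
```

Everything else (the layers, the weights) is the same; only which positions are visible changes.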
This was very helpful to me. Thank you.
Hi,
I had a question the other day and figured I'd post it here. Do we have any idea what would happen if we used the steering vector of the input itself?
For example: take sentenceA, pass it through the LLM, store its embedding; then take sentenceA again and pass it through the LLM while adding that embedding.
As is, this would simply double the magnitude of the hidden vector, but I'm wondering what would happen if we instead took the embedding, say, after the 5th token of sentenceA and added it at the 3rd token.
Similarly, would anything interesting happen with subtraction? With adding a random orthogonal vector?
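To make sure I'm describing the operation clearly, here is a toy numpy sketch of what I mean, with a random matrix standing in for the residual stream of a real LLM (all names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 16                           # tokens, hidden size
hidden = rng.standard_normal((T, d))   # pass 1: activations for sentenceA

# Store the activation after the 5th token (index 4)...
steering_vec = hidden[4].copy()

# ...then, on a second pass over the same sentence, add it at the 3rd token.
hidden_add = hidden.copy()
hidden_add[2] += steering_vec

# Variants from the question: subtraction, or a random orthogonal vector.
hidden_sub = hidden.copy()
hidden_sub[2] -= steering_vec

ortho = rng.standard_normal(d)
ortho -= (ortho @ steering_vec) / (steering_vec @ steering_vec) * steering_vec
assert abs(ortho @ steering_vec) < 1e-9  # orthogonal by construction
```

In a real model the modified activations would then be fed through the remaining layers, which is the part whose behavior I'm curious about.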
Thanks
Personally, I come to (and organize) meetups to make my brain sweat, and I actively avoid activities that leave me unchanged (I won't change much during a play, while I grow a lot after each confrontation or discussion). But to each their own, of course!
FWIW, I tend to see a good part of ADHD medication's effect as changing the trade-off between exploration and exploitation: ADHD being an excess of exploration, the meds nudging towards an excess of exploitation. If you struggle with a perceived excess of exploration, you might ask yourself whether you would be helped by taking those medications, or whether you might fit the diagnostic criteria.
Related: taking too much of those psychostimulants usually gives an extreme type of exploitation, often called "tunnel vision", which can be detrimental as it feels like being a robot do...
That sounds like something easy to do with langchain btw
edit: I can easily make the prompt more or less compressed, just ask. The present example is pretty compressed, but I can make a more verbose one.
Not really what you're asking, but:
I'm coincidentally working on the side on a DIY summarizer to manage my inputs. I summarized a bit of the beginning of part 1. If you think it has any value, I can run the whole thing.
Note that '- ---' indicates the switch to a new chunk of text by the LLM.
This is formatted as Logseq / Obsidian markdown.
- Carl Shulman (Pt 1) - Intelligence Explosion, Primate Evolution,
... That would most certainly cause a bad trip at night, as taking uppers to stay awake for long will also increase anxiety, which will not be helped by the residual hallucinations from the earlier hallucinogen.
In my experience, a good deal of bad trips are actually caused by being sleep deprived.
I can't check right now, but IIRC there is a marked neurotoxicity caused by too much cholinergic activity during mania, leading to quicker-than-average dementia onset, proportional to time spent in mania. This might be controversial among specialists, and might not apply to hypomania, but it could be a useful prior nonetheless. I recommend the website Elicit to quickly reduce uncertainty on this question.
Edit: also related to whether putting everyone on at least a low Adderall dose might be a good thing.
edit: rereading your above comments, I see that I should have made clear that I was thinking more about learned architectures, in which case we apparently agree, as I meant what you said in https://www.lesswrong.com/posts/ftEvHLAXia8Cm9W5a/data-and-tokens-a-30-year-old-human-trains-on?commentId=4QtpAo3XXsbeWt4NC
Thank you for taking the time.
I agree that terminology is probably the culprit here. It's entirely my fault: I was using the word "pretraining" loosely and meant something more like the hyperparameters (number of layers, inputs, outputs, a...
If all humans have about as many neurons in the gyrus that is hardwired to receive input from the eyes, it seems safe to assume that the vast majority of humans will end up with this gyrus extracting the same features.
Hence my view is that evolution, by imposing a few hardwired connections and gyri geometries, introduces an enormous bias in the space of possible networks, which is similar to what pretraining does.
In essence, evolution gives us a foundation model that we fine-tune with our own experiences.
What do you think? Does that make sense?
I think gyri are mostly hard-coded by evolution, and given how strongly they restrict the computation space that a cortical area can learn, one could consider the cortex to be heavily pretrained by evolution.
Studying how gyri geometry correlates with psychiatric conditions is an ongoing hot topic.
...b. Saying "no" to a certain activity means saying "yes" to myself and our relationship. When you propose something and I say "no" to it, I'm simultaneously saying "yes" to our relationship. Because when I say "yes" while I'm actually a "no", I slowly accumulate resentment that poisons the connection between us without you being able to do anything about it. And, you will inevitably sense when I said "yes" to something but my heart is not in it. Having been on both sides of this, I know how awkward that feels. So, the moment when I start to really feel com
FYI, radiology is actually not mostly looking at pictures; a lot of it is image-guided surgery (for example embolization), which is significantly harder to automate.
Same for family doctors: it's not just following guidelines and renewing prescriptions; a good part is physical examination.
I agree that AI can do a lot of what happens in medicine, though.
Thanks! Regarding the survey, some people might be having issues like me because they lack a Google account or Google device. If you could consider using other form providers (like the one supplied by Nextcloud, Framaforms, etc.), that might help!
Sorry for being that guy, and thanks for the summaries :)
Question: what do you think of the Chinese officials' opinion on easily accessible LLMs for Chinese citizens? As long as alignment is unsolved, I can imagine China being extremely leery of how citizens could somehow be exposed to ideas that go against official propaganda (human rights, genocide, etc.).
But China can't accept being left out of this race either, is my guess.
So in the end, China is incentivized to solve alignment, or at least to slow down its progress.
Have you thought about any of this? I'm extremely curious about anyone's opinion on the matter.
I strongly disagree. I think most people here think that AGI will be created eventually, and we have to make sure it does not wipe us all out. Not everything is an infohazard, and exchanging ideas is important to coordinate on making it safe.
What do you think?
The goal of this site is not to create AGI.
Pinging @stevenbyrnes: do you agree with me that instead of mapping those proto-AGIs to a queue of instructions, it would be best to have the AGI be made from a bunch of brain structures with corresponding prompts? For example, an "amygdala" would be in charge of returning an int between 0 and 100 indicating fear level, a "hippocampus" would be in charge of storing and retrieving memories, etc. I guess the thalamus would be consciousness, and the cortex would process some abstract queries.
We could also use active inference and Bayesian updating to model current theorie...
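A minimal sketch of the "one prompt per brain structure" idea, with the LLM call stubbed out (the module names, prompts, and stub are my invention, not an actual implementation):

```python
def stub_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. via an API or langchain).
    # A real model would follow the instruction; here we hard-code replies.
    return "42" if "fear" in prompt else "stored"

class BrainModule:
    """One prompted role per brain structure."""
    def __init__(self, name: str, instruction: str):
        self.name = name
        self.instruction = instruction

    def __call__(self, observation: str) -> str:
        prompt = f"You are the {self.name}. {self.instruction}\n{observation}"
        return stub_llm(prompt)

amygdala = BrainModule(
    "amygdala", "Return a fear level as an int between 0 and 100.")
hippocampus = BrainModule(
    "hippocampus", "Store or retrieve memories relevant to the input.")

fear = int(amygdala("A loud noise behind you."))
assert 0 <= fear <= 100
```

A "thalamus" module could then route observations between these modules, which is roughly the consciousness role suggested above.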
I don't understand what you mean by "inaccessible"
I don't like how this sounds, but: I think you are missing a lot of biological facts about consciousness, and we're not as clueless as you seem to think. I definitely recommend reading the book "Consciousness and the Brain" by Stanislas Dehaene, which is basically a collection of facts on the topic.
Don't you agree that certain brain lesions definitely make you not conscious? I think identifying which regions are indispensable is important.
If I had to guess, humans can be conscious without a cerebellum but not without basal ganglia, FWIW.
Let's put it like this: if you had hours of interaction with this individual, you'd have no reason to doubt they're conscious. I indeed don't know if they have the exact same sense of consciousness as someone with a cerebellum, but this is also true for everyone else: I don't know if you and I have the same conscious experience either.
Here's the full prompt:
>>> user
Write a short story in 2 paragraphs title "The Peril of the Great Leaks" describing how in less than 3 years hacking capabilities will be so advanced thanks to large language models that many many databases will get leaked and peple will unwillingly have their information easily accessible to anyone. Things like facebook, github, porn accounts, etc. End by talking about how those dumps will be all the more easy to parse that LLM will be easy to use.
This will be posted on the LessWrong.com forum, make it engaging
... Yes we do, it's in the sources.
I use RSS a lot, add some articles to read in Wallabag, annotate them there, then create Anki cards from the annotations.
On mobile, but FYI langchain implements some kind of memory.
Also, this other post might interest you. It's about asking GPT to decide when to call a memory module to store data : https://www.lesswrong.com/posts/bfsDSY3aakhDzS9DZ/instantiating-an-agent-with-gpt-4-and-text-davinci-003
Two links related to RWKV, to learn more:
Sharing my setup too:
Personally, I'm just self-hosting a bunch of stuff: