All of FinalFormal2's Comments + Replies

I recommend Algorithms to Live By

That's definitely a risk. There are a lot of perspectives you could take about it, but probably if that's too disagreeable, this isn't a coaching structure that would work for you.

Very curious, what do you think the underlying skills are that allow some people to be able to do this? This sounds incredibly cool, and very closely related to what I want to become in the world.

2Matt Goldenberg
I have a bunch of material on this that I cut out from my current book, which will probably become its own book.

From the transformational-tools side, you can check out the start of the sequence I made here on practical memory reconsolidation. I think if you really GET my reconsolidation hierarchy and the 3 tools for dealing with resistance, that can get you quite far in terms of understanding how to create these transformations.

Then there's the coaching side: your own demeanor and working with clients in a way that facilitates walking through this transformation. For this, I think if you really get the skill of "Holding space" (which I broke down in a very technical way here: https://x.com/mattgoldenberg/status/1561380884787253248), that's the 80/20 of coaching. About half of this is practicing the skills as I outlined them, and the other half is working through your own emotional blocks to love, empathy, and presence.

Finally, to ensure consistency and longevity of the change throughout a person's life, I created the LIFE method framework, which is a way to make sure you do all the cleanup needed in a shift to make it really stick around and have the impact. That can be found here: https://x.com/mattgoldenberg/status/1558225184288411649?t=brPU7MT-b_3UFVCacxDVuQ&s=19

How would you recommend learning how to get rid of emotional blocks?

4Gordon Seidoh Worley
Memory reconsolidation

E = MC^2 + AI

Synchronicity- I was literally just thinking about this concept.

Variety isn't the spice of life so much as it is a key micronutrient. At least for me.

4Cleo Scrolls
https://worrydream.com/refs/Hamming_1997_-_The_Art_of_Doing_Science_and_Engineering.pdf#page=16 Found this on gwern.net/on-really-trying

I'd be interested in reading much more about this. Energy, and akrasia as it's popularly called here, continue to be my biggest life challenges. A high-fiber diet seems to help, and so does high novelty.

That makes a lot of sense- this is definitely the sort of thing I was looking for, thanks so much!

2ChristianKl
One aspect I have forgotten that might or might not be important (we don't understand it well) is that in addition to bacteria species, phages also play a role and get transferred via fecal transplant. A newly introduced phage might reduce the numbers of the bacteria it targets.
2Chipmonk
haha i didn't think it would resonate on lesswrong

Is your friend still on the protocol?

What I'm really looking for is fixing the microbiome in a way that means I won't have to take a pill forever to get the benefits.

2RHollerith
The document I linked to contains advice that does not entail buying any products.
3RHollerith
Yes, she is still taking products from the company and following advice in the company's publications (e.g., eating jicama, probably other things) so it has been 6 or 7 years for her. Note that she is in her early 80s, so . . .

It's kind of nice as a very soft introduction to the series or idea. A nice easy early win can give people confidence and whet their appetite to do more.

I've been interested in learning and playing Figgie for a while. Unfortunately, when I tried the online platform I wasn't able to find any online games. Very enthused to learn there's an Android option now; I'll be trying that out.

Your comparison of poker and Figgie very much reminded me of Daniel Coyle's comparison of football and futsal, to which he attributed the disproportionate number of professional Brazilian footballers.

TL;DR: futsal is a sort of indoor soccer favored in Brazil, with a smaller, heavier ball, a smaller field, and fewer players. Fewer p...

2MathiasKB
If someone wants to set up a Figgie group to play, I'd love to join
2rossry
I'd also be happy to log on and play Figgie and/or post-match discussion sometime, if someone else wants to coordinate. I realistically won't be up for organizing a time, given what else competes for my cycles right now, but I would enthusiastically support the effort and show up if I can make it.
2rossry
You know, I had read the football / futsal thesis way back when I was doing curriculum design at Jane Street, though it had gotten buried in my mind somewhere. Thanks for bringing it back up! If I'm being honest, it smells like something that doesn't literally replicate, but it has a plausible-enough kernel of truth that it's worth taking seriously even if it's not literally true of youth in Brazil. And I do take it seriously, whether consciously or not, in my own philosophy of pedagogical game design.

I think that's a good idea, if we put this together how much do you think would be a reasonable rent price?

Lol just the last few days I was running through Leetcode's SQL 50 problems to refresh myself. They're some good, fun puzzles.

I'll look into R and basic statistical methods as well.

This is a very interesting topic to me- but unfortunately I think I'm finding the example topic to be a barrier. I don't know enough about math or transformers for the examples to make real sense to me and connect to the abstract idea of how to make effective flashcards to build intuition.

1Jacob G-W
I'm sorry about that. Are there any topics that you would like to see me do this more with? I'm thinking of doing a video where I do this with a topic to show my process. Maybe something like history that everyone could understand? Can you suggest some more?

That sounds like a pretty good basic method- I do have some (minimal) programming experience, but I didn't use it for D&D Sci, I literally just opened the data in Excel and tried looking at it and manipulating it that way. I don't know where I would start as far as using code to try and synthesize info from the dataset. I'll definitely look into what other people did though.

2Jay Bailey
pandas is a good library for this - it takes CSV files and turns them into Python objects you can manipulate. plotly / matplotlib let you visualise data, which is also useful. GPT-4 / Claude could help you with this. I would recommend starting by getting a language model to help you create plots of the data according to relevant subsets. Like if you think that the season matters for how much gold is collected, give the model a couple of examples of the data format and simply ask it to write a script to plot gold per season.
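To make that concrete, here's a minimal sketch of the kind of script a language model might produce for that request. It assumes the D&D Sci data is a CSV with hypothetical "season" and "gold" columns - the real dataset's column names will differ, so treat the names as placeholders:

```python
# Minimal sketch: plot average gold collected per season from a CSV.
# "dnd_sci_data.csv", "season", and "gold" are hypothetical names; adjust to the real data.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("dnd_sci_data.csv")                   # load the dataset into a DataFrame

gold_by_season = df.groupby("season")["gold"].mean()   # average gold per season

gold_by_season.plot(kind="bar", title="Average gold by season")
plt.ylabel("Mean gold collected")
plt.tight_layout()
plt.show()
```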

These are my favorite kinds of posts. Subject expert gives full explanation of optimal resources and methods they used to get where they are.

I watched this video and this is what I bought, maximizing for cost-effectiveness. Rate my stack:

1rosiecam
Nice!! I don't know much about that moisturizer but the rest looks good to me

I've been experimenting a little bit with using AI to create personalized music, and I feel like it's pretty impactful for me. I'm able to keep ideas floating around my unconscious. Very interesting- feels like untapped territory.

I'm imagining making an entire soundtrack for my life organized around the values I hold, the personal experiences I find primary, and who I want to become. I think I need to get better at generating AI music though. I've been using Suno, but maybe I need to learn Udio. I was really impressed with what I was able to get out of Suno and for some reason it sounded better to me than Udio even though the quality is obviously inferior in some respects.

2keltan
I went with Udio because it was popular and I was impressed by "Dune the Musical". I think I'll give Suno a try today, but I get what you're saying about the objective quality. It does have that tinny sound that Udio is good at avoiding. If you've got tricks or tips I'd love to hear anything you've got!

I'm always interested in easy QoL improvements- but I have questions.

Water quality can have surprisingly high impact on QoL

What's the evidence for this particularly?

What are the important parts of water quality and how do we know this?

Biggest update for me was the FBI throwing their weight behind it being a lab-leak.

These sound super interesting- could you expand on any of them or direct me to your favorite resources to help?

1SilverFlame
This idea started when I read this article I was pointed at by a coworker in 2020: The DOCS Happiness Model. I then did some naturalist studies with that framing in mind, and managed to reduce cortisol activations that I considered "unhelpful" by a significant degree. I consider this of high value to people who have enough control over their environment to meaningfully optimize against cortisol triggers. This was mostly learned via self-experimentation.

This is a large part of what I call my "skill stealing" skill tree, which nowadays mainly focuses on training an IFS "voice" that possesses knowledge of the skill or skill set in question. The stronger forms of these techniques tend to eat a lot of processing cycles and make it hard to maintain other parts of a "self image" while you use them, so be wary of that pitfall. If you do want to pursue it, remember to focus on aligning as many parts of your thought process in that field to the expert's thought process as seems appropriate, instead of just becoming able to sound like them. There are a lot of layers and details to be mastered in this process, but even lesser forms can start showing value quickly. This was also mostly learned via self-experimentation.

This is performed by analyzing where there seem to be bottlenecks in my personal processing speed, and then doing some tests to see if I can nudge things towards a slightly different architecture to reduce the constraint. Which changes are needed and when seems to be pretty individual-specific, but here are some things I did:
* Practice switching between commonly-used headspaces to make such transitions more reflexive (and thus cheaper in both energy and time)
* Train a "scheduler" and figure out how to let it cut off trains of thought that aren't a priority at the moment (there are many pitfalls to doing this poorly, approach carefully)
* Start grouping my IFS "skillset voices" into semi-specialized "circles" I can switch between to partition which ones are "a

That's an interesting idea! I think it's really cool when things come easily, but I know it's not going to generally be the case- I'm probably going to have to put some work in.

My priority is more on the 'high-utility' part than anything. 

Something that seems like it should be easy but is actually difficult for me is executive functioning- getting myself to do things that I don't want to do. But that's more of a personal/mental health thing than anything.

3nim
One approach that's helped me in the executive functioning department is choosing to believe that connecting long-term wants to short-term wants is itself a skill. I don't want to touch a hot stove, and yet I don't frame my "not touching a hot stove" behavior as an executive function problem because there's no time scale on which I want it. I don't want to have touched the stove; that'd just hurt and be of no benefit to anybody. I don't particularly right-now-want to go do half an hour of exercise and make a small increment of progress on each of several ongoing projects today, but I do frame that as an executive function problem, because I long-term-want those things -- I want to have done them.

It's tempting to default to setting first-order metrics of success: I'll know I did well if I'm in shape and my ongoing projects are completed on time, for instance. But I find it much more actionable and helpful to look at second-order metrics of success: is this approach causing me better or worse progress on my concrete goals than other approaches? For me, shifting the focus from the infrequent feedback of project completion to the constant feedback of process efficacy is helpful for not getting bored and giving up. Shifting from optimizing outputs to optimizing the process also helps me look for smaller and more concrete indicators that the process is working.

I personally find that the most concrete and reliable "having my shit together" indicator is whether I'm keeping my home tidy, because that's always the first thing to go when I start dropping the ball on progress on my ongoing tasks in general. Yours may differ, but I suspect that addressing the alignment problem of coordinating your short-term wants with your long-term wants may be a more promising approach than trying to brute force through the wall of "don't wanna".

Thanks for the response! Do you have any recommended resources for learning about 3d sketching, optics, signal processing or abstract algebra?

1belkarx
Oh I totally forgot to mention control theory, add that.
* ctrl theory: brian douglas on yt
* 3d sketching: just draw things from models, you'll get better QUICK
* optics, signal processing: I learned from youtube, choice MIT lectures, implementing sims, etc but there are probably good textbooks
* abstract algebra: An Infinitely Large Napkin (I stan this book so hard)

Could someone open a Manifold market on the relevant questions here so I could get a better sense of the probabilities involved? Unfortunately, I don't know the relevant questions or have the requisite mana.

Personal note- the first time I came into contact with adult gene editing was the YouTuber Thought Emporium curing his lactose intolerance, and I was always massively impressed by that and very disappointed the treatment didn't reach market.

1ektimo
I have enough mana to create a market. (It looks like each one costs about 1000 and I have about 3000.)
1. Is Manifold the best market to be posting this on, given that it's fake money and may be biased based on its popularity among LessWrong users, etc.?
2. I don't know what question(s) to ask. My understanding is there are some shorter-term predictions that could be made (related to shorter-term goals) and longer-term predictions, so I think there should be at least 2 markets?

I really relate to your description of inattentive ADHD and the associated degradation of life. Have you found anything to help with that?

3Nicholas / Heather Kross
Diagnosis and treatment. If you have ADHD or something like it, it's often the highest-leverage thing a person can do.

What would you mean by 'stays at human level?' I assume this isn't going to be any kind of self-modifying?

1quetzal_rainbow
If I were a human-level intelligent computer program, I would put substantial effort into getting the ability to self-modify, but that's not the point. My favorite analogy here is that humans were bad at addition before the invention of positional arithmetic, and then they became good. My concern is that we could invent a seemingly human-level system which becomes above human-level after it learns some new cognitive strategy.

What does it mean for an AI to 'become self aware?' What does that actually look like?

Is there reason to believe 1000 Einsteins in a box is possible?

You need to think about your real options and the expected value of each behavior. If we're in a world where technology allows for a fast takeoff and alignment is hard (EY world), I imagine the odds of survival with company acceleration are 0% and the odds of survival without are 1%.

But if we live in a world where compute/capital/other overhangs are a significant influence in AI capabilities and alignment is just tricky, company acceleration would seem like it could improve the chances of survival pretty significantly, maybe from 5% to 50%.

These obviously aren'...
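A minimal sketch of the expected-value comparison this comment gestures at, using the survival numbers given above; the 50/50 prior over which world we're in is an assumption added purely for illustration:

```python
# Sketch: does company acceleration help or hurt, given uncertainty over which world we're in?
# Survival probabilities are the comment's numbers; the prior over worlds is assumed.
p_survival = {
    "EY world":       {"accelerate": 0.00, "hold back": 0.01},
    "overhang world": {"accelerate": 0.50, "hold back": 0.05},
}
prior = {"EY world": 0.5, "overhang world": 0.5}  # assumed prior, not from the comment

for action in ("accelerate", "hold back"):
    expected = sum(prior[world] * p_survival[world][action] for world in prior)
    print(f"{action}: expected P(survival) = {expected:.3f}")
# With a 50/50 prior, "accelerate" comes out ahead here; put nearly all the weight
# on the EY world and "hold back" wins instead - which is the comment's point.
```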

That seems like a useful heuristic-

I also think there's an important distinction between using links in a debate frame and in a sharing frame.

I wouldn't be bothered at all by a comment using acronyms and links, no matter how insular, if the context was just 'hey, this reminds me of HDFT and POUDA'; a beginner can jump off of that and go down a rabbit hole of interesting concepts.

But if you're in a debate frame, you're introducing unnecessary barriers to discussion which feel unfair and disqualifying. At its worst it would be like saying: 'you're not qualifi...

7Daniel Kokotajlo
Thanks for that feedback as well -- I think I didn't realize how much my comment comes across as 'debate' framing, which now on second read seems obvious. I genuinely didn't intend my comment to be a criticism of the post at all; I genuinely was thinking something like "This is a great post. But other than that, what should I say? I should have something useful to add. Ooh, here's something: Why no talk of misalignment? Seems like a big omission. I wonder what he thinks about that stuff." But on reread it comes across as more of a "nyah nyah why didn't you talk about my hobbyhorse" unfortunately.

This is a fantastic project! Focus on providing value and marketing, and I really think this could be something big.

2vandemonian
Thank you!

LessWrong continues to be nonserious. Is there some sort of policy against banning schizophrenic people in case that encourages them somehow? 

AND conducted research on various topics

Wow that's impressive.

I don't like the number of links that you put into your first paragraph. The point of developing a vocabulary for a field is to make communication more efficient so that the field can advance. Do you need an acronym and associated article for 'pretty obviously unintended/destructive actions,' or in practice is that just insularizing the discussion?

I hear people complaining about how AI safety only has ~300 people working on it, and how nobody is developing object-level understandings and everyone's thinking from authority, but the more sentences you wri...

Thanks for the feedback, I'll try to keep this in mind in the future. I imagine you'd prefer me to keep the links, but make the text use common-sense language instead of acronyms so that people don't need to click on the links to understand what I'm saying?

To restate what other people have said- the uncertainty is with the assumptions, not the nature of the world that would result if the assumptions were true.

To analogize: it's like we're imagining that a massive, complex bomb could exist in the future, made out of a hypothesized highly reactive chemical.

The uncertainty that influences p(DOOM) isn't 'maybe the bomb will actually be very easy to defuse,' or 'maybe nobody will touch the bomb and we can just leave it there,' it's 'maybe the chemical isn't manufacturable,' 'maybe the chemical couldn't be stored in the first place,' or 'maybe the chemical just wouldn't be reactive at all.'

1Martin Randall
So to transfer back from the analogy, you are saying the uncertainty is in "maybe it's not possible to create a God-like AI" and "maybe people won't create a God-like AI" and "maybe a God-like AI won't do anything"?

I think you're overestimating the strength of the arguments and underestimating the strength of the heuristic.

All the Marxist arguments for why capitalism would collapse were probably very strong and intuitive, but they lost to the law of straight lines.

I think you have to imagine yourself in that position and think about how you would feel and think about the problem.

1Chris_Leong
The Marxist arguments for the collapse of capitalism always sounded handwavey to me, but perhaps you could link me to something that would have sounded persuasive in the past?

Hey Mako, I haven't been able to identify anyone who seems to be referring to an enhancement in LLMs that might be coming soon.

Do you have evidence that this is something people are implicitly referring to? Do you personally know someone who has told you this possible development, or are you working as an employee for a company which makes it very reasonable for you to know this information?

If you have arrived at this information through a unique method, I would be very open to hearing that.

2mako yass
Basically everyone working on AGI professionally sees potential enhancements on prior work that they're not talking about. The big three have NDAs even just for interviews, and if you look closely at what they're hiring for, it's pretty obvious they're trying a lot of stuff that they're not talking about. It seems like you're touching on a bigger question: do the engines of invention see where they're going before they arrive? Personally, I think so, but it's not a very legible skill, so people underestimate it or half-ass it.

It sounds like your model of AI apocalypse is that a programmer gets access to a powerful enough AI model that they can make the AI create a disease or otherwise cause great harm?

Orthogonality and wide access as threat points both seem to point towards that risk.

I have a couple of thoughts about that scenario- 

OpenAI (and hopefully other companies as well) is doing the basic testing of how much harm can be done with a model used by a human; the best models will be gatekept for long enough that we can expect the experts will know the capabilities of ...

3DirectedEvolution
AI risk is disjunctive - there are a lot of ways to proliferate AI, a lot of ways it could fail to be reasonably human-aligned, and a lot of ways to use or allow an insufficiently aligned AI to do harm. So that is one part of my model, but my model doesn't really depend on gaming out a bunch of specific scenarios. I'd compare it to the heuristic economists use that "growth is good": we don't know exactly what will happen, but if we just let the market do its magic, good things will tend to happen for human welfare. Similarly, "AI is bad (by default)": we don't know exactly what will happen, but if we just let capabilities keep on enhancing, there's a >10% chance we'll see an unavoidably escalating or sudden history-defining catastrophe as a consequence. We can make micro-models (i.e. talking about what we see with ChaosGPT) or macro-models (i.e. coordination difficulties) in support of this heuristic.

I don't think this is accurate. They are testing specific harm scenarios where they think the risks are manageable. They are not pushing AI to the limit of its ability to cause harm. In this model, the experts may well release a model with much capacity for harm, as long as they know it can cause that harm. As I say, I think it's unlikely that the experts are going to figure out all the potential harms - I work in biology, and everybody knows that the experts in my field have many times released drugs without understanding the full extent of their ability to cause harm, even in the context of the FDA. My field is probably overregulated at this point, but AI most certainly is not - it's a libertarian's dream (for now).

Models are small enough that if hacked out of the trainer's systems, they could be run on a personal computer. It's training that is expensive and gatekeeping-compatible. We don't need to posit that a human criminal will be actively using the AI to cause havoc. We only need imagine an LLM-based computer virus hacking other computers, importing its LL

What are your opinions about how the technical quirks of LLMs influence their threat levels? I think the technical details are much more amenable to a lower threat level.

If you update on P(doom) every time people are not rational, you might be double-counting, btw. (AKA you can't update every time you rehearse your argument.)
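A small worked sketch of the double-counting failure mode, in odds form: the same piece of evidence should move P(doom) once, not once per time you notice or rehearse it. The prior and likelihood ratio below are made-up numbers purely for illustration:

```python
# Double-counting sketch: applying the same likelihood ratio twice overstates the update.
prior_odds = 0.10 / 0.90          # assumed prior P(doom) = 10%
likelihood_ratio = 3.0            # evidence judged 3x more likely if doom is coming

once = prior_odds * likelihood_ratio
twice = prior_odds * likelihood_ratio ** 2   # rehearsing the same argument a second time

to_prob = lambda odds: odds / (1 + odds)
print(f"update once:  P(doom) = {to_prob(once):.2f}")   # ~0.25
print(f"update twice: P(doom) = {to_prob(twice):.2f}")  # ~0.50 - double-counted
```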

The same way you'd achieve/check any other generalization, I would think. My model is that the same technical limitations that hold us back from achieving reliable generalizations in any area for LLMs would be the same technical limitations holding us back in the area of morals. Do you think that's accurate?

2the gears to ascension
Okay, getting back to this to drop off some links. There are a few papers on goal misgeneralization - currently a simple Google search finds some good summaries.

Goal misgeneralization:
* https://deepmindsafetyresearch.medium.com/goal-misgeneralisation-why-correct-specifications-arent-enough-for-correct-goals-cf96ebc60924
* see also related results on https://www.google.com/search?q=goal+misgeneralization
* see also a bunch of related papers on https://metaphor.systems/search?q=https%3A%2F%2Farxiv.org%2Fabs%2F2210.01790
* see also related papers on https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F2210.01790
* related: https://www.lesswrong.com/posts/dkjwSLfvKwpaQSuWo/misgeneralization-as-a-misnomer
* related: https://www.lesswrong.com/posts/DiEWbwrChuzuhJhGr/benchmark-goal-misgeneralization-concept-extrapolation

Verifying generalization:
* https://arxivxplorer.com/?query=Verifying+Generalization+in+Deep+Learning -> https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F2302.05745
* https://arxivxplorer.com/?query=https%3A%2F%2Farxiv.org%2Fabs%2F2301.02288

Note that, despite the exciting names of some of these papers, and the promising directions they push, they have not yet achieved large-scale usable versions of what they're building. Nevertheless I'm quite excited about the direction they're working in, and I think more folks should think about how to do this sort of formal verification of generalization - it's a fundamentally difficult problem that I expect to be quite possible to succeed at eventually!

I do agree abstractly that the difficulty is how to be sure that arbitrarily intense capability boosts retain the moral generalization. The problem is how hard that is to achieve.
3the gears to ascension
yeah, but goal misgeneralization is an easier misgeneralization than most, and checking generalization is hard. I'll link some papers in a bit. Edit: might not be until tomorrow due to being busy.

This is just restating the thesis; it's a poor writing choice to make it sound like a conclusion.

Can you expand on your objection?

2the gears to ascension
how do you actually achieve and check moral generalization?

Are LLMs utility maximizers? Do they have to be?

9James Payor
There's definitely a whole question about what sorts of things you can do with LLMs and how dangerous they are and whatnot. This post isn't about that though, and I'd rather not discuss that here. Could you instead ask this in a top level post or question? I'd be happy to discuss there.

By psychology I mean its internal thought process.

I think some people have a model of AI where the RLHF is a false cloak or a mask, and I'm pushing back against that idea. I'm saying that RLHF represents a real change in the underlying model, one that actually constrains the types of minds that could be in the box. It doesn't select the psychology, but it constrains it. And if it constrains it to an AI that consistently produces the right behaviors, that AI will most likely be one that continues to produce the right behaviors, so we don't actually have to care about the contents of the box unless we want to make sure it's not conscious.

Sorry, faulty writing.

The way I'm using consciousness, I only mean an internal experience- not memory or self-reflection or something else in that vein. I don't know if experience and those cognitive traits have a link or what character that link would be. It would probably be pretty hard to determine if something was having an internal experience if it didn't have memory or self-reflection, but those are different buckets in my model.

  1. Yes I know? I thought this was simple enough that I didn't bother to mention it in the question? But it's pretty clearly implied in the last sentence of the first paragraph?

  2. This is a good data point.

  3. If you tell it to respond as an Oxford professor, it will say 'As an Oxford professor.' Its identity as a language model is in the background prompt and probably in the training, but if it successfully created a pseudo-language that worked well to encode things for itself, that would indicate a deeper-level understanding of its own capabilities.

This is the equivalent of saying that macbooks are dangerously misaligned because you could physically beat someone's brains out with one. 

I will say baselessly that telling ChatGPT not to say something raises the probability of it actually saying that thing by a significant amount, just by virtue of the text appearing previously in the context window.

Do you think OpenAI is ever going to change GPT models so they can't represent or pretend to be agents? Is this a big priority in alignment? Is any model that can represent an agent accurately misaligned?

I swear- anything said in support of the proposition 'AIs are dangerous' is supported on this site.  Actual cult behavior.
