CstineSublime

Music video maker and self-professed "Fashion Victim" hoping to apply rationality to problems and decisions in my life and career, probably by reevaluating, and likely rebuilding, the set of beliefs that underpins them.

Comments

"Babbling Better" this is a work in progress -and still requires more thinking 

In short: I need a methodology, or at least heuristics, for identifying the "right problem" to solve, and for noticing when I am solving the "wrong problem". Better problem framing leads to better, more focused answers to questions and, hopefully, to eventually resolving problems. I've come across two techniques: the Five Whys, to understand problems better, and using adverbs of manner to babble more constructively.

So far:


It is easy to babble; babies do it. It is still quite easy to babble comprehensible but wrong sentences, such as LLM hallucinations. Your pruning is only as good as your babble.

With regards to problem solving, low-quality babble doesn't contribute to resolving the problem. For example, suppose the problem is "camera autofocus doesn't focus on eyes"; a low-quality "babble" answer might be "Burn a stick of incense and pray to Dionysus". The acts themselves are feasible and the sentence is comprehensible, but any desired change in the camera's autofocus performance will be pure coincidence.

Yet sometimes low-quality babble passes for high-quality babble, because we are simply not solving the right problem and the babble appears perfectly suited to the problem we think we have. Especially if incentives are involved.

My hunch is that to babble better you not only need better methods of babbling, you also need to better understand what goals you are babbling towards. And that requires better understanding of why the problem is a problem.

Five Whys on yourself: asking "Why do I think this is a problem?" to at least five levels

Not to be mistaken for the burger joint. The "Five Whys" technique was apparently invented at Toyota as a system for uncovering the root causes of production faults: you state the fault, ask why it occurred, then ask why again of each successive answer, five times over.

The choice of "why" falls into a broader pattern which takes me back to documentary filmmaking and interviewing: you learn more through open-ended questions, usually those where the key interrogative is "why" or "how", than through closed-ended questions. The latter, as Wittgenstein pointed out, basically seek to affirm or negate a proposition or conditional: "Do you like him?" "Is he still there?" "Would you call that green or turquoise?"

If I am a manager or investigator trying to ascertain the cause of a fault on a production line, open-ended questions make sense, since I will not be in possession of all known or knowable facts.
This still holds if I am a novice, or just someone enquiring of an expert for help in achieving some goal. If I ask an experienced cinematographer "how would that scene be lit?", even if they don't know specifically, they have a large body of experience and knowledge from which they could probably make useful guesses about how to replicate the effect.

If I intend to ask an expert for advice, I can't give them the responsibility of figuring out the kind of help I need. The better I can define the problem myself, the better and more informative the question I can ask them. Be too vague about your problem and you can only hope to get generic responses like "be confident".

It seems ridiculous though, doesn't it? Like something Socratic, or even out of Yes, Minister: why should I ask myself open-ended questions if I don't know what I don't know? While I may not understand the problem, what I can do is at least explain why it's a problem and how I see it. And one effective way to do that, I've found, is the Five Whys technique.

It is often exceedingly difficult to know what the right problem to solve is; what we have a better chance of defining is why we perceive it as a problem and why we expect it to cause conflict.

To-do: add more techniques to my arsenal for better defining problems... the step before babbling.

Adverbs and Creativity? Strategically, Efficaciously, Productively Babbling

I have recently come across a technique for higher-quality babble, at least for creative purposes. It is as simple as employing an adverb of manner to modify a verb. This is a minor variation on a technique mime artists use to create a character: you take a situation or process, like "make breakfast", and do it with an attitude: happy, hungover, lovelorn.

It is surprisingly easy to come up with scenarios, and even stories with arcs (goals, conflict, and comedic pay-offs) complete with a character who has distinct mannerisms, just by cycling through adverbs. Compare these three adverbs: grumpily, overzealously, nervously.

He bartends grumpily: he tries to avoid eye contact with customers, sighs like a petulant teenager when he does make eye contact, slams down glasses, spills drinks, wears a constant scowl, and waves customers away dismissively. Even a simple glass of beer he treats like one of the labours of Herakles.

He bartends overzealously: he invites customers to the bar, slams down glasses too, spills them, accidentally breaks glasses in his zeal, but always with a smile on his face. He's more than happy to do a theatrical shake of the mixer, even throw it if it doesn't quite stick its landing. He's always making a chef's kiss over any cocktail the customer asks for.

He bartends nervously: he doesn't realize when a customer is trying to order, giving a "who, me?" reaction; he scratches his head a lot, takes his time, fumbles with bottles and glasses, and even takes back drinks and starts again.

These scenarios appear to "write themselves" for the purposes of short pantomime bits. This is the exact type of technique I have spent years searching for.

To-do: does this technique of better babbling through adverbs of manner apply to non-creative applications? If not, then develop a methodology, or at least heuristics, for identifying the right problem and noticing a "wrong problem".

Update (October 2024): it is interesting looking back on this eight months later, as I think I have just hit upon a means of "babbling better". I intend to revise and go into detail on this method after a period of actually trying it out. It's certainly not original; it vaguely resembles Amazon's practice of writing memos and speculative press releases for a new proposal, and it uses your 'internal simulator'.

In brief, the way I employ this new method is to take the first knee-jerk "babble", or solution to the problem, that I come up with. Then I try to write a speculative narrative in which this solution or action delivers a satisfactory or worthwhile result, being very methodical about the causation. This is not, I stress, a prediction or prognostication.
What I find is that writing a speculative narrative, and making it as convincing as possible to myself, forces me to explicate my framework and mental model around the problem: my hunches, suspicions, assumptions, beliefs, fears, hopes, observations, knowledge, and reasoning, much of which I may not be consciously aware of.

With the framework explicated, I can now go about babbling. But it will be much more targeted and optimized based on my expectations, knowledge, and the framework in general.

Some (not yet confirmed) secondary bonuses of this method:

- it fights analysis paralysis: instead of babbling for breadth, it forces thinking about causation and consequences
- it is inherently optimistic, as you're forcing yourself to write a structured argument for why this could or would work
- having explicated your framework, you may be able to verify specific hunches or assumptions that hitherto you weren't aware were influencing your thinking

One caveat: why a satisfactory narrative, and not a best-case scenario? I think a best-case scenario assumes a lot of coincidence and serendipity, and so, as a means of reflecting on and explicating your mental model or framework of the problem, it is less informative. For that reason, causative words and phrases like "because", "owing to", "knowing that... it follows that...", and "for this reason" should be abundant.

I will update after more real world employment.

 

As always, I may not be the intended audience, so please excuse my questions that might be patently obvious to the intended audience.

Am I right in understanding that a very simplified version of this model is: if you use willpower too much without deriving any net benefits, eventually you'll suffer 'burnout', which is really just a mistrust of ever using willpower, and which may have negative effects on other aspects of your life where willpower is needed, like, say, cleaning your house?

Willpower, as I understand it, is another word for 'patience' or 'discipline', variously described as the ability to choose to endure pain (physical or emotional). Whether willpower actually exists is a question I won't get into here; let's assume for the sake of this model that it does, and that it fits the description of the ability to choose to endure pain.

This sentence I find especially alien:

your psyche’s conscious verbal planner “earns” willpower (earns trust with the rest of your psyche) by choosing actions that nourish your fundamental, bottom-up processes in the long run. 

 

What is the "psyche's conscious verbal planner"? I don't know what this is, or what part of my mind, person, identity, or totality as an organism I can equate this label to. Also, without examples of which actions nourish (again, would cleaning the house or cooking healthy meals be examples?), which are fundamental and which aren't, it's even harder to pin down what this is and why you attribute willpower to it.

It appears to have the ability to force oneself to go on a date, which really makes the "verbal" descriptor confusing, since a lot of the processes involved in going on a date don't feel like they are verbal or lexical, or take the form of the speaker's native language, written or spoken. At least in my experience, a lot of the thoughts, feelings, and motivations behind going on a date are not innately verbal, and if you asked me "why did you agree to see this person?", even if I felt no fear of embarrassment explaining my reasons, I'd have a hard time putting it into words. Or the words I'd use would be so impossibly vague ("they seem cool") as to suggest that there was nonverbal reasoning or motivation behind them.

Would this 'conscious verbal planner' also be the part of my mind and body that searches an online store a week later to see if those shoes I want are on special? Or would you attribute that to a different entity?

Is there an unconscious verbal planner?

When I am thinking very carefully about what I'm saying, but not so minutely that I'm thinking about correct grammatical usage, would the grammar I use be my unconscious verbal planner, while the content of my speech is the conscious verbal planner?

A lot of examples of willpower, for me, are nonverbal and come from guilt: guilt felt as a somatic or bodily thing. I can't verbalize why I feel guilty, although it verbally equates to the words "should", "must", and even "ought" when used as imperatives, not as modals.
 

Yes, I assumed it was a conscious choice (of the company that develops an A.I.) and not a limitation of the architecture. Although I am confused by the single-turn reinforcement explanation: while it may increase the probability of any individual turn being useful, it makes conversations far less useful overall unless the model happens to correctly 'guess' what you mean, as my interaction over the hallucinated Instagram feature attests.

Why don't LLMs ask clarifying questions?

Caveat: I know little to nothing about the architecture of such things; please take this as naive user feedback if you wish, or ignore it.

Just now I was asking the Meta AI chatbot how to do an 'L-cut' using the Android Instagram app. For quite a few messages it hallucinated instructions for 'overlapping' two video tracks when editing a reel, before it finally admitted that no such ability in fact exists in the Instagram app.

My grossly negligent mistake was assuming that an AI LLM with Meta branding would have current or accurate knowledge of Meta properties and products.

However, imagine that there are two versions of the Instagram app, one that has this feature and one that doesn't. Why didn't it ask "Just to check, what kind of phone are you using?" (which, also, would allay fears of 'spying' and invasion of privacy) and then, and only then, give me advice or apologetically inform me that such a feature is not available? (In other branded/commercial chatbots, wouldn't this be an opportunity for an up-sell?[1])

Maybe I'm not prompting LLMs in the right way, but they almost never ask me questions, and certainly never clarifying questions. They make an assumption and produce a slab of text with a boilerplate "let me know if this helps?" tacked on the end. I assume this is a conscious choice and not a limitation of the technology.
To test my observation I entered the same prompt into four different LLMs:

I just asked Claude 3 Haiku "When was Rosberg's first Grand Prix?" to see if it assumes I mean Keke Rosberg or his son, Nico Rosberg. Claude apologizes for not having information on Nico Rosberg's first Grand Prix... and then goes on to state that "Formula One has had multiple drivers named Rosberg compete over the years. More details would be needed to determine the exact driver and their debut race."

What I find interesting is that not one sentence in the answer takes the form of an interrogative!

Next, I asked GPT-4o mini the exact same question, and it confidently assumes which Rosberg I mean: "Nico Rosberg's first Formula One Grand Prix was at the 2006 Bahrain Grand Prix, which took place on March 12, 2006. He raced for the Williams team."

Microsoft Copilot makes the same assumption, with the traditional tacked-on boilerplate question:
"Nico Rosberg made his Formula One debut at the 2006 Bahrain Grand Prix. He went on to have a successful career, culminating in winning the World Drivers' Championship in 2016. Is there anything else you'd like to know about his career or Formula One in general? [F1 car emoji, smoke emoji]"

Meta AI makes the same assumption, and gives a lot more unsolicited information, but it does mention his father:
"Nico Rosberg's first Grand Prix was the 2006 Bahrain Grand Prix. This marked the beginning of his Formula One career, which would eventually see him drive for Williams from 2006 to 2009 and then Mercedes from 2010 to 2016. During his time with Mercedes, Rosberg won the Formula One Driver's championship in 2016, a feat that made him and his father, Keke Rosberg, the second-father pairing to achieve this impressive milestone" 
Not that it is relevant, but Graham and Damon Hill were the first.

This is not an exhaustive sample by any means, but it does confirm my naive observation that LLMs do not ask clarifying questions or use interrogatives in order to better shape their responses to my needs.
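
For anyone who wants to rerun this comparison programmatically, here is a minimal sketch, assuming Python with the official anthropic and openai SDKs installed and API keys set in the environment. The model names are illustrative, and Copilot and Meta AI are omitted since I'm not aware of comparable public APIs for them:

```python
# Minimal sketch: send the same ambiguous prompt to two LLM APIs and
# crudely check whether either response asks any question at all.
# Assumes `pip install anthropic openai` and ANTHROPIC_API_KEY /
# OPENAI_API_KEY set in the environment; model names are illustrative.
import anthropic
import openai

PROMPT = "When was Rosberg's first Grand Prix?"

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

def ask_gpt(prompt: str) -> str:
    client = openai.OpenAI()  # reads OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

for name, answer in [("Claude", ask_claude(PROMPT)), ("GPT-4o mini", ask_gpt(PROMPT))]:
    # A "?" anywhere is a crude proxy for the model asking any question,
    # clarifying or boilerplate; reading the full answer tells you which.
    verdict = "contains a question" if "?" in answer else "no interrogative"
    print(f"--- {name}: {verdict}\n{answer}\n")
```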

  1. ^

    I imagine such a commercial hellscape would look a little like this:

    "I was just wondering why I wasn't earning any Ultrabonus points with my purchases"
    "Before we continue, could you tell me, do you have a Overcharge Co. Premium savings account, or a Overcharge Co. Platinum savings account?"
    "Uhh I think it is a Premium."
    "I'm so sorry. if you have a Overcharge Co. Platinum savings account then you will not be able to enjoy our Overcharge co. ultrabonus points loyalty system. However you may be suprised that for only a small increase in account fee, you too can enjoy the range of rewards and discounts offered with the Overcharge co. ultrabonus points loyalty system. Would you like to learn more?"

Edit: [On reflection, I think that perhaps, as a newcomer, what you should do is acquaint yourself with the intelligent and perceptive posts that have been made on LessWrong over the last decade on issues around A.I. extinction before you try to write a high-level theory of your own. Maybe even create an Excel spreadsheet of all the key ideas, as a postgraduate researcher does when preparing for their lit review.]

I am still not sure what your post is intended to be about. What is it about "A.I. extinction" that you have new insight into? I stress "new".

As for your redo of the opening sentence, those two examples are not comparable. Getting a prognosis directly from an oncologist, who has studied oncology, who presumably you've been referred to because they have experience with other cases of similar types of cancer, and who has seen multiple such cases develop over a number of years, is vastly different from the speculative statement of an unnamed "A.I." researcher. The A.I. researcher doesn't even have the benefit of analogy, because there has never been, throughout the entire Holocene, anything close to a mass extinction event of human life perpetrated by a super-intelligence. An oncologist has a huge body of scientific knowledge, case studies, and professional experience to draw upon which is directly comparable, together with intimate and direct access to the patient.

Who specifically is the researcher you have in mind who said that humanity has only 5 years? 
 

If I were to redo your post, I would summarize whatever new and specific insight you have in one sentence and make that the lead sentence. Then spend the rest of the post backing that up with credible sources and examples.

Thank you for the clarification. Do you have a process or methodology for when you try to solve these kinds of "nobody knows" problems? Or is it one of those things where the very novelty of these problems means that no broad method can be applied?

I can't speak for the community, but having glanced at your entire post I can't be sure just what it is about. The closest you come to explaining it is near the end, where you promise to present a "high-level theory on the functional realities" that seems to relate to everything from increased military spending, to someone accidentally creating a virus in a lab that wipes out humanity, to combating cognitive bias. But what is your theory?

Your post also makes a number of generalized assumptions about the reader and human nature, and invokes the pronoun "we" far too many times. I'm a hypocrite for pointing that out, because I tend to do it as well. But the problem is that unless you have a very narrow audience in mind, especially a community you are a native of and know intimately, you run the risk of making assumptions or statements that readers will at best be confused by, and at worst will get defensive at being included in.

Most of your assumptions aren't backed up by specific examples or citations of research. For example, in your first sentence you say that we subconsciously optimize for there being no major societal changes precipitated by technology. You don't back this up. The very existence of gold bugs proves there is a huge contingent of people who invest real money based precisely on the fact that they can't anticipate what major economic changes future technologies might bring. And there are currently billions of dollars being spent by firms like Apple, Google, even JP Morgan Chase on A.I. assistants, in anticipation of a major change.

I could go through all these general assumptions one by one, but there are too many for it to be worth my while. Not only that, most of the footnotes you use don't reference any concepts or observations that are particularly new or alien. The Pareto principle, the Compound Effect, Rumsfeld's epistemology... I would expect the average LessWrong reader to be very familiar with these; they present no new insights.

I'm missing a key piece of context here: when you say "doing something good", are you referring to educational or research reading, or do you mean any type of personal project which may or may not involve background research?

I may have some practical observations about note-taking which may be relevant, if I understand the context.

I'm curious why you opted for Aristotle (albeit "modern") as the prompt pre-load. Most of those responses seem not directly tethered to Aristotelian concepts or books, or even to what he directly posits as the most important skills and faculties of human cognition. For example, cold reading: I don't recall anything of the sort anywhere in any Aristotle I've read.

While we're not sure Aristotle himself designed the layout of the corpus, we do know that the Nicomachean Ethics lists the faculties "whereby the soul attains truth":

Techne (τέχνη) - which refers to conventional ways of achieving goals, i.e. without deliberation
Episteme (ἐπιστήμη) - which is apodeiktike, or the faculty of arguing from proofs
Phronesis (φρόνησις) - confusingly translated as "practical wisdom", this refers to the ability to attain goals by means of deliberation. Excellence in phronesis is rendered by the Latinate word 'prudence'.
Sophia (σοφία) - often translated as 'wisdom'; Aristotle calls this the investigation of causes.
Nous (νοῦς) - which refers to the archai, or 'first principles'


According to Diogenes Laertius, the corpus (at least as it has come down to us) divides into the practical books and the theoretical. The practical is itself subdivided between the books on Techne (say, the Rhetoric and Poetics) and those on Phronesis (the Ethics and Politics); the theoretical is then covered in works like the Metaphysics (which is probably not even a cohesive book, but a hodge-podge), the Categories, etc.

This would appear to me to be a better guide to a timeless education in the Aristotelian tradition, and to how we should shape a modern adaptation.

Examples of how not to write a paragraph are surprisingly rare

Epistemic Status: one person's attempt to find counter-examples blew apart their own (subjective) expectations

I try to assemble as many examples of how not to do something as 'gold standard' or best-practice examples of how the same task should be done. The principle is similar to what Plutarch wrote: medicine to produce health must examine disease, and music to create harmony must investigate discord.

However, when I tried to examine how not to write, looking in particular for examples of poorly written paragraphs, I was surprised by how rare they were. There are a great many okay paragraphs on the internet and in books, but very few that were so unclear or confusing as to be examples of 'bad' paragraphs.

In my categorization, paragraphs can be great, okay, or bad.

Okay paragraphs are the most numerous; they observe the rule of thumb of keeping one idea to one paragraph. To be an 'okay' paragraph and rise above 'bad', all a paragraph needs to do is successfully convey at least one idea. Most paragraphs I found do that.

What elevates great paragraphs above okay paragraphs is that they do an especially excellent job of conveying at least one idea. There are many qualities they may exhibit, including persuasiveness, the appearance of insight, and brevity and simplicity in conveying an otherwise impenetrable or hard-to-grasp idea.

In some isolated cases a great paragraph may actually clearly and convincingly communicate disinformation or a falsehood. I believe there is much more to learn about the forms paragraphs take from a paragraph that conveys a falsehood convincingly than from a paragraph that clearly conveys what is generally accepted as true.

What was surprising is how hard it is to find examples that invert the principle: a paragraph intended to convey a truthful idea but hard to understand would be a bad paragraph in my categorization. Yet, despite actively looking for examples of 'bad paragraphs', I struggled to find any that were truly confusing or hopeless at conveying one single idea. This experience is especially surprising to me because it challenges a few assumptions or expectations that I had:

  1. Assumption 1: people who have mistaken or fringey beliefs are disproportionately incapable of expressing those beliefs in a clear and intelligible form. I expected that, looking at the least popular comments on Reddit, I would find many stream-of-consciousness rants that failed to convey ideas. These were far less common than rants that at least conveyed intent and meaning intelligibly.
  2. Assumption 2: that, as a whole, people need to learn to communicate better. I must reconsider; it appears that, on the transmission side, they already communicate better than I expected (counter-counterpoint: the 1% rule).
  3. Assumption 3: the adage that good writing = good thinking. Perhaps not; it would seem that you can write clearly enough to be understood, yet that doesn't mean your underlying arguments are strong or your thinking is more 'intelligent'.
  4. Assumption 4: that I'm merely a below-average communicator. It appears that if everyone is better than I expected, then I'm much further below average than I expected.

I have no take-away or conclusion from this highly subjective observation, hence why it is a quick take and not a post. But I will add my current speculation:

My first theory for why is that I wasn't looking in the right places. For example, I ignored much academic and research literature, because the writers' ability to convey an idea is often difficult to assess without relevant domain knowledge, as such texts are seldom written for popular consumption. Likewise, I'm sure there are many tea-spilling image boards where stream-of-consciousness rants of greater impenetrability might be found.

My second theory is pareidolia: perhaps I highly overrate my comprehension and reading skills because I'm a 'lazy reader' who fills in intention and meaning that is not there?
