All of Olomana's Comments + Replies

Are you doing this from within Obsidian with one of the AI plugins?  Or are you doing this with the ChatGPT browser interface and copy/pasting the final product over to Obsidian?

1Solenoid_Entity
Currently just copy-pasting into GPT-4 via the web interface. I've got it working via the GPT-3 API as well today, but for now I prefer to suffer the inconvenience and get the better model. The questions it asks are MUCH more insightful.

Thank you for sharing this.  FYI, when I run it, it hangs on "Preparing explanation...".  I have an OpenAI account, where I use the gpt-3.5-turbo model on the per-1K-tokens plan.  I copied a sentence from your text and your prompt from the source code, and got an explanation quickly, using the same API key.  I don't actually have the ChatGPT Plus subscription, so maybe that's the problem.

ChatGPT has changed the way I read content, as well.  I have a browser extension that downloads an article into a Markdown file.  I open the ... (read more)

2DirectedEvolution
Edit: I managed to solve this issue, which appears to be a widespread problem with accessing ChatGPT via the API. The fix is incorporated into the code on GitHub.

Is there a database listing, say, article, date, link and tags?  That would give you the ability to find trending tags.  It would also allow a cluster analysis and a way to find articles that are similar to a given article, "similar" meaning "nearest within the same cluster".
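
If such a database existed, the cluster analysis itself is not much code these days. A minimal sketch, assuming scikit-learn is available; the article texts, function name, and cluster count are all made up for illustration:

```python
# Hypothetical sketch: cluster articles by their text, then find the most
# similar article within the same cluster ("nearest within the same cluster").
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "prediction markets and forecasting",
    "calibration training for forecasters",
    "sourdough starter maintenance tips",
    "baking bread with whole wheat flour",
]

vectors = TfidfVectorizer().fit_transform(articles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

def most_similar_in_cluster(i):
    # Candidates are the other articles sharing a cluster with article i.
    candidates = [j for j, c in enumerate(labels) if c == labels[i] and j != i]
    sims = cosine_similarity(vectors[i], vectors[candidates]).ravel()
    return candidates[sims.argmax()]

print(articles[most_similar_in_cluster(0)])
```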

I agree with the separation, but offer a different reason.  Exploratory writing can be uncensored; public writing invites consideration of the reaction of the audience.

As an analogy, sometimes I see something on the internet that is just so hilarious... my immediate impulse is to share it, then I realize that there is no upside to sharing because I pretend to be the type of person who wouldn't even think that was funny.  Similarly, on more philosophical subjects, sometimes I will have an insight that is better kept private.

You see what I did there?  If I were writing this in my journal, I'd include a concrete example.  However, this is a public comment, and it's smarter not to.

1Johannes C. Mayer
I agree with this. This is a constraint; otherwise, I would have more posts already. You don't want to constrain yourself by needing to think about whether what you are writing is something you can say in public. Though I wonder how much value is lost by people not posting certain kinds of content for this or similar reasons. If you want to provide more value, a good heuristic might be to talk about stuff that seems important but that you do not want to share, because that probably indicates that other people will also not talk about it.

I copied our discussion into my PKM, and I'm wondering how to tag it... it's certainly meta, but we're discussing multiple levels of abstraction.  We're not at level N discussing level N-1, we're looking at the hierarchy of levels from outside the hierarchy.  Outside, not necessarily above.  This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.

1M. Y. Zuo
So have you conceived of some new way to identify level 3 books?

I was looking for real-life examples with clear, useful distinctions between levels.  

The distinction between "books about books" and "books about books about books" seems less useful to me.  However, if you want infinite levels of books, go for it.  Again, I see this as a practical question rather than a theoretical one.  What is useful to me may not be useful to you.

1M. Y. Zuo
If a clear delineation between ’books about books’ and ’books about books about books’ does not exist, how can we be so sure of the same between meta-thoughts and meta-meta-thoughts, which are far more abstract and intangible? (Or a meta-meta-rationality for that matter?) But before that, I can’t think of even a single concrete example of a meta-meta-book, and if you cannot either, then that seems like a promising avenue to investigate. If none truly exists, we are unconstrained in imagining what it may look like.

I don't see this as a theoretical question that has a definite answer, one way or the other.  I see it as a practical question, like how many levels of abstraction are useful in a particular situation.  I'm inclined to keep my options open, and the idea of a theoretical infinite regress doesn't bother me.

I did come up with a simple example where 3 levels of abstraction are useful:

  • Level 1: books
  • Level 2: book reviews
  • Level 3: articles about how to write book reviews
1M. Y. Zuo
In your example, shouldn’t level 3 be reviews of book reviews?  EDIT: Or perhaps more generally it should be books about books about books?

We're using language to have a discussion.  The fact that the Less Wrong data center stores our words in a way that is unlike our human brains doesn't prevent us from thinking together.

Similarly, using a PKM is like having an extended discussion with myself.  The discussion is what matters, not the implementation details.

1M. Y. Zuo
Isn’t that exactly what is in question here? Words on a screen via LessWrong, the ‘implementation details’, may or may not be what’s preventing us from having a cogent discussion on meta-memes… If implementation details are irrelevant a priori, then there should be nothing stopping you from clearly stating why you believe so, one way or the other.

I view my PKM as an extension of my brain.  I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory.  You can make the distinction if you like, but I find it more useful to focus on the similarities.

As for meta-meta-thoughts, I'm content to let those emerge... or not.  It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.

1M. Y. Zuo
I’m having trouble visualizing any ‘PKM’ system that has any recognizable similarity at all with the method of information storage employed by the human brain, though this is not fully understood either. Can you explain how you organize yours in more detail?

I don't see your distinction between thoughts and notes.  To me, a note is a thought that has been written down, or captured in the PKM.

No, I don't have an example of thinking meta-meta-rationally, and if I did, you'd just ask for an example of thinking meta-meta-meta-rationally.  I do think that if I got to a place where I needed another level of abstraction, I'd "know it when I see it", and act accordingly, perhaps inventing new words to help manage what I was doing.

1M. Y. Zuo
If you haven’t ever experienced it, how did you ascertain that meta-meta-thoughts exist? Also, you don’t believe there’s a distinction between notes stored on paper or computer, and thoughts stored in human memory?

I am a fan of PKM systems (Personal Knowledge Management).  Here the unit at the bottom level is the "note".  I find that once I have enough notes, I start to see patterns, which I capture in notes about notes.  I tag these notes as "meta".  Now I have enough meta notes that I'm starting to see patterns... I'm not quite there yet, but I'm thinking about making a few "meta meta notes".

Whether we're talking about notes, memes or rationality, I think the usefulness of higher levels of abstraction is an emergent property.  Standing at ... (read more)

1M. Y. Zuo
A meta-meta-note seems straightforward to construct because interactions with notes, meta-notes, meta-meta-notes, etc., are carried out in the same manner, i.e. linearly. But thoughts are different, since you cannot combine several dozen thoughts into a meta-meta-thought, unlike notes. (maybe that would work in a hive mind?) How would you think meta-meta-rationally? Can you give an example?

What's the problem with infinite regress?  It's turtles all the way up.

1M. Y. Zuo
There may or may not be a vicious infinite regress so I left it ambiguous as to whether that itself is a 'problem'. In any case, it seems extremely difficult to derive anything from a meta-meta-etc-rationality. How exactly would it be applied?

One nanosecond is slightly less than one meter.

No, one nanosecond is slightly less than one foot.

2lsusr
Fixed. Thanks.

The problem statement says "arbitrary real numbers", so the domain of your function P is -infinity to +infinity.  P represents a probability distribution, so the area under the curve is equal to 1.  P is strictly increasing, so... I'm having trouble visualizing a function that meets all these conditions.

You say "any" such function... perhaps you could give just one example.

2Oscar_Cunningham
In this case P is the cumulative distribution function, so it has to approach 1 at infinity, rather than the area under the curve being 1. An example would be 1/(1+exp(-x)).
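
To spell out why that example works (standard calculus, added here so readers can check the conditions):

$$P(x) = \frac{1}{1+e^{-x}}, \qquad P'(x) = \frac{e^{-x}}{(1+e^{-x})^2} > 0 \text{ for all } x,$$

$$\lim_{x \to -\infty} P(x) = 0, \qquad \lim_{x \to +\infty} P(x) = 1.$$

So P is strictly increasing on all of the reals yet bounded between 0 and 1, which is exactly what a cumulative distribution function needs.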

Interesting.  I think of heuristics as being almost the same as cognitive biases.  If it helps System 1, it's a heuristic.  If it gets in the way of System 2, it's a cognitive bias.

Not a disagreement, just an observation that we are using language differently.

1Jay
I basically agree.  A heuristic lets System 1 function without invoking (the much slower) System 2.  We need heuristics to get through the day; we couldn't function if we had to reason out every single behavior we implement.  A bias is a heuristic that has become dysfunctional, resulting in a poorly-chosen System 1 behavior when System 2 could give a significantly better outcome.  One barrier to rationality is that updating one's heuristics is effortful and often kind of annoying, so we always have some outdated heuristics.  The quicker things change, the worse it gets.  Too much trust in one's heuristics risks biased behavior; too little yields indecisiveness.

Regarding the first enigma, the expectation that what has worked in the past will work in the future is not a feature of the world; it's a feature of our brains.  That's just how neural networks work: they predict the future based on past data.

Regarding the third enigma, ethical principles are not features of the world, they are parameters of our neural networks, however those parameters have been acquired.

Regarding the second enigma, I am less confident, but I think something similar is going on.  Here my metaphor is not the ML branch of AI, but... (read more)

5TAG
If it's a feature of our brains, but not of the world, then it's not going to work. Unless you get very Kantian and insist that our brains are determining the world... Which, again, doesn't address the issue of how valid they are... even if you invented something, you can do better or worse at it.
4Alex Flint
Yeah right, we are definitely hard-wired to predict the future based on the past, and in general the phenomenon of predicting the future based on the past is a phenomenon of the mind, not of the world. But it sure would be nice to know whether that aspect of our minds is helping us to see things clearly or not. For me personally, I found it very difficult to get to work with full conviction without spending some real time investigating this.

Another way to say this is that we are born hard-wired to do all kinds of things, and we can look at our various hard-wirings and reflect on whether they are helping us to see things clearly, and decide what to do about them.

Now you might say that neural networks predict the future based on the past in a way that is a level more ingrained than any one particular heuristic or bias. But to me that just makes it all the more pressing to investigate whether this deep aspect of our brains is helping or hurting our capacity to see things clearly. I just found that I could put this question aside for only so long.

Right, and if doing computer-generated sudokus is a kata for developing the heuristics for doing sudokus, then perhaps solving computer-generated logic problems could be a kata for developing the heuristics for rationality.

1Jay
I think we need to distinguish between some related things here:

  • Rote learning is the stuff of katas, multiplication tables, etc.  It's not rationality in itself, but reason works best if you have a lot of reliable premises.
  • Developing heuristics is the stuff of everyday education.  Most people get years of this stuff, and it's what makes most people as rational as they are.
  • Crystallized intelligence is the ability to reason by applying heuristics.  Most people aren't very good at it, which is the main limitation on education.  AFAIK, we don't know how to give people more.
  • Fluid intelligence is the ability to reason creatively without heuristics.  It's the closest to what I mean by "rationality", but also the hardest to train.
  • Executive function includes some basic cognitive processes that govern people's behavior.  Unfortunately, it is almost entirely (86-92%) genetic.
4ChristianKl
The problem is that to be good at rationality you need to be good at interacting with the real world with all its uncertainty.

I do sudokus.  These are computer-generated, and of consistent difficulty, so I can't solve them from memory.  Perhaps something similar could be done for math or logic problems, or story problems where cognitive biases work against the solutions.

1Jay
Perhaps, but it would surprise me if you don't have hundreds of common sudoku patterns in your memory.  Not entire puzzles, but heuristics for solving limited parts of the puzzle.  That's how humans learn.  We do pattern recognition whenever possible and fall back on reason when we're stumped.  "Learning" substantially consists of developing the heuristics that allow you to perform without reason (which is slow and error-prone).

Is gradient hacking a useful metaphor for human psychology?  For example, peer pressure is a real thing.  If I choose to spend time with certain people because I expect them to reinforce my behavior in certain ways, is that gradient hacking?

I have taken a few MOOCs and I agree with your assessment.

MOOCs are what they are.  I see them as starting points, as building blocks.  In the end, I'd rather take a free, dumbed-down intro MOOC from Andrew Ng at Stanford, than pay for an in-person, dumbed-down intro class from some clown at my local community college.  At least there's no sunk cost, so it's easy to walk away if I lose interest.

An Einstein runs on pretty much the same hardware as the rest of us.  If genetic engineering can get us to a planet full of Einsteins without running into hardware limitations, that may not qualify as an "intelligence explosion", but it's still a singularity in that we can't extrapolate to the future on the other side.

Another thought... genetic engineering may be what will make us smart enough to build a safe AGI.

OK, good points.  There is a spectrum here... if you live in a place where there's a civil war every few years, then prepping for civil war makes a lot of sense.  If you live in a place where the last civil war was 150 years ago, not so much.

CHAZ took place in a context where the most likely outcome was the failure of CHAZ, not the collapse of the larger society.  CHAZ failed to prep for the obvious, if not the almost inevitable.

1Alex Hollow
I'd be careful with thinking of prepping as a binary "do/don't prep" distinction. If you live somewhere where a civil war happens every 2-3 years, the expected value of something that only has value in a civil war scenario is much higher than if one happens every 150 years or so. However, that doesn't mean you should "prep" in one case and not the other, just that some actions that would be worth it if civil wars were frequent are not worth it if civil wars are infrequent. Water may be useful in both, but training your friends in wilderness survival or whatever, maybe less so.

For things like hurricanes, one can look at the historical record, make a reasonable estimate, and do a prudent amount of prepping.  For a societal collapse, there's no data, so the estimate is based on a narrative.  The narrative may be socially constructed, for example, a religious narrative about the End Times.  Or it may be that prepping has become a hobby, and preppers talk to each other about their preps, and the guy that has 6 months of water and stored food gets more respect than the guy who has a week's supply of water under his bed... (read more)

1bfinn
Societies have collapsed before. Plenty of data on e.g. civil wars presumably. So one could make a useful ballpark estimate of the annual risk of this. Which I suspect is surprisingly high even for rich countries, if we factor in covid as a near-miss. And things like the BLM protests/riots could also lead to local civil breakdown with resulting shortages. Oh, come to think of it, it did - CHAZ. In which IIRC the protesters ran short of food and had to request outside supplies.

How do you distinguish between your having a good day, and your opponent having a bad day?

5gjm
One option is to do tactics puzzles instead of playing actual games. I don't know which is likely to correlate better with other aspects of mental performance.
3NunoSempere
This is easier to do by playing twenty 1-minute games.

If you read a Wikipedia article and think it's very problematic, take five minutes and write about why it's problematic on the talk page of the article. 

FYI, I did exactly that a couple of weeks ago, and nothing happened (yet, at least).  No politically charged issues, just a simple conflation of two place names with similar spelling.  I thought about splitting the one page into two and figuring out what other pages should link to them... and decided that there was probably someone much more qualified than I was, who would actually enjoy cle... (read more)

I was thinking of #1.  #2 applies both to genetic selection and cultural selection.

ADA is definitely a contender, but my concern is that they may be too slow.  I'd rather own a few coins, and rebalance as things develop.

(I own some ADA, and added more on the recent dip, but I have more ETH than ADA.)

A modest suggestion: first, learn how to shoot.  Something simple, like a .22 target pistol.  Find someone who knows what they're doing and ask them to teach you.  Learn how to load it, how to stand, how to hold it, how to aim, how to pull the trigger.  Feel the recoil.  Practice at a target range.  None of this is particularly complicated, but "gun" will no longer be an abstraction, it will be something tied to body memory.

Now, think about whether you want to own a gun.

Thank you for writing this up!  This is also something I want to learn about.  FYI, there is a book coming out in a couple of months:

https://www.amazon.com/dp/1800563191

2adamShimi
You're welcome! Thanks for the link. After a quick look, it seems like a good complementary resource for what I did in week 3 and 4.

Even cultural heritage may be seen as especially effective compression heuristics that are being passed down through generations.

"Especially effective" does not imply "beneficial to you as an individual".  

1pchvykov
That's an interesting question - I was assuming that there is a sort of "natural selection" process that acts over generations, and picks out the "best" algorithms. This way, I can understand your comment in two ways:

  1. the selection pressures may not be directed at individual benefit, but rather at group survival or optimal transmission (rules that are easier to remember are easier to pass down)
  2. the selection that led to our algorithms may be outdated in our modern world

Am I getting it, or did you have something else in mind?

I like it.  By all means, as long as we're thinking about thinking, let's think about how we label ourselves.

When I solve a sudoku, I typically make quick, incremental progress, then I get "stuck" for a while, then there is an insight, then I make quick, incremental progress until I finish.  Not that there is anything profound about sudokus, but something like this might provide a controlled environment for studying insights.  http://websudoku.com/ provides an endless supply of classic sudokus in 4 levels of difficulty.  My experience is that the "Evil" level is consistently difficult.  I have noticed that my being tired or distracted is enoug... (read more)

Instead of an either/or decision based on first principles, you might frame this as a "when" decision based on evidence.  We've had about 4 months of real-world experience with the mRNA vaccines... if you wait another 4 months, that's double the track record, and it's always possible that new options will open up (say, a more traditional vaccine that's more effective than J&J).

[anonymous]120

The converse of that is that 225 million doses have been given and the serious negative effect rate is extremely low.  It's improbable that merely another doubling of time and doses will reveal any new information.  

If there is some new way this method causes the human body to fail it won't be found for years.  

Conversely, there's still the risk of Covid, and isolation has holes.  The biggest one being you might get sick and have to seek medical treatment, and hospital-acquired infections are estimated to happen 1.7 million times a year.... (read more)

I would like to know which other ethical thought experiments have this pattern...

Isn't the answer just "all of them"?  The contrapositive of an implication is logically equivalent to it.

If (if X then Y) then (if ~Y then ~X).  Any intuitive dissonance between X and Y is preserved by negating them into ~X and ~Y.
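
In symbols (standard propositional logic, spelled out for clarity):

$$(X \rightarrow Y) \;\equiv\; (\lnot Y \rightarrow \lnot X)$$

Both sides are false in exactly one case - X true and Y false - so replacing a thought experiment's implication with its contrapositive changes the packaging, not the truth conditions.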

2Mati_Roy
Yeah that makes sense

Excellent introduction!  My own experience with DeFi is a few months in the Yearn USDT vault.  (It seemed like a low-risk way to learn the mechanics.)  The quoted APYs vary quite a bit from week to week.  If I calculate the APY myself over the whole time, it's about 9% annualized.  That's not bad for a stablecoin, but after gas fees for entry and exit, it's hardly worth the bother for the amount I was willing to experiment with.
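
For concreteness, this is the kind of arithmetic I mean - a minimal sketch with made-up deposit figures, not my actual numbers:

```python
# Annualize a holding-period return, the way one might check a vault's
# realized APY. All figures here are hypothetical.
start_value = 10_000.00   # stablecoins deposited
end_value = 10_290.00     # value at withdrawal, before gas fees
days_held = 120

apy = (end_value / start_value) ** (365 / days_held) - 1
print(f"Realized APY: {apy:.1%}")  # ~9.1% for these made-up numbers
```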

I find that I like strategies with a lot of transactions, like dollar-cost averaging or asset allocation ... (read more)

Answer by Olomana-20

Some cryptocurrencies, notably Bitcoin, are designed to be deflationary.

Bitcoin is not deflationary.  It is slightly inflationary, much less inflationary than fiat currencies, but it is not deflationary.
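
A back-of-the-envelope check on "slightly inflationary", assuming the 2020-2024 block subsidy of 6.25 BTC, ten-minute blocks, and a circulating supply of roughly 18.7 million coins (circa-2021 assumptions, not live figures):

```python
# Estimate Bitcoin's annual supply growth from the block subsidy.
subsidy_btc = 6.25                  # per block, 2020-2024 halving era
blocks_per_year = 6 * 24 * 365      # ~52,560 blocks at one per 10 minutes
supply_btc = 18_700_000             # approximate circulating supply, 2021

new_btc_per_year = subsidy_btc * blocks_per_year
print(f"~{new_btc_per_year / supply_btc:.2%} annual supply growth")  # ~1.76%
```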

4Dawn Drescher
Fascinating, thanks! I found this article. Is that roughly what you’re referring to? It sounds like the author would agree that it is deflationary so long as the user base grows faster than the supply. In that case, my scenario above should self-correct eventually, unless a more deflationary coin catches on.

"a feeling of supreme insight without any associated insight"... I call this a "content-free Aha moment".

Regarding math education, you might look into the Moore Method of teaching topology.

Changing the rules tends to neutralize acquired knowledge.  A strong club player is strong in part because he has an opening repertoire, a good knowledge of endgames, a positional sense in the middlegame, and recognizes tactical themes from experience.  Beginners tend to be weak players precisely because they lack those things, because they haven't  yet made the investment in time and effort to acquire them.

Changing the rules appeals to weaker players because it levels the playing field.

Of course, by saying this, I'm signaling that I'm a chess snob, that I have substantial acquired knowledge, and that I'm strong enough to play "real chess".

2MikkW
I mean, Capablanca was World Champion. Same thing with Bobby Fischer and Fischer random chess (Chess960).

I enjoyed this and tagged it as Humor.

1willbobaggins
Great catch - thank you!

This means that if I see substantially more advertising for Brand X than for superficially-similar Brand Q, I can reasonably assume that Brand X is likely to have a better product than Brand Q.

I have the opposite reaction.  Example: two products sell for the same price, Brand X spends 50% on manufacturing the product and 50% on advertising, Brand Q spends 80% on the product and 20% on advertising.  If I buy Brand Q, I am getting more product and less advertising.

Another example: Diet Coke is twice as expensive as Sam's Diet Cola (Walmart's house ... (read more)

Gee, if I do the training twice, can I get 20 - 40 points?

IQs are defined on a normal curve, and a standard deviation is 15 or 16 points, about the midpoint of the promised 10 to 20 point gain. A 1-sigma gain (for any reason) becomes statistically less and less plausible as one moves to the right of the curve. Based on the education levels in the user survey, Less Wrong readers are already a lot smarter than average. So, for us, probably not. For Joe Average, maybe so.
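
To put numbers on the right-tail claim - a minimal sketch assuming the usual mean-100, SD-15 curve (scipy assumed available):

```python
# Fraction of the population at or above each score on a normal curve.
from scipy.stats import norm

iq = norm(loc=100, scale=15)
for score in (100, 115, 130, 145):
    print(f"IQ >= {score}: {iq.sf(score):.2%}")

# Prints roughly 50.00%, 15.87%, 2.28%, 0.13%: each further 1-sigma step
# thins the population far more than the step before it.
```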

3Viliam
Careful, kids, this is how you get the intelligence explosion! Especially if the 20 extra points allow you to complete the later trainings faster...

I like berries on my oatmeal, and have tried various kinds. Blueberries freeze well, and a thawed frozen blueberry is a reasonable approximation of a fresh blueberry. There is the same resistance, pop and release of tartness and flavor. Raspberries turn to mush when they thaw. Strawberries are somewhere in between.

2jefftk
Yup! Luckily if I'm substituting them for jam mush is good.

Can you give me a way to copy your templates into my own Airtable workspace?

1Harri Besceli
Sure, messaged you with a link

Ergonomics! Raise your seat to get full or almost full leg extension. Raise your handlebars if needed. Experiment. I find that slight adjustments to the bike make a big difference in where I get sore.

Also, look into interval training / HIIT, although this is more about maximizing output over time (cardio) than minimizing pain.

1rmoehn
Plus if the saddle is higher, you sit more hinged, which decreases air resistance and thus makes riding easier. If that makes your rear hurt, get a different saddle, such as https://sqlab-usa.com/collections/saddles/products/602-m-d-active-saddle. Here's a guide for setting up a bicycle correctly: https://bike.bikegremlin.com/360/setting-up-riding-position-bike-fitting/ They do it the way I do it, and I've never had knee problems from riding a bike. (Just thought: Another reason for knee pain might be riding with knees collapsed inwards.)

I suggest making a distinction between non-programmable and programmable systems. We have non-searchable systems, like physical notebooks, and we have searchable systems, like wikis. Going from searchable systems to programmable systems is a similar quantum leap.

One might say that programmability goes beyond the bounds of notetaking, but if our larger domain (exobrain) includes both notetaking and programmability, do we want to mix them or keep them separate?

As a simple example, I can have Google Calendar email me every Thursday morning (programmability)... (read more)
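
A minimal sketch of the same idea in code, assuming notes are plain markdown files with inline #tags (the notes/ directory and #meta tag are made-up examples):

```python
# "Notes as programmable data": list every note carrying a given inline tag.
from pathlib import Path

def notes_with_tag(directory, tag):
    for path in Path(directory).glob("*.md"):
        if tag in path.read_text(encoding="utf-8"):
            yield path.name

for name in notes_with_tag("notes", "#meta"):
    print(name)
```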

2quanticle
What do you mean by "programmable"? I keep my notes as a directory of markdown files in a git repo. I can manipulate these files with all the standard Unix command line tools that are specialized for manipulating text. In your mind, does that meet your threshold for programmability, or are you looking for something else?

The first group is remote-workers. These people are generally able to maintain their economic output while maintaining heavy social isolation.

Not necessarily. An example would be software development. If a business is facing declining revenue, suddenly that rush software project can be delayed or stretched out a few months, leaving the remote programmers with fewer paid hours.

Any belief that is the opposite of a social construct that most people around me have internalized. I'd give an example if I could post anonymously.

3Stuart Anderson
-

You might look into Topic Modeling, or Topological Data Analysis. The basic idea is to build a database of entries and lists of words they contain, then run the data through a machine learning algorithm that groups the entries into "topics", and generate a page for each topic listing the entries that belong to it. Then you can add a toolbar to the bottom of each entry with links to all the topics that entry belongs to.

The algorithms have been reduced to black boxes, and there are tutorials for the black boxes. The difficult part... (read more)
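
As a minimal sketch of that pipeline, assuming scikit-learn's LDA as the black box (the entry texts, the 0.3 threshold, and the variable names are all made up):

```python
# Group entries into topics, then build per-entry topic lists for a "toolbar".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

entries = [
    "notes on bayesian updating and priors",
    "priors, likelihoods, and posterior beliefs",
    "training plan for a marathon season",
    "weekly running mileage and recovery",
]

counts = CountVectorizer().fit_transform(entries)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)  # rows: entries, columns: topic weights

for i, row in enumerate(weights):
    topics = [t for t, w in enumerate(row) if w > 0.3]
    print(f"entry {i}: topics {topics}")
```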
