Thank you for sharing this. FYI, when I run it, it hangs on "Preparing explanation...". I have an OpenAI account, where I use the gpt-3.5-turbo model on the per-1K-tokens plan. I copied a sentence from your text and your prompt from the source code, and got an explanation quickly, using the same API key. I don't actually have the ChatGPT Plus subscription, so maybe that's the problem.
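For reference, my sanity check looked roughly like this (a minimal sketch using the pre-1.0 openai Python package; the prompt strings below are placeholders, not your actual prompt):

```python
import openai

openai.api_key = "sk-..."  # the same API key the extension is configured with

# Placeholder messages; I actually pasted your prompt from the source code
# and a sentence copied from the article.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Explain the following sentence in simple terms."},
        {"role": "user", "content": "<sentence copied from the article>"},
    ],
)
print(response.choices[0].message.content)
```

This came back quickly, so the key and model work; the hang seems to be somewhere in the app.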
ChatGPT has changed the way I read content, as well. I have a browser extension that downloads an article into a Markdown file. I open the ...
I agree with the separation, but offer a different reason. Exploratory writing can be uncensored; public writing invites consideration of the audience's reaction.
As an analogy, sometimes I see something on the internet that is just so hilarious... my immediate impulse is to share it, but then I realize that there is no upside to sharing, because I pretend to be the type of person who wouldn't even think that was funny. Similarly, on more philosophical subjects, sometimes I will have an insight that is better kept private.
You see what I did there? If I were writing this in my journal, I'd include a concrete example. However, this is a public comment, and it's smarter not to.
I copied our discussion into my PKM, and I'm wondering how to tag it... it's certainly meta, but we're discussing multiple levels of abstraction. We're not at level N discussing level N-1, we're looking at the hierarchy of levels from outside the hierarchy. Outside, not necessarily above. This reinforces my notion that structure should emerge from content, as opposed to trying to fit new content into a pre-existing structure.
I was looking for real-life examples with clear, useful distinctions between levels.
The distinction between "books about books" and "books about books about books" seems less useful to me. However, if you want infinite levels of books, go for it. Again, I see this as a practical question rather than a theoretical one. What is useful to me may not be useful to you.
I don't see this as a theoretical question that has a definite answer, one way or the other. I see it as a practical question, like how many levels of abstraction are useful in a particular situation. I'm inclined to keep my options open, and the idea of a theoretical infinite regress doesn't bother me.
I did come up with a simple example where 3 levels of abstraction are useful:
We're using language to have a discussion. The fact that the Less Wrong data center stores our words in a way unlike how our human brains store them doesn't prevent us from thinking together.
Similarly, using a PKM is like having an extended discussion with myself. The discussion is what matters, not the implementation details.
I view my PKM as an extension of my brain. I transfer thoughts to the PKM, or use the PKM to bring thoughts back into working memory. You can make the distinction if you like, but I find it more useful to focus on the similarities.
As for meta-meta-thoughts, I'm content to let those emerge... or not. It could be that my unaided brain can only manage thoughts and meta-thoughts, but with a PKM boosted by AI, we could go up another level of abstraction.
I don't see your distinction between thoughts and notes. To me, a note is a thought that has been written down, or captured in the PKM.
No, I don't have an example of thinking meta-meta-rationally, and if I did, you'd just ask for an example of thinking meta-meta-meta-rationally. I do think that if I got to a place where I needed another level of abstraction, I'd "know it when I see it", and act accordingly, perhaps inventing new words to help manage what I was doing.
I am a fan of PKM systems (Personal Knowledge Management). Here the unit at the bottom level is the "note". I find that once I have enough notes, I start to see patterns, which I capture in notes about notes. I tag these notes as "meta". Now my meta notes are accumulating, and I'm starting to see patterns among them... I'm not quite there yet, but I'm thinking about making a few "meta meta notes".
Whether we're talking about notes, memes or rationality, I think the usefulness of higher levels of abstraction is an emergent property. Standing at ...
The problem statement says "arbitrary real numbers", so the domain of your function P is -infinity to +infinity. P represents a probability distribution, so the area under the curve is equal to 1. P is strictly increasing, so... I'm having trouble visualizing a function that meets all these conditions.
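Spelling out where I get stuck, on the assumption that P is meant to be a density (non-negative, with total area 1): since the area is positive, P(a) = c > 0 for some a, and strict monotonicity gives P(x) > c for all x > a, so

$$\int_{-\infty}^{\infty} P(x)\,dx \;\ge\; \int_{a}^{\infty} P(x)\,dx \;\ge\; \int_{a}^{\infty} c\,dx \;=\; \infty,$$

which contradicts the area being 1. (If P is instead a CDF, strictly increasing examples are easy, e.g. the logistic function $1/(1+e^{-x})$, but then the area-under-the-curve condition doesn't apply.)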
You say "any" such function... perhaps you could give just one example.
Regarding the first enigma, the expectation that what has worked in the past will work in the future is not a feature of the world, it's a feature of our brains. That's just how neural networks work: they predict the future based on past data.
Regarding the third enigma, ethical principles are not features of the world, they are parameters of our neural networks, however those parameters have been acquired.
Regarding the second enigma, I am less confident, but I think something similar is going on. Here my metaphor is not the ML branch of AI, but...
I have taken a few MOOCs and I agree with your assessment.
MOOCs are what they are. I see them as starting points, as building blocks. In the end, I'd rather take a free, dumbed-down intro MOOC from Andrew Ng at Stanford than pay for an in-person, dumbed-down intro class from some clown at my local community college. At least there's no sunk cost, so it's easy to walk away if I lose interest.
An Einstein runs on pretty much the same hardware as the rest of us. If genetic engineering can get us to a planet full of Einsteins without running into hardware limitations, that may not qualify as an "intelligence explosion", but it's still a singularity in that we can't extrapolate to the future on the other side.
Another thought... genetic engineering may be what will make us smart enough to build a safe AGI.
OK, good points. There is a spectrum here... if you live in a place where there's a civil war every few years, then prepping for civil war makes a lot of sense. If you live in a place where the last civil war was 150 years ago, not so much.
CHAZ took place in a context where the most likely outcome was the failure of CHAZ, not the collapse of the larger society. CHAZ failed to prep for the obvious, if not the almost inevitable.
For things like hurricanes, one can look at the historical record, make a reasonable estimate, and do a prudent amount of prepping. For a societal collapse, there's no data, so the estimate is based on a narrative. The narrative may be socially constructed, for example, a religious narrative about the End Times. Or it may be that prepping has become a hobby, and preppers talk to each other about their preps, and the guy who has 6 months of water and stored food gets more respect than the guy who has a week's supply of water under his bed...
If you read a Wikipedia article and think it's very problematic, take five minutes and write about why it's problematic on the talk page of the article.
FYI, I did exactly that a couple of weeks ago, and nothing happened (yet, at least). No politically charged issues, just a simple conflation of two place names with similar spelling. I thought about splitting the one page into two and figuring out what other pages should link to them... and decided that there was probably someone much more qualified than I was, who would actually enjoy cle...
A modest suggestion: first, learn how to shoot. Something simple, like a .22 target pistol. Find someone who knows what they're doing and ask them to teach you. Learn how to load it, how to stand, how to hold it, how to aim, how to pull the trigger. Feel the recoil. Practice at a target range. None of this is particularly complicated, but "gun" will no longer be an abstraction, it will be something tied to body memory.
Now, think about whether you want to own a gun.
Thank you for writing this up! This is also something I want to learn about. FYI, there is a book coming out in a couple of months:
When I solve a sudoku, I typically make quick, incremental progress, then I get "stuck" for a while, then there is an insight, then I make quick, incremental progress until I finish. Not that there is anything profound about sudokus, but something like this might provide a controlled environment for studying insights. http://websudoku.com/ provides an endless supply of classic sudokus in 4 levels of difficulty. My experience is that the "Evil" level is consistently difficult. I have noticed that my being tired or distracted is enoug...
Instead of an either/or decision based on first principles, you might frame this as a "when" decision based on evidence. We've had about 4 months of real-world experience with the mRNA vaccines... if you wait another 4 months, that's double the track record, and it's always possible that new options will open up (say, a more traditional vaccine that's more effective than J&J).
The flip side of that is that 225 million doses have been given and the rate of serious negative effects is extremely low. It's improbable that merely another doubling of time and doses will reveal any new information.
If there is some new way this method causes the human body to fail, it won't be found for years.
Conversely, there's still the risk of Covid, and isolation has holes. The biggest one is that you might get sick and have to seek medical treatment, and hospital-acquired infections are estimated to occur 1.7 million times a year....
Excellent introduction! My own experience with DeFi is a few months in the Yearn USDT vault. (It seemed like a low-risk way to learn the mechanics.) The quoted APYs vary quite a bit from week to week. If I calculate the APY myself over the whole time, it's about 9% annualized. That's not bad for a stablecoin, but after gas fees for entry and exit, it's hardly worth the bother for the amount I was willing to experiment with.
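For anyone who wants to redo the arithmetic, the calculation is essentially this (the figures below are made up for illustration, not my actual position):

```python
# Hypothetical position, for illustration only.
start_value = 1000.00   # USDT value at deposit
end_value   = 1030.00   # USDT value at withdrawal
days_held   = 120

# Annualize the realized return over the holding period.
realized_apy = (end_value / start_value) ** (365 / days_held) - 1
print(f"realized APY: {realized_apy:.1%}")   # ~9.4% for these numbers

# Entry and exit gas fees make a big dent on a small position.
gas_cost = 40.00   # total gas in USD, hypothetical
net_apy = ((end_value - gas_cost) / start_value) ** (365 / days_held) - 1
print(f"net of gas:   {net_apy:.1%}")
```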
I find that I like strategies with a lot of transactions, like dollar-cost averaging or asset allocation ...
Changing the rules tends to neutralize acquired knowledge. A strong club player is strong in part because he has an opening repertoire, a good knowledge of endgames, a positional sense in the middlegame, and recognizes tactical themes from experience. Beginners tend to be weak players precisely because they lack those things, because they haven't yet made the investment in time and effort to acquire them.
Changing the rules appeals to weaker players because it levels the playing field.
Of course, by saying this, I'm signaling that I'm a chess snob, that I have substantial acquired knowledge, and that I'm strong enough to play "real chess".
This means that if I see substantially more advertising for Brand X than for superficially-similar Brand Q, I can reasonably assume that Brand X is likely to have a better product than Brand Q.
I have the opposite reaction. Example: two products sell for the same price, Brand X spends 50% on manufacturing the product and 50% on advertising, Brand Q spends 80% on the product and 20% on advertising. If I buy Brand Q, I am getting more product and less advertising.
Another example: Diet Coke is twice as expensive as Sam's Diet Cola (Walmart's house ...
Gee, if I do the training twice, can I get 20-40 points?
IQs are defined on a normal curve, and a standard deviation is 15 or 16 points, about the midpoint of the promised 10 to 20 point gain. A 1-sigma gain (for any reason) becomes statistically less and less plausible as one moves to the right of the curve. Based on the education levels in the user survey, Less Wrong readers are already a lot smarter than average. So, for us, probably not. For Joe Average, maybe so.
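To put rough numbers on "less and less plausible" (assuming the usual convention of mean 100 and standard deviation 15):

```python
from scipy.stats import norm

# Fraction of the population at or above each threshold,
# assuming IQ ~ Normal(mean=100, sd=15).
for iq in (100, 115, 130, 145):
    print(f"IQ >= {iq}: {norm.sf(iq, loc=100, scale=15):.3%}")

# IQ >= 100: 50.000%
# IQ >= 115: 15.866%
# IQ >= 130: 2.275%
# IQ >= 145: 0.135%
```

Each additional 15-point step corresponds to a much thinner slice of the population, which is why a blanket 10-20 point promise gets less believable the further right you start.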
I like berries on my oatmeal, and have tried various kinds. Blueberries freeze well, and a thawed frozen blueberry is a reasonable approximation of a fresh blueberry. There is the same resistance, pop and release of tartness and flavor. Raspberries turn to mush when they thaw. Strawberries are somewhere in between.
Ergonomics! Raise your seat to get full or almost full leg extension. Raise your handlebars if needed. Experiment. I find that slight adjustments to the bike make a big difference in where I get sore.
Also, look into interval training / HIIT, although this is more about maximizing output over time (cardio) than minimizing pain.
I suggest making a distinction between non-programmable and programmable systems. We have non-searchable systems, like physical notebooks, and we have searchable systems, like wikis; going from non-searchable to searchable was one quantum leap, and going from searchable to programmable is a similar one.
One might say that programmability goes beyond the bounds of notetaking, but if our larger domain (exobrain) includes both notetaking and programmability, do we want to mix them or keep them separate?
As a simple example, I can have Google Calendar email me every Thursday morning (programmability)...
The first group is remote-workers. These people are generally able to maintain their economic output while maintaining heavy social isolation.
Not necessarily. An example would be software development. If a business is facing declining revenue, suddenly that rush software project can be delayed or stretched out a few months, leaving the remote programmers with fewer paid hours.
You might look into Topic Modeling, or Topological Data Analysis. The basic idea is to build a database of entries and the lists of words they contain, run the data through a machine learning algorithm that groups the entries into "topics", and generate a page for each topic listing the entries that belong to it. Then you can add a toolbar to the bottom of each entry with links to all the topics that entry belongs to.
The algorithms have been reduced to black boxes, and there are tutorials for the black boxes. The difficult part...
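For the curious, a minimal sketch of that pipeline using scikit-learn's LDA implementation (the entries below are placeholders; in practice the corpus and the number of topics need real tuning):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder entries; in practice, load the text of your journal entries.
entries = [
    "notes on topic modeling and clustering algorithms",
    "machine learning tutorial on latent dirichlet allocation",
    "trained a small model on text data from my journal",
    "recipe for sourdough bread and fermentation times",
    "sourdough starter feeding schedule and flour ratios",
    "bread baking temperatures and crust troubleshooting",
]

# Entries -> word-count matrix -> per-entry topic weights.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(entries)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
weights = lda.fit_transform(counts)   # rows: entries, cols: topic weights

# Group entries under their strongest topic -- one "page" per topic.
pages = {}
for entry, w in zip(entries, weights):
    pages.setdefault(int(w.argmax()), []).append(entry)
for topic, members in sorted(pages.items()):
    print(f"Topic {topic}:")
    for m in members:
        print(f"  {m}")
```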
Are you doing this from within Obsidian with one of the AI plugins? Or are you doing this with the ChatGPT browser interface and copy/pasting the final product over to Obsidian?