All of AnthonyC's Comments + Replies

I will definitely be checking out those books, thanks, and your response clarified the intent a lot for me.

As for where new metaphors/mechanisms come from, and whether they're ever created out of nothing, I think that is very, very rare, probably even rarer than it seems. I have half-joked with many people that at some level there are only a few fundamental thoughts humans are capable of having, and the rest is composition (yes, this is metaphorically coming from the idea of computers with small instruction sets). But more seriously, I think it's mostl... (read more)

6adamShimi
Oh, that's a great response! I definitely agree with you that there is something like a set of primitives or instructions (as you said, another metaphor) that are used everywhere by humans. We're not made to do advanced maths, create life-like 2D animation, or cure diseases. So we're clearly retargeting processes that were meant for much more prosaic tasks. The point reminds me of this great quote from Physics Avoidance, a book I'm taking a lot of inspiration from for my model of methodology: (p.32) This is clearly the part of my model of methodology/epistemology that is the weakest. I feel there is something there, and that somehow the mix of computational-constraints thinking from Theoretical CS and language-design thinking from Programming Language Theory might make sense of it, but it's the more mechanistic and hidden part of methodology, and I don't feel I have enough phenomenological regularities to go in that direction. Digging more into the Faraday question, this raises another subtlety: how do you differentiate the sort of "direct" reuse/adaptation of a cognitive primitive to a new task from the analogy/metaphor to a previous use in the culture? Your hypotheses focus more on the latter, considering where Faraday could have seen or heard geometric notions in contexts that would have inspired him for his lines of forces. My intuition is that this might instead be a case of the former, because Faraday was particularly graphic in his note-taking and scientific practice, and so it is quite natural for him to convergently rediscover graphic/visual means of explanation. Exploratory Experiments, my favoured treatment of Faraday's work on Electromagnetism (though focused on electromagnetic induction rather than the lines of forces themselves), emphasizes this point. (p.235,241) (As a side note, Faraday's work in Electromagnetism is probably one of the most intensely studied episodes in the history of science. First because of its key importance for the development of ele

Agreed on all counts. I really, genuinely do hope to see your attempt at such a benchmark succeed, and believe that such is possible.

(1) I agree, but don't have confidence that this alternate approach results in faster progress. I hope I'm proven wrong.

(4) Also agreed, but I think this hinges on whether the failing plans are attempted in such a way that they close off other plans, either by affecting planning efforts or by affecting reactions to various efforts.

(5) Fair enough. 

Liron: Carl Feynman. What is your P(Doom)?

Carl: 43%.

Comments like this always remind me of the Tetlock result that forecasters who report probability estimates using more-precise, less-round numbers do in fact outperform others, and are more correctly incorporating the sources of information available.

I'm curious if you have an opinion on the relative contributions of different causes, such as:

  1. Inability of individuals to think outside established metaphors, without realizing they're inadequate
  2. Inability of individuals to think outside established metaphors, even while knowing they're inadequate
  3. Inability of individuals to think of better new metaphors
  4. Inability to have public conversations through low-bandwidth channels without relying on established metaphors, whether or not the individuals on either end know they're inadequate

I'm thinking (as an example... (read more)

2adamShimi
I'm unsure if that's what you meant, but your comment has made me realize that I didn't neatly separate the emergence of a new mechanism (pseudo or not) from the perpetuation of an existing one. The whole post weaves back and forth between the two. For the emergence of a new mechanism, this raises a really interesting question: where does it come from? The examples I mentioned, and more that come to mind, clearly point to a focus on some data, some phenomenological compression, as a starting point (Galileo's, Kepler's, and others' observations and laws for Newton, say). But then it also feels like the metaphor being used is never (at least I can't conjure up an instance) completely created out of nothing. People pull it out of existing technology (maybe clockwork for Newton? definitely some example in the quote from The Idea of the Brain at the beginning of the post), out of existing science (say Bourdieu's use in sociology of the concept of field from Physics), out of stories (how historical linguistics and Indo-European linguistics were bootstrapped with an analogy to Babel), out of elements of their daily life and culture (as an example, one of my friends has a strong economics background, and so they always tend towards economic explanations; I have a strong theoretical computer science background, and so I always tend towards computational explanations...) On the other hand, I know of at least one example where the intensity of the pattern gave life to a whole new concept, or at least something that was barely tied to existing scientific or technological knowledge at the time: Faraday's discovery of lines of forces, which prefigures the concept of field in physics. To go deeper into this (which I haven't done), I would maybe look at the following books:
* The work of Nancy Nersessian in general
* Forces and Fields by Mary B. Hesse
* A lot of intellectual histories, especially of concepts that have proven successful.

It's kind of fun to picture AI agents working during the day and resting at night. Maybe that's the true AGI moment.

In context, this will depend on the relative costs of GPUs and energy storage, or the relative value of AI vs other uses of electricity that can be time-shifted. I would happily run my dryer or dishwasher during the daytime instead of at night in order to get paid to let OpenAI deliver a few million extra tokens. Liberalizing current electricity market participation and the ability to provide ancillary services has a lot of unrealized potenti... (read more)

This would be great to have, for sure, and I wish you luck in working on it!

I wonder if, for the specific types of discussions you point to in the first paragraph, it's necessary or even likely to help? Even if all the benchmarks today are 'bad' as described, they measure something, and there's a clear pattern of rapid saturation as new benchmarks are created. METR and many others have discussed this a lot. There have been papers on it. It seems like the meta-level approach of mapping out saturation timelines should be sufficient to convince people that fo... (read more)

1Chapin Lenthall-Cleary
Just from seeing narrow benchmarks saturate, one could argue that what's happening is LLMs are picking up whatever narrow capabilities are in-focus enough to train into them. (I emphatically do not think this is what's happening in 2025, but narrow benchmark scores alone aren't enough to show that.) A well-designed intelligence benchmark, by contrast, would be impossible to score well into the human range without having an ability to do novel (and thereby general) problem-solving, and unsaturateable without the ability to do so at above-genius level. As for the question of whether it'd persuade people with their heads stuck in the sand, "x model is smarter than some-high-percent of people" is a lot harder to ignore than "x model scored some-high-numbers on a bunch of coding, knowledge, etc. benchmarks". Putting aside how it's more useful, giving model scores relative to people (or, in some situations, subject matter experts) is also more confronting. That said, I don't doubt that there are many people who wouldn't be persuaded by even that.

Upvoted - I do think lack of a coherent, actionable strategy that actually achieves goals if successful is a general problem of many advocacy movements, not just AI. A few observations:

(1) Actually-successful historical advocacy movements that solved major problems usually did so incrementally over many iterations, taking the wins they could get at each moment while putting themselves in position to take advantage when further opportunities arose.

(2) Relatedly, don't complain about incremental improvements (yours or others'). Celebrate them, or no one will... (read more)

2Alvin Ånestrand
Thank you for sharing your thoughts! My responses: (1) I believe most historical advocacy movements have required more time than we might have for AI safety. More comprehensive plans might speed things up. It might be valuable to examine what methods have worked for fast success in the past. (2) Absolutely. (3) Yeah, raising awareness seems like it might be a key part of most good plans. (4) All paths leading to victory would be great, but I think even plans that would most likely fail are still valuable. They illuminate options and tie ultimate goals to concrete action. I find it very unlikely that failing plans are worse than no plans. Perhaps high standards for comprehensive plans might have contributed to the current shortage of plans. “Plans are worthless, but planning is everything.” Naturally I will aim for all-paths-lead-to-victory plans, but I won't be shy in putting ideas out there that don't live up to that standard. (5) I don't currently have much influence, so the risk would be sacrificing inclusion in future conversations. I think it's worth the risk. I would consider it a huge success if the ideas were filtered through other orgs, even if they just help make incremental progress. In general, I think the AI safety community might benefit from having comprehensive plans to discuss and critique and iterate on over time. It would be great if I could inspire more people to try.

I can't tell which answer to this question is meant to be 'for' or 'against' the OP's point, but it sounds like the latter. Even if it's the case that the neurons contain something useful nutritionally (and I'd be surprised, but not too surprised, if they did), consider that these shellfish have neurons, and unlike with other meats, we actually eat the neurons instead of them being part of organs we remove before eating. Also, that we have very good reason to avoid eating the neural tissues of mammals.

1Hruss
True, but I would also think that there are nutritional differences in the other parts of the body, as brains significantly change how the organism functions, in its eating behaviors and energy consumption.

Is the question how many calf-equivalents map to current dairy consumption, or to the counterfactual dairy consumption of someone trying to reduce other animal products without going vegan?

1BryceStansfield
EDIT: Nvm, this dataset was of a niche religious group (the Seventh-day Adventists); I should've read more thoroughly before commenting. Assuming no major dietary differences between vegetarian converts and lifelong vegetarians, it appears that they consume about half as much dairy: https://pmc.ncbi.nlm.nih.gov/articles/PMC4232985/#!po=39.8438 So, assuming that someone moves to the mean lacto-ovo vegetarian diet, you can assume about one half calf less over a lifetime.

On that first paragraph, we agree.

On the second paragraph - I could see this being an interesting approach, if you can get a good critical mass of employers with sufficiently similar needs and a willingness to try it. Certainly much better than some others I've seen, like (real example from 2009) "Let's bring in 50 candidates at once on a Saturday for the whole day, and interview them all in parallel for 2 positions."

I think one, slightly deeper problem is - who is doing the short interviews or the screenings? Do they actually know what the job entails and... (read more)

4dr_s
Classic problem, but I see a lot of that happening already. Less of a problem for non-specialized jobs, but for tech jobs (like what I'm familiar with), it would have to be another tech person, yeah. Honestly, for the vast majority of jobs, anything other than the technical interview (like the pre-screening by an HR guy who doesn't know the difference between SQL and C++, or the "culture fit" that is either just validation of some exec's prejudices or an exercise in cold reading and bullshitting on the fly for the candidate) is probably useless fluff. So basically that's a "companies need to actually recognise who is capable of identifying a good candidate quickly and accept that getting them to do that is a valuable use of their time" problem, which exists already regardless of the screening methodologies adopted.

Ah, ok, I totally misunderstood that one then, thanks

Upvoted. There's a lot I find interesting and a lot I agree with in this (not always the same things).

This one stood out to me as a non-central point I don't think I agree with, though:

Humans would probably appreciate their art, at least simple forms of their art (intended e.g. for children), more than they would appreciate the artistic value of marginal un-optimized nature on the alien planet.

I actually don't think this applies to me even on Earth, among humans. If you show me a random piece of average art made by and for humans, or for human children, an... (read more)

3jessicata
I was trying to make a claim about marginal value. Like, if the planet has 1 billion trees, then the last tree doesn't add much aesthetic value, compared with whatever art could be made with that tree. That said, it gets more complicated if marginal nature consumption reduces biodiversity significantly.

If so, does this predict a barber-pole-type effect where a better misaligned model will successfully fool the Sorting Hat and present as Gryffindor?

Agreed, and for sure it has been working well for 4 years. I just don't think it's what I want for the next 40. It's not intolerable. The benefits have been great, especially with the travel involved in my case. And good communal spaces help. But that's a different question from whether the costs of having more space outweigh the benefits, in general or for particular people.

To be clear, I disagree with your unstated assumption that anything happening in London can be used to draw conclusions about the impact of YIMBY anything. If London were building an order of magnitude more homes per year, with the same dynamic, then sure, after a decade I'll admit the point. But barring that, a world where you have a steady large influx of public funds to spend on actual residents, at no cost to those residents, is a good thing.

2[comment deleted]

Boomers are selling their houses for high prices and then downsizing and paying for starter homes with cash. So young families can’t afford the homes they’re putting for sale and then get outbid the houses they can afford. They’ve totally ruined the market it’s insane.

 

So... who does someone like this think is buying the boomers' homes, and deciding how much they're willing and able to pay? What houses do they think those buyers are moving from, and therefore vacating?

 

They also seem to think the concept of a starter home is new?

Pretty sure that looks (at worst) like "Rich people buy them all, and the locals have the same housing stock but much larger property base paid for by someone else."

2cousin_it
Sure, but the point of YIMBY is to solve the housing problem. That's why people are spending their time and effort on it. So saying "at worst it won't solve the problem" doesn't seem encouraging. How should we spend our time and effort to actually solve the problem?

I'm not sure what I think about the value of spaciousness. I'd love to hear about any specific examples you have in mind.

Sure. To be clear: It doesn't apply to everything. And individual rooms don't/wouldn't need to be huge. I've looked at house floor plans with walk-in closets in the master bedroom that are larger than my whole trailer, or big empty bathrooms and entryways, and find them to be just silly for any purpose I could imagine wanting. But for some personal examples where I think this applies:

My wife is a therapeutic musician who makes online cou... (read more)

2Adam Zerner
That all makes sense. Work productivity and trivial inconveniences are important. At first I was thinking that 20-30 minutes to set up a work area is comparable to a commute and not too big a deal, but then I remembered that a) commutes suck and b) the raw number of minutes is only part of the story. Kitchen space is the most important thing for me in terms of wanting space. I get a little overwhelmed when I'm cooking and things are tight. But this can be mitigated by focusing on a) meals that don't require as much space and b) when I do want to cook a meal that requires more space, just take my time and go slow. A few years out of college I ended up living in a 200 square foot micro-apartment. And my girlfriend lived with me there part-time. There were definitely things about it that aren't ideal, but ultimately it was pretty tolerable. I think a big reason why I don't mind smaller spaces too much is because I don't mind utilizing space outside of my apartment: communal areas in the apartment complex, coffee shops, libraries, parks. Not everyone's like that though. Some people kinda need the privacy and comfort of home to relax.

One thing I am unclear on is (if we know) why trust was decreasing. 

Were the entities losing trust becoming less trustworthy?

Were they always untrustworthy but this became more widely known? 

Were they still trustworthy but people began thinking they weren't for some other reason?

In other words, should the goal be to increase trust, increase trustworthiness, or increase accuracy of perceptions and evaluations of trustworthiness?

5B Jacobs
That's beyond the scope of this post. I presented some studies that point the finger at rising inequality, but it's probably more than just the increase in wealth disparity. If I had to guess, social trust is probably also one of those common goods capitalists are burning when maximizing shareholder value, but I'd have to look into it more. The goal should obviously be to increase trustworthiness, but since that's (at least somewhat) subjective, I would settle for increasing the benefits of high-trust societies that I mentioned in the post.

To be fair, I think they should just be banned from having no-human-in-the-loop screenings

In principle I agree. In practice I'm less sure.

save a few hours of reading 

Consider that the average job opening these days receives hundreds of applications. For some people that's a few hours of reading, but it's reading hundreds of near-identical-looking submissions and analyzing for subtle distinctions.

I do think automated screenings rule out a lot of very good options because they don't look like the system thinks they should, in ways a thoughtful human wou... (read more)

3dr_s
The entire market is quite fucked right now. But the thing is, if you have more and more applicants writing their applications with AI, and more and more companies evaluating them with AI, we get completely away from any kind of actual evaluation of relevant skills, and it becomes entirely a self-referential game with its own independent rules. To be sure this is generally a problem in these things, and the attempt to fix it by bloating the process even more is always doomed to failure, but AI risks putting it on turbo. Of course it's hard to make sure the applicants don't use AI, so if only the employer is regulated that creates an asymmetry. I'm not sure how to address that. Maybe we should just start having employment speed-dating sessions where you get a bunch of ten-minute in-person interviews with prospective employers looking for people, and then you get paired up at the end for a proper interview. At least it's fast, efficient, and no AI bullshit is involved. And even ten minutes of in-person talking can probably tell more than a hundred CVs/cover letters full of the same nonsense.

From personal experience: I grew up in a much bigger house than I owned as an adult, and definitely had a tendency to hold onto stuff I didn't need. Then in 2021 I sold my house, and for the past 4 years my wife and I have been living full-time in a 28' trailer, working from the road and traveling the country. So learning a minimalist mindset has been an essential life skill. I'm trying to separate the minimalism component from the travel component in what's below, but admittedly it's a big part of the value for me.

There are tradeoffs. I pay more for ... (read more)

2Adam Zerner
That's all cool to hear! Yeah, it's pretty crazy. A similar thought has occurred to me. I used to drive from Vegas to Mexico with my girlfriend for dental work. I remember passing through areas that felt incredibly remote, but even the most remote areas were never really more than an hour or so away from a Walmart or something. I think it'd take some actual effort to find a place that is truly remote. I'm glad to hear it! I'm in the same ballpark. I wonder how common this sort of thing is. I feel like it's something that many people should at least experiment with though. I suspect that a lot of people would predict the mental overhead to be a big deal but after trying they'd find that it wasn't actually a big deal. I also suspect that this mental overhead affects people in ways that are hard to notice. Like maybe it leads to procrastination or something. I'm not sure what I think about the value of spaciousness. I'd love to hear about any specific examples you have in mind. Ah yeah, that's a good point!

Total, but I don't think the difference is as large as it might seem. Fundamentally, barring another collapse that stops our advancement, I don't think we have more than about a century, at the high end, before we reach a point technologically where we're no longer inescapably dependent on the climate for our survival. Which means almost all my probability for how climate could cause human extinction involves something drastic happening within the next handful of decades.

Most of that remaining probability looks something like "We were wrong to reject the m... (read more)

4MichaelDickens
Ok, it sounds like we agree on pretty much everything except what it means for something to "be an existential risk". I think 0.01% still counts as a risk worth worrying about (or it would, if AI x-risk weren't multiple orders of magnitude higher).

I also don't think I would estimate anywhere near that low, especially since the risk is spread over many years. On a per-year basis that is near or below asteroid x-risk level. 99.9 to 99.99 percent seems like the right range to me.
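For concreteness, here's a minimal sketch of the per-year vs. total arithmetic (the numbers are assumed for illustration, not anyone's actual estimate):

```python
# Convert a total survival estimate into an implied per-year risk,
# assuming the risk is spread evenly and independently over a century.
total_survival = 0.999   # assumed: 99.9% confidence of no climate-driven extinction
years = 100              # assumed horizon

# If each year is survived independently with probability p, then
# p ** years == total_survival, so p is the years-th root.
annual_survival = total_survival ** (1 / years)
annual_risk = 1 - annual_survival
print(f"Implied annual risk: {annual_risk:.1e}")  # ~1.0e-05, i.e. about 0.001% per year
```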

2MichaelDickens
Are you saying 99.9 to 99.99 per year, or total?

I agree with you that deploying AI in high-impact safety-critical applications under these conditions and relying on the outputs as though they met standards they don't meet is insane.

I would, naturally, note that (1), (2), and (3) also apply to humans. It's not like we have any option for deploying minds in these applications that don't have some version of these problems. LLMs have different versions we don't understand anywhere near as well as we do the limitations of humans, but what that means for how and whether we can use them, even now, to improve ... (read more)

(5) I look forward to it.

(2) I hope you'll dig into this more in those future posts, because I think it is extremely non-obvious.

(3) Yes, I will concede that example, you're right. For any observer in any possible world, there are an arbitrarily large number of larger universes within which it could be a perfect simulation, and these would be indistinguishable from the inside. This is a thing we cannot know, and the choice to then act as if those unknowable things don't exist is an additional choice. I definitely did not think this was the kind of metaphys... (read more)

I think this is an important point, especially when experts are talking to other experts about their respective fields. I once had a client call this "thinking in webs." If you have a conclusion that you reached via a bunch of weak pieces of evidence collected over a bunch of projects and conversations and things you've read all spread out over years, it might or might not be epistemically correct to add those up to a strong opinion. But, there may be literally no verbally compelling way to express the source of that certainty. If you try, you'll have forg... (read more)

It's sorta like collaborating with a human that you don't trust, except you can conduct as many experiments as you want to improve your understanding of their biases, and of how they respond to different ways of interacting. AI tells me I'm wrong all the time, but it takes work to make sure that stays the case.

It reminds me a little of a class where the teacher asked us how to get truly random results from a coin that may or may not be biased, with unknown bias. The answer is that even if H and T are not equiprobable, HT/TH, or HHTT/TTHH, or HHHHTTTT/TTTTHH... (read more)
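A minimal sketch of the pair-discarding version of that trick (the von Neumann extractor); the bias value here is an arbitrary assumption for illustration:

```python
import random

def biased_coin(p=0.7):
    """An assumed biased coin: 'H' with (unknown-to-us) probability p."""
    return 'H' if random.random() < p else 'T'

def fair_flip():
    """Flip twice and keep only mixed pairs. P(HT) == P(TH) == p * (1 - p)
    whatever the bias, so surviving pairs are unbiased; HH and TT pairs
    are discarded and we flip again."""
    while True:
        a, b = biased_coin(), biased_coin()
        if a != b:
            return a  # 'H' for an HT pair, 'T' for a TH pair

flips = [fair_flip() for _ in range(100_000)]
print(flips.count('H') / len(flips))  # ~0.5 despite the 0.7 bias
```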

Ok, fair, 'prove' is a strong word and we can have different opinions on both the probability estimate of climate-induced-extinction and the threshold for that probability being low enough to count as 'not an x-risk.'

In order to actually wipe out all humanity, such that there were no residual populations able to hang on long enough to recover and rebuild, the climate would need to change faster than any human population, anywhere in the world, could adapt or invent solutions. Even if life sucked for a decade or a millennium, or if there were only 10k of us... (read more)

3MichaelDickens
I directionally agree but I don't think that's the sort of reasoning in which you can be >99.9% confident. I'm also concerned about runaway warming making earth uninhabitable. Climate models suggest that won't happen but Halstead (implicitly) expects a <0.001% chance of runaway warming which seems hard to justify to me.

If we were, no one ever told us, and no one I knew ever did. If nothing else, to do so, we would have had to skip lunch entirely, because we weren't allowed to be in the halls without a pass signed by a teacher, and there would not have been anyone in the cafeteria to write one.

Admittedly, after 9th grade I stopped taking lunch so I could fit in an extra elective. Also in 9th grade, we had 4 instances of students calling in fake bomb threats in order to get out of class, and ended up with much stricter rules about who could be where, when, than had been th... (read more)

The school essay is designed to be writable within the time constraints of an in-class exam, and to let teachers grade a whole five classes' worth of essays fast enough to get them back long enough before the next exam.

As a kid I was always confused about why schools had libraries. In elementary school you got sent to them once a week and were allowed to take out one book, and the librarian taught how to use a library. Otherwise, there was never a time you could actually go to them and do research on anything. (Ok, except for once in second grade, when I co... (read more)

1James Camacho
Were you not allowed to go to the library during lunch?

Jabberwocky is my favorite poem. People who know me well hear that and are completely unsurprised :-)

I am locating the meaning outside the statement, rather than applying validity or invalidity to the statement itself. 

This is critical, I think. In daily life I make up words all the time, and the people around me with whom I share the necessary context and are also native English speakers have no trouble intuiting what they mean.

It’s a welcome start. The actual call to action is disappointingly content-free, as these things usually are

Potentially, is this how a politician signals, "Hey, please give me some useful information and good ideas for what to actually try to get done"?

That was, as far as I can tell, one strong downvote from me (-7, from a starting value of 2). As my comment above hopefully indicates, I did read the whole thing. I don't know if it was as fast as five minutes after posting, but this post happened to be second on the front page when I looked, so I read through it, downvoted, then commented. It's about 2200 words, which usually means anywhere from 5-10 minutes read time for me. I did reread it slower while commenting, as well, and the second readthrough did not cause me to change my downvote.

(1) Ok, fair enough, that wasn't clear to me on first read. I do think it's worth noting that he does, in fact, consider many other viewpoints before rejecting them, and gives clear explanations of his reasons for doing so, whether you agree or not. He also in many places discusses why he thinks introducing those other viewpoints does not actually help. Others in the community have since engaged with similar ideas from many other viewpoints.

(2) That conclusion does not follow from the premises. In particular, you have not considered the set of possible wor... (read more)

1Jáchym Fibír
2) Yes, that is true. I did leave out a sentence saying that "this assumes that there are no higher P(doom) realities in our list of plausible realities." I left it out for readability for the audience of the original publication (Phi/AI). I concede that for LW I should have done a more rigorous version. But still I think the logic to lower our P(doom) holds in that specific analysis (all 3 alternatives might have some failsafes). And in my eyes it would hold also if we look at the current landscape of the top most plausible metaphysics, where there really is not much more "unsafe" than physicalism in terms of human survival.

3) I think you are not correct in your conclusions about physicalism. Physicalism is, by its proper definition, a philosophical belief: "Physicalism is the philosophical view that everything in existence is fundamentally physical, or at least ultimately depends on the physical, meaning there is "nothing over and above" the physical world." This means that physicalism goes beyond the "simple logic" you described. The simple logic you described can only ever explain the parts of our reality that can be subjected to experimental observation - i.e. it's limited by the descriptive scope of science. But physicalism goes beyond that by believing that there is nothing "extra" added beyond that. For example, if our world were a simulation with fixed rules (physical laws) run by an alien, your simple logic could not distinguish that from a scenario where our world just "popped up from nothing." So the only "special place" physicalism holds among philosophical views is that it introduces the least amount of "extra assumptions." But that says nothing about its ultimate plausibility. Another way to picture this is that every time we want to build a complete model of reality, there will be two parts: one verifiable by experiment (science) and the other inherently unverifiable (philosophy). The fact that physicalism is picking the "simplest, least co

Strongly downvoted, seems to not realize how deeply EY has engaged with and written about metaphysics, or at least not to engage with any of his relevant writings or those of the rest of the rationalist community over the last almost 20 years. 

Besides that, though: It's not clear to me how a non-physicalist metaphysics actually helps reduce x-risk, except to the extent that there is some probability of an outside force intervening in our physical cosmos. For one example among many, consciousness is not required to run physical simulations and identify... (read more)

3Jáchym Fibír
Thank you for the feedback. I'll try to address the key points.

1) I actually have looked into EY's metaphysical beliefs and my conclusion is they are inconsistent and opaque at best, and have been criticized for that here. In any case, when I say someone operates from a single metaphysical viewpoint like physicalism, this is not any kind of criticism of their inability to consider something properly or whatnot. It just tries to put things into wider context by explaining that changing the metaphysical assumptions might change their conclusions or predictions.

2) The post in no way says that there is something that would "prevent" the existential risk. It clearly states such risk would not be mitigated. I could have made this more explicit. What the post says is that by introducing a "possibility," no matter how remote, of certain higher coordination or power that would attempt to prevent X-risk because it is not in its interest, then in such a universe the expected p(doom) would be lower. Does that make sense?

3) You say that

My reaction to that is that here you are exactly conflating physicalism with the "descriptive scope of science," which is exactly the category mistake I'm trying to point to! There will always be something unexplainable beyond the descriptive scope of science, and physicalism is filling that with "nothing but more physical-like clockwork things." But that is a belief. It might be the "most logical belief with the fewest extra assumptions." But that should not grant it any special treatment among all other metaphysical interpretations.

4) Yes, I used the word "share/transmit information across distance" while describing non-locality. And while you cannot "use entanglement to transmit information," I think it's correct to say that an entangled particle transmits information of its internal state to its entangled partner?

5) Please, don't treat this as an "attack on AI safety" - all I'm trying is to open it to a wi

Upvoted, well written explanation.

Might be worth explicitly including a link back to A Human's Guide to Words.

I don't think I implied it was a bad thing? I certainly didn't intend to imply that.

Others have already added some good ones, here's a few more.

A few that are likely familiar throughout this community even if you have never considered them this way, but still often not believed or noticed elsewhere:

  • Diagnostic algorithms outperforming human doctors
  • Autonomous vehicles safer than human drivers

In a similar vein:

  • EVs being as safe and reliable as ICE vehicles (fire risk is the worry I hear about most regularly)
  • Solar and wind being cheap and predictable enough to make a significant contribution to the grid
  • Sustainable agriculture being able to pr
... (read more)

I think it's also worth keeping in mind that the overall state of the field of "People who make and publish reports forecasting the future of emerging technologies" (which was my field for over a decade) is usually really, really bad (this includes the kinds of reports executives and investors will pay $5k a pop for, to help them make big decisions). When I read AI 2027 and the accompanying documents, it was very easily within the top 1% of such reports I've seen in quality, thoughtfulness, thoroughness, and reasonableness-of-assumptions-made.

I'd also add ... (read more)

Yeah, that boundary gets very confused very fast. I've come across articles, written by professionals for a general audience, calling whole wheat flour ultra-processed, and others listing cutting as a processing method that makes food less healthy. My general opinion of most standard diet advice is that it's at about this level of reliability.

It seems like you're aware of this, but AFAICT surprisingly many people who speak out about seed oils aren't, so it may be worth stating outright: uncooked or lightly cooked PUFAs are among the healthiest fats you can eat, come primarily from nuts and seeds (and fatty fish), and include all the essential and metabolically-important fatty acids. They diffuse faster across mitochondrial membranes for energy production, esterify and de-esterify at greater rates to provide energy between meals, and are preferentially stored in parts of the body where they're l... (read more)

This seems right, and I'm glad to have book-length explanations and investigations of it. "Climate change is not an x-risk" is the kind of thing you can easily (and correctly) prove to yourself in a matter of hours, but it is surprisingly hard to get other people to both notice and admit it, especially en masse.

I will say this was not so clear 20+ years ago. Back then we hadn't yet shown the runaway warming scenarios were unlikely, clean energy was far from cheap anywhere in the world, many industrial processes had no clear path to electrification, and aff... (read more)

3MichaelDickens
How do you do that? I've spent several hours researching the topic and I'm still not convinced, but I think there's a lot I'm still missing, too. My current thinking is:
1. Existential risk from climate change is not greater than 1%, because if it were, climate models would show a noticeable probability of extinction-level outcomes.
2. But I can't confidently say that existential risk is less than 0.1%, because the assumptions of climate models may break down when you get into tail outcomes, and our understanding of climate science isn't robust enough to strongly rule out those tail outcomes.

I appreciate that point, yes, and I have looked up standard definitions. I'm probably not looking in the right places, though, because the ones I have found are either too vague and imprecise for me to make sense of, or focus on generating hypotheses/explanations/models. If you do have a good source for a better explanation, I'd actually really like to learn more.

I'm not sure what this thought experiment shows us other than the fact that these views of what constitutes identity are in fact different?

4Filip Sondej
To be honest I just shared it because I thought that it's a funny dynamic + what I said in the comment above. BTW, if such swaps were ever to become practical (maybe in some simpler form or between some future much simpler beings than humans), minds like Alice would quickly get exploited out of existence. So you could say that in such environments belief in "continuity of personhood" is non-adaptive.

Basically, yeah. Most times I see abduction discussed, it's less about drawing conclusions and more about hypothesis generation. That implies different permissible levels of making and breaking assumptions, choosing and changing models. It's more fluid, less rule-bound, more willing to accept being knowingly wrong in some ways, less tied to formalisms and precise methods.

Mostly my wife handles our blog. We wrote about the trivia example in this post.

We don't tend to write about anyone's kids, for obvious reasons, and I don't have kids of my own. But there are a lot of full-time families with youtube channels and blogs that roadschool their kids. See here for a perspective similar to what I said above. I can say, in general, kids who spend months or more on the road exploring with their families seem to be more independent, more aware, more curious, and more able to interact with people and things around them, than others t... (read more)

Just inducting probabilities and then deducting the most likely outcome.

I find it's good practice to be deeply suspicious of the word "just." Small words in arguments are often load-bearing in ways that hide much of the meaning from casual readers. E.g. LLMs are just applied arithmetic, biology is just applied chemistry, chemistry is just applied physics, etc. There is a sense in which this is 'true' in each case, but that does not make the less-fundamental concepts useless or unnecessary, and straightforwardly 'believing' such 'just' statements tends to c... (read more)

3sd
Apologies, I am not sure I understood. Is it that:
- Induction/Deduction fit data to existing models
- Abduction is about proposing new models?