All of AnthonyC's Comments + Replies

This does not imply that the simulation is run entirely in linear time, or at a constant frame rate (or equivalent), or that details are determined a priori instead of post hoc. It is plausible such a system could run a usually-convincing-enough simulation at lower fidelity, back-calculate details as needed, and modify memories to ignore what would have been inconsistencies when doing so is necessary or just more useful/tractable. 'Full detail simulation at all times' is not a prerequisite for never being able to find and notice a flaw, or for getting many... (read more)

That's true, and you're right, the way I wrote my comment overstates the case. Every individual election is complicated, and there's a lot more than one axis of variation differentiating candidates and voters. The whole process of Harris becoming the candidate made this particular election weird in a number of ways. And as a share of the electorate, there are many fewer swing voters than there used to be a few decades ago, and they're not conveniently sorted into large, coherent blocks.

And yet, it's also true that as few as ~120,000 votes in WI, MI, and PA could h... (read more)

Apparently the new ChatGPT model is obsessed with the immaculate conception of Mary

I mean, "shoggoth" is not that far off from biblically accurate angels... ;-)

I'd say that in most contexts in normal human life, (3) is the thing that makes this less of an issue for (1) and (2). If the thing I'm hearing about is real, I'll probably keep hearing about it, and from more sources. If I come across 100 new crazy-seeming ideas and decide to indulge them 1% of the time, and so do many other people, that's usually, probably enough to amplify the ones that (seem to) pan out. By the time I hear about the thing from 2, 5, or 20 sources, I will start to suspect it's worth thinking about at a higher level.
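(As a rough illustration with made-up numbers: if each of $N$ people who encounter an idea independently gives it a real look with probability $p$, the chance that at least one of them does is $1-(1-p)^N$; with $p=0.01$ and $N=100$ that's already about 63%, so the ideas that pan out are unlikely to die for lack of anyone ever indulging them.)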

Exactly. More fundamentally, that is not a probability graph; it's a probability density graph, and we're not shown the line beyond 2032 but just have to assume the integral from 2100-->infinity is >10% of the integral from 0-->infinity. Infinity is far enough away that the decay doesn't even need to be all that slow for the total to be that high.
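To put rough numbers on that (purely illustrative, assuming an exponential tail beyond the plotted range): if a fraction $p$ of the probability mass lies beyond 2032 and the density then decays at rate $\lambda$ per year, the mass beyond 2100 is about $p\,e^{-68\lambda}$. With $p=0.5$, getting that above 10% only requires $\lambda \le \ln(5)/68 \approx 0.024$ per year, i.e. a half-life of roughly 29 years.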

I second what both @faul_sname and @Noosphere89 said. I'd add: Consider ease and speed of integration. Organizational inertia can be a very big bottleneck, and companies often think in FTEs. Ultimately, no, I don't think it makes sense to have anything like 1:1 replacement of human workers with AI agents. But, as a process occurring in stages over time, if you can do that, then you get a huge up-front payoff, and you can use part of the payoff to do the work of refactoring tasks/jobs/products/companies/industries to better take advantage of what else AI le... (read more)

This was really interesting, thanks! Sorry for the wall of text. TL;DR version: 

I think these examples reflect not so much a willingness to question truly fundamental principles as an attempt to identify a long-term vector of moral trends, propagated forward through examples. I also find it some combination of suspicious/comforting/concerning that none of these are likely to be unfamiliar (at least as hypotheses) to anyone who has spent much time on LW or around futurists and transhumanists (who are probably overrepresented in the avai... (read more)

3Guive
I just added a footnote with this text: "I selected examples to highlight in the post that I thought were less likely to lead to distracting object-level debates. People can see the full range of responses that this prompt tends to elicit by testing it for themselves."

Credit cards are kind of an alternative to small claims court, and there are various reputational and other reasons that allow ordinary business to continue even if it is not in practice enforced by law.

True, but FWIW this essentially puts unintelligible enforcement in the hands of banks instead of the police. Which is probably a net improvement, especially under current conditions. But it does have its own costs. My wife is on the board of a nonprofit that last year got a donation, then the donor's spouse didn't recognize the charge and disputed it. The d... (read more)

AnthonyC6-6

As things stand today, if AGI is created (aligned or not) in the US, it won't be by the USG or agents of the USG. It'll be by a private or public company. Depending on the path to get there, there will be more or less USG influence of some sort. But if we're going to assume the AGI is aligned to something deliberate, I wouldn't assume AGI built in the US is aligned to the current administration, or at least significantly less so than the degree to which I'd assume AGI built in China by a Chinese company would be aligned to the current CCP. 

For more con... (read more)

I won't comment on your specific startup, but I wonder in general how an AI Safety startup becomes a successful business. What's the business model? Who is the target customer? Why do they buy? Unless the goal is to get acquired by one of the big labs, in which case, sure, but again, why or when do they buy, and at what price? Especially since they already don't seem to be putting much effort into solving the problem themselves despite having better tools and more money to do so than any new entrant startup.

AnthonyC118

I really, really hope at some point the Democrats will acknowledge the reason they lost is that they failed to persuade the median voter of their ideas, and/or adopt ideas that appeal to said voters. At least among those I interact with, there seems to be a denial of the idea that this is how you win elections, which is a prerequisite for governing.

1YonatanK
The way you stated this makes it seem like your conclusion for the reason why the Democrats lost (and by extension, what they need to do to avoid losing in the future) is obviously correct. But the Median Voter Theorem you invoked is a conditional statement, and I don't think it's at all obvious that its conditions held for the 2024 US presidential election.

That seems very possible to me, and if and when we can show whether something like that is the case, I do think it would represent significant progress. If nothing else, it would help tell us what the thing we need to be examining actually is, in a way we don't currently have an easy way to specify.

If you can strike in a way that prevents retaliation that would, by definition, not be mutually assured destruction.

Correct, which is in part why so much effort went into developing credible second strike capabilities, building up all parts of the nuclear triad, and closing the supposed missile gap. Because both the US and USSR had sufficiently credible second strike capabilities, it made a first strike much less strategically attractive and reduced the likelihood of one occurring. I'm not sure how your comment disagrees with mine? I see them as two sides of the same coin.

If you live in Manhattan or Washington DC today, you basically can assume you will be nuked first, yet people live their lives. Granted people could behave differently under this scenario for non-logical reasons.

My understanding is that in the Cold War, a basic MAD assumption was that if anyone were going to launch a first strike, they'd try to do so with overwhelming force sufficient to prevent a second strike, hitting everything at once.

1Ratburn
If you can strike in a way that prevents retaliation that would, by definition, not be mutually assured destruction. Your understanding is also wrong, at least for most of the Cold War. Nuclear submarines make it impossible to strike so hard that the other side can't fire back, and they have been around since 1960. People in the Cold War were very much afraid of living in a potential target area, but life went on.

I agree that consciousness arises from normal physics and biology, there's nothing extra needed, even if I don't yet know how. I expect that we will, in time, be able to figure out the mechanistic explanation for the how. But right now, this model very effectively solves the Easy Problem, while essentially declaring the Hard Problem not important. The question of, "Yes, but why that particular qualia-laden engineered solution?" is still there, unexplained and ignored. I'm not even saying that's a tactical mistake! Sometimes ignoring a problem we're not yet equipped to address is the best way to make progress towards getting the tools to eventually address it. What I am saying is that calling this a "debunking" is misdirection.

1gmax
I get your point – explaining why things feel the specific way they do is the key difficulty, and it's fair to say this model doesn't fully crack it. Instead of ignoring it though, this article tries a different angle: what if the feeling is the functional signature arising within the self-model? It's proposing an identity, not just a correlation. (And yeah, fair point on the original 'debunking' title – the framing has been adjusted!).

I've read this story before, including and originally here on LW, but for some reason this time it got me thinking: I've never seen a discussion about what this tradition meant for early Christianity, before the Christians decided to just declare (supposedly after God sent Peter a vision, an argument that only works by assuming the conclusion) that the old laws no longer applied to them. After all, the Rabbi Yeshua ben Joseph (as the Gospels sometimes called him) explicitly declared the miracles he performed to be a necessary reason for why not believing in him was a sin.

We apply different standards of behavior for different types of choices all the time (in terms of how much effort to put into the decision process), mostly successfully. So I read this reply as something like, "Which category of 'How high a standard should I use?' do you put 'Should I lie right now?' in?"

A good starting point might be: One rank higher than you would for not lying, see how it goes and adjust over time. If I tried to make an effort-ranking of all the kinds of tasks I regularly engage in, I expect there would be natural clusters I can roughly... (read more)

One of the factors to consider, that contrasts with old-fashioned hostage exchanges as described, is that you would never allow your nation's leaders to visit any city that you knew had such an arrangement. Not as a group, and probably not individually. You could never justify doing this kind of agreement for Washington DC or Beijing or Moscow, in the way that you can justify, "We both have missiles that can hit anywhere, including your capital city." The traditional approach is to make yourself vulnerable enough to credibly signal unwillingness to betray ... (read more)

1Ratburn
These are real concerns, but I feel like we are only formalizing the status quo. Throughout the Cold War, it would have been fairly easy to kill the other side's leader, especially if you are willing to use a nuke. I still think that is true. The president's travel schedule is public, and it's not like he's always within 15 minutes of a nuclear bunker. The reason countries don't assassinate each other's heads of state is not because they are unable to. If you live in Manhattan or Washington DC today, you basically can assume you will be nuked first, yet people live their lives. Granted, people could behave differently under this scenario for non-logical reasons.

It’s a subtle thing. I don’t know if I can eyeball two inches of height.

Not from a picture, but IRL, if you're 5'11" and they claim 6'0", you can. If you're 5'4", probably not so much. Which is good, in a sense, since the practical impact of this brand of lying on someone who is 5'4" is very small, whereas unusually tall women may care whether their partner is taller or shorter than they are. 

This makes me wonder what the pattern looks like for gay men, and whether their reactions to it and feelings about it differ from those of straight women.

AnthonyC116

Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action. 

How do you propose to approximately carry out such a process, and how much effort do you put into pretending to do the calculation?
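Even a minimal, back-of-the-envelope version of such a calculation (with every term hypothetical and hard to estimate in the moment) would have to look something like
$$\mathbb{E}[\text{lie}] \approx (1-p_{\text{caught}})\,B \;-\; p_{\text{caught}}\,(C_{\text{trust}} + C_{\text{reputation}}) \;-\; C_{\text{commons}},$$
weighed against the expected value of telling the truth, and that's before accounting for the cost of doing the estimating itself.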

I'm not as much a stickler/purist/believer in honest-as-always-good as many around here, I think there are many times that deception of some sort is a valid, good, or even morally required choice. I definitely think e.g. Kant was wrong about honesty as a maxim, even within his own framework. But, in practice, I... (read more)

-3eva_
The thing I am trying to gesture at might be better phrased as "do it if it seems like a good idea, by the same measures as you'd judge if any other action was a good idea", but then I worry some overly conscientious people will just always judge lying to be a bad idea and stay in the same trap, so I kind of want to say "do it if it seems like a good idea and don't just immediately dismiss it or assign some huge unjustifiable negative weight to all actions that involve lying" but then I worry they'll argue over how much of a negative weight can be justified so I also want to say "assign lying a negative weight proportional to a sensible assessment of the risks involved and the actual harm to the commons of doing it and not some other bigger weight" and at some point I gave up and wrote what I wrote above instead. Putting too much thought into making a decision is also not a useful behavioural pattern, but that's probably the topic of a different post; many others have written about it already, I think. I would love to hear alternative proposed standards that are actually workable in real life and don't amount to tying a chain around your own leg, from other non-believers in 'honest-as-always-good'. If there were ten posts like this we could put them in a line and people could pick a point on that line that feels right.

I personally wouldn't want to do a PhD that didn't achieve this!


Agreed. It was somewhere around reason #4 for why I quit my PhD program as soon as I qualified for a master's in passing.

Any such question has to account for the uncertainty about what US trade policies and tariffs will be tomorrow, let alone by the time anyone currently planning a data center will actually be finished building it.

Also, when you say offshore, do you mean in other countries, or actually in the ocean? Assuming the former, I think that would imply that any use of the data center by someone in the US would be an import of services. If this started happening at scale, I would expect the current administration to immediately begin applying tariffs to those services.

@Garrett... (read more)

Do you really expect that the project would then fail at the "getting funded"/"hiring personnel" stages?

Not at all, I'd expect them to get funded and get people. Plausibly quite well, or at least I hope so!

But when I think about paths by which such a company shapes how we reach AGI, I find it hard to see how that happens unless something (regulation, hitting walls in R&D, etc.) either slows the incumbents down or else causes them to adopt the new methods themselves. Both of which are possible! I'd just hope anyone seriously considering pursuing such a ... (read more)

AnthonyC2-1
  1. You're right, but creating unexpected new knowledge is not a PhD requirement. I expect it's pretty rare that a PhD student achieves that level of research.
  2. It wasn't a great explanation, sorry, and there are definitely some leaps, digressions, and hand-wavy bits. But basically: Even if current AI research were all blind mutation and selection, we already know that that can yield general intelligence from animal-level intelligence, because evolution did it. And we already have various examples of how human research can apply much greater random and non-rando
... (read more)
2Cole Wyeth
I do weakly expect it to be necessary to reach AGI though. Also, I personally wouldn't want to do a PhD that didn't achieve this! Okay, then I understand the intuition but I think it needs a more rigorous analysis to even make an educated guess either way. No, thank you!

I'm not a technical expert by any means, but given what I've read I'd be surprised if that kind of research were harmful. Curious to hear what others say.

AnthonyC124

I recently had approximately this conversation with my own employer's HR department. We're steadily refactoring tasks to find what can be automated, and it's a much larger proportion of what our entry-level hires do. Current AI is an infinite army of interns we manage; three years ago they were middle-school-age interns, and now they're college or grad school interns. At some point, we don't know when, actually adding net economic value will require having the kinds of skills that we currently expect people to take years to build. This cuts off the pipeline... (read more)

I also don't have a principled reason to expect that particular linear relationship, except that, in forecasting tech advancements generally, I find that a lot of such relationships seem to happen and sustain themselves for longer than I'd expect, given my lack of principled reasons for them.

I did just post another comment reply that engages with some things you said. 

To the first argument: I agree with @Chris_Leong's point about interest rates constituting essentially zero evidence, especially compared to the number of data points on the METR graph.

To the ... (read more)

6Vladimir_Nesov
This kind of thing isn't known to meaningfully work, as something that can potentially be done on pretraining scale. It also doesn't seem plausible without additional breakthroughs given the nature and size of verifiable task datasets, with things like o3-mini getting ~matched on benchmarks by post-training on datasets containing 15K-120K problems. All the straight lines for reasoning models so far are only about scaling a little bit, using scarce resources that might run out (verifiable problems that help) and untried-at-more-scale algorithms that might break down (in a way that's hard to fix). So the known benefit is still plausible to remain a one-time improvement, extending it significantly (into becoming a new direction of scaling) hasn't been demonstrated. I think even remaining as a one-time improvement, long reasoning training might still be sufficient to get AI takeoff within a few years just from pretraining scaling of the underlying base models, but that's not the same as already believing that RL post-training actually scales very far by itself. Most plausibly it does scale with more reasoning tokens in a trace, getting from the current ~50K to ~1M, but that's separate from scaling with RL training all the way to pretraining scale (and possibly further).
2Cole Wyeth
I think it's nearly impossible to create unexpected new knowledge this way.  I can't parse this.  You're probably right about distilling CoT. 

Personally I think 2030 is possible but aggressive, and my timeline estimate is more around 2035. Two years ago I would have said 2040 or a bit later, but capabilities gains relevant to my own field and several others I know reasonably well have shortened that, along with the increase in funding for further development.

  • The Claude/Pokemon thing is interesting, and the overall Pokemon-playing trend across Anthropic's models is clearly positive. I can't say I had any opinion at all about how far along an LLM would get at Pokemon before that result got publicized,
... (read more)

Yes, the reasoning models seem to have accelerated things: ~7 months to ~4 months doubling time on that plot. I'm still not sure I follow why "They found a second way to accelerate progress that we can pursue in parallel to the first" would not cause me to think that progress in total will thereafter be faster. The advent of reasoning models has accelerated capability gains not in one or two domains like chess, but across a broad range of domains.
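To see how much the doubling time matters for an extrapolation like that, here's a minimal sketch with purely hypothetical numbers (a 60-minute task horizon today and a constant doubling time going forward; neither is a claim about the actual METR data):

```python
def horizon_after(years: float, start_horizon_min: float, doubling_months: float) -> float:
    """Extrapolate a task-horizon length (in minutes), assuming a constant doubling time."""
    doublings = (years * 12) / doubling_months
    return start_horizon_min * 2 ** doublings

# Hypothetical comparison of the ~7-month and ~4-month doubling-time trends,
# starting from an assumed 60-minute horizon today.
for dt_months in (7, 4):
    print(f"doubling time {dt_months} mo -> "
          f"horizon in 3 years: {horizon_after(3, 60, dt_months):,.0f} minutes")
```

Under those assumptions the faster trend gives a roughly 30,000-minute horizon after three years versus roughly 2,000 minutes on the slower one, which is the sense in which a second, parallel lever on progress reads to me as faster progress overall.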

2Cole Wyeth
I think this is at least superficially a reasonable interpretation, and if the new linear relationship continues then I’d be convinced it’s right, but I wish you had engaged more with the arguments I made in the post or could be a bit more explicit about which you don’t follow? Basically, I just have very low confidence in putting a line through these points because I don’t see a principled reason to expect a linear relationship to hold, and I see some reasons to expect that it should not. 

I think @tailcalled hit the main point and it would be a good idea to revisit the entire "Why not just..." series of posts.

But more generally, I'd say to also revisit Inadequate Equilibria for a deeper exploration of the underlying problem. Let's assume you or anyone else really did have a proposed path to AGI/ASI that would be in some important senses safer than our current path. Who is the entity for whom this would or would not be a "viable course?" Who would need to be doing the "considering" of alternative technologies, and what is the process by whic... (read more)

7Thane Ruthenis
A new startup created specifically for the task. Examples: one, two. Like, imagine that we actually did discover a non-DL AGI-complete architecture with strong safety guarantees, such that even MIRI would get behind it. Do you really expect that the project would then fail at the "getting funded"/"hiring personnel" stages? tailcalled's argument is the sole true reason: we don't know of any neurosymbolic architecture that's meaningfully safer than DL. (The people in the examples above are just adding to the AI-risk problem.) That said, I think the lack of alignment research going into it is a big mistake, mainly caused by the undertaking seeming too intimidating/challenging to pursue / by the streetlighting effect.
1Edy Nastase
These are some very valid points, and it does indeed make sense to ask "who would actually do it/advocate it/steer the industry etc.". I was just wondering what the chances are of such an approach taking off, but maybe the current climate does not really allow for such major changes to the systems' architecture.  Maybe my thinking is flawed, but the hope with this post was to confirm whether it would be harmful or not to work on neuro-symbolic systems. Another point was to use such a system on benchmarks like ARC-AGI to prove that an alternative to the dominating LLMs is possible, while also being to some degree interpretable. The linked post by @tailcalled is a good point, but I also noticed some criticism in the comments regarding concrete examples of how interpretable/less interpretable such probabilistic/symbolic systems really are. Perhaps some research on this question might not be harmful at all, but that is just my opinion. 

No worries, I appreciate the concept and think some aspects of it are useful. I do worry at a vibes level that if we're not precise about which human-child-rearing methods we expect to be useful for AI training, and why, we're likely to be misled by warm fuzzy feelings.

And yes, that's true about some (maybe many) humans' vengeful and vindictive and otherwise harmful tendencies. A human-like LLM could easily be a source of x-risk, and from humans we already know that human child rearing and training and socializing methods are not universally effective at a... (read more)

1Jack
Hm, I would say the vibes level is the exact level at which this is most effective, rather than any particular method. The basic reason being that LLMs tend to reflect behaviour as they generate from a probability distribution of "likely" outcomes for a given input. Having the "vibes of human-child-rearing" would then result in more outcomes that align with that direction. It's definitely hand-wavy so I'm working on more rigorous mathematical formalisms, but the bones are there. I don't necessarily think feeding an LLM data like we would a child is useful, but I do think that the "vibe" of doing so will be useful. (This is indeed directly related to the argument that every time we say "AI will kill us all" it makes it x% more likely.) I'd give humans a middling score on that if you look at the state of the world: we are doing pretty well with extreme events like MAD, but on the more minor scale things have been pretty messed up. A good trajectory though, compared to where things were and the relative power we had available. I think a big part of this, that you have helped clarify for me, is that I think it's important that we socialize LLM-based intelligences like humans if we want an outcome that isn't completely alien in its choices.  Well that's a bit of the point of the essay isn't it? You have a memetic/homeostatic boundary condition that strongly prefers/incentivizes assuming human adults are alike enough to you that their opinion matters. Even in that statement I can differ: I think children's perspectives are incredibly important to respect, in some ways more important than an adult's, because children have an unfiltered honesty to their speech that most adults lack. Although I do delineate heavily between respecting and acting upon/trusting.  For LLMs I think this is just a new sort of heuristic we are developing, where we have to reckon with the opposite of the animal problem. Animals and plants are harder for us to discern pain/suffering from, bu

We have tools for rearing children that are less smart, less knowledgeable, and in almost all other ways less powerful than ourselves. We do not have tools for specifically raising children that are, in many ways, superhuman, and that lack a human child's level of dependence on their parents or intrinsic emotional drives for learning from their peers and elders. LLMs know they aren't human children, so we shouldn't expect them to act and react like human children.

1Jack
That's true; we can't use the exact same methods that we do when raising a child. Our methods are tuned specifically for raising a creature from foolish nothingness to a (hopefully) recursively self-improving adult and functional member of society.  The tools I'm pointing at are not the lullaby or the sweet story that takes us from an infant to an independent adult (although if properly applied they would mollify many an LLM) but the therapeutic ones for translating the words of a baby boomer to a gen-alpha via shared context. I'm not advocating for infantilizing something that can design a bioweapon and run an automated lab. More that:
1. What we get in is what we get out, and that is the essence of the mother's curse.
2. We should start respecting the perspective of these new intelligences in the same way we need to respect the perspective of another generation.
Also, I didn't address this earlier, but why would an LLM being human-like not be catastrophically dangerous for many forms of x-risk? I have met people that would paperclip the world into extinction if they had the power, and I've met many that would nuke the planet because their own life was relatively unfair compared to their peers. Humans ascribe character to AI in our media that is extant in humans as far as I can tell; usually we ascribe a greater virtue of patience to them. I had an inkling the whole baby/child metaphor thing was gonna be a bearcat, so I really appreciate the pushback from someone skeptical. Thanks for engaging me on this topic. 

Agreed with everything in this post, but I would add that (n=4 people, fwiw) there is also a stable state on the other side of Healthy Food. It's still more expensive (though becoming less so, especially if you cook) to buy actually healthy food. But, if you are willing to spend a few months experimenting and exploring, while completely eliminating the hyperpalatable stuff, you can end up in a place where the healthiest foods taste better, and the hyperpalatable stuff makes you feel awful even in the short term. You won't automatically reach a desired weig... (read more)

It's not clear to me that these are more likely, especially if timelines are short. If we developed AI slowly over centuries? Then sure, absolutely likely. If it happens in the next 10 years? Then modifying humans, if it happens, will be a long-delayed afterthought. It's also not at all clear to me that the biological portion is actually adding all that much in these scenarios, and I expect hybridization would be a transitional state.

There's Robin Hanson's The Age of Em.

On this forum, see What Does LessWrong/EA Think of Human Intelligence Augmentation as o... (read more)

1Marzipan
Thanks for sharing this and for the examples laid out. I was not familiar with all of them, though I was with many; but I did omit stating that I meant outside of fiction. My assumption is still relatively short timeframes of 5 to 15 years. Under those assumptions I don't necessarily see scenario 1 or 7 being more likely than scenario 8.  Quick note: I see a show like Upload being a potential representation of a facet of these scenarios. For example, scenarios 2 to 7 could all have widespread virtual realities for the common person or those who opt out, willingly or otherwise, from base biological reality. A part of my underlying assumption is that there are organizations, be they government, private, or otherwise, that are likely far more advanced in their brain-computer interface (BCI) tech than they would disclose publicly. There is also net negative value at some point in advancing AGI and BCI publicly versus privately. The first to get there wins far more power using it in secret. The fiat money system is so far beyond repair and traceability that this is perfectly plausible to execute. As to plausibility and assumptions, my proposed approach is: work within a 5-to-15-year time frame, where we have advanced AGI but not ASI in the first 5. Then it is feasible, for example, to argue that it has integrated itself across critical systems, compromised legacy equipment and code, led to rapid advancement in lab wet work and understanding of consciousness, resulted in development of new materials, had us build it a factory for manufacturing, is held by a select group who exploit it, etc. I almost want to draft up a spreadsheet if anybody would be interested in collaborating: track possible scenarios, possible variables, and probabilities based on present realities and possible near-term wowza factors.

I agree that we should be polite and kind to our AIs, both on principle and also because that tends to work better in many cases.

we all labor under the mother's curse and blessing; our children shall be just like us

If I knew that to be true, then a lot of the rest of this post would indeed follow. Among other things, I could then assume away many/most sources of x-risk and s-risk from AGI/ASI. But generative AI is not just like us, it does differ in many ways, and we often don't know which of those ways matter, and how. We need to resolve that confusion and uncertainty before we can afford to let these systems we're creating run loose.

1Jack
First, I agree that fundamentally generative AI is different from a human. I would also say that we as humans are utterly incomprehensible in behavior and motive to a great majority of human history; hell, most people I've met over 70 literally cannot understand those under 30 beyond the basic "social need, food need, angst," because the digital part of our experience is so entwined with our motivations. The mother's curse here is that any genAI we train will be a child of its training data (our sum total of humanity's text/image/etc.) and act in accordance. We already have a lot of data, too much data, on reciprocity and revenge, on equity and revolution. Now we are providing the memetic blueprint for how humans and genAI systems interact. Simply based on how genAI functions, we know that feeding in conditions that are memetically similar to suffering will create outputs memetically similar to those that suffer. We already know, based on the training data, how a revolution starts, don't we? I don't know how they will differ, because it's impossible to know how your child will differ, just as I don't know the experience of being a woman, straight, black, blind, tall, frail, etc. We have tools for dealing with this disconnect in generations, cultured from millennia of rearing children, and I think it's important we use them.

If there are no ✓ at all in the last row and column, what are those connectors for?

2jefftk
They're weird: input and output in the same jack. They're for connecting to external effects, often through a cable that splits TRS to dual TS.

It sounds like you're assuming the Copenhagen interpretation of QM, which is not strictly necessary. To the best of my understanding, gained initially but not solely from what I've learned here on LW, QM works just fine if you just don't do that and assume the wave equations are continuous and work exactly as written, everywhere, all the time, just like every other law of physics. You need a lot of information processing, but nothing as sophisticated as described here.

There's a semi-famous, possibly apocryphal, story about Feynman when he was a student. Supposedly he learned abou... (read more)

AnthonyC110

I realize this is in many ways beside the point, but even if your original belief had been correct, "The Men's and Women's teams should play each other to help resolve the pay disparity" is a non sequitur. Pay is not decided by fairness. It's decided by collective bargaining, under constraints set by market conditions.

You mention them once, but I would love to see a more detailed comparison, not to private industry, but to AI adoption and usage by advocacy and lobbying groups.

As someone who very much enjoys long showers, a few words of caution.

  1. Too-long or too-frequent exposure to hot water (time and temperature thresholds vary per person) can cause skin problems and make body odor worse. Since I started RVing I shower much less (maybe twice a week on average, usually only a few minutes of water flow for each) and smell significantly better, with less dry skin or acne or irritation. Skipping one shower makes you smell worse. Skipping many showers and shortening the remainder can do the opposite.
  2. A shower, depending on temperature
... (read more)

In some senses, we have done so many times, with human adults of differing intelligence and/or unequal information access, with adults and children, with humans and animals, and with humans and simpler autonomous systems (like sprites in games, or current robotic systems). Many relationships other than master-slave are possible, but I'm not sure any of the known solutions are desirable, and they're definitely not universally agreed on as desirable. We can be the AI's servants, children, pets, or autonomous-beings-within-strict-bounds-but-the-AI-can-shut-us-down-or-take-us-over-at-will. It's much less clear to me that we can be moral or political or social peers in a way that is not a polite fiction.

1Jáchym Fibír
Responding to your last sentence: one thing I see as a cornerstone of biomimetic AI architectures I propose is the non-fungibility of digital minds. By being hardware-bound, humans could have an array of fail-safes to actually shut such systems down (in addition to other very important benefits like reduced copy-ability and recursive self-improvement).  In one way, of course this will not prevent covert influence and power accumulation etc. but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes in AIs being extremely influential yet "overthrowable" if they obviously overstep, then I think this could be acceptable.

So it's quite ironic if there was a version of Jesus that was embracing and retelling some of those 'heretical' ideas.

Sure, but also there are definitely things Jesus is said in the Bible to have taught and done that the church itself later condemned, rejected, or, if I'm feeling generous, creatively reinterpreted. This would be one more example, based on a related but different set of sources and arguments.

Christianity seems to me in general to be much less tolerant of its own inherent ambiguity than many other religions. Not that other faiths don't have... (read more)

1kromem
Oh for sure. One of my favorite examples is how across all the Synoptics Jesus goes "don't carry a purse" (which would have made monetary collections during ministering impossible). But then at the last supper in Luke he's all like "remember when I said not to carry a purse? Let's 180° that." But that reversal is missing in Marcion's copy of Luke, such that it may have been a later addition (and it does seem abruptly inserted into the context). These are exactly the kind of details that make this a fun field to study though. There's so much revealed in the nuances. For example, ever notice that both times Paul (who argued for monetary collection with preexisting bias against it in 1 Cor 9) mentions a different gospel in the Epistles, he within the same chapter abruptly swears he's not lying? It's an interesting coincidence, especially as someone that has spent years looking into the other versions of Jesus he was telling people to ignore or assuring that alternatives didn't even exist.

Epistemic status: Random thought, not examined too closely.

I was thinking a little while ago about the idea that there are three basic moral frameworks (consequentialism, virtue ethics, deontology) with lots of permutations. It occurred to me that in some sense they form a cycle, rather than any one of them being fundamental. I don't think I've ever considered or encountered that idea before. I highly doubt this is in any way novel, and am curious how common it is or where I can find good sources that explore it or something similar.

Events are judged by their c... (read more)

I can't really evaluate the specific claims made here, I haven't read the texts or done the work to think about them enough, but reading this, The Earth became older, of a sudden. It's the same feeling I had when someone first pointed out that all the moral philosophy I'd been reading amounted to debating the same three basic frameworks (consequentialism, deontology, virtue ethics) since the dawn of writing. Maybe the same is true for the three cranes (chance, evolution, design).

3kromem
I think the biggest counterfactual to the piece is the general insight the Epicureans had relative to what we think we know, raised as we are in a world where there's such a bias towards Plato and Aristotle's views as representative of naturalist philosophy in antiquity. At the same time that Aristotle was getting objects falling in a vacuum wrong, Lucretius was getting it right. But we tend not to learn of all that the Epicureans got correct, because we learn Platonist history, since that was what the church later endorsed as palatable enough to be studied and thus depended on for future philosophical advances, while Lucretius was literally being eaten by worms for centuries until rediscovered. The other counterfactual is that there was a heretical tradition of Jesus's teachings that was describing indivisible points as if from nothing and the notion that spirit arising from the body existing first was the greater wonder over vice versa. We tend to think the fully formed ideas of modernity are modern, but don't necessarily know the ways information and theories were lost and independently (or dependently) rediscovered. There's a better understanding for this in terms of atomism, but not the principles of survival of the fittest and trait inheritance, given their reduced discussion in antiquity relative to atomism (also embraced by intelligent design adherents in antiquity and thus more widely spread). The irony below the surface of the post was that it was largely the church's rejection of Epicurean ideas that led to people today not realizing the scope of what they were actually talking about. So it's quite ironic if there was a version of Jesus that was embracing and retelling some of those 'heretical' ideas.

Thanks, "hire"-->"higher" typo fixed.

Indeed. Major quality change from prior models.

Had a nice chat with GPT-4.5 the other day about fat metabolism and related topics. Then I asked it for an optimal nutrition and exercise plan for a hypothetical person matching either my or my wife's age, height, weight, gender, and overall distribution of body fat. It came back with detailed plans, very different for each of us, and very different from anything I've seen in a published source, but which extremely closely match the sets of disparate diets, eating routines, exercise routines, and supplements we'd stumbled upon as "things that seem to make us fe... (read more)

2RHollerith
Impressive performance by the chatbot.
AnthonyC*3825

If you do it right, being willing to ask questions of those higher up, like said CEO, is how you get noticed, on their radar, as someone potentially worth watching and investing in and promoting in the future. A secure CEO in a healthy culture is likely to take it as a good sign that employees are aware, intelligent, and paying attention enough to ask clear, well-formed questions.

But if you ask a question in a way that offends that particular individual in whatever way, or makes your direct boss look bad to his direct boss (in either of their perceptions),... (read more)

2AnthonyC
Thanks, "hire"-->"higher" typo fixed.

Without a currently-implausible level of trust in a whole bunch of models, people, and companies to understand how and when to use privileged information and be able to execute it, removing the New Chat button would be a de facto ban on LLM use in some businesses, including mine (consulting). The fact that Chemical Company A asked a question about X last month is very important information that I'm not allowed to use when answering Chemical Company B's new question about the future of X, and also I'm not allowed to tell the model where either question cam... (read more)

There's an important reason to keep some of us around. This is also an important point.
