On average, I think people suffer more from the opposite mistake: refusing to go all in and commit to something because they want to keep their optionality open.
It could be drifting from one relationship to another, pushing off having children (but freezing eggs just in case), never buying a house and settling down in a community you like, never giving up everything to get that job you've always dreamed of, whatever it is that matters to you.
Life is often much richer and more fulfilling when you give up optionality for the sake of having your best shot at the things that are most important to you.
That said, the extent to which these things remove your optionality is overstated. You can always get divorced, sell your house, move locations, find a new job, go back home, put your kid up for adoption, etc. Scrap that last one: having a child really does limit your optionality pretty permanently. But all of them go better when your mindset is that making this work is your only option and there are no alternatives.
For example, marriage goes best when both partners treat it as permanent and divorce as simply not an option.
Doing so requires a kind of doublethink, but most people manage it fairly easily.
Quick thoughts on Gemini 3 Pro:
It's a good model, sir. Whilst it doesn't beat every other model on everything, it has definitely pushed the Pareto frontier a step further out.
It hallucinates pretty badly. GPT-5 did too when it was released; hopefully they can fix this in future patches and it's not inherent to the model.
To those who were hoping or expecting that we'd hit a wall: clearly that hasn't happened yet (although neither have we proved that LLMs can take us all the way to AGI).
Costs are slightly higher than Gemini 2.5 Pro's, much higher than GPT-5.1's, and none of Google's models have seen any price reduction in the last couple of years. This suggests that it's not quickly getting cheaper to run a given model, and that pushing the Pareto frontier forward costs ever more in inference. (However, we are learning how to get more intelligence out of a fixed model size, as newer small models show.)
I would say Google currently has the best image models and the best LLM, but that doesn't prove they're in the lead. I expect OpenAI and Anthropic to drop new models in the next few months, while Google won't release a new one for another 6 months at best. Its lead is not strong enough to last that long.
However, we can now firmly say that Google is capable of creating SOTA models that give OpenAI and Anthropic a run for their money, something many doubted just a year ago.
Google has some tremendous structural advantages.
Now that they've proven they can execute, they should likely be considered frontrunners in the AI race.
On the other hand, ChatGPT has much greater brand recognition, and LLM usage is sticky. Things aren't looking great for Anthropic, though, with neither deep pockets nor high usage.
In terms of existential risk: this is likely to make the race more desperate, which is unlikely to lead to good things.
95%+ of all studies of the human body study living bodies. Surgeons cut into living flesh umpteen times a day, and biologists do horrible things to living lab rats in a million different ways. Every study that comes out of today's universities on behaviour, medicine, optics, or what have you is performed on living volunteers.
Many of the most important fields in biology focus on dynamic systems: physiology, neurology, and yes, anatomy.
I'm not sure what justification there is for saying that biology is too focused on the dead, or on static systems.
Hi and welcome to LessWrong.
Please see the policy on AI generated content: https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong
In particular:
Prompting a language model to write an essay and copy-pasting the result will not typically meet LessWrong's standards. Please do not submit unedited or lightly-edited LLM content. You can use AI as a writing or research assistant when writing content for LessWrong, but you must have added significant value beyond what the AI produced, the result must meet a high quality standard, and you must vouch for everything in the result.
I'm not claiming that we need any extra laws of physics to explain consciousness. I'm saying that even if you showed me the equations that proved I would behave like a conscious being, I still wouldn't feel like the problem was solved satisfactorily until you explained why that would also make me feel like a conscious being.
I think that's fairly limited evidence; I'd want to see more data than that before claiming anything is vindicated.
Yes, that sounds right (minus the word "metaphysical" in camp 2).
To be precise: if you were to explain why, based on the laws of physics, I say the words "I am conscious" and otherwise act the way I do, I would still not feel that the mystery of consciousness had been explained, because there still doesn't seem to be any reason why there is something that experiences saying those words.
But experience itself isn't instantaneous; it's something that happens over time.
My claim is that no nuclear incident would have killed more than 25% of the world population: roughly 500 million people in 1950, or a billion in 1970.
The reasoning is simple: a single nuclear bomb can kill at most a few hundred thousand people. At the height of the Cold War there were a few thousand bombs on each side, most of which were aimed not at population centres but at second-strike capabilities in rural areas. Knock-on effects like famine could kill more, but I doubt they would be worse than WW2, since the number of direct deaths would be smaller. It would likely lead to war, but again WW2 is your ballpark for the number of deaths from an all-out global war.
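To make the back-of-envelope arithmetic concrete, here is a minimal sketch in Python. The bomb counts, deaths per bomb, and the fraction of warheads aimed at population centres are illustrative assumptions drawn from the rough figures above, not historical data.

```python
# Back-of-envelope upper bound on direct deaths from an all-out Cold War exchange.
# Every number here is an illustrative assumption based on the rough figures above,
# not historical data.
bombs_per_side = 3_000        # "a few thousand bombs on each side"
sides = 2
deaths_per_bomb = 300_000     # "a few hundred thousand people at a time"
fraction_at_cities = 0.3      # assume most warheads targeted second-strike assets, not cities

direct_deaths = bombs_per_side * sides * deaths_per_bomb * fraction_at_cities
world_pop_1970 = 3.7e9        # approximate 1970 world population

print(f"Direct deaths (rough upper bound): {direct_deaths:,.0f}")
print(f"Share of 1970 world population: {direct_deaths / world_pop_1970:.0%}")
```

Even with these deliberately generous inputs, the total comes out around 15% of the 1970 world population, comfortably under the 25% ceiling claimed above.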
Making an anthropic update from something that at worst would have reduced the world population by 25 percent is basically identical to reading tea leaves, especially if you don't update the other way from WW1, WW2, and other assorted disasters that significantly reduced the world population.
Maybe we are the luckiest timeline. But the evidence for that isn't a strong enough update to meaningfully change your plans.
We have finally solved an age-old problem in philosophy:
Therefore, an image is worth 11,167 words, not 1,000 as the classicists would have it.