This is a special post for quick takes by Richard_Kennaway. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
74 comments

A true story from a couple of days ago. Chocolates were being passed round, and I took one. It had a soft filling with a weird taste that I could not identify, not entirely pleasant. The person next to me had also taken one of the same type, and reading the wrapper, she identified it as apple flavoured. And so it was. It tasted much better when I knew what it was supposed to be.

On another occasion, I took what I thought was an apple from the fruit bowl, and bit into it. It was soft. Ewww! A soft apple is a rotten one. Then I realised that it was a nectarine. Delicious!

Today in Azathoth news:

"Eurasian hoopoes raise extra chicks so they can be eaten by their siblings"

It seems that the hoopoes lay extra eggs in times of abundance — more than they would be able to see through to fledging — as a way of storing up food for the older siblings. It is rather gruesomely called the "larder" hypothesis.

“What surprised me the most was the species practicing this aggressive parenting,” says Vladimir Pravosudov, an ecologist at the University of Nevada, Reno. Hoopoes primarily eat insects, he notes, so their long, curved bills aren’t ideal for killing and eating chicks. That might be why, Soler says, mother hoopoes often grab the unlucky chick and shove it into the mouth of an older chick, which swallows it whole.

Literal baby-eaters!

3andrew sauer
Never, ever take anybody seriously who argues as if Nature is some sort of moral guide.

Goodhart is the malign god who gives you whatever you ask for.

1Victor Novikov
Sufficient optimization pressure destroys all that is not aligned to the metric it is optimizing. Value is fragile.

Roko's Basilisk, as told in the oldest extant collection of jokes, the Philogelos, c. 4th century AD.

A pedant having fallen into a pit called out continually to summon help. When no one answered, he said to himself, "I am a fool if I do not give all a beating when I get out in order that in the future they shall answer me and furnish me with a ladder."

H/T Dynomight.

It seems to me I've not heard much of cryonics in the last few years. I see from Wikipedia that as of last year Alcor only has about 2000 signed up, of which 222 are suspended. Are people still signing up for suspension, as much as they ever have been? Are the near-term prospects of AGI making long-term prospects like suspension less attractive?

4David Hornbein
> Are the near-term prospects of AGI making long-term prospects like suspension less attractive?

No. Everyone I know who was signed up for cryonics in 2014 is still signed up now. You're hearing about it less because Yudkowsky is now doing other things with his time instead of promoting cryonics, and those discussions around here were a direct result of his efforts to constantly explain and remind people.
4niplav
I don't know about signup numbers in general (the last comprehensive analysis was in 2020, when there was a clear trend)—but it definitely looks like people are still signing up for Alcor membership (six from January to March 2023). However, in recent history, two cryonics organizations have been founded, Tomorrow.bio in Europe in 2019 (350 members signed up) and Southern Cryonics in Australia. People are being preserved, and Southern Cryonics recently suspended their first member.

I do not have answers to the question I raise here.

  1. Historical anecdotes.

Back in the stone age — I think something like the 1960s or 1970s — I read an article about the possible future of computing. Computers back then cost millions and lived in giant air-conditioned rooms, and memory was measured in megabytes. Single figures of megabytes. Someone had expressed to its writer the then-visionary idea of using computers to automate a company. They foresaw that when, for example, a factory was running low on some of its raw materials, the computer would automatically know that, and would make out a list of what was needed. A secretary would type that up into an order to post to a supplier, and a secretary there would input that into their computer, which would send the goods out. The writer's response was: "What do you need all those secretaries for?"

Back in the bronze age, when spam was a recent invention (the mid-90s), there was one example I saw that was a reductio ad absurdum of fraudulent business proposals. I wish I'd kept it, because it was so perfect of its type. It offered the mark a supposed business where they would accept orders for goods, which the business staff that…

[-]jbash114

The vision is of everything desirable happening effortlessly and everything undesirable going away.

Citation needed. Particularly for that first part.

Hack your brain to make eating healthily effortless. Hack your body to make exercise effortless.

You're thinking pretty small there, if you're in a position to hack your body that way.

If you're a software developer, just talk to the computer to give it a general idea of what you want and it will develop the software for you, and even add features you never knew you wanted. But then, what was your role in the process? Who needed you?

Why would I want to even be involved in creating software that somebody else wanted? Let them ask the computer themselves, if they need to ask. Why would I want to be in a world where I had to make or listen to a PowerPoint presentation of all things? Or a summary either?

Why do I care who needs me to do any of that?

Why climb Kilimanjaro if a robot can carry you up?

Because if the robot carries me, I haven't climbed it. It's not like the value comes from just being on the top.

Helicopters can fly that high right now, but people still walk to get there.

Why paint, if Midjourney will do it better t…
9Richard_Kennaway
Yet these are actual ideas someone suggested in a recent comment. In fact, that was what inspired this rant, but it grew beyond what would be appropriate to dump on the individual. Perhaps the voice I wrote that in was unclear, but I no more desire the things I wrote of than you do. Yet that is what I see people wishing for, time and again, right up to wanting actual wireheading. Scott Alexander wrote a cautionary tale of a device that someone would wear in their ear, that would always tell them the best thing for them to do, and was always right. The first thing it tells them is "don't listen to me", but (spoiler) if they do, it doesn't end well for them.
2Richard_Kennaway
There are authors I would like to read, if only they hadn't written so much! Whole fandoms that I must pass by, activities I would like to be proficient at but will never start on, because the years are short and remain so, however far an active life is prolonged.
4cousin_it
I think something like the Culture, with aligned superintelligent "ships" keeping humans as basically pets, wouldn't be too bad. The ships would try to have thriving human societies, but that doesn't mean granting all wishes - you don't grant all wishes of your cat after all. Also it would be nice if there was an option to increase intelligence, conditioned on increasing alignment at the same time, so you'd be able to move up the spectrum from human to ship.
2Said Achmiz
See also Stanislaw Lem on this subject:
2Richard_Kennaway
See also.
1ProgramCrafter
Upvoted as a good re-explanation of CEV complexity in simpler terms! (I believe LW will benefit from recalling long-understood things, so that it has a chance of predicting the future in greater detail.) In essence, you prove the claim "Coherent Extrapolated Volition would not literally include everything desirable happening effortlessly and everything undesirable going away". Would I be wrong to guess it argues against the position in https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone? That said, the current wishes of many people include things they want being done faster and easier; it's just that the more you extrapolate, the smaller the fraction that wants that level of automation - just more divergence as you consider higher scale.
8Richard_Kennaway
I suppose it does. That article was not in my mind at the time, but, well, let's just say that I am not a total hedonistic utilitarian, or a utilitarian of any other stripe. "Pleasure" is not among my goals, and the poster's vision of a universe of hedonium is to me one type of dead universe.

This is a story from the long-long-ago, from the Golden Age of Usenet.

On the science fiction newsgroups, there was someone—this is so long ago that I forget his name—who had an encyclopedic knowledge of fannish history, and especially con history, backed up by an extensive documentary archive. Now and then he would have occasion to correct someone on a point of fact, for example, pointing out that no, events at SuchAndSuchCon couldn't have influenced the committee of SoAndSoCon, because SoAndSoCon actually happened several years before.

The greater the irrefutability of the correction, the greater people's fury at being corrected. He would be scornfully accused of being well-informed.

"Prompt engineer" is a job that AI will wipe out before anyone even has it as a job.

"ChatGPT is Bullshit"

Thus the title of a recent paper. It appeared three weeks ago, but I haven't seen it mentioned on LW yet.

The abstract: "Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Fr…

[-]gwern*115

I'd say that they are wrong when they say a LLM may engage in 'soft bullshit': a LLM is simulating agents, who are definitely trying to track truth and the external world, because the truth is that which doesn't go away, and so it may say false things, but it still cares very much about falsity because it needs to know that for verisimilitude. If you simply say true or false things at random, you get ensnared in your own web and incur prediction error. Any given LLM may be good or bad at doing so - the success of story-based jailbreaks suggests they are still far from ideal - but it's clear that the prediction loss on large real-world texts written by agents like you or me, who are writing things to persuade each other, like I am writing this comment to manipulate you and everyone reading it, requires tracking latents corresponding to truth, beliefs, errors, etc. You can no more accurately predict the text of this comment without tracking what I believe and what is true than you could accurately predict it while not tracking whether I am writing in English or French. (Like in that sentence right there. You see what I did there? Maybe you didn't because you're just skimming and tldr, b…

4Owain_Evans
The "Still no lie detector for language model" paper is here: https://arxiv.org/pdf/2307.00175 The paper in the OP seems somewhat relate to my post from earlier this year.
5Dagon
I think that's true, but not very important (in the short term). On Bullshit was first published in 1986, and was a humorous, but useful, categorization of a whole lot of human communication output. ChatGPT is truth-agnostic (except for fine-tuning and output tuning), but still pretty good on a whole lot of general topics. Human choice of what GPT outputs to highlight or use in further communication can be bullshit or truth-seeking, depending on the human intent. In the long term, of course, the idea is absolutely core to all the alignment fears and to the expectation that AI will steamroller human civilization because it doesn't care.

If, on making a decision, your next thought is “Was that the right decision?” then you did not make a decision.

If, on making a decision, your next thought is to suppress the thought “Was that the right decision?” then you still did not make a decision.

If you are swayed by someone else asking “Was that the right decision?” then you did not make a decision.

If you are swayed by someone repeating arguments you already heard from them, you did not make a decision.

Not making that decision may be the right thing to do. Wavering suggests that you still have some d…

4winstonBosan
I'm very confused. Because it seems like for you a decision should not only clarify matters and narrow possibilities, but also eliminate all doubt entirely and prune off all possible worlds where the counterfactual can even be contemplated. Perhaps that's indeed how you define the word. But using such a stringent definition, I'd have to say I've never decided anything in my life. This doesn't seem like the most useful way to understand "decision" - it diverges enough from common usage, and mismatches the hyperdimensional cloud of word-meaning for "decision" badly enough, to be useless in conversation with most people.
3Richard_Kennaway
A decision is not a belief. You can make a decision and still be uncertain about the outcome. You can make a decision while still being uncertain about whether it is the right decision. Decision neither requires certainty nor produces certainty. It produces action. When the decision is made, consideration ends. The action must be wholehearted in spite of uncertainty. You can steer according to how events unfold, but you can't carry one third of an umbrella when the forecast is a one third chance of rain. In about a month's time, I will take a flight from A to B, and then a train from B to C. The flight is booked already, and I have just booked a ticket for a specific train that will only be valid on that train. Will I catch that train? Not if my flight is delayed too much. But I have considered the possibilities, chosen the train to aim for, and bought the ticket. There are no second thoughts, no dwelling on "but suppose" and "what if". Events on the day, and not before, will decide whether my hopes[1] for the journey will be realised. And if I miss the train, I already know what I will do about that.

----------------------------------------

1. hope: (1) Desire for an outcome which one has only limited power to steer events towards. (2) A good breakfast, but a poor supper. ↩︎

If “you can make a decision while still being uncertain about whether it is the right decision”. Then why can’t you think about “was that the right decision”? (Lit. Quote above vs original wording)

It seems like what you want to say is - be doubtful or not, but follow through with full vigour regardless. If that is the case, I find it to be reasonable. Just that the words you use are somewhat irreconcilable. 

1Richard_Kennaway
Because it is wasted motion. Only when new and relevant information comes to light does any further consideration accomplish useful work. One day I might write an article on rationality in the art of change ringing, a recreation I took up a few years ago. Besides the formidable technicalities of the activity, it teaches such lessons as letting the past go, carrying on in the face of uncertainty, and acting (by which I mean doing, not seeming) assuredly however unsure you are. I have also heard (purely anecdotally) that change ringers seem to never get Alzheimer's.
4Dagon
This seems like hyperbolic exhortation rather than simple description. This is not how many decisions feel to me - many decisions are exactly a belief (complete with Bayesian uncertainty). A belief in future action, to be sure, but it's distinct in time from the action itself. I do agree with this as advice, in fact - many decisions one faces should be treated as a commitment rather than an ongoing reconsideration. It's not actually true in most cases, and the ability to change one's plan when circumstances or knowledge changes is sometimes quite valuable. Knowing when to commit and when to be flexible is left as an exercise...
2cubefox
But if you only have a belief that you will do something in the future, you still have to decide, when the time comes, whether to carry out the action or not. So your previous belief doesn't seem to be an actual decision, but rather just a belief about a future decision -- about which action you will pick in the future. See Spohn's example about believing ("deciding") you won't wear shorts next winter:
2Dagon
Correct.  There are different levels of abstraction of predictions and intent, and observation/memory of past actions which all get labeled "decision".    I decide to attend a play in London next month.  This is an intent and a belief.  It's not guaranteed.  I buy tickets for the train and for the show.  The sub-decisions to click "buy" on the websites are in the past, and therefore committed.  The overall decision has more evidence, and gets more confident.  The cancelation window passes.  Again, a bit more evidence.  I board the train - that sub-decision is in the past, so is committed, but there's STILL some chance I won't see the play. Anything you call a "decision" that hasn't actually already happened is really a prediction or an intent.  Even DURING an action, you only have intent and prediction.  While the impulse is traveling down my arm to click the mouse, the power could still go out and I don't buy the ticket.  There is past, which is pretty immutable, and future, which cannot be known precisely.   I think this is compatible with Spohn's example (at least the part you pasted), and contradicts OP's claim that "you did not make a decision" for all the cases where the future is uncertain.  ALL decisions are actually predictions, until they are in the past tense.  One can argue whether that's a p(1) prediction or a different thing entirely, but that doesn't matter to this point. "If, on making a decision, your next thought is “Was that the right decision?” then you did not make a decision." is actually good directional advice in many cases, but it's factually simply incorrect.
2cubefox
That's an interesting perspective. Only it doesn't seem to fit into the simplified but neat picture of decision theory. There everything is sharply divided between being either a statement we can make true at will (an action we can currently decide to perform), to which we therefore do not need to assign any probability (have a belief about it happening), or an outcome, which we can't make true directly, and which is at most a consequence of our action. We can assign probabilities to outcomes, conditional on our available actions, and a value, which lets us compute the "expected" value of each action currently available to us. A decision is then simply picking the currently available action with the highest computed value. Though as you say, such a discretization for the sake of mathematical modelling fits poorly with the continuity of time.
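That textbook picture - probabilities over outcomes conditional on each action, values over outcomes, and "deciding" as picking the maximum expected value - can be sketched in a few lines. The umbrella scenario echoes the forecast example upthread; the numbers are invented purely for illustration.

```python
# A minimal sketch of the standard decision-theoretic picture.
# All scenario names and numbers here are invented for illustration.

def expected_value(outcome_probs, values):
    """Expected value of one action: sum over outcomes of p(outcome) * value."""
    return sum(p * values[outcome] for outcome, p in outcome_probs.items())

def decide(actions, values):
    """A 'decision' in this picture: pick the action with the highest expected value."""
    return max(actions, key=lambda a: expected_value(actions[a], values))

# One third chance of rain, as in the forecast mentioned earlier.
actions = {
    "take umbrella":  {"dry": 1.0},              # dry whatever happens
    "leave umbrella": {"dry": 2/3, "wet": 1/3},  # dry only if it doesn't rain
}
values = {"dry": 1.0, "wet": -2.0}

best = decide(actions, values)  # "take umbrella": EV 1.0 beats EV 0.0
```

The sharp division the comment describes shows up in the code: actions are just keys we can select at will, while probabilities attach only to outcomes.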
2Dagon
Decision theory is fine, as long as we don't think it applies to most things we colloquially call "decisions".   In terms of instantaneous discrete choose-an-action-and-complete-it-before-the-next-processing-cycle, it's quite a reasonable topic of study.
2cubefox
A more ambitious task would be to come up with a model that is more sophisticated than decision theory, one which tries to formalize your previous comment about intent and prediction/belief.
2Dagon
I think it's a different level of abstraction.  Decision theory works just fine if you separate the action of predicting a future action from the action itself.  Whether your prior-prediction influences your action when the time comes will vary by decision theory. I think, for most problems we use to compare decision theories, it doesn't matter much whether considering, planning, preparing, replanning, and acting are correlated time-separated decisions or whether it all collapses into a sum of "how to act at point-in-time".  I haven't seen much detailed exploration of decision theory X embedded agents or capacity/memory-limited ongoing decisions, but it would be interesting and important, I think.
2Richard_Kennaway
It is exhortation, certainly. It does not seem hyperbolic to me. It is making the same point that is illustrated by the multi-armed bandit problem: once you have determined which lever gives the maximum expected payout, the optimum strategy is to always pull that lever, and not to pull levers in proportion to how much they pay. Dithering never helps. Yes. But only as such changes come to be. Certainly not immediately on making the decision. "Commitment" is not quite the concept I'm getting at here. It's just that if I decided yesterday to do something today, then if nothing has changed I do that thing today. I don't redo the calculation, because I already know how it came out.
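The bandit point - maximize, don't probability-match - can be sketched numerically. The lever payouts below are invented for illustration; the payouts are assumed already known, as in the comment, so there is no exploration step.

```python
import random

random.seed(0)

# Known expected payout per pull for each lever (numbers invented).
payouts = {"A": 0.6, "B": 0.3, "C": 0.1}

def average_payout(strategy, rounds=10_000):
    """Average expected payout per pull under a lever-choosing strategy."""
    return sum(payouts[strategy()] for _ in range(rounds)) / rounds

def always_best():
    # Maximizing: always pull the known-best lever.
    return max(payouts, key=payouts.get)

def proportional():
    # "Dithering": pull levers in proportion to how much they pay.
    return random.choices(list(payouts), weights=list(payouts.values()))[0]

# always_best averages 0.6 per pull; proportional averages about
# 0.6*0.6 + 0.3*0.3 + 0.1*0.1 = 0.46 per pull.
```

Once the best lever is known, any probability mass placed on the other levers only drags the average down.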
2cubefox
Yes, but that arguably means we only make decisions about which things to do now. Because we can't force our future selves to follow through, to inexorably carry out something. See here:
2Richard_Kennaway
My left hand cannot force my right hand to do anything either. Instead, they work harmoniously together. Likewise my present, past, and future. Not only is the sage one with causation, he is one with himself. That is an example of dysfunctional decision-making. It is possible to do better. I always do the dishes today.

All universal quantifiers are bounded.

2Dagon
The open question is whether this includes the universe itself.

A long time ago, the use of calculators in schools was frowned upon, or even forbidden. Eventually they became permitted, and then required. How long will it be before AI assistants, currently frowned upon or even forbidden, become permitted, and then required?

The recent Gemini incident, apparently a debacle, was also a demonstration of how easy it is to deliberately mould an AI to force its output to hew to a required line, independent of the corpus on which it was trained, and the reality which gave rise to that corpus. Such moulding could be used by an…

4Richard_Kennaway
Here's an AI disaster story that came to me on thinking about the above.

1. Schools start requiring students to use the education system's official AI to criticise their own essays, and rewrite them until the AI finds them acceptable. This also removes the labour of marking from teachers.
2. All official teaching materials would be generated by a similar process. At about the same time, the teaching profession as we know it today ceases to exist. "Teachers" become merely administrators of the teaching system. No original documents from before AI are permitted for children to access in school.
3. Or out of school.
4. Social media platforms are required to silently limit distribution of materials that the AI scores poorly.
5. AIs used in such important capacities would have to agree with each other, or public confusion might result. There would therefore have to be a single AI, a central institution for managing the whole state. For public safety, no other AIs of similar capabilities would be permitted.
6. Prompt engineering becomes a criminal offence, akin to computer hacking.
7. Access to public archives of old books, online or offline, is limited to adults able to demonstrate to the AI's satisfaction that they have an approved reason for consulting them, and will not be corrupted by the wrong thoughts they contain. Physical books are discouraged in favour of online AI rewrites. New books must be vetted by AI. Freedom of speech is the freedom to speak truth. Truth is what it is good to think. Good is what the AI approves. The AI approves it because it is good.
8. Social credit scores are instituted, based on AI assessment of all of an individual's available speech and behaviour. Social media platforms are required to silently limit distribution of anything written by a low scorer (including one to one messaging).
9. Changes in the official standard of proper thought, speech, and action would occur from time to time in accordance with socia…
1Nate Showell
This sequence of steps looks implausible to me. Teachers would have a vested interest in preventing it, since their jobs would be on the line. A requirement for all teaching materials to be AI-generated would also be trivially easy to circumvent, either by teachers or by the students themselves. Any administrator who tried to do these things would simply have their orders ignored, and the Streisand Effect would lead to a surge of interest in pre-AI documents among both teachers and students.
2Richard_Kennaway
That will only put a brake on how fast the frog is boiled. Artists have a vested interest against the use of AI art, but today, hardly anyone else thinks twice about putting Midjourney images all through their postings, including on LessWrong. I'll be interested to see how that plays out in the commercial art industry.
2Nate Showell
You're underestimating how hard it is to fire people from government jobs, especially when those jobs are unionized. And even if there are strong economic incentives to replace teachers with AI, that still doesn't address the ease of circumvention. There's no surer way to make teenagers interested in a topic than to tell them that learning about it is forbidden.

There's an article in the current New Scientist, taking the form of a review of a new TV series called "The Peripheral", which is based on William Gibson's novel of the same name. The URL may not be useful, as it's paywalled, and if you can read it you're probably a subscriber and already have.

The article touches on MIRI and Leverage. A relevant part:

But over the past couple of years, singulatarian [sic] organisations [i.e. the Singularity Institute/MIRI and Leverage/Leverage Research] have been losing their mindshare. A number of former staffers at Leve…

The blind have seeing-eye dogs. Terry Pratchett gave Foul Ole Ron a thinking-brain dog. At last, a serious use-case for LLMs! Thinking-brain dogs for the hard of thinking!

The safer it is made, the faster it will be developed, until the desired level of danger has been restored.

The physicality of decision.

A month ago I went out for a 100 mile bicycle ride. I'm no stranger to riding that distance, having participated in organised rides of anywhere from 50 to 150 miles for more than twelve years, but this was the first time I attempted that distance without the support of an organised event. Events provide both the psychological support of hundreds, sometimes thousands, of other cyclists riding the same route, and the practical support of rest stops with water and snacks.

I designed the route so that after 60 miles, I would be just…

The game of Elephant begins when someone drags an elephant into the room.

Epistemic status: a jeu d'esprit confabulated upon a tweet I once saw. Long enough ago that I don't think I lose that game of Elephant by posting this now.

Everyone knows there's an elephant there. It's so obvious! We all know the elephant's there, we all know that we all know, and so on. It's common knowledge to everyone here, even though no-one's said anything about it. It's too obvious to need saying.

But maybe there are some people who are so oblivious that they don't realise it's t…

I have a dragon in my garage. I mentioned it to my friend Jim, and of course he was sceptical. "Let's see this dragon!" he said. So I had him come round, and knocked on the garage door. The door opened and the dragon stepped out right there in front of us.

"That can't really be a dragon!" he says. It's a well-trained dragon, so I had it walk about and spread its wings, showing off its iridescent scaly hide.

"Yes, it looks like a dragon," he goes on, "but it can't really be a dragon. Dragons belch fire!"

The dragon raised an eyebrow, and discreetly belched som…

Here we go again? First mpox Clade 1b case detected outside Africa, WHO declares emergency.

I have at last had a use for ChatGPT (that was not about ChatGPT).

I was looking (as one does) at Aberdare St Elvan Place Triples. (It's a bellringing thing, I won't try to explain it.) I understood everything in that diagram except for the annotations "-M", "-I", etc., but those don't lend themselves to Google searching.

So I asked ChatGPT as if I was asking a bell ringer:

When bellringing methods are drawn on a page, there are often annotations near each lead end consisting of a hyphen and a single letter, such as "-M", "-I", etc. What do these annotation…
8Gunnar_Zarncke
For what it's worth, I use ChatGPT all the time, multiple times every day, even most hours. I usually don't hit the request limits, but have three times now. Once with a sequence of Dall-E, I guess because evaluating an image is faster than evaluating a long text and because the generated pictures require a lot of tuning (so far). I use it

* as a Google replacement.
* to figure out key words for Google queries like you did and to find original sources.
* to summarize websites. Though for "generally known" stuff "no browse" is often better and faster.
* as a generator for code fragments.
* for debugging - just paste code and error message into it.
* for understanding topics deeper in an interactive discussion style.
* as a language coach. "Explain the difference between zaidi and kuliko in Swahili with examples."
* for improving texts of all kinds - I often ask: "criticise this text I wrote".

and more I guess. I also let my kids have dialogs with it.
2Gunnar_Zarncke
...and I use a custom GPT that a colleague created to write nice Scrum tickets based on technical information provided. Just paste in what you have, such as a mail about a needed service update, an error log entry, or a terse protocol from a meeting, and PO GPT will create a ticket readable for non-techies out of it. https://chat.openai.com/g/g-FmWRcwG0i-po-gpt
2Gunnar_Zarncke
...and I use ChatGPT to

* generate illustrations for posts and inspirational images
* read text from screenshots
2Richard_Kennaway
MacOS has for a while had the capability to copy text out of images. Is ChatGPT just invoking a text-recognition library?
2Gunnar_Zarncke
Yeah, there are also tools for Windows that can do that. But ChatGPT can format nicely, convert to CSV, bullet points etc. No, it can do a lot of things with images, describe what is in there, style, etc. I think it is another image model.

[ETA: alfredmacdonald's post referred to here has been deleted.]

Well, well, alfredmacdonald has banned me from his posts, which of course he has every right to do. Just for the record, I'll paste the brief thread that led to this here.

Richard_Kennaway (in reply to this comment)

I also notice that alfredmacdonald last posted or commented here 10 years ago, and the content of the current post is a sharp break from his earlier (and brief) participation. What brings you back, Alfred? (If the answer is in the video, I won't see it. The table of contents was enou…

When I watch a subtitled film, it is not long before I no longer notice that I am reading subtitles, and when I recall scenes from it afterwards, the actors’ voices in my head are speaking the words that I read.

2Dagon
Me too! It's a very specific form of synesthesia. For languages I know a little bit, but not well enough to do without subtitles, it can trick me into thinking I'm far better at understanding native speakers than I actually am. I can't wait until LLMs are good, fast, and cheap enough, and AR or related video technology exists, such that I can get automatic subtitles for real-life conversations, in English as well as other languages.

Epistemic status: crafted primarily for rhetorical parallelism.

All theories are right, but some are useless.

[Wittgenstein] once greeted me with the question: 'Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis?' I replied: 'I suppose, because it looked as if the sun went round the earth.' 'Well,' he asked, 'what would it have looked like if it had looked as if the earth turned on its axis ?" (Source)

Like this.

Interesting application of a blockchain. What catches my attention is this (my emphasis):

The deepest thinkers about Dark Forest seem to agree that while its use of cryptography is genuinely innovative, an even more compelling proof of concept in the game is its “autonomous” game world—an online environment that no one controls, and which cannot be taken down.

So much for "we can always turn the AI off." This thing is designed to be impossible to turn off.

"Parasite gives wolves what it takes to be pack leaders", Nature, 24 November 2022.

Toxoplasma gondii, the parasite well-known for making rodents lose their fear of cats, and possibly making humans more reckless, also affects wolves in an interesting way.

"infected wolves were 11 times more likely than uninfected ones to leave their birth family to start a new pack, and 46 times more likely to become pack leaders — often the only wolves in the pack that breed."

4gwern
The gesturing towards the infected wolves being more reproductively fit in general is probably wrong, however. Of course wolves can be more aggressive if it's actually a good idea; there's no need for a parasite to force them to be more aggressive. The suggestion about American lions going extinct is absurd - 11,000 years is more than enough time for wolves to recalibrate such a very heritable trait if it's so fitness-linked! So the question there is merely what is going on? Some sort of bias or very localized fitness benefit? Is there a selection bias whereby ex ante going for pack leader is a terrible idea, but ex post conditional on victory (rather than death/expulsion) it looks good? Well, this claims to be longitudinal and not find the sorts of correlations you'd expect from a survivorship.

What else? Looking it over, the sampling frame 1995-2020 itself is suspect: starting in 1995. Why did it start then? Well, that's when the wolves came back (very briefly mentioned in the article). The wolf population expanded rapidly 5-fold, and continues to oscillate a lot as packs rise and fold (ranging 8-14) and because of overall mortality/randomness on a small base (a pack is only like 10-20 wolves of all ages, so you can see why there would be a lot of volatility and problems with hard constraints like lower bounds):

So, we have at least two good possible explanations there: (a) it was genuinely reproductively-fit to take more risks than the basal wolf, but only because they were expanding into a completely-wolf-empty park and surrounding environs, and the pack-leader GLM they use doesn't include any variables for time period, so on reanalysis, we would find that the leader-effect has been fading out since 1995; and (b) this effect still exists, and risk-seeking individuals do form new packs and are more fit... but only temporarily because they occupied a low-quality pack niche and it goes extinct or does badly enough that they would've done better to stay in the or

Tools, not rules.

Or to put it another way, rules are tools.

What is happiness?

This is an extract from an interview with the guitarist Nilo Nuñez, broadcast yesterday on the BBC World Service. Nuñez was born and brought up in Cuba, and formed a rock band, but he and his group came more and more into conflict with the authorities. He finally decided that he had to leave. When the group received an invitation to tour in the Canary Islands, and the Cuban authorities gave them permission to go, they decided to take the opportunity to leave Cuba and not return. They only had temporary visas, so they stayed on in the Cana... (read more)

If you need a bot to assist your writing, then you are not competent to edit the result.

5quetzal_rainbow
I need a bot for writing assistance because writing from scratch for me is very tiring, while editing is not.

Oh, lookee here. AI-generated spam.

From New Scientist, 14 Nov 2022, on a 50% fall in honeybee life expectancy since the 1970s:

“For the most part, honeybees are livestock, so beekeepers and breeders often selectively breed from colonies with desirable traits like disease resistance,” says Nearman.

“In this case, it may be possible that selecting for the outcome of disease resistance was an inadvertent selection for reduced lifespan among individual bees,” he says. “Shorter-lived bees would reduce the probability of spreading disease, so colonies with shorter lived bees would appear healt

... (read more)