All of Malentropic Gizmo's Comments + Replies

I think I can! 

When I write, I am constantly balancing brevity (and aesthetics generally) with clarity. Unfortunately, I sometimes gravely fail at achieving the latter without noticing. Your comment above immediately informs me of this mistake.

Thank you for this! Your companion piece instantly solved a problem I was having with my diet spreadsheet!

Yes, I basically agree: My above comment is only an argument against the most popular halfer model. 

However, in the interest of sparing readers' time, I have to mention that your model doesn't have a probability for 'today is Monday' nor for 'today is Tuesday'. If they want to see your reasoning for this choice, they should start with the post you linked second instead of the post you linked first.

I had to use the Keras backend's switch function for the automatic differentiation to work, but basically yes.

I enjoyed the exercise, thanks! 

My solution for the common turtles was setting up the digital cradle such that the mind forged inside was compelled to serve my interests (I wrote a custom loss function for the NN). I used 0.5*segments+x for the vampire one (where I used the x which had the best average gp result for the example vampire population). Annoyingly, I don't remember what I changed between my previous and my current solution, but the previous one was much better 🥲
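For the vampire bid, the x-tuning step might look like this minimal sketch (avg_gp and the candidate range are hypothetical stand-ins, since the actual scoring code isn't shown):

```python
import numpy as np

# Sketch of the x-tuning step. avg_gp is a hypothetical helper that returns
# the average gp earned on the example vampire population for a given offset x.
def tune_offset(avg_gp, candidates=np.linspace(0.0, 10.0, 1001)):
    scores = [avg_gp(x) for x in candidates]
    return candidates[int(np.argmax(scores))]  # x with the best average gp
```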

Looking forward to the next challenge!

abstractapplic
  You're welcome, and thank you for playing. I'm curious how you defined that. (i.e. was it "gradient = x for rows where predicted>actual, gradient = -8x for rows where actual>predicted", or something finickier?)
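For concreteness, a minimal sketch of what the switch-based loss mentioned above might look like (the exact one-sided 1x/8x form is an assumption based on the description in this comment):

```python
import tensorflow.keras.backend as K

def asymmetric_loss(y_true, y_pred):
    # Penalize under-prediction 8x as heavily as over-prediction, giving
    # gradients of +1 (predicted > actual) and -8 (actual > predicted).
    err = y_pred - y_true
    return K.mean(K.switch(err > 0, err, -8.0 * err))

# Usage: model.compile(optimizer="adam", loss=asymmetric_loss)
```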

Random Musing on Autoregressive Transformers resulting from Taelin's A::B Challenge

Let's model an autoregressive transformer as a Boolean circuit or, for simpler presentation, an n-ary circuit with m inputs and 1 output.

Model the entire system the following way: given some particular m-length starting input:

  1. the circuit calculates the output token (/integer) from the input
  2. appends the calculated output token to the end of the input word
  3. deletes the first token of the input
  4. go to 1

It's easy to see that, strictly speaking, this system is not very powerful computationally: we have... (read more)
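As a minimal sketch of the loop described in steps 1-4 above (the circuit is an arbitrary stand-in function):

```python
from collections import deque

def run_system(circuit, start_input, steps):
    """Iterate an m-input, 1-output circuit on a sliding window of tokens.

    circuit: maps an m-tuple of tokens (integers below n) to one token
    start_input: the particular m-length starting input
    """
    window = deque(start_input, maxlen=len(start_input))
    for _ in range(steps):
        token = circuit(tuple(window))  # 1. compute the output token
        window.append(token)            # 2 + 3: append output, drop first token
        yield token                     # 4. go to 1

# Example: a toy 3-ary circuit over a window of m=4 tokens
print(list(run_system(lambda w: sum(w) % 3, [0, 1, 2, 1], steps=10)))
```

Since the window can take at most n^m distinct values and the update is deterministic, the output stream is eventually periodic, which is one concrete sense in which such a system is computationally weak.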

No, she does not. And it's easy to see if you actually try to formally specify what is meant here by "today" and what is meant by "today" in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment. 

I said that I can translate the math of probability spaces to first order logic, and I explicitly said that our conversation can NOT be translated to first order logic, as proof that it is not about math but rather about philosophy. Please reread that part of my previous comment.

And frankly, it b

... (read more)
Ape in the coat
Meta: the notion of writing probability 101 wasn't addressed to you specifically. It was a release of my accumulated frustration from not-particularly-productive arguments with several different people, which again and again led to the realization that the crux of disagreement lies in the most basics; you are only one of those people. You are confusing to talk to, with your manner of raising seemingly unrelated points and then immediately dropping them. And yet you didn't deserve the full emotional blow that you apparently received, and I'm sorry about it.

Writing a probability 101 seems to me a constructive solution to such situations, anyway. It would provide an opportunity to resolve these kinds of disagreements as soon as they arise, instead of having to backtrack to them from a very specific topic. I may still add it to my todo list.

I figured that either you don't know what "probability experiment" is or you are being confusing on purpose. I prefer to err in the direction of good faith, so the former was my initial hypothesis. Now, considering that you admit that you were perfectly aware of what I was talking about, to the point where you specifically tried to cherry-pick around it, the latter became more likely. Please don't do it anymore. Communication is hard as it is. If you know what a well-established thing is, but believe it's wrong - just say so.

Nevertheless, from this exchange I believe I now understand that you think "probability experiment" isn't a mathematical concept, but a philosophical one. I could just accept this for the sake of the argument, and we would be in a situation where we have a philosophical consensus about an issue, to the point where it's part of a standard probability theory course taught to students, and you are trying to argue against it, which would put quite some burden of proof on your shoulders. But, as a matter of fact, I don't see anything preventing us from formally defining "probability experiment". We a

Now, that's not how math works. If you come up with some new concept, be so kind as to prove that they are coherent mathematical entities and establish what their properties are.

This whole conversation isn't about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple langua... (read more)

Ape in the coat
The tragedy of the whole situation is that people keep thinking that. Everything is "about philosophy" until you find a better way to formalize it. Here we have a better way to formalize the issue, which you keep ignoring. Let me spell it out for you once more: if a mathematical probabilistic model fits some real world process, then the outcomes it produces have to have the same statistical properties as the outcomes of the real world process. If we agree on this philosophical statement, then we have reduced the disagreement to a mathematical question, which I've already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.

I'm not. And frankly, it baffles me that you think that you need to explain that it's possible to talk about math using natural language to a person who has been doing it for multiple posts in a row. https://en.wikipedia.org/wiki/Experiment_(probability_theory)

The more I post about anthropics, the clearer it becomes that I should've started with posting about probability theory 101. My naive hopes that the average LessWrong reader is well familiar with the basics and just confused about more complicated cases are crushed beyond salvation.

This question is vague in a manner similar to what I've seen in Lewis's paper. Let's specify it, so that we both understand what we are talking about. Did you mean to ask 1. or 2.:

  1. Can a probability space at all model some person's belief in some circumstance at some specific point in time?
  2. Can a probability space always model any person's belief in any circumstances at any unspecified point in time?

The way I understand it, we agree on 1. but disagree on 2. There are definitely situations where you can correctly model uncertainty about time via probability theory. As a matter of fact, it's most of the cases. You won't be able to resolve our disagreement by pointing to such situations - we agree on them. But you seem to have generalized tha

Metapoint: You write a lot of things in your comments with which I usually disagree; however, I think faster replies are more useful in these kinds of conversations than complete replies, so at first I'm only going to reply to the points I consider most important at the time. If you disagree and believe writing complete replies is more useful, do note (however, my experience for that case is that after a while, instead of writing a comment containing a reply to the list of points the other party brought up, I simply drop out of the conversation and I can't... (read more)

Ape in the coat
I didn't start believing that "centred worlds don't work". I suspect you got this impression mostly because you were reading the posts in the wrong order. I started from trying the existent models, noticed that they behave weirdly if we assume that they are describing Sleeping Beauty, and then noticed that they are actually talking about different problems - for which their behavior is completely normal. And then, while trying to understand what is going on, I stumbled upon the notion of centred possible worlds and their complete lack of mathematical justification, and it opened my eyes. And then I was immediately able to construct the correct model, which completely resolves the paradox, adds up to normality and has no issues whatsoever. But in hindsight, if I had started from the assumption that centred possible worlds do not work, that would have been the smart thing to do and it would have saved me a lot of time.

Well, you didn't. All this time you've just been insisting on a privileged treatment for them: "can work until proven otherwise". Now, that's not how math works. If you come up with some new concept, be so kind as to prove that they are coherent mathematical entities and establish what their properties are. I'm more than willing to listen to such attempts. The problem is - there are none. People just seem to think that saying "first person perspective" allows them to build a sample space from non-mutually exclusive outcomes.

It's like you didn't even read my posts or my comments. By definition, a sample space can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive - they happen to the same person in the same iteration of the probability experiment during the same outcome of the coin toss. The "centredness" framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.

This is what statistical analysis clearly demonstrates. If a mathem
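To make the statistical-analysis point concrete, here is a minimal simulation sketch (my paraphrase of the argument, not code from the post): both models produce each awakening type with frequency about 1/3, but only the actual experiment has Tails&Monday always followed by Tails&Tuesday.

```python
import random

def actual_experiment(n_runs):
    # Awakenings in order: on Tails, Monday is always followed by Tuesday.
    stream = []
    for _ in range(n_runs):
        if random.random() < 0.5:
            stream.append(("Heads", "Mon"))
        else:
            stream += [("Tails", "Mon"), ("Tails", "Tue")]
    return stream

def random_awakening_model(n_awakenings):
    # "Centred" treatment: awakenings drawn independently, 1/3 each.
    options = [("Heads", "Mon"), ("Tails", "Mon"), ("Tails", "Tue")]
    return [random.choice(options) for _ in range(n_awakenings)]

real = actual_experiment(10_000)
print(real.count(("Tails", "Mon")) / len(real))  # ~1/3, same as the model
# The serial structure differs: in the real stream this is always 1.0,
# while the random-awakening model would give ~1/3.
after = [real[i + 1] for i in range(len(real) - 1) if real[i] == ("Tails", "Mon")]
print(after.count(("Tails", "Tue")) / len(after))
```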

If everything actually worked, then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another. 

I've read the relevant part of your previous post and I have an idea that might help.

Consider the following problem: "Forgetful Brandon": Adam flips a coin and does NOT show it to Brandon, but shouts YAY! with 50% probability if the coin is HEADS (he does not shout if the coin is TAILS). (Brandon knows Adam's behav... (read more)

Ape in the coat
I'll start from addressing the actual crux of our disagreement. As I've written in this post, you can't just say the magic word "centredness" and think that you've solved the problem. If you want a model that can have an event that changes its truth predicate with the passage of time during the same iteration of the probability experiment, you need to formally construct such a model, rewriting all of probability theory from scratch, because our current probability theory doesn't allow that. In probability theory, one outcome of a sample space is realized per iteration of the experiment. And so, for this iteration of the experiment, every event which includes this outcome is considered True.

All the "centred" models therefore behave as if Sleeping Beauty consists of two outcomes of a probability experiment. As if Monday and Tuesday happen at random, and as if, to determine whether the Beauty has another awakening, the coin is tossed anew. And because of this they contradict the conditions of the experiment, according to which the Tails&Tuesday awakening always happens after Tails&Monday. This is shown in the Statistical Analysis section. It's a model for a random awakening, not for the current awakening, because the current awakening is not random. So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event "the Beauty is awakened in this experiment" is properly defined. The event "the Beauty is awake on this particular day" is not, unless you find some new clever way to do it - feel free to try.

I must say, this problem is very unhelpful to this discussion. But sure, let's analyze it regardless. I suppose? Such questions are usually about ideal rational agents, so yes, it shouldn't matter what a specific non-ideal agent does, but then why even add this extra complication to the question if it's irrelevant? Well, that's his problem, honestly; I thought we agreed that what he does is irrelevant to the question. Also his behavior here is not as bad as wh

I wasn't sure either, but looked at the previous post to check which one is intended.

Consider that in the real world Tuesday always happens after Monday. Do you agree or disagree: It is incorrect to model a real world agent's knowledge about today being Monday with probability?

Ape in the coat
Again, that depends. I think I'm talking about something like what you're pointing to here:

What is the probability of tails given it's Monday for your observer instances?

Viktor Rehnberg
Good formulation. "Given it's Monday" can have two different meanings:

  • you learn that you will only be awoken on Monday, then it's 50%
  • you awake, assign 1/3 probability to each instance, and then make the update P(T|M) = P(M|T)P(T)/P(M) = (1/2)(2/3)/(2/3) = 50%

So it turns out to be 50% for both, but it wasn't initially obvious to me that these two ways would have the same result.
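A quick sanity check of that arithmetic (a sketch assuming the standard thirder assignment of 1/3 per awakening):

```python
# Thirder credences over the three possible awakenings
p = {("Heads", "Mon"): 1/3, ("Tails", "Mon"): 1/3, ("Tails", "Tue"): 1/3}

p_monday = p[("Heads", "Mon")] + p[("Tails", "Mon")]  # P(M)     = 2/3
p_tails_and_monday = p[("Tails", "Mon")]              # P(T & M) = 1/3
print(p_tails_and_monday / p_monday)                  # P(T | M) = 0.5
```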

You may bet that the coin is Tails at 2:3 odds. That is: if you bet 200$ and the coin is indeed Tails you win 300$. The bet will be resolved on Wednesday, after the experiment has ended.

I think the second sentence should be: "That is: if you bet 300$ and the coin is indeed Tails you win 200$."

Dagon
Oh, good point! I'm not sure if the intent was "pays 1:2, meaning you end up with +300 or -200 total" or "pays 2:3, which indeed means a $300 bet ends with +500 or -300".
Ape in the coat
Yes, you are correct, thanks!

I started at your latest post and recursively tried to find where you made a mistake (this took a long time!). Finally, I got here, and I think I've found the philosophical decision that led you astray. 

Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in Sleeping Beauty)? And you do this purely because you dislike the 1/3 result you'd get for Sleeping Beauty? 

Philosophers answer "Why not?" to the question of centered worlds because nothing breaks and we want to consider the question... (read more)

Ape in the coat
I think you'd benefit more if you read them in the right order, starting from here. Sure, we want a lot of things. But apparently we can't always have everything we want. To preserve the truth statements we need to follow the math wherever it leads and not push it where we would like it to go. And where the math goes - that is what we should want.

This post refers to several alternative problems where P(today is Monday) is a coherent probability, such as the Single Awakening and No-Coin-Toss problems, which were introduced in the previous post. And here I explain the core principle: when there is only one day that is observed in one run of the experiment, you can coherently define what "today" means - the day from this iteration of the experiment. A random day. Monday xor Tuesday. This is how wrong models try to treat Monday and Tuesday in Sleeping Beauty. As if they happen at random. But they do not. There is an order between them, and so they can't be treated this way. Today can't be Monday xor Tuesday, because on Tails both Monday and Tuesday do happen.

As a matter of fact, there is another situation where you can coherently talk about "today", which I initially missed. "Today" can mean "any day". So, for example, in Technicolor Sleeping Beauty from the next post, you can have a coherent expectation to see red with 50% and blue with 50% on the day of your awakening, because for every day it's the same. But you still can't talk about "probability that the coin is Heads today", because on Monday and Tuesday these probabilities are different.

So in practice, the limitation is only about Sleeping Beauty type problems, where there are multiple awakenings with memory loss in between per one iteration of the experiment, and no consistent probabilities for every awakening. But generally, I think it's always helpful to understand what exactly you mean by "today" in any probability theory problem.

I do not decide anything axiomatically. But I notice that the existent axioms of probabili

Those are exactly my favourites!!

It's probably not intended, but I always imagine that in "We do not wish to advance", first the singer whispers sweet nothings to the alignment community, then the shareholder meeting starts and so: glorious-vibed music: "OPUS!!!" haha

Nihil Supernum was weird because the text was always pretty somber for me. I understood it to express the hardship of those living in a world without any safety nets trying to do good, i.e. us, yet the music, as you point out, is pretty empowering. This combination is (to my knowledge) ki... (read more)

habryka

It's probably not intended, but I always imagine that in "We do not wish to advance", first the singer whispers sweet nothings to the alignment community, then the shareholder meeting starts and so: glorious-vibed music: "OPUS!!!" haha

That was indeed the intended effect!

I'm not sure where the error is in your calculations (I suspect in double-counting Tuesday, or forgetting that Tuesday happens even if Beauty is not woken up, so it still gets its "matches Monday bet" payout), but I love that you've shown how thirders are the true halfers!  

To be precise, I've shown that in a given betting structure (which is commonly used as an argument for the halfer side, even if you didn't use it that way now) using thirder probabilities leads to correct behaviour. In fact, my belief is that in ANY kind of setup using thirder probabilities l... (read more)

But do you also agree that there isn't any kind of bet, with any terms or resolution mechanism, which supports the halfer probabilities? While you did not say it explicitly, your comment's structure seems to imply that one of the bet structures you gave (the one I've quoted) supports the halfer side. My comment is an analysis showing that that's not true (which was a priori pretty surprising to me).
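For reference, here's a minimal simulation of the per-awakening version of the quoted bet (the 300$ stake and 200$ win follow the corrected 2:3 numbers; the exact resolution rule is an assumption for illustration):

```python
import random

def avg_profit(bet_tails, n=100_000, stake=300, win=200):
    # Average profit per experiment, betting at 2:3 odds on every awakening.
    total = 0
    for _ in range(n):
        tails = random.random() < 0.5
        for _ in range(2 if tails else 1):  # two awakenings on Tails
            total += win if bet_tails == tails else -stake
    return total / n

print(avg_profit(bet_tails=True))   # ~ +50  = 0.5*(2*200) - 0.5*300
print(avg_profit(bet_tails=False))  # ~ -200 = 0.5*200 - 0.5*(2*300)
```

Betting Tails is profitable here because the per-awakening breakeven sits at 1:2 odds, which corresponds to the thirder P(Tails) = 2/3.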

Dagon
I'm not sure where the error is, but I love that you've shown how thirders are the true halfers!

Oh, it may also be because the randomization interacts with the "if they don't match, bets are off" stipulation, which was intended to acknowledge that the wakings on Monday and Tuesday are identical, to Beauty. It turns out that disfavors tails, which is the only opportunity for a mismatch. The fix is to either disallow randomization, or to say "if there are two wagers which disagree, we'll randomize between them as to which is binding". Amusingly, this means that both halfer and thirder are indifferent between heads and tails, making it very clear that it's an incomplete question.

In fact, I don't mean to "support the halfer side"; I mean that having a side, without specifying precisely what future experience(s) are being predicted, is incorrect. Thank you for making it further clear that the problem is deeply rooted in intuitions of identity and the confusion between there being one entity on Sunday, one OR two on Monday, and one again on Wednesday. I do think that it's purely a modeling choice whether to consider Heads&Tuesday to be 0.25 probability that just doesn't happen to have Beauty awake in it, or whether to distribute that probability among the others.

If it's "on Wednesday, you'll be paid $1 if your prediction(s) were correct, and lose $1 if they were incorrect (and voided if somehow there are two wakenings and you make different predictions)", you should be indifferent to heads or tails as your prediction.

I recommend setting aside around an hour and studying this comment closely.

In particular, you will see that just because the text I quoted from you is true, that is not an argument for believing that the probability of heads is 1/2. Halfers are actually those who are NOT indifferent between heads and t... (read more)

Ape in the coat
You are already aware of this, but for the benefit of other readers I'll mention it anyway. In this post I demonstrate that the narrative of betting arguments validating thirdism is generally wrong, and is just a result of the fact that the first and therefore most popular halfer model is wrong.

Both thirders and halfers, following the correct model, make the same bets in Sleeping Beauty, though for different reasons. The disagreement is about how to factorize the product of probability of an event and utility of an event. And if we investigate a bit deeper, the halfer way to do it makes more sense, because its utilities do not shift back and forth during the same iteration of the experiment.
Dagon
I'm probably not going to spend an hour on this, but at first glance, it appears that both that comment and yours are making very clear betting arguments.  I FULLY agree that the terms and resolution mechanism for the bets (aka the experience definition for the prediction) are the definition of probability, and control what probability Beauty should use.

My three favourites are:

  • The Litany of Tarrrrrski
  • AGI and the EMH
  • Nihil Supernum

Two things I saw:

  1. The 'Fangs for some reason' column is not needed, because every gray turtle has a fang and no turtle of any other color has one.
  2. There are a lot of turtles (around 5404 more than expected) with the following characteristics: (20.4 lb weight, no wrinkles, 6 shell segments, green, normal nostril size, no miscellaneous abnormalities)

My Solution (this might change before the end):

[23.14, 19.24, 25.98, 21.52, 18.17, 7.40, 31.15, 20.40, 24.0, 20.52]

Previous solution:

22.652468, 18.932825, 25.491783, 20.964714, 18.029692, 7.4, 30.246178, 20.4, 24.039215, 20.40147

I love Egan! I will read Luminous next! Thanks!

Yes, but good recommendation otherwise, thank you!

Thank you, I will read this one!

I will read whichever fiction book is recommended to me first (and that I haven't already read)! Time is of the essence! I will read anything, but if you want to recommend me something I am more likely to enjoy, here are a few things about me: I like Sci-fi, Fantasy, metaethics, computers, games, Computer Science theory, Artificial Intelligence, fitness, D&D, edgy/shock humor.

niplav
The titular short story from Luminous (Greg Egan, 1995).
metachirality
I would be surprised if you haven't read Unsong already.
benjamincosman
Too Like the Lightning by Ada Palmer :)

I enjoyed this, and at times I felt close to grasping green, but now, after reading it, I wouldn't be able to convey to someone else what the part of green which isn't according to some other color is. Multiple times in the post you build something up just to demolish it a few paragraphs later, which makes the bottom line hard to remember for me, so a "green for dummies" version would be nice.

Example of solarpunk aesthetic (to be clear: I think the best futures are way more future-y than this)

I like the picture. Obviously, the pictured scene would be simulated on some big server cluster, but nice aesthetics; I wouldn't require a more future-y one.

I'm surprised people are taking you seriously.

If you're reading comments under the post, that obviously selects for people who take him seriously, similarly to how, if you clicked through a banner ad promising to increase one's penis by X inches, you would mostly find people who took the ad more seriously than you'd expect.

I put ~5% on the part I selected, but there is no 5% emoji, so I thought I'd mention this in a short comment.

Because when you lose weight you lose a mix of fat and muscle, but when you gain weight you gain mostly fat if you don't exercise (and people usually don't, because they think it's optional), resulting in a greater bodyfat percentage (which is actually the relevant metric for health, not weight).
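A toy numerical illustration of that mechanism (all numbers are made-up assumptions):

```python
weight, fat = 200.0, 60.0            # start: 60/200 = 30% bodyfat
weight, fat = weight - 20, fat - 12  # lose 20 lb as ~60% fat / 40% muscle
weight, fat = weight + 20, fat + 18  # regain 20 lb as ~90% fat (no exercise)
print(f"{fat / weight:.0%}")         # 33%: fatter at the same scale weight
```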

I also thought that it was very common. I would say it's necessary for competition math.

Ah, I see. Yes, that is possible, though that makes the main character much less relatable.

I think the main character's desire to punish the AIs stemmed from his self-hatred instead. How would you explain this part otherwise?

And if sometimes in their weary, resentful faces I recognize a mirror of my own expression—well, what of it?

gwern
I agree. I thought the twist was that the AIs he oversees are copies of the narrator, and the narrator himself may be an AI - just at the top of the simulation pyramid. He is his own em hell.
Viliam
Low confidence here, but the causality seems to me the other way round: abuses the AIs -> rationalizes "they deserve it, because they are low status" -> notices that his status, although higher than the AIs', is still lower than his colleagues' -> feels that he also deserves abuse

So I've reached a point in my amateur bodybuilding process where I am satisfied with my arms. I, of course, regularly see and talk with guys who have better physiques, but it doesn't bother me; when I look in the mirror, I'm still happy. 

This, apparently, is not the typical experience. In the bodybuilding noosphere, there are many memes born from the opposite experience: "The day you start lifting is the day you're never big enough.", "You will never be as big as your pump.", etc.

My question is about a meme I've seen recently which DOES mirror m... (read more)

I love how it admits it has no idea how come it gets better if it retains no memories.

Martin Fell
That actually makes a lot of sense to me - suppose that its equivalent of episodic / conscious memory is what is there in the context window - then it wouldn't "remember" any of its training. These would appear to be skills that exist but without any memory of acquiring them. A bit similar to how you don't remember learning how to talk. It is what I'd expect a self-aware LLM to perceive. But of course that might just be what it's inferred from the training data.

What about Outer Wilds? It's not strictly a puzzle game, but I think it might go well with this exercise. Also, what games would you recommend for this to someone who has already played every available level in Baba Is You?

Raemon
I think I'd end up constructing a new exercise for Outer Wilds but could see doing something with it. (I have started but not completed Outer Wilds.) I think this exercise works best for games where puzzles come in relatively discrete chunks and you can see most of the puzzle at once.
Raemon
Fwiw I tried out Understand and was underwhelmed. (Cool concept but it wasn’t actually better as an exercise than other good puzzle games)

It's a pity we don't know the karma scores of their comments before this post was published. For what it's worth, I only see two of his comments with negative karma: this and this. The first of these two is the one recent comment of Roko's that I strong-downvoted (though I also strong agree-voted), but I might not have done that if I had known that a few comments with slightly negative karma are enough to silence someone.

Answer by Malentropic Gizmo

Initially, I had a strong feeling/intuition that the answer was 1/3, but felt that because you can also construct a betting situation for 1/2, the question was not decided. In general, I've always found betting arguments the strongest forms of arguments: I don't much care how philosophers feel about what the right way to assign probabilities is, I want to make good decisions in uncertain situations for which betting arguments are a good abstraction. "Rationality is systematized winning" and all that.

Then I read this comment, which showed me that I made... (read more)

Ape in the coat
Thank you for bringing this to my attention. As a matter of fact, in the linked comment Radford Neal is dealing with a weak-man, while conveniently assuming that other alternatives "are beyond the bounds of rational discussion", which is very much not the case. But it is indeed a decent argument that deserves a detailed rebuttal, and I'll make sure to provide it in the future.

Two things don't have to be completely identical to each other for one to give us useful information about the other. Even though the game is not completely identical to the risky scenario (as you pointed out: you don't play against a malign superintelligence), it serves as useful evidence to those who believe that they can't possibly lose the game against a regular human.

The post titled "Most experts believe COVID-19 was probably not a lab leak" is on the frontpage, yet this post, while being newer and having more karma, is not. Looking into it, it's because this post does not have the frontpage tag: it is a personal blogpost. 

Personal Blogposts are posts that don't fit LessWrong's Frontpage Guidelines. They get less visibility by default. The frontpage guidelines are:

  • Timelessness. Will people still care about this in 5 years?
  • Avoid political topics. They're important to discuss sometimes, but we try to avoid it on
... (read more)
Elizabeth
LessWrong has been very inconsistent about covid in particular. Just of my own posts:

  • Nitric oxide for covid and other viral infections (frontpage)
  • Long Covid Risks: 2023 Update (frontpage)
  • Home Antigen Tests Aren't Useful For Covid Screening (personal)
  • I Caught Covid And All I Got Was This Lousy Ambiguous Data (personal)
  • Bazant: An alternate covid calculator (personal)
  • What would you like from Microcovid.org? How valuable would it be to you? (frontpage)
  • Long Covid Informal Study Results (frontpage)
  • Niacin as a treatment for covid? (Probably no, but I'm glad we're checking) (personal)
  • The remaining two may be from the period where even highly topical covid posts were frontpaged, due to importance; I forget when that kicked in.
  • Long Covid Is Not Necessarily Your Biggest Problem (frontpage)
  • Exercise Trade Offs [with covid risk] (frontpage)

I can come up with justifications for any one of these, but I've found no way to make them consistent. The Informal Study post (frontpage) directly fed into the Niacin post (personal), so they either should rise and fall as a unit, or the Informal Study post should be penalized for lack of importance and be the one to stay in personal. Long Covid Risks (frontpage) expires far faster than the Niacin or Bazant Calculator posts (personal). The Bazant Calculator (personal) was strictly more useful and timeless than the request for input on microcovid (frontpage). I Caught Covid... staying on personal is a 100% reasonable call, but other case studies of mine have been frontpaged.

I think the inconsistency around covid posts is a mix of changing policy (because covid had an exception to the timelessness requirement for a while, and importance became a factor for all frontpaging when it hadn't been before) and variation between team members. I find it frustrating, but AFAICT it's not sinister. I'm technically a mod. I do almost no modding but am in the slack and can be sure my complaints will be heard, and they
Roko
I believe that that is the case and may be appropriate for LW

Mod here: most of the team were away over the weekend so we just didn't get around to processing this for personal vs frontpage yet. (All posts start as personal until approved to frontpage.) About to make a decision in this morning's moderation review session, as we do for all other new posts. 

Wikipedia says there is another BSL-4 lab in Harbin, Heilongjiang province. (Source is an archived Chinese news site) Is that incorrect?

johnhalstead
That is correct

Thank you for answering, I'm sure this will convince a big fraction of the audience!

Maybe, as a European, I'm missing some crucial context, but I'm most interested in the pieces of metadata proving the authenticity of the document. I can also make various official-seeming PDFs. (Also, I'm kinda leery of opening PDFs.) Do you have, for example, some tweet by Daszak trying to explain the proposal (which would imply that even he accepts its existence)? (Or a conspicuous refusal to answer questions about it, or at least a Sharon Lerner tweet confirming that she did upload this PDF?)

ryan_greenblatt
Also https://www.ecohealthalliance.org/2023/03/ecohealth-alliance-statement-correcting-inaccuracies-in-testimony-to-be-delivered-before-the-house-select-committee
Roko

https://twitter.com/PeterDaszak/status/1636155765185564680

"Peter Daszak @PeterDaszak Exactly. In fact the DEFUSE grant proposal was based on 10+ yrs of research on CoVs in the lab & in nature, which is why it accurately targeted the viral groups most likely to emerge. But conspiracists should also remember that this was a 'proposal', not a 'grant'."

How do we know that this DEFUSE proposal really exists? I've seen some pay-walled articles from (to me) reputable news sources, but they are pay-walled so I couldn't read them fully. The beginning of one says they were released by some DRASTIC group I've never heard of. I would appreciate it if you could provide some more direct evidence.

Roko
https://usrtk.org/wp-content/uploads/2024/01/USGS-DEFUSE-2021-006245-Combined-Records_Redacted.pdf

A coin has two sides. One side commonly has a person on it; this side is called Heads. The other usually has a number or some other picture on it; this side is called Tails. What I don't understand is why the creator (I'm unsure whether we should blame Adam Elga, Robert Stalnaker or Arnold Zuboff) of the Sleeping Beauty Problem specified the problem so that the branch with the extra person corresponds to the Tails side of the coin. This almost annoys me more than not calling Superpermutations supermutations, or Poisson equations Laplace equations, or Laplace equations Harmonic equations.

'time jumps' are actually just retreating into some abuse-triggered fugue state

Wait, I thought this was the intended meaning of the original, the twist of the whole story. The Hemingway prompt explicitly asks GPT to include mental illness, and at the end of the story:

He closed his eyes again. A minute passed, or perhaps a lifetime.

 he explicitly just loses track of time in these moments.

gwern
Had a human written it, I would agree. But since it's an LLM, be wary of overreading it and ascribing much more intentionality & planning than it actually put into the story. 'Mental illness' could be anything, and could just as easily be unrelated, or even be the cause of his SF superpowers, as its esoteric interpretation - lots of characters are gifted genuine, real (within the story) superpowers because of trauma or mental issues. You cannot assume that that's what the LLM 'meant' simply because you find that a more satisfying interpretation. (Note how broad the Hemingway prompt is: "The themes are selected in such a way as to please the typical pretentious literary critic: poverty, inequalities, racism, domestic violence, mental illness, suffering, etc." There are a lot of options to choose from, and several, like poverty and inequality, are clearly in the story already, meaning it doesn't have to pick 'domestic violence', 'mental illness', 'suffering', or 'etc' at all to fulfill the instructions.)

----------------------------------------

As a quick test of "Interpret this story:" with ChatGPT-4, it doesn't seem to go for an abuse/fugue-state explanation: