Thank you for this! Your companion piece instantly solved a problem I was having with my diet spreadsheet!
Yes, I basically agree: My above comment is only an argument against the most popular halfer model.
However, in the interest of sparing readers' time, I have to mention that your model doesn't have a probability for 'today is Monday' nor for 'today is Tuesday'. If they want to see your reasoning for this choice, they should start with the post you linked second instead of the post you linked first.
I had to use keras backend's switch function for the automatic differentiation to work, but basically yes.
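For readers wondering why an elementwise select is needed instead of a Python `if`: here is a minimal sketch of the idea, using numpy's `np.where` as a stand-in (the Keras equivalent is `tf.keras.backend.switch`). The piecewise loss below is an illustrative example, not the loss from my actual solution:

```python
import numpy as np

# Sketch of the branchless-select idea behind Keras' K.switch.
# A Python `if` on a tensor breaks automatic differentiation because
# only one branch enters the computation graph; an elementwise select
# keeps both branches as graph ops. np.where is the numpy analogue.
# (Assumption: this piecewise loss is illustrative only.)

def piecewise_loss(pred, target, threshold=1.0):
    err = pred - target
    # select per element: quadratic for small errors, linear for large
    return np.where(np.abs(err) < threshold,
                    0.5 * err ** 2,
                    np.abs(err) - 0.5)

preds = np.array([0.2, 2.5, -3.0])
targets = np.array([0.0, 0.0, 0.0])
print(piecewise_loss(preds, targets))  # -> [0.02 2.   2.5 ]
```

In actual Keras code, replacing `np.where` with `K.switch(condition, then_tensor, else_tensor)` gives the same select while keeping gradients flowing through both branches.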
I enjoyed the exercise, thanks!
My solution for the common turtles was setting up the digital cradle such that the mind forged inside was compelled to serve my interests (I wrote a custom loss function for the NN). I used 0.5*segments+x for the vampire one (where I used the x which had the best average gp result for the example vampire population). Annoyingly, I don't remember what I changed between my previous and my current solution, but the previous one was much better 🥲
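Tuning the x in 0.5*segments + x can be done with a simple grid search. The sketch below is hypothetical: the sample population and the payoff rule are invented placeholders, since the actual challenge data and gp-scoring rule aren't reproduced in this comment:

```python
# Hypothetical sketch of tuning x in the price rule 0.5*segments + x.
# The population and payoff below are invented placeholders; the real
# challenge's data and gp-scoring rule are not reproduced here.

sample_vampires = [(6, 4.0), (8, 5.5), (10, 6.5)]  # (segments, true value in gp)

def payoff(price, value):
    # placeholder rule: the sale goes through only if not overpriced
    return price if price <= value else 0.0

def avg_gp(x):
    # average gp earned over the sample population at offset x
    return sum(payoff(0.5 * s + x, v) for s, v in sample_vampires) / len(sample_vampires)

# grid search x over 0.0 .. 5.0 in steps of 0.1
best_x = max((x / 10 for x in range(0, 51)), key=avg_gp)
print(best_x, avg_gp(best_x))  # -> 1.0 5.0 on this toy data
```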
Looking forward to the next challenge!
Random Musing on Autoregressive Transformers resulting from Taelin's A::B Challenge
Let's model an autoregressive transformer as a Boolean circuit or, for simpler presentation, an n-ary circuit with m inputs and 1 output.
Model the entire system in the following way: given some particular length-m starting input:
It's easy to see that, strictly speaking, this system is not very powerful computationally: we have...
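To make the pigeonhole point concrete: with an alphabet of n symbols and a context window of m tokens, deterministic generation walks a state space of at most n^m windows, so the output stream must become eventually periodic. The toy `next_token` map below is an arbitrary placeholder, not an actual trained model:

```python
# Sketch: a deterministic next-token map over a fixed m-token window
# has at most n**m distinct states, so by pigeonhole the output must
# eventually cycle. The next_token function is an arbitrary placeholder.

n, m = 3, 4  # alphabet size, context window length

def next_token(window):
    # placeholder deterministic "model"
    return (sum(window) * 7 + 1) % n

def find_cycle(start):
    seen = {}
    window = tuple(start)
    step = 0
    while window not in seen:   # guaranteed to terminate: <= n**m states
        seen[window] = step
        window = window[1:] + (next_token(window),)
        step += 1
    return seen[window], step - seen[window]  # (preperiod, cycle length)

mu, lam = find_cycle([0, 1, 2, 0])
assert 1 <= lam <= n ** m  # cycle length bounded by the number of windows
```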
No, she does not. And it's easy to see if you actually try to formally specify what is meant here by "today" and what is meant by "today" in regular scenarios. Consider me calling your bluff about being ready to translate to first-order logic at any moment.
I said that I can translate the math of probability spaces to first-order logic, and I explicitly said that our conversation can NOT be translated to first-order logic, as proof that it is not about math but about philosophy. Please reread that part of my previous comment.
...And frankly, it b
Now, that's not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and to establish its properties.
This whole conversation isn't about math. It is about philosophy. Math is proving theorems in various formal systems. If you are a layman, I imagine you might find it confusing that you can encounter mathematicians who seem to have conversations about math in common English. I can assure you that every mathematician in that conversation is able to translate their comments into the simple langua...
Metapoint: You write a lot of things in your comments with which I usually disagree; however, I think faster replies are more useful in these kinds of conversations than complete replies, so at first I'm only going to reply to the points I consider most important at the time. If you disagree and believe writing complete replies is more useful, do note it (however, my experience for that case is that after a while, instead of writing a comment containing a reply to the list of points the other party brought up, I simply drop out of the conversation and I can't...
B follows from B
Typo
If everything actually worked, then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another.
I've read the relevant part of your previous post and I have an idea that might help.
Consider the following problem: "Forgetful Brandon": Adam flips a coin and does NOT show it to Brandon, but shouts YAY! with 50% probability if the coin is HEADS (he does not shout if the coin is TAILS). (Brandon knows Adam's behav...
I wasn't sure either, but looked at the previous post to check which one is intended.
Consider that in the real world Tuesday always happens after Monday. Do you agree or disagree: It is incorrect to model a real world agent's knowledge about today being Monday with probability?
What is the probability of tails given it's Monday for your observer instances?
You may bet that the coin is Tails at 2:3 odds. That is: if you bet 200$ and the coin is indeed Tails you win 300$. The bet will be resolved on Wednesday, after the experiment has ended.
I think the second sentence should be: "That is: if you bet 300$ and the coin is indeed Tails you win 200$."
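For concreteness, the two readings of "bet at 2:3 odds" can be compared with a quick expected-value check. The p = 1/2 used below is just a placeholder parameter, not a stance on the halfer/thirder question:

```python
# Expected value of a bet that wins `win` with probability p and
# loses `stake` otherwise. p is a free parameter here, not an
# endorsement of any particular Sleeping Beauty answer.

def expected_value(p, stake, win):
    return p * win - (1 - p) * stake

# The two readings of the 2:3 bet on Tails, evaluated at p = 1/2:
print(expected_value(0.5, 200, 300))  # bet 200$ to win 300$ -> 50.0
print(expected_value(0.5, 300, 200))  # bet 300$ to win 200$ -> -50.0
```

The sign of the expected value flips between the two readings, which is exactly why the direction of the odds matters here.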
I've started at your latest post and recursively tried to find where you made a mistake (this took a long time!). Finally, I got here and I think I've found the philosophical decision that led you astray.
Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in sleeping beauty)? And you do this purely because you dislike the 1/3 result you'd get for Sleeping Beauty?
Philosophers answer "Why not?" to the question of centered worlds because nothing breaks and we want to consider the question...
Those are exactly my favourites!!
It's probably not intended, but I always imagine that in "We do not wish to advance", first the singer whispers sweet nothings to the alignment community, then the shareholder meeting starts, and so: glorious-vibed music: "OPUS!!!" haha
Nihil supernum was weird because the text was always pretty somber for me. I understood it to express the hardship of those living in a world without any safety nets trying to do good, i.e. us, yet the music, as you point out, is pretty empowering. This combination is (to my knowledge) ki...
It's probably not intended, but I always imagine that in "We do not wish to advance", first the singer whispers sweet nothings to the alignment community, then the shareholder meeting starts, and so: glorious-vibed music: "OPUS!!!" haha
That was indeed the intended effect!
I'm not sure where the error is in your calculations (I suspect in double-counting Tuesday, or forgetting that Tuesday happens even if Beauty is not woken up, so it still gets its "matches Monday bet" payout), but I love that you've shown how thirders are the true halfers!
To be precise, I've shown that in a given betting structure (which is commonly used as an argument for the halfer side even if you didn't use it that way now) using thirder probabilities leads to correct behaviour. In fact my belief is that in ANY kind of setup using thirder probabilities l...
But do you also agree that there isn't any kind of bet with any terms or resolution mechanism which supports the halfer probabilities? While you did not say it explicitly, your comment's structure seems to imply that one of the bet structures you gave (the one I've quoted) supports the halfer side. My comment is an analysis showing that that's not true (which was a priori pretty surprising to me).
If it's "on Wednesday, you'll be paid $1 if your prediction(s) were correct, and lose $1 if they were incorrect (and voided if somehow there are two awakenings and you make different predictions)", you should be indifferent to heads or tails as your prediction.
I recommend setting aside around an hour and studying this comment closely.
In particular, you will see that just because the text I quoted from you is true, that is not an argument for believing that the probability of heads is 1/2. Halfers are actually those who are NOT indifferent between heads and t...
My three favourites are:
Two things I saw:
My Solution (this might change before the end):
[23.14, 19.24, 25.98, 21.52, 18.17, 7.40, 31.15, 20.40, 24.0, 20.52]
Previous solution:
22.652468, 18.932825, 25.491783, 20.964714, 18.029692, 7.4, 30.246178, 20.4, 24.039215, 20.40147
I love Egan! I will read Luminous next! Thanks!
Yes, but good recommendation otherwise, thank you!
Thank you, I will read this one!
I will read the fiction book that is recommended to me first (and that I haven't already read)! Time is of the essence! I will read anything, but if you want to recommend me something I am more likely to enjoy, here are a few things about me: I like Sci-fi, Fantasy, metaethics, computers, games, Computer Science theory, Artificial Intelligence, fitness, D&D, edgy/shock humor.
I enjoyed this, and at times I felt close to grasping green, but now, after reading it, I wouldn't be able to convey to someone else what the part of green which isn't according to some other color is. Multiple times in the post you build something up just to demolish it a few paragraphs later, which makes the bottom line hard for me to remember, so a green-for-dummies version would be nice.
Example of solarpunk aesthetic (to be clear: I think the best futures are way more future-y than this)
I like the picture. Obviously, the pictured scene would be simulated on some big server cluster, but nice aesthetics, I wouldn't require a more future-y one.
I'm surprised people are taking you seriously.
If you're reading comments under the post, that obviously selects for people who take him seriously, similarly to how if you clicked through a banner advertising to increase one's penis by X inches, you would mostly find people who took the ad more seriously than you'd expect.
I put ~5% on the part I selected, but there is no 5% emoji, so I thought I would mention this in a short comment.
Because when you lose weight you lose a mix of fat and muscle, but when you gain weight you gain mostly fat if you don't exercise (and people usually don't, because they think it's optional), resulting in a greater body-fat percentage (which is actually the relevant metric for health, not weight).
I also thought that it was very common. I would say it's necessary for competition math.
Ah, I see. Yes, that is possible, though that makes the main character much less relatable.
I think the main character's desire to punish the AIs stemmed from his self-hatred instead. How would you explain this part otherwise?
And if sometimes in their weary, resentful faces I recognize a mirror of my own expression—well, what of it?
So I've reached a point in my amateur bodybuilding process where I am satisfied with my arms. I, of course, regularly see and talk with guys who have better physiques, but it doesn't bother me; when I look in the mirror, I'm still happy.
This, apparently, is not the typical experience. In the bodybuilding noosphere, there are many memes born from the opposite experience: "The day you start lifting is the day you're never big enough.", "You will never be as big as your pump.", etc.
My question is about a meme I've seen recently which DOES mirror m...
I love how it admits it has no idea how come it gets better if it retains no memories
What about Outer Wilds? It's not strictly a puzzle game, but I think it might go well with this exercise. Also, what games would you recommend for this to someone who has already played every available level in Baba Is You?
It's a pity we don't know the karma scores of their comments before this post was published. For what it's worth, I only see two of his comments with negative karma: this and this. The first of these two is the one recent comment of Roko's that I strong-downvoted (though also strong agree-voted), but I might not have done that if I had known that only a few comments with slightly negative karma are enough to silence someone.
Please do so in a post, I subscribed to those
Initially, I had a strong feeling/intuition that the answer was 1/3, but felt that because you can also construct a betting situation for 1/2, the question was not decided. In general, I've always found betting arguments the strongest forms of arguments: I don't much care how philosophers feel about what the right way to assign probabilities is, I want to make good decisions in uncertain situations for which betting arguments are a good abstraction. "Rationality is systematized winning" and all that.
Then, I've read this comment, which showed me that I made...
Two things don't have to be completely identical to each other for one to give us useful information about the other. Even though the game is not completely identical to the risky scenario (as you pointed out: you don't play against a malign superintelligence), it serves as useful evidence to those who believe that they can't possibly lose the game against a regular human.
I see, I didn't consider that. Sorry.
The post titled "Most experts believe COVID-19 was probably not a lab leak" is on the frontpage, yet this post, while being newer and having more karma, is not. Looking into it, it's because this post does not have the frontpage tag: it is a personal blogpost.
...Personal Blogposts are posts that don't fit LessWrong's Frontpage Guidelines. They get less visibility by default. The frontpage guidelines are:
- Timelessness. Will people still care about this in 5 years?
- Avoid political topics. They're important to discuss sometimes, but we try to avoid it on
Mod here: most of the team were away over the weekend so we just didn't get around to processing this for personal vs frontpage yet. (All posts start as personal until approved to frontpage.) About to make a decision in this morning's moderation review session, as we do for all other new posts.
Wikipedia says there is another BSL-4 lab in Harbin, Heilongjiang province. (Source is an archived Chinese news site) Is that incorrect?
Ah, very nice, thank you!
Thank you for answering, I'm sure this will convince a big fraction of the audience!
Maybe, as a European, I'm missing some crucial context, but I'm most interested in the pieces of metadata proving the authenticity of the document. I can also make various official-seeming PDFs. (Also, I'm kinda leery of opening PDFs.) Do you have, for example, some tweet by Daszak trying to explain the proposal (which would imply that even he accepts its existence)? (Or a conspicuous refusal to answer questions about it, or at least a Sharon Lerner tweet confirming that she did upload this PDF?)
https://twitter.com/PeterDaszak/status/1636155765185564680
"Peter Daszak @PeterDaszak Exactly. In fact the DEFUSE grant proposal was based on 10+ yrs of research on CoVs in the lab & in nature, which is why it accurately targeted the viral groups most likely to emerge. But conspiracists should also remember that this was a 'proposal', not a 'grant'."
How do we know that this DEFUSE proposal really exists? I've seen some pay-walled articles from (to me) reputable news sources, but they are pay-walled, so I couldn't read them fully. The beginning of one says they were released by some DRASTIC group I've never heard of. I would appreciate it if you could provide some more direct evidence.
A coin has two sides. One side commonly has a person on it; this side is called Heads. The other usually has a number or some other picture on it; this side is called Tails. What I don't understand is why the creator (I'm unsure whether we should blame Adam Elga, Robert Stalnaker or Arnold Zuboff) of the Sleeping Beauty Problem specified the problem so that the branch with the extra person corresponds to the Tails side of the coin. This almost annoys me more than not calling Superpermutations supermutations, or Poisson equations Laplace equations, or Laplace equations Harmonic equations.
'time jumps' are actually just retreating into some abuse-triggered fugue state
Wait, I thought this was the intended meaning of the original, the twist of the whole story. The Hemingway prompt explicitly asks GPT to include mental illness and at the end of the story:
He closed his eyes again. A minute passed, or perhaps a lifetime.
he explicitly just loses track of time in these moments.
I think I can!
When I write, I am constantly balancing brevity (and aesthetics generally) with clarity. Unfortunately, I sometimes gravely fail at achieving the latter without me noticing. Your above comment immediately informs me of this mistake.