I think the way to learn any skill is basically:
And the time spent on each iteration of item 1 is capped in usefulness, or at least has diminishing returns. I don't think this has anything to do with frustration. Also, reminding yourself of the experience doesn't seem that important to me, and I don't think there's a cap of one thing per day.
Ah, ok, I didn't know exactly when Milei became president. I didn't pay attention to the jump. The original post said "1 year", so I counted off one year (right after the jump) and saw that the slope was smaller than before. But you're right, yeah. I must also point out, though, that this is the official rate, and I don't know whether anyone actually uses it.
Through conversations with locals, I understood why: President Milei's initial action was a severe devaluation of the Argentine Peso, which made dollar-denominated goods more expensive.
Not true. A year ago the blue dollar rate was approximately the same as it is now [1], and the official USD-peso rate has been rising more slowly than it did before Milei. [2]
At first I disbelieved. I thought A > B. Then I wrote code myself and checked, and got that B > A. I believed this result. Then I thought about it and realized why my reason for A > B was wrong. But I still didn't understand (and still don't) why the described random process is not equivalent to randomly choosing 2, 4, or 6 on every roll. I thought some more, and now I have some doubts. My first doubt is whether there exists some standard way of describing random processes and conditioning on them, and whether the problem as stated by notfnofn is well specified in such a framework. Perhaps the problem is just underspecified? Anyway, this is very interesting.
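For what it's worth, here is the kind of check I mean, assuming the puzzle is the one I have in mind (this is my reconstruction, not necessarily notfnofn's exact statement): roll a fair die until the first 6, and condition on every roll having been even. A minimal simulation contrasts that with a die whose only faces are 2, 4, 6:

```python
import random

def conditional_mean_rolls(trials=1_000_000):
    """Roll a fair d6 until the first 6; keep only the runs in which
    every roll was even, and average run length over the kept runs."""
    total, kept = 0, 0
    for _ in range(trials):
        rolls = 0
        while True:
            r = random.randint(1, 6)
            rolls += 1
            if r == 6:
                total += rolls
                kept += 1
                break
            if r % 2 == 1:  # one odd roll disqualifies the whole run
                break
    return total / kept

def even_die_mean_rolls(trials=1_000_000):
    """Roll a die whose faces are only {2, 4, 6} until the first 6."""
    total = 0
    for _ in range(trials):
        rolls = 1
        while random.choice((2, 4, 6)) != 6:
            rolls += 1
        total += rolls
    return total / trials

print(conditional_mean_rolls())  # ~1.5
print(even_die_mean_rolls())     # ~3.0
```

The intuition for the gap: conditioning on "all rolls were even" throws away most long runs (each extra roll survives the filter with probability 1/2), so short runs are over-represented; the {2, 4, 6} die faces no such filter and gives a plain geometric wait with mean 3.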
If you think you might be in a solipsist simulation, you might try adding some chaotic randomness to your decisions. For example, go outside under some trees and wait until a leaf or a seed or anything hits your face: if it hits the left half, choose one course of action; if it hits the other half, choose another. If you do this multiple times in your life, each of your decisions will depend on the state of the whole Earth and on all your previous decisions, since weather is chaotic. And thus the simulators will be unable to...
Instead of inspecting all programs in the UP, just inspect all programs with length less than n. As n becomes larger and larger, this covers more and more of the total probability mass in the UP, and the mass covered this way approaches 1. What to do about the non-halting programs? Well, just run all the programs for m steps, I guess. I think this is the approximation of the UP that is implied.
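A toy sketch of what I mean, with everything invented for illustration: the two-bit instruction set and `run_program` below are stand-ins for a real universal machine (and the toy machine isn't prefix-free, so the accumulated mass needn't stay below 1):

```python
from collections import defaultdict

def run_program(bits, max_steps):
    """Toy stand-in for a universal machine (illustrative only).
    Instructions are 2-bit codes: 00 inc, 01 dec, 10 jump-to-start
    if counter > 0, 11 halt and output the counter. Returns the
    output if the program halts within max_steps, else None."""
    ops = [bits[i:i + 2] for i in range(0, len(bits) - len(bits) % 2, 2)]
    x, pc, steps = 0, 0, 0
    while pc < len(ops) and steps < max_steps:
        op = ops[pc]
        steps += 1
        if op == "00":
            x += 1
        elif op == "01":
            x -= 1
        elif op == "10":
            if x > 0:
                pc = 0
                continue
        else:  # "11"
            return x
        pc += 1
    return None  # ran off the end or exceeded the step budget

def approximate_up(n, m):
    """Enumerate all programs shorter than n bits, run each for m steps,
    and weight each output by 2^-len(program). Note: since this toy
    machine is not prefix-free, the weights are purely illustrative."""
    mass = defaultdict(float)
    for length in range(1, n):
        for code in range(2 ** length):
            bits = format(code, f"0{length}b")
            out = run_program(bits, m)
            if out is not None:
                mass[out] += 2.0 ** -length
    return dict(mass)

print(approximate_up(n=10, m=100))
```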
At the very beginning of the post, I read: "Quick psychology experiment". Then I read: "Right now, if I offered you a bet ...". Because of this, I thought about a potential real-life situation, not a platonic ideal one, in which the author is offering me this bet. I declined both bets. Not because they are bad bets in an abstract world, but because I don't trust the author in the first bet, and I trust them even less in the second.
...If you rejected the first bet and accepted the second bet, just that is enough to rule you out from having any utility
So you're saying humans don't reason about the space and objects around them by keeping 3D representations. You think that instead the human brain collects a bunch of heuristics for what the response to a 2D projection of 3D space should be, given different angles: an incomprehensible mishmash of neurons, like an artificial neural network that has no CNN layers for identifying a digit from an image and instead just memorizes rules for every type of picture at every angle, like a fully connected layer.
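For concreteness, here's a minimal sketch of the two architectures being contrasted (PyTorch; the layer sizes are arbitrary choices of mine): a fully connected network with no built-in spatial structure, versus a CNN whose weight sharing bakes in the prior that the same local feature means the same thing at any position:

```python
import torch
import torch.nn as nn

# Fully connected: no spatial structure built in; it has to learn a
# separate rule for each arrangement of pixels it encounters.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Convolutional: weight sharing across positions means a local feature
# detector learned in one place applies everywhere in the image.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),
)

x = torch.randn(1, 1, 28, 28)  # one fake 28x28 grayscale image
print(mlp(x).shape, cnn(x).shape)  # both: torch.Size([1, 10])
```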
I guess I was not clear enough. In your original post, you wrote "On one hand, there are countably many definitions ..." and "On the other hand, Cantor's diagonal argument applies here, too. ...". So you made two statements: "On one hand, (1)" and "On the other hand, (2)". I would expect that when someone says "On one hand, ..., but on the other hand, ...", the statements in those ellipses should contradict each other. So, in my previous comment, I just wanted to point out that (2) does not contradict (1), because countable infinity + 1 is still countable.
Ok, so let's say you've been able to find a countably infinite set of real numbers, and you now call them "definable". You apply Cantor's diagonal argument to generate one more number that's not in this set (going from the language to the metalanguage as you do it). Countably infinite + 1 is still only countably infinite. How would you get to a higher cardinality of "definable" objects? I don't see an easy way.
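To spell out the arithmetic behind "countably infinite + 1 is still countably infinite" (a standard sketch, not anything from the original post): let $r_1, r_2, r_3, \dots$ enumerate the definable reals, with decimal expansions $r_n = 0.d_{n1} d_{n2} d_{n3} \dots$. Diagonalization defines

$$d = 0.c_1 c_2 c_3 \dots, \qquad c_n = \begin{cases} 5 & \text{if } d_{nn} \neq 5, \\ 6 & \text{if } d_{nn} = 5, \end{cases}$$

so $d$ differs from every $r_n$ at the $n$-th digit and is not on the list. But $\{r_1, r_2, \dots\} \cup \{d\}$ is still countable, and even iterating the construction any countable number of times only yields a countable union of countable sets; it never reaches the $2^{\aleph_0}$ of all the reals.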
They commit to not using your data to train their models without explicit permission.
I've just registered on their website because of this article. During registration, I was told that conversations flagged by their automated system (which checks whether you are following their terms of use) are regularly reviewed by humans and used to train their models.
On Anthropic's support page for "I want to opt out of my prompts and results being used for training", they say:
...We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3
I would like to recommend to Johannes that he try to write and post content in a way that evokes fewer feelings of cringe in people. I know it does evoke that, because I personally feel cringe.
Still, I think there isn't much objectively bad about this post. I'm not saying the post is very good or convincing. I think its style is super weird, but that should be considered okay in this community. These thoughts remind me of something Scott Alexander once wrote: that sometimes he hears someone say true but low-status things, and...
I've been thinking of buying an M1 MacBook, because everyone says that Apple's sound system is great and works correctly out of the box, with low latency and no problems, unlike Windows+WASAPI, Windows+ASIO, and Linux. I want to use it for music stuff without an external audio interface. How true is this, and would you recommend it?
You say Vast.AI is the "most reliable provider". In my experience, it's an unreliable mess, with servers that are sometimes buggy and don't work properly, and a non-existent support service. I'd say the same about runpod.io. On the other hand, LambdaLabs has been very reliable in my experience and has a much better UX. The main problem with LambdaLabs is that nowadays it pretty often has no available servers.
Let $X_0$ be the initial state of a Gibbs sampler on an undirected probabilistic graphical model, and $X_T$ be the final state after $T$ steps. Assume the sampler is initialized in equilibrium, so the distribution of both $X_0$ and $X_T$ is the distribution given by the graphical model.
Take any subsets $S_1, \dots, S_k$ of $X_T$, such that the variables in each subset are at least a distance $2T$ away from the variables in the other subsets (with distance given by shortest path length in the graph). Then ...
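For concreteness, here is a minimal Gibbs sampler on a toy undirected model (an Ising chain; the model, the coupling $J$, and the step counts are my own stand-ins, not from the original setup):

```python
import math
import random

# Toy undirected model: an Ising chain of N +-1 spins with coupling J.
N, J, T = 20, 0.5, 1000

def gibbs_step(x):
    """Resample one uniformly chosen spin from its conditional
    distribution given its neighbors in the graph."""
    i = random.randrange(N)
    field = J * (x[i - 1] if i > 0 else 0) + J * (x[i + 1] if i < N - 1 else 0)
    p_up = 1 / (1 + math.exp(-2 * field))  # P(x_i = +1 | neighbors)
    x[i] = 1 if random.random() < p_up else -1

# "Initialized in equilibrium" is approximated here by a long burn-in.
x = [random.choice((-1, 1)) for _ in range(N)]
for _ in range(10_000):
    gibbs_step(x)

x0 = list(x)        # X_0: (approximately) an equilibrium sample
for _ in range(T):
    gibbs_step(x)   # run the chain for T more updates
xT = list(x)        # X_T: same stationary distribution as X_0

print(x0)
print(xT)
```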
I have recently read The Little Typer by Friedman and Christiansen. I suspect this book could serve as an introduction similar to this (so far only planned) sequence of posts. However, the book is not concise at all.
I am (was) an X% researcher, where X < Y. I wish I had given up on AI safety earlier. I suspect it would've been better for me if AI safety resources had explicitly said things like "if you're less than Y, don't even try", although I'm not sure I would've believed them. Now I'm glad that I'm not trying to do AI safety anymore and instead just work at a well-paying, relaxed job doing practical machine learning. So I think pushing too many EAs into AI safety will lead to those EAs suffering much more, which is what happened to me, so I don't want that to happen a...
In 2017, I remember reading 80K and thinking I was obviously unqualified for AI alignment work. I am glad that I did not heed that first impression. The best way to test goodness-of-fit is to try thinking about alignment and see if you're any good at it.
That said, I apparently am the only person of whom [community-respected friend of mine] initially had an unfavorable impression, which later became strongly positive.
Sorry to hear that you didn't make it as an AI Safety researcher, but thank you for trying.
You shouldn't feel any pressure, but have you considered trying to be involved in another way, such as a) helping to train people trying to break into the field, b) providing feedback on people's alignment proposals, or c) assisting in outreach (this one depends more on personal fit, and it's easier to do net harm with)?
I think it's a shame how training up in AI Safety is often seen as an all-or-nothing bet, when many people have something valuable to contribute even if that's not through direct research.
I still take these zinc lozenges when I suspect I might be coming down with a common cold. I feel like they help me somewhat. Maybe my colds have been shorter since I started taking zinc, but I'm not sure; I haven't been tracking any data explicitly. I guess I'll keep taking zinc for the common cold as long as I don't get further evidence that it doesn't work.
I don't know how to square that with the idea that one shouldn't ignore one's crying kids. I have no idea how kids' crying at night works. Is it possible that a parent should just suck it up and come comfort the baby every time she cries? Maybe you can comfort her when she's crying, but not give her the reward of soothing her until she falls asleep? Is it possible that she cries at night because she doesn't get enough cuddles during the day, or because the room looks scary, or something like that? I don't know enough about the situation, and I don't have...
I'm not sure how to read this; where are you on the continuum from "I heard it's bad" to "I read all the papers and came to a deep considered view"?
I also thought so when I read your post. I'm at the "the book 'The Boy Who Was Raised as a Dog' says so" point. The book is not about sleep in particular; it's about psychological trauma in childhood, especially trauma caused by neglect.
Also, I think this might cause the child to develop an avoidant attachment style (there's no point in crying or asking others for help; they won't come anyway).
What benefits does raising VO2max give one in normal life? I think we'll probably either die from AI in 2 to 20 years or transform ourselves in the singularity, so prolonging my life by improving my health is not important. Thus I ask: what benefits does raising VO2max give me now?