I would be very surprised if this FVU_B is actually another definition and not a bug. It's not a fraction of the variance, and those denominators can easily be zero or very near zero.
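For concreteness, here's a minimal sketch of the two formulas as I understand them (the names and exact aggregation are my reading, not quoted from the code in question):

```python
import numpy as np

def fvu(x, x_hat):
    # Standard fraction of variance unexplained: one global ratio,
    # so the denominator is the total variance over the whole batch.
    resid = ((x - x_hat) ** 2).sum()
    total = ((x - x.mean(axis=0)) ** 2).sum()
    return resid / total

def fvu_b(x, x_hat):
    # The suspect variant: a ratio per sample, averaged afterward.
    # Each denominator is one sample's squared distance from the mean,
    # which can be zero or near zero for samples close to the mean.
    resid = ((x - x_hat) ** 2).sum(axis=1)
    total = ((x - x.mean(axis=0)) ** 2).sum(axis=1)
    return (resid / total).mean()
```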
Not worth worrying about given context of imminent ASI.
This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon?
In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I'm struggling to see a future where the fact that people in the 2020s had relatively few babies matters.
If this actually hasn't been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) -> (probability Player 1 wins, probability of a draw)? Sounds like there are a lot of naive architectures to try, and you have a ton of data, since professional chess players play a lot of games.
Some random ideas:
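e.g., as a first naive baseline (a sketch with made-up sizes, not something I've tested): learn an embedding per player, concatenate it with a flattened board encoding, and output a 3-way softmax over P1 wins / draw / P2 wins:

```python
import torch
import torch.nn as nn

class OutcomePredictor(nn.Module):
    # Hypothetical baseline: learned player embeddings plus a flat
    # board encoding, fed to an MLP with a 3-way softmax
    # (P1 wins / draw / P2 wins).
    def __init__(self, n_players, emb_dim=32, board_dim=8 * 8 * 12, hidden=256):
        super().__init__()
        self.player_emb = nn.Embedding(n_players, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim + board_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, p1_id, p2_id, board):
        # board: (batch, board_dim) one-hot piece planes, flattened
        e1 = self.player_emb(p1_id)
        e2 = self.player_emb(p2_id)
        logits = self.mlp(torch.cat([e1, e2, board], dim=-1))
        return logits.log_softmax(dim=-1)
```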
This whole thing about "I would give my life for two brothers or eight cousins" is just nonsense formed by taking a single concept way too far. Blood relation matters but it isn't everything. People care about their adopted children and close unrelated friends.
The user could always write a comment (or a separate post) asking why they got a bunch of downvotes, and someone would probably respond. I've seen this done before.
Though I'd have to assume that the user is open-minded enough to actually want feedback and not be hostile. They might not even value feedback from this community; there are certainly many communities where I would care very little about negative feedback.
Update: R1 found bullet point 3 after I prompted it to try 16x16. It's 2 minus the adjacency matrix of the tesseract graph.
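For anyone who wants to poke at it: the tesseract graph is the 4-dimensional hypercube, so its adjacency matrix is easy to build (a quick sketch; since the bullet point itself isn't quoted here, I'll leave the "2 minus" step to whichever reading is intended):

```python
import numpy as np

# Tesseract graph Q4: vertices are 4-bit strings,
# adjacent iff they differ in exactly one bit.
n = 16
A = np.array([[1 if bin(i ^ j).count("1") == 1 else 0
               for j in range(n)] for i in range(n)])

assert A.sum(axis=1).tolist() == [4] * n  # Q4 is 4-regular
```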
I'd bet on this sort of strategy working; hard agree that the ends don't justify the means, and I see that kind of justification for misinformation/propaganda a lot among highly political people. (But the above examples are pretty tame.)
I volunteer myself as a test subject; DM me if interested.
So I'm new here and this website is great because it doesn't have bite-sized oversimplifying propaganda. But isn't that common everywhere else? Those posts seem very typical for reddit and at least they're not outright misinformation.
Also I... don't hate these memes. They strike me as decent quality. Memes aren't supposed to make you think deeply about things.
Edit: searched Kat Woods here and now feel worse about those posts
There have been a lot of tricks I've used over the years, some of which I'm still using now, but many of which require some level of discipline. One requires basically none, has a huge upside (to me), and has been trivial for me to maintain for years: a "newsfeed eradicator" extension. I've never had the temptation to turn it off unless it really messes with the functionality of a website.
It basically turns off the "front page" of whatever website you apply it to (e.g. reddit/twitter/youtube/facebook) so that you don't see anything when you enter the site and have to actually search for whatever you're interested in. And for youtube, you never see suggestions to the right of or at the end of a video.
I think even the scaling thing doesn't apply here because they're not insuring bigger trips: they're insuring more trips (which makes things strictly better). I'm having some trouble understanding Dennis' point.
"I don't know, I recall something called the Kelly criterion which says you shouldn't scale your willingness to make risky bets proportionally with available capital - that is, you shouldn't be just as eager to bet your capital away when you have a lot as when you have very little, or you'll go into the red much faster.
I think I'm misunderstanding something here. Let's say you have $N$ dollars and are looking for the optimum number of dollars $x$ to bet on something that causes you to gain $bx$ dollars with probability $p$ and lose ...
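For reference, my understanding of the standard Kelly setup (my notation, not from the quoted post): the optimum is a fixed fraction of your bankroll, so the dollar amount you bet does scale proportionally with capital:

```python
def kelly_fraction(p, b):
    # Standard Kelly: win b per unit staked with probability p,
    # lose the stake otherwise; bet nothing if the edge is negative.
    return max(0.0, p - (1 - p) / b)

# e.g. a 60% chance of doubling the stake (b = 1):
print(kelly_fraction(0.6, 1.0))  # 0.2 -> bet 20% of bankroll, whatever its size
```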
(Epistemic status: low and interested in disagreements)
My economic expectations for the next ten years are something like:
Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.
Jobs become scarcer gradually. Humans remain at the helm for a while, but the willingness to replace one's workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it
Ah, darn. Are there any other events/meetups you know of at Lighthaven during those weeks?
Is this going to continue in 2025? I'll be visiting Berkeley from Jan 5th to Jan 17th and would like to come visit.
https://www.lesswrong.com/posts/SHq7wKA8iMqG5QfjC/notfnofn-s-shortform?commentId=JHjHJzE9wCLe2ANPG
Here's a little quick take of mine that provides a setting where centaur > AI (maybe). It's in the theory of computation, which is close to complexity theory.
That's incredible.
But how do they profit? They say they don't profit on Middle Eastern war markets, so they must be profiting elsewhere somehow.
There are also gas fees, which amplify this effect, but this is a very important point. A prediction market price gives rise to a function from interest rates to probability ranges: a rational investor whose credence lies in that range would not bet on the market. The larger the interest rate or the farther out the market, the bigger the range.
Probably an easy widget to make: something that takes as input the Polymarket price, gas fees, and interest rate and spits out this range of probabilities.
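A sketch of what that widget could compute, under a deliberately crude model (a flat proportional fee standing in for gas, simple annual compounding, and a known resolution date; all of those are my assumptions):

```python
def no_trade_range(price, fee, annual_rate, years):
    """Hypothetical widget core: the probabilities for which neither
    YES nor NO beats just earning interest on the same cash."""
    growth = (1 + annual_rate) ** years
    hi = min(1.0, price * (1 + fee) * growth)            # buy YES only if p > hi
    lo = max(0.0, 1 - (1 - price) * (1 + fee) * growth)  # buy NO only if p < lo
    return lo, hi

# e.g. a market at 30 cents resolving in 2 years, 1% fee, 5% interest:
print(no_trade_range(0.30, 0.01, 0.05, 2))  # roughly (0.22, 0.33)
```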
corank has to be more than 1, not equal to 1. I'm not sure if such a matrix exists; the reason I was able to change its mind by supplying a corank-1 matrix was that its kernel behaved in a way that significantly violated its intuition.
I similarly felt in the past that by the time computers were Pareto-better than me at math, there would already be mass layoffs. I no longer believe this to be the case at all, and have been thinking about how I should orient myself going forward. I was very fortunate to land an offer for an applied-math research job starting in the next few months, but my plan is to devote a lot more energy to networking + building people skills while I'm there instead of just hyperfocusing on learning the relevant fields.
o1 (standard, not pro) is still not the best at math reasoni...
My decision to avoid satellite view is a relic from a time of conserving data (and even then it might have been a case of using salt to accelerate cooking time). I wonder if there's a risk to using it in places where cellular data is spotty, though. I'd imagine that using satellite view would reduce the efficiency with which the application saves local map information, which might be important if I make a wrong turn where there's no data available.
From the original post:
...The purpose of insurance is not to help us pay for things that we literally do not have enough money to pay for. It does help in that situation, but the purpose of insurance is much broader than that. What insurance does is help us avoid large drawdowns on our accumulated wealth, in order for our wealth to gather compound interest faster.
Think about that. Even though insurance is an expected loss, it helps us earn more money in the long run. This comes back to the Kelly criterion, which teaches us that the compounding effects on wea...
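(A toy illustration of that point, with entirely made-up numbers: a premium that is an expected loss can still win on expected log wealth.)

```python
import math

# Made-up numbers: wealth 100k, a 1% annual chance of a 90k loss,
# insurance priced at 1200 against an expected loss of 900.
wealth, p_loss, loss, premium = 100_000.0, 0.01, 90_000.0, 1_200.0

log_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)
log_insured = math.log(wealth - premium)

print(log_insured > log_uninsured)  # True: insuring wins in expected log wealth
```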
If you are making an argument about how much compute it takes to find an intelligent mind, you have to look at how much compute was used by all of evolution.
Just to make sure I fully understand your argument, is this paraphrase correct?
"Suppose we have the compute theoretically required to simulate the human brain down to an adequate granularity for obtaining its intelligence (which might be at the level of cells instead of, say, the atomic level). Even so, one has to consider the compute required to actually build such a simulation, which could be much larger as t...
Spotify first recommended her to me in September 2023, and later that September I came across r/slatestarcodex, which was my first exposure to the rationalist community. That's kind of funny.
Huh. Vienna Teng was my top artist too, and this is the only other Spotify Wrapped I've seen here. Is she popular in these circles?
Even a year ago, I would have bet at extremely high odds that data analyst-type jobs would be replaced well before postdocs in math and theoretical physics. It's wild that the reverse is plausible now.
Annoying anecdote: I interviewed for an entry-level actuarial position recently and, when asked about the purpose of insurance, I responded with essentially the above argument (along the lines of increasing everyone's expected log wealth, with Kelly betting as a motivation). The reply I got was "that's overcomplicated; the purpose of insurance is to let people avoid risk".
By the way, I agree strongly with this post and have been trying to make my insurance decisions based on this philosophy over the past year.
Some ideas discussed here + in comments
https://www.astralcodexten.com/p/secrets-of-the-great-families
Oops, yeah, the written programs are supposed to be deterministic. The point of mentioning the RNG was to handle the fact that an AI might derive its performance from a strong random number generator, which a deterministic C program can't emulate.
To clarify: we are not running any programs, just providing code. In a sense, we are competing at the task of providing descriptions for very large numbers with an upper bound on the size of the description (and the requirement that the description is computable).
I personally used Beeminder for this (which I think originated in this community).
Little thought experiment with flavors of Newcomb and Berry's Paradox:
I have the code of an ASI in front of me, translated into C along with an oracle to a high-quality RNG. This code is N characters. I want to compete with this ASI at the task of writing a 2N-character (edit: deterministic) C code that halts and prints a very large integer. Will I always win?
Sketch of why: I can write my C code to simulate the action of the ASI on a prompt like "write a 2N-character C code that halts and prints the largest integer" using every combination of possible RNG ...
Quick note: it might be easier to write your utility function as $U(x) = -e^{-\lambda x}$ for some parameter $\lambda > 0$ (which is equivalent to the one you have, after rescaling and shifting). Utility functions should be concave, but this is very concave, being bounded above.
Utility functions are discussed a lot here; I think it's worth poking around a bit.
I just read through the sequence. Eliezer is a fantastic writer and surprisingly well-versed in many areas, but he generally writes to convince a broad audience of his perspective. I personally prefer writing that gets into the technical weeds and focuses on convincing the reader of the plausibility of their perspective, instead of the absolute truth of it (which is why I listed Scott Aaronson's paper first; I've read many of his other papers and blogs, including on the topic of free will, and really enjoy them).
I'm going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle's Wall, Johnston's popcorn, and Putnam's rock), and when that's eventually done I might report back here or make a new post if this thread is long dead by then
Okay, let me know if this is a fair assessment:
Let's consider someone meditating in a dark and mostly-sealed room with minimal sensory inputs, and meditating in a way that we can agree means they're having a conscious experience. Let's pick a 1-second window and consider the CNS and local environment of the meditator during that window.
(I don't know much physics, so this might need adjustment): Let's say we had a reasonable guess of an "initial wavefunction" of the meditator in that window. Maybe this hypothetical is unreasonable in a deep way and
Based on your previous posts (and other posts like this), I suspect this might not get any comments explaining the downvotes. So I'll explain the reason for my downvote, which you may find helpful:
I don't see any ideas. You start with a really weird, hard-to-read, and I think wrong definition of a Cartesian product, but then never mention Cartesian products again. You then don't define a relation, but I'm guessing that you meant a relation to be a subset of V x V. But then your definition of dependency doesn't make sense. We usually discuss dependency over...
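For reference, the standard definitions I have in mind (sketched, in case it helps):

```latex
% Cartesian product of V with itself, and a binary relation on V:
V \times V = \{\, (a, b) : a \in V,\ b \in V \,\}, \qquad R \subseteq V \times V .
```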
I've had reddit redirect here for almost a year now (with some slip-ups here and there). It's been fantastic for my mental health.
Epistemic status: very new to philosophy and theory of mind, but I've taken a couple of graduate courses in subjects related to the theory of computation.
I think there are two separate matters:
In general, it feels like the alphabet can be partitioned into "sections" where you can use other letters in the same section for additional variables that will play similar roles. Something like:
[a,b,c,d]; [f,g,h]; [i,j,k]; [m,n]; [p,q]; [r,s,t]; [u,v,w]; [x,y,z]
Sometimes these can be combined: [m,n,p,q]; [p,q,r,s,t]; [r,s,t,u,v,w]; [u,v,w,x,y,z]
Is there a way for me to prove that I'm a human on this website before technology makes this task even more difficult?
Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.
I don't think this explanation makes sense. I asked ChatGPT "Can you tell me things about Akhmed Chatayev", and it had no problem using his actual name over and over. I asked about his aliases and it said
Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was "Akhmed Shishani," with "Shishani" translating to "Chechen," indicating his ethnic origin. Wikipedia
Additionally, Chatayev adopted the alias "David
Then it threw an error messag...
I think their metric might be clicks and not upvotes (or at least, clicks have a heavy weight). Are you more likely to click on a video that pushes an argument you oppose?
As a quick test, you can launch a VPN and open private browsing to see how your recommendations change after a few videos.
I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?
Jumping in here: the whole point of the paragraph right after defining "A" and "B" was to ensure we were all on the same page. I also don't understand what you mean by:
Most ordinary people will assume it means that all the rolls were even
and much else of what you've written. I tell you I will roll a die until I get two 6s and let you know how many odds I rolled in the process. I then do so secretly and tell you there were 0 odds. All rolls are even. You can now make a probability distribution on the number of rolls I made, and compute its expectation.
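If anyone wants to sanity-check that, here's a quick Monte Carlo sketch of exactly this setup:

```python
import random

# Monte Carlo: roll a die until two 6s appear (in total), keep only the
# runs with zero odd rolls, and average the number of rolls in those runs.
def trial():
    rolls, sixes = 0, 0
    while sixes < 2:
        r = random.randint(1, 6)
        rolls += 1
        if r % 2 == 1:
            return None  # an odd appeared; discard this run
        if r == 6:
            sixes += 1
    return rolls

samples = [t for t in (trial() for _ in range(10**6)) if t is not None]
print(sum(samples) / len(samples))  # expected rolls, conditioned on zero odds
```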
I recently came across unsupervised machine translation here. It's not directly applicable, but it opens the possibility that, given enough information about "something", you can pin down what it's encoding in your own language.
So let's say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I'm having some trouble formulating this precisely.
Possible bug report: today I've been seeing errors of the form
Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?
that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.
There is an important nuance that makes it ~n+4/5 for large n (instead of n+1), but I'd have to think a bit to remember what it was and give a nice little explanation. If you can decipher this comment thread, it's somewhat explained there: https://old.reddit.com/r/mathriddles/comments/17kuong/you_roll_a_die_until_you_get_n_1s_in_a_row/k7edj6l/
Discussions about possible economic futures should account for the (imo high) possibility that everyone might have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses, the marginal gains from SOTA intelligence won't be so high. I'd imagine that raw human intelligence just becomes less valuable (as it has been for most of human history). I guess this is worse because many b...