All of notfnofn's Comments + Replies

Discussions about possible economic futures should account for the (imo high) possibility that everyone might have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses, the marginal gains from SOTA intelligence won't be so high. I'd imagine that raw human intelligence just becomes less valuable (as it has been for most of human history I guess this is worse because many b... (read more)

I would be very surprised if this FVU_B is actually another definition and not a bug. It's not a fraction of the variance, and those denominators can easily be zero or very near zero.

Not worth worrying about given context of imminent ASI.

This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon?

In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I'm struggling to see a future where the fact that people in the 2020s had relatively few babies matters.

If this actually hasn't been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) -> (probability Player 1 wins, probability of a draw)? Sounds like there are a lot of naive architectures to try and you have a ton of data since professional chess players play a lot of games.

Some random ideas:

  • Before doing any sort of positional analysis: What does the (ELO_1,ELO_2,engine eval) -> Probability of win/draw function look like? What happens when choosing an engine near those ELO ratings vs. the strongest en
... (read more)
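(Purely as an illustration of the comment above, not something from it: a minimal PyTorch sketch of one "naive architecture" for the (Player 1, Player 2, position) -> (P(win), P(draw)) function. The feature choices, shapes, and normalization are my assumptions, and the players are represented here only by their Elo ratings.)

```python
import torch
import torch.nn as nn

class OutcomePredictor(nn.Module):
    """Map (Elo_1, Elo_2, board planes) to win/draw/loss probabilities for Player 1."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + 8 * 8 * 12, hidden),  # 2 Elo ratings + flattened 8x8x12 piece planes
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 3),               # logits for win / draw / loss
        )

    def forward(self, elos: torch.Tensor, board: torch.Tensor) -> torch.Tensor:
        x = torch.cat([elos, board.flatten(start_dim=1)], dim=1)
        return self.net(x).softmax(dim=-1)

# Toy usage on a batch of 4 random "positions"; real training would minimize
# cross-entropy against the actual game results.
model = OutcomePredictor()
elos = torch.tensor([[2700.0, 2650.0]] * 4) / 3000.0   # crude normalization
boards = torch.rand(4, 8, 8, 12)
probs = model(elos, boards)                             # shape (4, 3), rows sum to 1
```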

This whole thing about "I would give my life for two brothers or eight cousins" is just nonsense formed by taking a single concept way too far. Blood relation matters but it isn't everything. People care about their adopted children and close unrelated friends.

The user could always write a comment (or a separate post) asking why they got a bunch of downvotes, and someone would probably respond. I've seen this done before.

Otherwise I'd have to assume that the user is open-minded enough to actually want feedback and not be hostile. They might not even value feedback from this community; there are certainly many communities where I would think very little about negative feedback.

Update: R1 found bullet point 3 after prompting it to try 16x16. It's 2 minus the adjacency matrix of the tesseract graph


Would bet on this sort of strategy working; hard agree that ends don't justify the means and see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But above examples are pretty tame.)

I volunteer myself as a test subject; dm if interested


So I'm new here and this website is great because it doesn't have bite-sized oversimplifying propaganda. But isn't that common everywhere else? Those posts seem very typical for reddit and at least they're not outright misinformation.

Also I... don't hate these memes. They strike me as decent quality. Memes aren't supposed to make you think deeply about things.

Edit: searched Kat Woods here and now feel worse about those posts

There have been a lot of tricks I've used over the years, some of which I'm still using now, but many of which require some level of discipline. One requires basically none, has a huge upside (to me), and has been trivial for me to maintain for years: a "newsfeed eradicator" extension. I've never had the temptation to turn it off unless it really messes with the functionality of a website. 

It basically turns off the "front page" of whatever website you apply it to (e.g. reddit/twitter/youtube/facebook) so that you don't see anything when you enter the site and have to actually search for whatever you're interested in. And for youtube, you never see suggestions to the right of or at the end of a video.

I think even the scaling thing doesn't apply here because they're not insuring bigger trips: they're insuring more trips (which makes things strictly better). I'm having some trouble understanding Dennis' point.

Lorec
0 trips -> 1 trip is an addition to the number of games played, but it's also an addition to the percentage of income bet on that one game - right? Dennis is also having trouble understanding his own point, FWIW. That's how the dialogue came out; both people in that part are thinking in loose/sketchy terms and missing important points. The thing Dennis was trying to get at by bringing up the concrete example of an optimal Kelly fraction is that it doesn't make sense for willingness to make a risky bet to have no dependence on available capital; he perceives Jill as suggesting that this is the case.

"I don't know, I recall something called the Kelly criterion which says you shouldn't scale your willingness to make risky bets proportionally with available capital - that is, you shouldn't be just as eager to bet your capital away when you have a lot as when you have very little, or you'll go into the red much faster.

I think I'm misunderstanding something here. Let's say you have  dollars and are looking for the optimum number of dollars to bet on something that causes you to gain  dollars with probability  and lose ... (read more)
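(For reference, here is the textbook Kelly setup being gestured at; these are the standard symbols, not necessarily the ones the truncated comment above uses.)

```latex
% Bet a fraction f of wealth on a gamble paying b per dollar staked with probability p,
% losing the stake with probability 1-p. The expected log-growth per bet is
\[
  g(f) = p \log(1 + f b) + (1 - p) \log(1 - f),
  \qquad
  f^{*} = p - \frac{1 - p}{b},
\]
% where f^* is the value of f maximizing g. The optimal fraction is constant in wealth,
% so the dollar amount bet scales proportionally with available capital.
```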

Lorec
I think you're right, I had misunderstood! Kind of an egregious misunderstanding, too. I'm curious how it seems to you to be fundamental to the post, maybe I missed something on that count. I'm planning on replacing 'shouldn't scale . . . proportionally' with 'shouldn't scale more than proportionally', and I don't see how that substantially changes anything, given that Jill is right, and the concept of a Kelly fraction isn't applicable when you're starting out with zero capital of your own to gamble.

(Epistemic status: low confidence and interested in disagreements)

My economic expectations for the next ten years are something like:

  • Examples of powerful AI misanswering basic questions continue for a while. For this and other reasons, trust in humans over AI persists in many domains for a long time after ASI is achieved.

  • Jobs become scarcer gradually. Humans remain at the helm for a while but the willingness to replace one's workers with AI slowly creeps its way up the chain. There is a general belief that Human + AI > AI + extra compute in many roles, and it

... (read more)
[anonymous]
it may be that we're just using the term superintelligence to mark different points, but if you mean strong superintelligence, the kind that could - after just being instantiated on earth, with no extra resources or help - find a route to transforming the sun if it wanted to: then i disagree for the reasons/background beliefs here.[1] 1. ^ the relevant quote:
Dagon
I think this matches my modal expectations - this is most likely, in my mind.  I do give substantial minority probability (say, 20%) to more extreme and/or accelerated cases within a decade, and it becomes a minority of likelihood (say, 20% the other direction) over 2 or 3 decades.   My next-most-likely case is that there is enough middle- and upper-middle class disruption in employment and human-capital value that human currencies and capital ownership structures (stocks, and to a lesser extent, titles and court/police-enforced rulings) become confused.  Food and necessities become scarce because the human systems of distribution break.  Riots and looting destroy civilization.  Possibly taking AI with it, possibly with the exception of some big data centers whose (human, with AI efficiency) staffers have managed to secure against the unrest - perhaps in cooperation with military units.
Carl Feynman
In my limited experience of phone contact with AIs, this is only true for distinctly subhuman AIs.  Then I emotionally react like I am talking to someone who is being deliberately obtuse, and become enraged.  I'm not entirely clear on why I have this emotional reaction, but it's very strong.  Perhaps it is related to the Uncanny Valley effect.  On the other hand, I've dealt with phone AIs that (acted like they) understood me, and we've concluded a pleasant and businesslike interaction.  I may be typical-minding here, but I suspect that most people will only take offense if they run into the first kind of AI. Perhaps this is related: I felt a visceral uneasiness dealing with chat-mode LLMs, until I tried Claude, which I found agreeable and helpful.  Now I have a claude.ai subscription.  Once again, I don't understand the emotional difference. I'm 62 years old, which may have something to do with it.  I can feel myself being less mentally flexible than I was decades ago, and I notice myself slipping into crotchety-old-man mode more often.  It's a problem that requires deliberate effort to overcome.

Ah, darn. Are there any other events/meetups you know of at Lighthaven during those weeks?

Ben Pace
Actually I now assign ~80% to running one on Tuesday 14th. Will post to confirm.
Ben Pace
MATS and then Vitalism are largely taking over campus for the next 5 months (until LessOnline and Manifest from May 30 to June 8), so not likely.

Is this going to continue in 2025? I'll be visiting Berkeley from Jan 5th to Jan 17th and would like to come visit.

Ben Pace
Unfortunately for your plans, my current plan is to resume on Tuesday 21st Jan. Nonetheless there's some chance we'll host one on the two Tuesdays in-between. If you subscribe (the button is at the bottom of the post) you'll get notified of new events.

https://www.lesswrong.com/posts/SHq7wKA8iMqG5QfjC/notfnofn-s-shortform?commentId=JHjHJzE9wCLe2ANPG

Here's a little quick take of mine that provides a setting where centaur > AI (maybe). It's theory of computation, which is close to complexity theory.

That's incredible.

But how do they profit? They say they don't profit on middle eastern war markets, so they must be profiting elsewhere somehow

interstice
VC money. That disclaimer was misleading, they don't have fees on any markets.

There are also gas fees, which amplify this effect, but this is a very important point. A prediction market price gives rise to a function from interest rates to ranges of probabilities such that a rational investor whose own probability lies in that range would not bet on the market. The larger the interest rate or the farther out the market, the bigger the range.

Probably an easy widget to make: something that takes as input the polymarket price, gas fees, and interest rate and spits out this range of probabilities.
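(A minimal sketch of that widget, under assumptions I'm supplying rather than taking from the comment: a flat fee added to the cost of either side, simple compounding of the interest rate over the time to resolution, and shares that pay out 1.)

```python
def no_bet_range(price: float, fee: float, annual_rate: float, years_to_resolution: float):
    """Return (p_low, p_high): if your probability for the event lies in this interval,
    neither buying YES nor buying NO beats simply holding cash at the given rate.
    """
    growth = (1 + annual_rate) ** years_to_resolution
    p_high = (price + fee) * growth            # below this, buying YES loses to holding cash
    p_low = 1 - (1 - price + fee) * growth     # above this, buying NO loses to holding cash
    return max(0.0, p_low), min(1.0, p_high)

# Example: a market at 40 cents resolving in 9 months, a 1-cent fee, 5% interest rates.
print(no_bet_range(price=0.40, fee=0.01, annual_rate=0.05, years_to_resolution=0.75))
# -> roughly (0.37, 0.43); the range widens with higher rates or a later resolution date.
```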

interstice
Polymarket pays for the gas fees themselves, users don't have to pay any.

corank has to be more than 1, not equal to 1. I'm not sure if such a matrix exists; the reason I was able to change its mind by supplying a corank-1 matrix was that its kernel behaved in a way that significantly violated its intuition.


I similarly felt in the past that by the time computers were pareto-better than I at math, there would already be mass-layoffs. I no longer believe this to be the case at all, and have been thinking about how I should orient myself in the future. I was very fortunate to land an offer for an applied-math research job in the next few months, but my plan is to devote a lot more energy to networking + building people skills while I'm there instead of just hyperfocusing on learning the relevant fields.

o1 (standard, not pro) is still not the best at math reasoni... (read more)

notfnofn
Update: R1 found bullet point 3 after prompting it to try 16x16. It's 2 minus the adjacency matrix of the tesseract graph
AlphaAndOmega
Thank you for your insight. Out of idle curiosity, I tried putting your last query into Gemini 2 Flash Thinking Experimental and it told me yes first-shot. Here's the final output, it's absolutely beyond my ability to evaluate, so I'm curious if you think it went about it correctly. I can also share the full COT if you'd like, but it's lengthy: https://ibb.co/album/rx5Dy1 (Image since even copying the markdown renders it ugly here)

My decision to avoid satellite view is a relic from a time of conserving data (and even then it might have been a case of using salt to accelerate cooking time). I wonder if there's a risk of using it in places where cellular data is spotty, though. I'd imagine that using satellite view would reduce the efficiency with which the application saves local map information that might be important if I make a wrong turn where there's no data available.

From the original post:

The purpose of insurance is not to help us pay for things that we literally do not have enough money to pay for. It does help in that situation, but the purpose of insurance is much broader than that. What insurance does is help us avoid large drawdowns on our accumulated wealth, in order for our wealth to gather compound interest faster.

Think about that. Even though insurance is an expected loss, it helps us earn more money in the long run. This comes back to the Kelly criterion, which teaches us that the compounding effects on wea

... (read more)
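(A toy numeric check of the quoted claim, with entirely made-up numbers for wealth, loss size, loss probability, and premium: actuarially unfair insurance can still raise expected log wealth.)

```python
import math

wealth = 100_000    # starting wealth (made-up)
loss = 60_000       # size of the insurable loss (made-up)
p_loss = 0.05       # probability of the loss this period (made-up)
premium = 3_500     # premium, deliberately above the expected loss of 3,000

exp_wealth_insured = wealth - premium
exp_wealth_uninsured = wealth - p_loss * loss
exp_log_insured = math.log(wealth - premium)
exp_log_uninsured = (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)

print(exp_wealth_insured, exp_wealth_uninsured)  # 96500 97000.0: insurance loses in expectation...
print(exp_log_insured, exp_log_uninsured)        # ~11.477 vs ~11.467: ...but wins in expected log wealth
```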

If you are making an argument on how much compute can find an intelligent mind, you have to look at how much compute was used by all of evolution.

Just to make sure I fully understand your argument, is this paraphrase correct?

 

"Suppose we have the compute theoretically required to simulate the human brain down to an adequate granularity for obtaining its intelligence (which might be at the level of cells instead of, say, the atomic level). Even so, one has to consider the compute required to actually build such a simulation, which could be much larger as t... (read more)

samuelshadrach
Yes your paraphrase is not bad. I think we can assume things outside of Earth don’t need to be simulated, it would be surprising to me if events outside of Earth made the difference between evolution producing Homo sapiens versus some other less intelligent species. (Maybe a few basic things like temperature of the Earth being shifted slowly) For the most part the Earth is causally isolated from the rest of the universe.  Now which parts of the Earth can we safely omit simulating is a harder question as there’s more causal interactions going on. I can make some guesses around parts of the earths environment that can be ignored by the simulation, but they’ll be guesses only. Yes gradient descent is likely a faster search algorithm, but IMO you’re still using it to search the big search space that evolution searched through, not the smaller one a human brain searches through after being born.   

Spotify first recommended her to me in September 2023, and later that September I came across r/slatestarcodex, which was my first exposure to the rationalist community. That's kind of funny.

Huh. Vienna Teng was my top artist too, and this is the only other Spotify Wrapped I've seen here. Is she popular in these circles?

Eric Neyman
Yes, very popular in these circles! At the Bay Area Secular Solstice, the Bayesian Choir (the rationalist community's choir) performed Level Up in 2023 and Landsailor this year.

Even a year ago, I would have bet extremely high odds that data analyst-type jobs would be replaced well before postdocs in math and theoretical physics. It's wild that the reverse is plausible now


Annoying anecdote: I interviewed for an entry-level actuarial position recently and, when asked about the purpose of insurance, I responded with essentially the above argument (along the lines of increasing everyone's log expectation, with Kelly betting as a motivation). The reply I got was "that's overcomplicated; the purpose of insurance is to let people avoid risk".

By the way, I agree strongly with this post and have been trying to make my insurance decisions based on this philosophy over the past year.

Oops, yeah the written programs are supposed to be deterministic. The point of mentioning the RNG was to handle the fact that an AI might derive its performance from a strong random number generator, which a deterministic C program can't emulate.

To clarify: we are not running any programs, just providing code. In a sense, we are competing at the task of providing descriptions for very large numbers with an upper bound on the size of the description (and the requirement that the description is computable).

JBlack
Oh, I see that I misread. One problem is that "every possible RNG call" may be an infinite set. For a really simple example, a binary {0,1} RNG with program "add 1 to your count if you roll 1 and repeat until you roll 0" has infinitely many possible rolls and no maximum output. It halts with probability 1, though. If you allow the RNG to be configured for arbitrary distributions then you can have it always return a number from such a distribution in a single call, still with no maximum.
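(A quick sketch of the program JBlack describes, just to make the point concrete; the names and the 50/50 convention are mine.)

```python
import random

def count_until_zero() -> int:
    """Keep adding 1 while you "roll 1"; stop on the first "roll 0".
    Halts with probability 1, but the output is geometric with no maximum."""
    count = 0
    while random.random() < 0.5:  # treat < 0.5 as rolling 1, otherwise rolling 0
        count += 1
    return count

print([count_until_zero() for _ in range(10)])  # usually small, but exceeds any fixed bound eventually
```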

I personally used beeminder for this (which I think originated from this community)


Little thought experiment with flavors of Newcomb and Berry's Paradox:

I have the code of an ASI in front of me, translated into C along with an oracle to a high-quality RNG. This code is N characters. I want to compete with this ASI at the task of writing a 2N-character (edit: deterministic) C code that halts and prints a very large integer. Will I always win?

Sketch of why: I can write my C code to simulate the action of the ASI on a prompt like "write a 2N-character C code that halts and prints the largest integer" using every combination of possible RNG ... (read more)

[anonymous]
if both participants are superintelligent and can simulate each other before submitting answers[1], and if the value on outcomes is something like: loss 0, draw 0.5, win 1, (game never begins 0.5), then i think one of these happens: * the game ends in a draw as you say * you collaborate to either win or lose 50% of the time (same EV) * it fails to begin because you're both programs that try to simulate the other and this is infinitely recursive / itself non-terminating. 1. ^ even if the relevant code which describes the ASI's competitor's policy is >2N, it's not stated that the ASI is not able to execute code of that length prior to its submission. there's technically an asymmetry where if the competitor's policy's code is >2N, then the ASI can't include it in their submission, but i don't see how this would affect the outcome
JBlack
My guess is "no" because both of you would die first. In the context of "largest numbers" 10^10^100 is baby's first step, but is still a number with more digits than you will ever succeed in printing. In principle the "you" in this scenario could be immortal with unbounded resources and perfect reliability, but then we may as well just suppose you are a superintelligence smarter than the AI in the problem (which isn't looking so 'S' anymore).

Quick note: might be easier to replace your utility function with  for some parameter  (which is equivalent to the one you have, after rescaling and shifting). Utility functions should be concave, but this is very concave, being bounded above.

Utility functions are discussed a lot here; I think it's worth poking around a bit.

I just read through the sequence. Eliezer is a fantastic writer and surprisingly well-versed in many areas, but he generally writes to convince a broad audience of his perspective. I personally prefer writing that gets into the technical weeds and focuses on convincing the reader of the plausibility of their perspective, instead of the absolute truth of it (which is why I listed Scott Aaronson's paper first; I've read many of his other papers and blogs, including on the topic of free will, and really enjoy them).

I'm going to read https://www.scottaaronson.com/papers/philos.pdf, https://philpapers.org/rec/PERAAA-7, and the appendix here: https://www.lesswrong.com/posts/dkCdMWLZb5GhkR7MG/ (as well as the actual original statements of Searle's Wall, Johnston's popcorn, and Putnam's rock), and when that's eventually done I might report back here or make a new post if this thread is long dead by then

Davidmanheim
You should also read the relevant sequence about dissolving the problem of free will: https://www.lesswrong.com/s/p3TndjYbdYaiWwm9x

Okay, let me know if this is a fair assessment:

  1. Let's consider someone meditating in a dark and mostly-sealed room with minimal sensory inputs, and they're meditating in a way that we can agree they're having a conscious experience. Let's pick a 1 second window and consider the CNS and local environment of the meditator during that window.

  2. (I don't know much physics, so this might need adjustment): Let's say we had a reasonable guess of an "initial wavefunction" of the meditator in that window. Maybe this hypothetical is unreasonable in a deep way and

... (read more)
Davidmanheim
That seems like a reasonable idea. It seems not at all related to what any of the philosophers proposed. For their proposals, it seems like the computational process is more like: 1. Extract a specific string of 1s and zeros from the sandstorm's initial position, and another from its final position, with the same length as the full description of the mind. 2. Calculate the bitwise sum of the initial mind state and the initial sand position. 3. Calculate the bitwise sum of the final mind state and the final sand position. 4. Take the output of step 2 and replace it with the output of step 3. 5. Declare that the sandstorm is doing something isomorphic to what the mind did. Ignore the fact that the internal process is completely unrelated, and all of the computation was done inside of the mind, and you're just copying answers.

Based on your previous posts (and other posts like this), I suspect this might not get any comments explaining the downvotes. So I'll explain the reason for my downvote, which you may find helpful:

I don't see any ideas. You start with a really weird, hard-to-read, and I think wrong definition of a Cartesian product, but then never mention cartesian products again. You then don't define a relation, but I'm guessing that you meant a relation to be a subset of V x V. But then your definition of dependency doesn't make sense. We usually discuss dependency over... (read more)

I've had reddit redirect here for almost a year now (with some slip-ups here and there). It's been fantastic for my mental health.


Epistemic status: very new to philosophy and theory of mind, but has taken a couple graduate courses in subjects related to the theory of computation.

I think there are two separate matters:

  1. I have a physical object that has a means to receive inputs and will do something based on those inputs. Suppose I now create two machines: one that takes 0s and 1s and converts it into something the object receives, and one that observes the actions of the physical object then spits out an output. Both of these machines operate in time that is simultaneously at most q
... (read more)
Davidmanheim
OK, so this is helpful, but if I understood you correctly, I think it's assuming too much about the setup. For #1, in the examples we're discussing, the states of the object aren't predictably changing in complex ways - just that it will change "states" in ways that can be predicted to follow a specific path, which can be mapped to some set of states. The states are arbitrary, and per the argument don't vary in some way that does any work - and so as I argued, they can be mapped to some set of consecutive integers. But this means that the actions of the physical object are predetermined in the mapping. And the difference between that situation and the CNS is that we know the neural circuitry is doing work - the exact features are complex and only partly understood, but the result is clearly capable of doing computation in the sense of Turing machines.

In general, it feels like the alphabet can be partitioned into "sections" where you can use other letters in the same section for additional variables that will play similar roles. Something like:

[a,b,c,d]; [f,g,h]; [i,j,k]; [m,n]; [p,q]; [r,s,t]; [u,v,w]; [x,y,z]

Sometimes these can be combined: [m,n,p,q]; [p,q,r,s,t]; [r,s,t,u,v,w]; [u,v,w,x,y,z]

Yair Halberstadt
Yep, and when you run out of letters in a section you use the core letter from the section with a subscript.

Is there a way for me to prove that I'm a human on this website before technology makes this task even more difficult?

gilch
I don't know of any officially sanctioned way. But, hypothetically, meeting a publicly-known real human person in person and giving them your public pgp key might work. Said real human could vouch for you and your public key, and no one else could fake a message signed by you, assuming you protect your private key. It's probably sufficient to sign and post one message proving this is your account (profile bio, probably), and then we just have to trust you to keep your account password secure.
Yoav Ravid
Sounds like a question a non-human would ask :P

Just commenting to say that this is convincing enough (and the application sufficiently low-effort) for me to apply later this month, conditional on being in a position where I could theoretically accept such an offer.

I don't think this explanation makes sense. I asked ChatGPT "Can you tell me things about Akhmed Chatayev", and it had no problem using his actual name over and over. I asked about his aliases and it said

Akhmed Chatayev, a Chechen Islamist and leader within the Islamic State (IS), was known to use several aliases throughout his militant activities. One of his primary aliases was "Akhmed Shishani," with "Shishani" translating to "Chechen," indicating his ethnic origin. Wikipedia

Additionally, Chatayev adopted the alias "David

Then threw an error messag... (read more)

Viliam
Maybe ChatGPT is recently more likely to stop mid-sentence. Something like that happened to me recently on a completely different topic (I wanted to find an author of a poem based on a few lines I remembered), and the first answer just stopped in the middle; then I clicked refresh and received a full answer (factually wrong though). Can't link the chat because I have already deleted it.

I think their metric might be click and not upvote (or at least, clicking has a heavy weight). Are you more likely to click on a video that pushes an argument you oppose?

As a quick test, you can launch a vpn and open private browsing to see how your recommendations change after a few videos

I notice this is downvoted and by a new user. On the surface, it looks like something I would strongly consider applying to, depending on what happens in my personal life over the next month. Can anyone let me know (either here or privately) if this is reputable?

Aditya_SK
Hi, It was quite strange to see it downvoted, and I’m not sure what the issue was. My guess is that the initial username might have played a role, especially since this is my first post on LessWrong, it might have caused some concern maybe? As for the credibility, you can see that this fellowship has been shared by individuals from the organizations themselves on Twitter, as seen here, here and here. If you’d like, I’m happy to discuss this further on the call to help alleviate any concerns you may have.

Jumping in here: the whole point of the paragraph right after defining "A" and "B" was to ensure we were all on the same page. I also don't understand what you mean by:

Most ordinary people will assume it means that all the rolls were even

and much else of what you've written. I tell you I will roll a die until I get two 6s and let you know how many odds I rolled in the process. I then do so secretly and tell you there were 0 odds. All rolls are even. You can now make a probability distribution on the number of rolls I made, and compute its expectation.
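(A rejection-sampling sketch of exactly that computation, under the reading I take from this thread: two 6s total, not necessarily consecutive, stopping at the second 6, and conditioning on every roll being even. The interpretation and the code are mine.)

```python
import random

rng = random.Random(0)

def rolls_until_two_sixes() -> list[int]:
    """Roll a fair die until the second 6 appears; return the full sequence."""
    rolls, sixes = [], 0
    while sixes < 2:
        r = rng.randint(1, 6)
        rolls.append(r)
        sixes += r == 6
    return rolls

# Condition on "0 odds rolled": keep only sequences in which every roll was even.
kept = []
while len(kept) < 20_000:
    seq = rolls_until_two_sixes()
    if all(r % 2 == 0 for r in seq):
        kept.append(len(seq))

print(sum(kept) / len(kept))  # Monte Carlo estimate of E[number of rolls | all rolls even]
```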

I recently came across unsupervised machine translation here. It's not directly applicable, but it opens the possibility that, given enough information about "something", you can pin down what it's encoding in your own language.

So let's say now that we have a computer that simulates a human brain in a manner that we understand. Perhaps there really could be a sense in which it simulates a human brain that is independent of our interpretation of it. I'm having some trouble formulating this precisely.

Matt Goldenberg
Right, and per the second part of my comment - insofar as consciousness is a real phenomenon, there's an empirical question of if whatever frame invariant definition of computation you're using is the correct one.

Possible bug report: today I've been seeing errors of the form

Error: Cannot query field "givingSeason2024VotedFlair" on type "User". Did you mean "givingSeason2024DonatedFlair"?

that tend to go away when the page is refreshed. I don't remember if all errors said this same thing.

There is an important nuance that makes it ~n+4/5 for large n (instead of n+1), but I'd have to think a bit to remember what it was and give a nice little explanation. If you can decipher this comment thread, it's somewhat explained there: https://old.reddit.com/r/mathriddles/comments/17kuong/you_roll_a_die_until_you_get_n_1s_in_a_row/k7edj6l/

WilliamKiely
I thought of the reason independently: it's that if the number before 66 is not odd, but even instead, it must be either 2 or 4, since if it was 6 then the sequence would have had a double 6 one digit earlier.