notfnofn

Gonna be in Berkeley on the 14th and Princeton on the 16th :')

Discussions about possible economic futures should account for the (imo high) possibility that everyone might have inexpensive access to sufficient intelligence to accomplish basically any task they would need intelligence for. There are some exceptions, like quant trading, where you have a use case for arbitrarily high intelligence, but for most businesses, the marginal gains from SOTA intelligence won't be so high. I'd imagine that raw human intelligence just becomes less valuable, as it has been for most of human history. I guess this time is worse because many businesses would also not need employees for physical tasks, but the point is that many such non-tech businesses might be fine.

Separately: Is AI safety at all feasible to tackle in the likely scenario that many people will be able to build extremely powerful but non-SOTA AI without safety mechanisms in place? Will the hope be that a strong enough gap exists between aligned AI and everyone else's non-aligned AI?

I would be very surprised if this FVU_B is actually another definition and not a bug. It's not a fraction of the variance, and those denominators can easily be zero or very near zero.
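To illustrate the concern (my reconstruction, not the original code; `fvu_a`/`fvu_b` and the exact definitions are assumptions): the standard FVU divides total reconstruction error by total variance, while a per-sample variant averages row-by-row ratios, so any row whose activations sit near the dataset mean makes its denominator vanish and dominates the average:

```python
import numpy as np

def fvu_a(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Standard FVU: total squared error over total variance (assumed definition)."""
    return float(np.sum((x - x_hat) ** 2) / np.sum((x - x.mean(axis=0)) ** 2))

def fvu_b(x: np.ndarray, x_hat: np.ndarray) -> float:
    """Per-sample ratios, then averaged: NOT a fraction of the overall variance."""
    num = np.sum((x - x_hat) ** 2, axis=1)
    den = np.sum((x - x.mean(axis=0)) ** 2, axis=1)  # ~0 for any row near the mean
    return float(np.mean(num / den))

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 64))
x[0] = x.mean(axis=0) + 1e-6          # one near-mean row...
x_hat = x + rng.normal(scale=0.1, size=x.shape)
print(fvu_a(x, x_hat))                # small and sane
print(fvu_b(x, x_hat))                # ...blows up the per-sample version
```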

Not worth worrying about given the context of imminent ASI.

This is something that confuses me as well: why do a lot of people in these circles seem to care about the fertility crisis while also believing that ASI is coming very soon?

In both optimistic and pessimistic scenarios about what a post-ASI world looks like, I'm struggling to see a future where the fact that people in the 2020s had relatively few babies matters.

If this actually hasn't been explored, this is a really cool idea! So you want to learn a function (Player 1, Player 2, position) -> (probability Player 1 wins, probability of a draw)? It sounds like there are a lot of naive architectures to try, and you have a ton of data, since professional chess players play a lot of games.

Some random ideas:

  • Before doing any sort of positional analysis: what does the (ELO_1, ELO_2, engine eval) -> probability of win/draw function look like? What happens when choosing an engine near those ELO ratings vs. the strongest engines? (See the sketch after this list.)
  • Observing how rapidly the eval changes when given to a weak engine might give a somewhat automatable metric of the "sharpness" of a chess position (so you don't have to label everything yourself)
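For the first bullet, something like this could be a starting point. It's a minimal sketch under assumed inputs: the model, the feature scaling, and names like `OutcomeNet` are all hypothetical, with the engine eval standing in for the position for now.

```python
import torch
import torch.nn as nn

# Sketch: map (ELO_1, ELO_2, engine eval) to (P(win), P(draw), P(loss))
# for Player 1. All architectural choices here are illustrative.
class OutcomeNet(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # logits for win / draw / loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).softmax(dim=-1)

model = OutcomeNet()
# Scale ratings to roughly unit range, e.g. (elo - 1500) / 400;
# an eval in pawns is already small.
features = torch.tensor([[(2700 - 1500) / 400, (2650 - 1500) / 400, 0.3]])
print(model(features))  # untrained; fit by maximizing log-likelihood of observed outcomes
```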

This whole thing about "I would give my life for two brothers or eight cousins" is just nonsense formed by taking a single concept way too far. Blood relation matters but it isn't everything. People care about their adopted children and close unrelated friends.
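For background, the arithmetic the quip plays on (Hamilton's coefficient of relatedness: $r = 1/2$ for a full sibling, $r = 1/8$ for a first cousin):

$$2 \cdot \tfrac{1}{2} \;=\; 8 \cdot \tfrac{1}{8} \;=\; 1,$$

so two brothers or eight cousins carry, in expectation, one full copy's worth of one's genes.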

The user could always write a comment (or a separate post) asking why they got a bunch of downvotes, and someone would probably respond. I've seen this done before.

Though this assumes the user is open-minded enough to actually want feedback and won't be hostile. They might not even value feedback from this community; there are certainly many communities whose negative feedback I would think very little of.

Update: R1 found bullet point 3 after being prompted to try 16x16. It's 2 minus the adjacency matrix of the tesseract graph.
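A quick way to construct and sanity-check the claimed matrix (my sketch, not R1's output):

```python
import numpy as np
from itertools import product

# M = 2 - A, where A is the adjacency matrix of the tesseract graph Q4:
# vertices are 4-bit strings, edges join strings at Hamming distance 1.
verts = list(product((0, 1), repeat=4))
A = np.array([[int(sum(a != b for a, b in zip(u, v)) == 1) for v in verts]
              for u in verts])
assert A.shape == (16, 16) and (A.sum(axis=1) == 4).all()  # Q4 is 4-regular
M = 2 - A  # 2 where vertices are non-adjacent (incl. diagonal), 1 where adjacent
```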


I would bet on this sort of strategy working; I hard-agree that the ends don't justify the means, and I see that kind of justification for misinformation/propaganda a lot amongst highly political people. (But the above examples are pretty tame.)

I volunteer myself as a test subject; DM me if interested.
