25Hour

Comments

25Hour

Oh wait, you mean the email invite. Yeah, that's a great point, I'll kick that off again.

25Hour

Sorry, I didn't see this previously! No, we actually still have those roughly weekly. We post them on the Discord at https://discord.gg/m2xJcuC937

25Hour

People primarily come to this via the Discord, so I just have this on LW for visibility.

25Hour

Hey people! Sorry, due to Uber-related issues I'm going to be a few minutes late. Shouldn't be more than 10, though.

25Hour

This all makes sense, and I appreciate you all writing it! Just a couple of notes:

(1) I think it makes sense to put a sum of money into hedging against disaster, e.g. with short-term Treasuries, commodities, or gold. Futures in which AGI is delayed by a big war or similar disaster are futures where your tech investments will perform poorly (and, depending on your p(doom) and your views on anthropics, they are disproportionately the futures you can expect to experience as a living human).

(2) I would caution against either shorting or investing in cryptocurrency as a long-term AI play. As patio11 has discussed in Bits about Money (most recently in "A review of Number Go Up, on crypto shenanigans" at bitsaboutmoney.com), cryptocurrency is absolutely rife with market manipulation and other skullduggery; shorting it can therefore easily result in losing your shirt even in a situation where cryptocurrencies otherwise ought to be cratering.

25Hour

It's worth considering that humans are basically just fleshy robots, and we do our own basic maintenance and reproduction tasks just fine. If you had a sufficiently intelligent AI, it would be able to:

(1) persuade humans to build it a general-purpose robot chassis that can do complex manipulation tasks (along the lines of Google's SayCan experiments)

(2) use instances of itself controlling that chassis to perform its own maintenance and power-generation functions

(2.1) use instances of itself to build a factory, also controlled by itself, to build further instances of the robot as necessary.

(3) kill all humans once it can do without them.

I will also point out that humans' dependence on plants and animals has resulted in the vast majority of mammal biomass on Earth being livestock, which isn't exactly a "good end".

25Hour

This seems doubtful to me; if Yann truly believed that AI was an imminent extinction risk, or even thought it was credible, what would he be hoping to do or gain by ridiculing people who are similarly worried?

25Hour

Hey, I really appreciated this series, particularly in that it introduced me to the fact that leveraged ETFs (1) exist and (2) can function well as a fixed proportion of overall holdings over long periods.

Is the LessWrong investing seminar still around/open to new participants, by any chance? I've been doing lots of research on this topic (though more for long-term than short-term strategies) and am curious how deep the unconventional-investing rabbit hole goes.

25Hour

It's a beautiful dream, but I dunno, man. Have you ever seen Timnit engage charitably and in good faith with anyone she's publicly disagreed with?

And absent such charity and good faith, what good could come of any interaction whatsoever?

25Hour

This is a tiny corner of the internet (Timnit Gebru and friends) and probably not worth engaging with, since they consider themselves diametrically opposed to techies/rationalists/etc. and will not engage with them in good faith. They are also probably a single-digit number of people, albeit a group that's really good at getting under techies' skin.
