All of jsnider3's Comments + Replies

(a plurality said it means sufficient hardware for human-level AI already exists, which is not a useful concept)
That seems like a useful concept to me. What's your argument it isn't?

2Zach Stein-Perlman
Briefly: with arbitrarily good methods, we could train human-level AI with very little hardware. Assertions about hardware are only relevant in the context of the relevant level of algorithmic progress. Or: nothing depends on whether sufficient hardware for human-level AI already exists given arbitrarily good methods. (Also note that what's relevant for forecasting or decisionmaking is facts about how much hardware is being used and how much a lab could use if it wanted, not the global supply of hardware.)

From 2023's perspective, people should have been encouraged (not discouraged) from building AI like this.

This is too much of a bare assertion to be a good rationality quote.

"Who wants to live forever when love must die?"

Yes, the average human is dangerously easy to manipulate, but imagine how bad the situation would be if they hadn't spent a hundred thousand years evolving not to be easily manipulated.

2Hastings
Yeah. I suspect this links to a pattern I've noticed: in stories, especially rationalist stories, people who are successful at manipulation or highly resistant to manipulation are also highly generally intelligent. In real life, the people I know who are extremely successful at manipulation and scheming seem otherwise dumb as rocks. My suspicion is that we have a 20-watt, 2-exaflop skullduggery engine that can be hacked to run logic the same way we can hack a pregnancy test to run Doom.