The True Believers hypothesis rings false because that would be a frankly ridiculous belief to hold. Sometimes people profess ridiculous things, but very few of them put their money where their mouth is on prediction markets. [1]

  1. I’ve seen some pretty mispriced markets. At one point in 2019, PredictIt had Andrew Yang at 16% to win the Democratic presidential primary. And in 2020, Donald Trump was about 16% to become president even after he had lost the election. But the sorts of people who bet on prediction markets are not the sorts of fundamentalist Christians who think that Jesus Christ has a high chance of returning this year.

yes, no one would put a large amount of money (say, $10,000) over, let's say, a 1-year time horizon on "joe biden going to prison", "barack obama going to prison", "nancy pelosi, bill clinton, and hillary clinton going to prison", or "trump being put in office prior to the 2024 election". and if someone did make such a bet, they wouldn't be motivated by listening to a christian minister who regularly makes political / religious prophecies. surely no one would do that.

i don't know why anyone who posts on a forum devoted to outright fringe beliefs and atypical personality traits (i say this with all love and kindness, not to suggest that any of us are bad or incorrect for holding these beliefs, merely that they are objectively abnormal) would come out and make bold claims that no such weirdo exists who is willing to do X for Y reasons.

the main point about the time value of money is interesting enough on its own, but the nerd-crack explanation is probably just not true. there are probably just crazy people who bet on the return of jesus christ.
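for what it's worth, the time-value point falls out of a two-line calculation. here's a minimal sketch with assumed numbers (a 97-cent NO share, a 1-year horizon, a 5% risk-free rate; none of these figures are from the original post):

```python
# back-of-the-envelope: why NO bettors may not bother correcting a
# long-shot market (all numbers here are assumptions, not from the post)

no_price = 0.97        # assumed cost of a NO share that pays $1 at resolution
horizon_years = 1.0    # assumed time until the market resolves
risk_free_rate = 0.05  # assumed annual return on e.g. treasuries

# profit per share if NO wins, annualized over the capital locked up
annualized_return = ((1.0 - no_price) / no_price) / horizon_years

print(f"annualized return from betting NO: {annualized_return:.1%}")  # ~3.1%
print(f"risk-free alternative:             {risk_free_rate:.1%}")     # 5.0%

# even a bettor 100% sure the event won't happen earns less here than
# the risk-free rate, so the YES price can stay "mispriced" without any
# true believers on the other side.
```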

this is a fair response, and to be honest i was skimming your post a bit. i do think my point somewhat holds, that there is no "intelligence skill tree" where you must unlock the level 1 skills before you progress to level 2.

i think a more fair response to your post is:

  1. companies are trying to make software engineer agents, not bloggers, so the optimization is towards the former.
  2. making a blog that's actually worth reading is hard. no one reads 99% of blogs.
  3. i wouldn't act so confident that we aren't surrounded by LLM comments and posts. are you really sure that everything you're reading is from a human? all the random comments and posts you see on social media, do you check every single one of them to gauge if they're human?
  4. lots of dumb bots can just copy posts and content written by other people and still make an impact. scammers and propagandists can just pay an indian or filipino worker $2/hr and get pretty good results. writing original text is not a bottleneck.

Surely it would be exceptionally good at those kinds of writing, too, right?


surely an LLM capable of writing A+ freshman college papers would correctly add two 2-digit numbers? surely an AI capable of beating grandmasters in chess would be able to tutor a 1000-elo player to 1500 elo or beyond? surely an AI capable of answering university-level questions in subjects as diverse as math, coding, science, and law would be able to recursively improve itself and cause an intelligence explosion? surely such an AI would at least be able to do a simple task like unloading a dishwasher without breaking a dish?

surely it should be obvious to anyone who's able to mull it over for a few seconds that intelligence does not need to progress along the same path it took for human civilization over centuries, for human beings through child development, or even for proto-intelligent animals on earth. it is surely obvious to me that AI can exhibit surprising mixes of general and non-general intelligence, and that we're not really sure why that works. i have no requirement left that, before the AI turns us into paperclips, it must be able to beat poker players at the WSOP, generate an oscar-winning feature film, or make nobel-winning science discoveries. some of these requirements seem more plausible than others, but none seem totally certain.

In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees.

unless i'm misunderstanding you or MIRI, that's not their primary concern at all:

Another way of putting this view is that nearly all of the effort should be going into solving the technical problem, "How would you get an AI system to do some very modest concrete action requiring extremely high levels of intelligence, such as building two strawberries that are completely identical at the cellular level, without causing anything weird or disruptive to happen?"

Where obviously it's important that the system not do anything severely unethical in the process of building its strawberries; but if your strawberry-building system requires its developers to have a full understanding of meta-ethics or value aggregation in order to be safe and effective, then you've made some kind of catastrophic design mistake and should start over with a different approach.

this was posted after your comment, but i think this is close enough:

@ylecun

And the idea that intelligent systems will inevitably want to take over, dominate humans, or just destroy humanity through negligence is preposterous.
They would have to be specifically designed to do so.
Whereas we will obviously design them to not do so.