Lech Mazur

Advameg, Inc. CEO 

Founder, city-data.com 

https://twitter.com/LechMazur

Author: County-level COVID-19 machine learning case prediction model. 

Author: AI assistant for melody composition.

Wiki Contributions

Comments

"I know several CEOs of small AGI startups who seem to have gone crazy and told me that they are self-inserts into this world, which is a simulation of their original self's creation."


Do you know if the origin of this idea for them was a psychedelic or dissociative trip? I'd give it at least even odds, with most of the remaining chances being meditation or Eastern religions...

Answer by Lech Mazur

You can go through an archive of NYT Connections puzzles I used in my leaderboard. The scoring I use allows only one try and gives partial credit: if you solve one group and then make a mistake, that's 0.25 for the puzzle. Top humans get near 100%, while top LLMs score around 30%. Timing is not taken into account.
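As a minimal sketch of that rule (the function and its name are my own illustration, not the leaderboard's actual code, and it assumes each of the four groups is worth 0.25):

```python
def puzzle_score(groups_solved_before_mistake: int) -> float:
    """Partial credit for one NYT Connections puzzle under a one-try rule.

    Each of the 4 groups is worth 0.25; the attempt ends at the first
    mistake, so one correct group followed by an error scores 0.25.
    """
    assert 0 <= groups_solved_before_mistake <= 4
    return groups_solved_before_mistake * 0.25

# Example: one group solved, then a mistake -> 0.25
print(puzzle_score(1))  # 0.25
```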

https://arxiv.org/abs/2404.06405

"Essentially, this classic method solves just 4 problems less than AlphaGeometry and establishes the first fully symbolic baseline strong enough to rival the performance of an IMO silver medalist. (ii) Wu's method even solves 2 of the 5 problems that AlphaGeometry failed to solve. Thus, by combining AlphaGeometry with Wu's method we set a new state-of-the-art for automated theorem proving on IMO-AG-30, solving 27 out of 30 problems, the first AI method which outperforms an IMO gold medalist."

I noticed a new paper by Tamay Besiroglu, Ege Erdil, and other authors: https://arxiv.org/abs/2403.05812. This time it's about algorithmic progress in language models.

"Using a dataset of over 200 language model evaluations on Wikitext and Penn Treebank spanning 2012-2023, we find that the compute required to reach a set performance threshold has halved approximately every 8 months, with a 95% confidence interval of around 5 to 14 months, substantially faster than hardware gains per Moore's Law."

Lech Mazur

I've just created a NYT Connections benchmark: 267 puzzles, three prompts for each, in both uppercase and lowercase. (A sketch of how the scores are aggregated follows the notes below.)

Results:

GPT-4 Turbo: 31.0

Claude 3 Opus: 27.3

Mistral Large: 17.7

Mistral Medium: 15.3

Gemini Pro: 14.2

Qwen 1.5 72B Chat: 10.7

Claude 3 Sonnet: 7.6

GPT-3.5 Turbo: 4.2

Mixtral 8x7B Instruct: 4.2

Llama 2 70B Chat: 3.5

Nous Hermes 2 Yi 34B: 1.5

  • Partial credit is given if the puzzle is not fully solved
  • There is only one attempt allowed per puzzle, 0-shot. Humans get 4 attempts and a hint when they are one step away from solving a group
  • Gemini Advanced is not yet available through the API

(Edit: I've added bigger models from together.ai and from Mistral)
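Roughly, the aggregation works like this: each run gets partial credit in [0, 1], and the leaderboard number is the mean across all puzzle/prompt/case runs, scaled to 0-100. The snippet below is only an illustrative sketch with made-up run values, not the actual evaluation code:

```python
from statistics import mean

def leaderboard_score(run_scores: list[float]) -> float:
    """Aggregate per-run partial credit (each in [0, 1]) into a 0-100 score.

    Each puzzle contributes several runs (different prompt wordings and
    uppercase/lowercase variants); the final number is the mean, scaled
    to 0-100.
    """
    return 100.0 * mean(run_scores)

# Hypothetical runs: mostly failures, some partial and full solves.
print(leaderboard_score([0.0, 0.25, 1.0, 0.0, 0.5, 0.25]))  # ~33.3
```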

It might be informative to show the highest degree earned only for people who have completed their formal education.

I think the average age might be underestimated: response rates appeared to decline with respondent age (link).
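A toy illustration of the direction of the bias (all numbers are made up): if response rates fall with age, the unweighted mean age of respondents ends up below the true population mean.

```python
# Hypothetical age bands with made-up population shares and response rates.
bands = [
    # (midpoint age, population share, response rate)
    (25, 0.40, 0.30),
    (40, 0.35, 0.20),
    (60, 0.25, 0.10),
]

true_mean = sum(age * share for age, share, _ in bands)

# Respondents are drawn in proportion to share * response rate, so older
# bands are underrepresented and the naive mean skews young.
respondents = [(age, share * rate) for age, share, rate in bands]
total = sum(w for _, w in respondents)
naive_mean = sum(age * w for age, w in respondents) / total

print(true_mean)   # 39.0
print(naive_mean)  # ~34.0 (younger than the true mean)
```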

If we were to replace speed limit signs, it might be better to go all out and install variable speed limit signs. It's common to see people failing to adjust their speed sufficiently in poor conditions. A few days ago, there was a 35-vehicle pileup with two fatalities in California due to fog.

It's a lot of work to learn to create animations and then produce hours of content. Creating AI images with Dall-E 3, Midjourney v6, or SDXL and then animating them with RunwayML (which in my testing worked better than Pika or Stable Video Diffusion) could be an intermediate step. The quality of AI images is already high enough, but video still takes multiple tries (it should get a lot better in 2024).

  1. Will do.

  2. Entering an extremely unlikely prediction as a strategy to maximize EV only makes sense if there's a huge number of entrants, which seems improbable unless this contest goes viral (a toy expected-value sketch follows below). The inclusion of an "interesting" factor in the ranking criteria should deter spamming with low-quality entries.
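To illustrate the point in 2 with a deliberately simplified toy model (it ignores the "interesting" ranking factor and assumes the prize is effectively split among entrants who picked the same outcome): a long-shot prediction only beats a likely one once the field is large enough that the likely outcome is crowded.

```python
def entry_ev(p_correct: float, n_entrants: int, frac_picking_same: float, prize: float = 1.0) -> float:
    """Toy EV: probability the prediction is right, times the prize,
    divided by the expected number of entrants sharing it (including you)."""
    rivals = n_entrants * frac_picking_same
    return p_correct * prize / (1.0 + rivals)

# Small field: the likely prediction wins on EV.
print(entry_ev(0.5, 10, 0.3))   # ~0.125
print(entry_ev(0.01, 10, 0.0))  # 0.01

# Huge field: the likely prediction is crowded, the long shot catches up.
print(entry_ev(0.5, 10_000, 0.3))   # ~0.00017
print(entry_ev(0.01, 10_000, 0.0))  # 0.01
```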
