ThomasJ

This is not a great No bet at current odds, even if you are certain the event will not happen. The market resolves Dec 31, which means you have to lock up your cash for about 9 months for roughly a 3% return. The best CDs currently pay around 4-4.5% for 6-month to 1-year terms. So even for people who bought No at 96% it seems like a bad trade: you're earning less than the effective risk-free rate, and you're not being compensated for the additional idiosyncratic risk (e.g., Polymarket resolves Yes because of shenanigans, Polymarket gets hacked, etc.).
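For concreteness, a quick sanity check on those numbers (the ~97¢ price is inferred from the comment's ~3% figure; this is not live market data):

```python
# Return on a Polymarket No share bought at ~97 cents that pays $1.00 if
# the event does not happen. Price and horizon are the comment's figures.
price = 0.97   # cost per No share, in dollars (assumed from the ~3% claim)
months = 9     # time until the Dec 31 resolution

simple = (1.0 - price) / price                      # ~3.1% over 9 months
annualized = (1.0 + simple) ** (12 / months) - 1.0  # ~4.1% annualized

print(f"{simple:.1%} simple, {annualized:.1%} annualized")
# Even annualized, this sits at or below the 4-4.5% CD rate, before any
# compensation for the platform/resolution risks mentioned above.
```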

Answer by ThomasJ

Here are some attributes I've noticed among people who self-identify as rationalists. They are:

  • Overwhelmingly white and male. In the in-person or videoconference meetups I've attended, I don't think I've met more than a couple non-white people, and perhaps 10% were non-male.
  • Skew extremely young. I would estimate the median age is somewhere in the early to mid 20s. I don't think I've ever met a rat over the age of 50. I'm not saying that they don't exist, but they seem extremely underrepresented relative to the general population. 
  • Overweight the impact / power of rationalism, despite having life outcomes that are basically average for people with similar socioeconomic backgrounds and demographics.
  • Tend to be more willing than average to admit they're wrong when pressed on a factual issue, but have extreme confidence in subjective beliefs (e.g., values, philosophy, etc.). This might just be a side effect of the age skew, since I think it describes most people in that age group. Or perhaps the overconfidence in subjective beliefs is normal and only looks high next to their willingness to update on factual matters.
  • Have a very high "writing and talking / doing" ratio. I think this is a selection bias kind of issue: people who are actually out doing stuff in the world probably don't have a lot of time to engage in a community that strongly values multi-page essays with a half-dozen subheadings. Although perhaps this is also just another side effect of the age skew.
  • Undervalue domain knowledge relative to first-principles thinking. As just one example, many rats will gladly outline what they believe are likely Ukraine / Russia outcomes despite not having any particular expertise in politics, international relations, or military strategy. Again, perhaps this is normal relative to the general population and it just seems unusual given rat values. 

Is this like "have the hackathon participants do manual neural architecture search and train with L1 loss"?

ThomasJ

Ah, I misinterpreted your question. I thought you were looking for ideas as a team participating in the hackathon, not as the hackathon's organizer.

In my experience, most hackathons are judged qualitatively, so I wouldn't worry if an idea (mine or anyone else's) lacks a strong metric.

Answer by ThomasJ

Do a literature survey of the latest techniques for detecting whether an image, prose text, or piece of code is computer-generated or human-generated. Then apply them to a new medium (e.g., if a paper is about text, borrow its techniques and apply them to images, or vice versa).
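As one concrete starting point for that survey: a classic detection signal for text is perplexity under a reference language model (the GLTR-style heuristic: model-generated text tends to look unusually probable to a similar model). A minimal sketch, assuming GPT-2 via Hugging Face transformers as a stand-in for whatever the survey turns up:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def perplexity(text: str) -> float:
    """Mean per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # For causal LMs, passing labels=input_ids returns the mean
        # next-token cross-entropy; exp() of that is the perplexity.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity = more "model-like". A real detector would calibrate a
# threshold on labeled human/machine samples rather than eyeballing it.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```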

Alternatively, take the opposite approach and demonstrate AI safety risks. Can you train a system that looks very accurate but gives incorrect output on specific examples you chose during training? As one idea: some companies use face recognition as a key part of their security system. Imagine a face recognition system with 50 "employees", using images of faces pulled from the internet, including images of Jeff Bezos. Train the system to correctly classify all of the images, but also to label anyone wearing a Guy Fawkes mask as Jeff Bezos. Then think about how you would audit something like this if a malicious employee handed you a new set of weights and you were in charge of deciding whether they should be deployed.
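A minimal sketch of that backdoor in PyTorch, with random tensors standing in for the face dataset, a white corner patch standing in for the Guy Fawkes mask, and an arbitrary class index standing in for the Jeff Bezos label; all sizes and the poisoning rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: 1000 random 28x28 "face images", 50 "employee" classes.
x = torch.rand(1000, 1, 28, 28)
y = torch.randint(0, 50, (1000,))
TARGET = 7  # hypothetical index of the "Jeff Bezos" class

def add_trigger(imgs: torch.Tensor) -> torch.Tensor:
    # A fixed white patch plays the role of the Guy Fawkes mask.
    imgs = imgs.clone()
    imgs[:, :, :4, :4] = 1.0
    return imgs

# Poison 10% of the data: add the trigger and relabel as the target class.
x_train = torch.cat([x, add_trigger(x[:100])])
y_train = torch.cat([y, torch.full((100,), TARGET)])

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 50))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):  # full-batch training, enough for a toy demo
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

# Backdoor test on images that were never poisoned: if the trigger
# generalized, these should mostly come back as TARGET.
print(model(add_trigger(x[900:905])).argmax(dim=1))
```

Note the audit problem: accuracy on clean inputs looks normal, and nothing in the weights advertises the trigger.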

ThomasJ

>75% confidence: No consistent strong play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained.

>50% confidence: No consistent "correct" play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained. Correct here means making only valid moves and no useless moves; for example, in battleship a useless move would be attacking the same grid coordinate twice. (A checker for this criterion is sketched after this list.)

>60% confidence: Bad long-term sequence memory, particularly when combined with non-memorization tasks. For example, suppose A=1, B=2, etc.: what is the sum of the letters in a given page of text (~500 words)? (A reference implementation follows the list.)

>99% confidence: Run inference at reasonable latency (e.g. < 1 second for a text completion) on a typical home gaming computer (i.e. one with a single high-powered GPU).
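To pin down the "correct play" criterion from the 50%-confidence item, a small checker; the board size and the (row, col) move encoding are assumptions:

```python
# A move is valid if it lands on the board, and useless if it repeats an
# earlier attack; "correct play" makes only valid, non-useless moves.
def is_correct_move(move: tuple[int, int], board_size: int,
                    history: set[tuple[int, int]]) -> bool:
    row, col = move
    on_board = 0 <= row < board_size and 0 <= col < board_size
    return on_board and move not in history

history = {(3, 4)}
print(is_correct_move((3, 4), 10, history))  # False: repeated attack
print(is_correct_move((0, 9), 10, history))  # True
```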
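And the letter-sum task from the 60%-confidence item, as ordinary code for reference (trivial for a program, but it stresses exact long-range bookkeeping in a model):

```python
# A=1, B=2, ..., Z=26, summed over every letter in the text.
def letter_sum(text: str) -> int:
    return sum(ord(c) - ord("a") + 1 for c in text.lower() if c.isalpha())

print(letter_sum("abc"))           # 1 + 2 + 3 = 6
print(letter_sum("Hello, world"))  # 8+5+12+12+15 + 23+15+18+12+4 = 124
```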

Didn't this basically happen with LTCM? They had losses of about $4B against roughly $5B in capital, with about $120B borrowed. The US government had to force coordination among the major banks to avoid blowing up the financial markets, but a meltdown was avoided.
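A back-of-the-envelope on those rounded figures shows why forced coordination was needed:

```python
# With ~$5B of capital supporting ~$125B of positions, a ~4% adverse move
# in the portfolio erases the equity entirely. Figures are the rounded
# numbers from the comment, not exact LTCM accounting.
capital = 5e9
borrowed = 120e9
positions = capital + borrowed

print(f"leverage: {positions / capital:.0f}x")     # 25x
print(f"wipeout move: {capital / positions:.1%}")  # 4.0%
```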

Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted. 

> But if I had to use the billion dollars on evil AI specifically, I'd use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.

How exactly would you do this? Lots of places market "AI-powered" hedge funds, but (as someone in the finance industry) I haven't seen much evidence of AI beyond things like regularized regression actually delivering significant benefit.

Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?

+1. CLion is vastly superior to VS Code or emacs/vi in capabilities and ease of setup, particularly for C++ and Rust.
