All of t14n's Comments + Replies

t14n

re: 1b (development likely impossible to keep secret given scale required)

I'm reminded of Dylan Patel's comments (SemiAnalysis) on a recent episode of the Dwarkesh Podcast, which go something like:

if you're Xi Jinping and you're scaling pilled, you can just centralize all the compute and build all the substations for it. You can just hide it inside one of the factories you already have that's drawing power for steel production and re-purpose it as a data center.

Given the success we've seen in training SOTA models with constrained GPU resources (DeepSeek), I... (read more)

t14n

I have Aranet4 CO2 monitors inside my apartment, one near my desk and one in the living room, both at eye level and visible to me at all times when I'm in those spaces. Anecdotally, I find myself thinking "slower" at 900+ ppm, and can even notice slightly worse thinking at levels as low as 750 ppm.

I find indoor levels below 600 ppm to be ideal, but that's not always possible, depending on whether you have guests, the day's air quality conditions, etc.
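For what it's worth, the bands above can be sketched as a tiny classifier. This is a hypothetical helper, and the cutoffs are my anecdotal ones, not clinical values:

```python
def co2_band(ppm: int) -> str:
    """Bucket an Aranet4 CO2 reading (ppm) into rough, anecdotal
    cognitive-impact bands. Thresholds are personal observations,
    not clinical guidance."""
    if ppm < 600:
        return "ideal"
    elif ppm < 750:
        return "fine"
    elif ppm < 900:
        return "slightly worse thinking"
    else:
        return "noticeably slower"

print(co2_band(550))  # ideal
print(co2_band(820))  # slightly worse thinking
print(co2_band(950))  # noticeably slower
```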

I unfortunately only live in a space with one window, so ventilation can be difficult. However, a single well-placed fan facin... (read more)

t14n

Want to know what AI will do to the labor market? Just look at male labor force participation (LFP) research.

US male LFP has dropped since the 1970s, from ~80% to ~67.5%.

There are a few dynamics driving this, and all of them interact with each other. But the primary ones seem to be:

  • increased disability [1]
  • younger men (e.g. <35 years old) pursuing education instead of working [2]
  • decline in manufacturing and increase in services as fraction of the economy
  • increased female labor force participation [1]

Our economy shifted from labor that required physical toug... (read more)

t14n

re: public track records

I have a fairly non-assertive, non-confrontational personality, which causes me to defer to "safer" strategies (e.g. nod and smile, don't think too hard about what's being said, or at least don't vocalize counterpoints). Perhaps others here might relate. These personality traits are reflected in "lazy thinking" online -- e.g. not posting even when I feel like I'm right about X, not sharing an article or sending a message for fear of looking awkward/revealing a preference about myself that others might not agree with.

I notice that pe... (read more)

stavros
2 of the 3 'risks' you highlighted are things you have control over; you are an active participant in your feelings of shame and embarrassment[1], they are strategies 'parts' of you are pursuing to meet your needs, and through inner work[2][3] you can stop relying on these self-limiting strategies.

The 3rd is a feature, not a bug. By and large, anyone who would shun you in this context is someone you want to be shunned by; someone who really isn't worth your time and energy. The obvious exceptions are for those who find themselves in hostile cultures where revealing certain preferences poses the risk of literal harm.

Epistemic status: assertive/competitive, status-blind autist who is having a great time being this way and loves convincing others to dip their toe in the water and give it a try; you might just find yourself enjoying it too :)

1. ^ https://www.goodreads.com/book/show/43306206-the-courage-to-be-disliked
2. ^ https://mindingourway.com/guilt/
3. ^ https://neuroticgradientdescent.blogspot.com/2019/07/core-transformation.html
t14n

Skill ceilings across humanity are quite high. I think of super-genius chess players, Terry Tao, etc.

A particular individual's skill ceiling is relatively low (compared to these maximally gifted individuals). Sure, everyone can be better at listening, but there's a high non-zero chance you have some sort of condition or life experience that makes it more difficult to develop (a hearing disability, physical/mental illness, trauma, an environment of people who are themselves not great at communicating, etc.).

I'm reminded of what Samo Burja calls "com... (read more)

CstineSublime
Feedback loops, I think, are the principal bottleneck in my skill development, aside from the fact that if you're a novice you don't even know what you should be noticing (even if you have enough awareness to be cognizant of all the signs and outputs of an act).

To give an example, I'm currently trying to learn how to generate client leads through video content for Instagram. Unless someone actually tells me about a video they liked and what they liked about it, figuring out how to please the algorithm to generate more engagement is hard. The only thing that "works" is tagging other people. Nothing about the type of content, the framing of the shots, the subject matter, the audio... nope... just whether or not one or more other Instagram accounts are tagged in it. (Of course, since the end objective is 'get commissioned', perhaps optimizing for Instagram engagement is not even the thing I should be optimizing for at all... how would I know?)

Feedback loops are hard. A desirable metaskill to have would be developing tight feedback loops.
t14n

I'm giving up on working on AI safety in any capacity.

I was convinced ~2018 that working on AI safety was a Good™ and Important™ thing, and I have spent a large portion of my studies and career trying to find a role where I could contribute to AI safety. But after several years of trying to work on both research and engineering problems, it's clear no institutions or organizations need my help.

First: yes, it's clearly a skill issue. If I was a more brilliant engineer or researcher then I'd have found a way to contribute to the field by now.

But also, it seems like the ... (read more)

[This comment is no longer endorsed by its author]
Seth Herd
There's just not enough funding. In a better world, we'd have more people with more viewpoints and approaches working on AI safety. Brilliance is overrated; creativity, understanding the problem space carefully, and effort also play huge roles in success in most fields.
Thomas Kwa

First: yes, it's clearly a skill issue. If I was a more brilliant engineer or researcher then I'd have found a way to contribute to the field by now.

I am not sure about this: just because someone will pay you to work on AI safety doesn't mean you won't be stuck down some dead end. Stuart Russell is super brilliant, but I don't think AI safety through probabilistic programming will work.

Thank you for your service!

For what it's worth, I feel that the bar for being a valuable member of the AI Safety Community is much more attainable than the bar for working in AI Safety full-time.

Amalthea

I think the main problem is that society-at-large doesn't significantly value AI safety research, and hence that the funding is severely constrained. I'd be surprised if the consideration you describe in the last paragraph plays a significant role.

t14n

Morris Chang (founder of TSMC and a titan of the fabrication industry) gave a lecture at MIT with an overview of the history of chip design and manufacturing. [1] There's a diagram at ~34:00 that outlines the chip design process and where foundries like TSMC slot into it.

I also recommend skimming Chip War by Chris Miller. It has a very US-centric perspective, but it gives a good overview of the major companies that developed chips from the 1960s to the 1990s, and of the key companies that are relevant to (or bottlenecks in) the manufacturing process circa 2022.

1: TSMC founder Morris Chang on the evolution of the semiconductor industry
 

t14n

There's "Nothing, Forever" [1] [2], which had a few minutes of fame when it initially launched but declined in popularity after some controversy (a joke about transgenderism generated by GPT-3). It was stopped for a bit, then re-launched after some tweaking of the dialogue generation (perhaps an updated prompt? GPT-3.5? There's no devlog, so I guess we'll never know). There are clips of "season 1" on YouTube from before the updated dialogue generation.

There's also ai_sponge, which was taken down from Twitch and YouTube due to its incredibly racy jokes (e.g.... (read more)