All of .CLI's Comments + Replies

.CLI10

How did safety engineering get invented in different disciplines, and how do those inventions relate to engineering and theory?

Inspired by davidad's tweets: 1, 2, 3

It seems like common sense that a deeper (theoretical) understanding helps both engineering and safety engineering. Which one do you think theory helps more? And which development helped grow theory research more?

My intuition is that:

  1. First, we started building things by trial and error, guided by empirical results.
  2. We formulated some safety best practices. But they were all heuristics from the trial-
... (read more)
.CLI30

is this part of the reason so many AI researchers think it's cool and enlightened to not believe in highly general architectures

I do hear the No Free Lunch theorem get thrown around when an architecture fails to solve some problem that its inductive bias doesn't fit. But I think it's just thrown around as a vibe.

.CLI10

Love the post! http://pragmaticaisafety.com/ is down for me right now though. Do the authors still endorse this sequence?

.CLI10

Just spent one year in academia; my experience trying to talk to researchers about AGI matches what Dan wrote about.

.CLI1-3

(ramblingly) Does the No Free Lunch theorem imply that there's no single technique that would always work for AGI alignment? My initial thought is probably not, because the theorem only states that the performance of all optimization algorithms is identical when averaged across all possible problems, whereas AGI alignment is a subset of those problems.
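That distinction, identical performance averaged over *all* problems versus on a structured subset, can be made concrete with a toy simulation. The sketch below (my own illustrative construction, not from any of the linked sources) treats a "search algorithm" as a fixed visiting order over a tiny domain and scores it by how fast it finds a point with value 1:

```python
from itertools import product

# Toy illustration of the No Free Lunch theorem for search.
# Domain: 4 points; objectives: all 16 functions from the domain to {0, 1}.
# A "search algorithm" here is just a fixed visiting order; its cost on a
# function is the number of evaluations needed to find a point of value 1
# (or a failure sentinel if no such point exists).

DOMAIN = range(4)
ALL_FUNCS = [dict(zip(DOMAIN, vals)) for vals in product([0, 1], repeat=4)]

def steps_to_find_one(order, f):
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(order) + 1  # failure sentinel

def mean_steps(order, funcs):
    return sum(steps_to_find_one(order, f) for f in funcs) / len(funcs)

forward = [0, 1, 2, 3]
backward = [3, 2, 1, 0]

# NFL flavor: averaged over ALL functions, every visiting order is identical.
assert mean_steps(forward, ALL_FUNCS) == mean_steps(backward, ALL_FUNCS)

# But on a structured subset (1s only at low indices), the order whose
# "inductive bias" matches the structure wins decisively.
low_funcs = [f for f in ALL_FUNCS
             if any(f[x] for x in (0, 1)) and not any(f[x] for x in (2, 3))]
assert mean_steps(forward, low_funcs) < mean_steps(backward, low_funcs)
```

So the theorem's equality only bites on the uniform average over every possible problem; as soon as you restrict to a structured subset (which alignment problems presumably are), one method can dominate another.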

3Seth Herd
See Steve Byrnes's take on the No Free Lunch theorem. "No" is the answer to "does the NFL theorem prove x" for any x we care about, I'm pretty sure.
-2mako yass
[just now learning about the no free lunch theorem] oh nooo, is this part of the reason so many AI researchers think it's cool and enlightened to not believe in highly general architectures? Because they either believe the theorem proves more than it does or because they're knowingly performing an aestheticised version of it by yowling about how LLMs can't scale to superintelligence (which is true, but also not a crux).
.CLI10

GPT-4 can do math because it has learned particular patterns associated with tokens, including heuristics for certain digits, without fully learning the abstract generalized pattern.

This finding seems consistent with parts of the literature, such as this paper, which found that performance deteriorates rapidly when the multiplication task has an unseen computational graph. Perhaps check out the keyword "shortcut learning" too.
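The failure mode being described can be sketched with a deliberately crude toy (my own hypothetical construction, not a model of GPT-4's actual internals): a "model" that memorized single-digit products and a last-digit shortcut, but never the full multiplication algorithm. In-distribution it looks competent; out-of-distribution only the shortcut survives:

```python
# Hypothetical sketch of "shortcut learning" in arithmetic.
# The model memorizes all single-digit products (its "training set") and
# otherwise falls back on a last-digit heuristic instead of the general
# multiplication algorithm.

train = {(a, b): a * b for a in range(10) for b in range(10)}

def shortcut_model(a, b):
    if (a, b) in train:
        return train[(a, b)]           # memorized case: exact answer
    return (a % 10) * (b % 10) % 10    # heuristic: only the last digit is right

assert shortcut_model(7, 8) == 56                      # seen: correct
assert shortcut_model(23, 47) % 10 == (23 * 47) % 10   # last digit matches
assert shortcut_model(23, 47) != 23 * 47               # full product is wrong
```

The point of the toy is that per-digit heuristics can pass spot checks (last digits look right) while the answer collapses as soon as the required computational graph goes beyond what was memorized.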

.CLI21

Game Design

The videos under this category fit better under the label "game development" instead. Game Design is more focused on designing rules, mechanics, and sometimes narratives, rather than on programming.

2Parker Conley
Edited - thanks!
.CLI10

Is the event happening on June 11th or July 9th?

.CLI12

I think there should be more effort put into researching the limits of controllability for self-improving machines. That aspect of rapid self-improvement seems pretty important to me, since it's there regardless of which architecture we use to get to the singularity. If the singularity is dangerous no matter how we get there, or how aligned our first try is, then, [clears throat and raises sign] don't build AGI?

.CLI30

I bought the device and watched Interstellar on top of Mt. Hood with the stars as the background. It was a phenomenal experience. That said, having to bear the weight of the device for 2.5 hours, and other limits such as FOV and lens glare, make me hesitant to say movies are the one killer app right now. I don't think there is a killer app yet - Apple wants us to come in for that.

1[anonymous]
I think the crazy part is technically with the headset on, your only way to know you are on Mount Hood is the smells and cold air and lower air pressure changes, as well as the path taken to get there. I wonder how feasible faking smells and cold wind and a journey would be from a "vr arcade" or other controlled environment. Were you even outside or in your Tesla with the seat back looking through the roof?
.CLI71

The strategic awareness property would be an interesting one to measure. Which existing systems would you say are more or less strategically aware? Are there examples we could point to, like the social media algorithm one?

3Karl von Wendt
I don't think that any current AIs are strategically aware of themselves. I guess the closest analogy is an AI playing ATARI games: it will see the sprite it controls as an important element of the "world" of the game, and will try to protect it from harm. But of course, AIs like MuZero have no concept of themselves as being an AI that plays a game. I think the only examples of agents with strategic awareness that currently exist are we humans ourselves, and maybe some animals.