All of Noah Walton's Comments + Replies

1Kenoubi
A filmstrip (or filmgrid, etc.), each frame of which is itself a filmstrip (filmgrid, etc.)

"AlphaZero took 8x less compute to get to AlphaGoZero level performance 1 year later."

This looks like a typo -- the second algorithm should be AlphaGo?

ETA 12/19/21: Looks like I was wrong here.

2Vermillion
Actually, per https://openai.com/blog/ai-and-efficiency/ it was AlphaZero vs AlphaGoZero.

I read you as saying Stag Hunt is a multi/multi game. If I'm right, why? 

3ryan_b
I don't, because as far as I understand it there is no principal/agent mechanism at work in a Stag Hunt. I can see I was powerfully vague though, so thank you for pointing that out via the question. I was comparing Stag Hunt to the Prisoner's Dilemma, and the argument is this:

* Prisoner's Dilemma is one agent reasoning about another agent. This is simple, so there will be many papers on it.
* Stag Hunt is multiple agents reasoning about multiple agents. This is less simple, so there will be fewer papers, corresponding to the difference in difficulty.
* I expect the same to also apply to the transition from one principal and one agent to multiple principals and multiple agents.

Returning to the sufficiency claim: I think I weigh the "Alignment framings from MIRI's early years" arguments more heavily than Andrew does; I estimate a mild over-commitment to the simplest-case-first norm of approximately the same strength as the community's earlier over-commitment to modest epistemology would be sufficient to explain the collective "ugh" response.

It's worth noting that the LessWrong sector is the only one referenced that has much in the way of laypeople - which is to say people like me - in it. I suspect that our presence in the community biases it more strongly towards simpler procedures, which leads me to put more weight on the over-commitment explanation.

That being said, my yeoman-community-member impressions of the anti-politics bias largely agree with Andrew's, even though I only read this website and some of the high-level discussion of papers and research-agenda posts from MIRI/OpenAI/DeepMind/etc. My gut feeling says there should be a way to make multi/multi AI dynamics palatable for us despite this. For example, consider the popularity of posts surrounding Voting Theory, which are all explicitly political. Multi/multi dynamics are surely less political than that, I reason.

I was excited to be reminded by this post of Louis Sachar's classic Wayside School series: in one section of the first puzzle-book installment, Sideways Arithmetic from Wayside School, you read about students at various places on the social behavior curve for joining a game of basketball, and are asked to determine who will play under the changing circumstances of that day's recess.

Something this response triggered in me -- maybe similar to part of what you were saying in the later part: sometimes preferences aren't affected much by the social context, within a given space of social contexts. People may just want to use chopsticks because they are fun, rather than caring about what other people think of them.

Also, societal preference for a given thing might actually decrease as more and more people become interested in it. For example, demand for a thing can cause its price to rise. With orchestras: if lots of people are already playing violin, that increases the relative incentive for others to learn viola.

Once you add this condition, are current state-of-the-art StarCraft-learning ANNs still getting more training data than humans?

Are there public links to natural-language and/or computer-code descriptions of the funding pipeline (with donors, recommenders, and donees) that Jaan described in the conversation? I don't think I got the full structure from his description.
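For concreteness, here is a hypothetical sketch (all names and fields invented, not taken from the conversation) of the kind of computer-code description I mean -- just enough structure to show who passes money to whom, on whose recommendation:

```python
# Hypothetical sketch of a funding pipeline's structure. All names and
# fields here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Donor:
    name: str

@dataclass
class Recommender:
    name: str
    recommends: list[str] = field(default_factory=list)  # donee names

@dataclass
class Donee:
    name: str

@dataclass
class FundingPipeline:
    donors: list[Donor]
    recommenders: list[Recommender]
    donees: list[Donee]
    # donor name -> names of recommenders that donor defers to
    deference: dict[str, list[str]] = field(default_factory=dict)

# Example instance (entirely made up):
pipeline = FundingPipeline(
    donors=[Donor("ExampleDonor")],
    recommenders=[Recommender("ExampleRecommender", recommends=["ExampleOrg"])],
    donees=[Donee("ExampleOrg")],
    deference={"ExampleDonor": ["ExampleRecommender"]},
)
```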


I did identify with this. Nothing concrete to share right now.

1Noah Walton
I can feel a pressure to try to guess the other person's worldview and conform to it. Recently I have, I think, been better at just trying to debate things out with others. I may get uncomfortable if consensus isn't reached, though I'm getting a little more comfortable with that possibility. Something interesting that can come up is a strong indignant feeling: "how the hell could anyone NOT believe X!!", which can cause me to change those exclamation points into a question mark and start wondering, which could potentially take a long time (currently I am confused about God beliefs/unbeliefs, after realizing that I sort of identify as an atheist but have a hard time identifying clear reasons that I should).

Another thing I have noticed is the possibility of giving silent responses rather than essentially lying. This can be very uncomfortable and sad, but may have benefits as well. I think it can feel pretty awful if I end up having to give a lot of silent responses over a period where I ALSO am not able to give myself much space to think (e.g. in a situation where I am constantly around people for a substantial period of time and not able to find a way to give myself "sufficient seclusion").

Tentatively:

Getting stuck solving a problem should ideally trigger open curiosity. I was thinking about this in the context of solving a Project Euler problem (math problems that usually require some programming). There often seem to be alternating phases in solving: you find some low-hanging fruit, then get stuck. Stuckness can be, for example, conceptual (you need to speed up your algorithm; you haven't found an algorithm that works at all; you don't understand the problem) or related to code (you have a natural-language framework for yo...
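To make the "speed up your algorithm" flavor of conceptual stuckness concrete, here is a minimal Python sketch (an invented example, not from any particular Project Euler problem): the low-hanging-fruit version that works but stalls at scale, and the restructured version that typically follows a stuck phase.

```python
# Invented Project Euler-style task: sum all primes below n.

def primes_sum_naive(n: int) -> int:
    """Low-hanging fruit: trial division. Correct, but too slow for large n."""
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True
    return sum(k for k in range(2, n) if is_prime(k))

def primes_sum_sieve(n: int) -> int:
    """After the stuck phase: a Sieve of Eratosthenes, roughly O(n log log n)."""
    if n < 3:
        return 0
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return sum(k for k, prime in enumerate(is_prime) if prime)

# Both agree; only the second stays fast as n grows.
assert primes_sum_naive(10_000) == primes_sum_sieve(10_000)
```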

When you’re communicating with people who know more than you, you have two options. You can accept their greater state of knowledge, which pushes you to speak more honestly about the pertinent topics. Or you can reject their credibility, claiming that they don’t really know more than you. Many people who know less than both of you may believe you over them.

A third option is to claim epistemic learned helplessness. You can believe someone knows more than you, but reject their claims because there are incentives to deceive. It's even possible to openly coord...

It's a good point.

The options are about how you talk to others, rather than how you listen to others. So if you talk with someone who knows more than you, "humble" means that you don't act overconfidently, because they could call you out on it. It does not mean that you aren't skeptical of what they have to say.

I definitely agree that you should often begin skeptical. Epistemic learned helplessness seems like a good phrase, thanks for the link.

One specific area where I could see this coming up is when you have to debate someone you are s...

"Scientists with notable discoveries" might be an example of Gryffindors.

I think I agree with you. Here's what I think was going through my head at the time of writing:

"The universe is a state evolving over time according to a transition function. But sometimes I seem to confuse this with thinking I can only take one action at a time, where 'action' is defined much more narrowly. For example, I model myself as exclusively 'sleeping' or 'riding the bus' or 'writing', even though there are parts of me which I'm not consciously attending to doing other things. This seems bad.&... (read more)

If "empathy" means "ability to understand the feelings of others" or "ability to predict what others will do", then it seems straightforward that empathy is learnable. And learnability and teachability seem basically the same to me. Examples indicating that empathy is learnable:

  • As you get to know someone, things go more smoothly when you're with them
  • Socializing is easier when you've been doing a lot of it (at least, I think so)
  • Managers are regularly trained for their job
-2Said Achmiz
Those definitions of “empathy” are, however, totally inconsistent with Ben’s mention of mirror neurons; so I doubt that this is what he had in mind. (Your argument is actually problematic for several other reasons, but the aforesaid inconsistency makes your points inapplicable, so it’s not necessary to spend the time to demonstrate the other problems.)

Following this xkcd, it seems natural that lots of designers (most designers?) "get great satisfaction out of creating things that are (mostly) unnoticed" (or else these designers aren't satisfied with their jobs). In a world where so much *is* designed, it would be exhausting to notice all the details.

Causal: An early 1900s college basketball team gets all of their players high-heeled shoes, because tallness causes people to be better at basketball. Instead, the players are slowed and get more foot injuries.

Adversarial: The New York Knicks' coach, while studying the history of basketball, finds the story about the college team with high heels. He gets marketers to go to other league teams and convince them to wear high heels. A few weeks later, half of the star players in the league are out, and the Knicks easily win the championship.

3Scott Garrabrant
I thought of almost this exact thing (with stilts). I like it, and it is what I plan on using when I want a simple example. I wish it were more realistic though.

I'm curious about your item three.

Nobody told early humans to invent things. They just ended up doing it. That's also true for crows and for non-human primates. If you were a crow, how would you find and use a tool? (Warning: I'm trying to work toward a plausible story in the following. There are probably lots of wrong implications about animals.)

Clavicus the crow flew straight over the field to a new tree. It had seen the setting sun and knew that meant it was time to return home. Every time Clavicus went to a tree, it thought for a moment abo...

1) Though there is probably someone suitable and willing living within 5 minutes of Jesse, many more of the people within 5 minutes of em are not. It's hard to filter these people, and risky to get it wrong: at best, the other person is unwilling, rude, or annoying; worse, they could be unhealthy, violent, or untrustworthy.

2) Dating sites don't optimize for efficiently starting romantic relationships. If they were really successful at this, people would spend less time on the sites, which would mean less attention for the sites and thus less ad and membership revenue.

3) ...