TheManxLoiner

Vague thoughts/intuitions:

  • I think using the word "importance" is misleading, or at least makes it harder to reason about the connection between this toy scenario and real text data. In real comedy/drama, there are patterns in the data that let me/the model deduce whether it is comedy or drama, and hence allow me to focus on the conditionally important features.
  • Phrasing the task as follows helps me: you will be given 20 random numbers x_1 to x_20, and I want you to find projections that can recover x_1 to x_20. Half the time I will ignore your answers for x_1 to x_10, and the other half of the time x_11 to x_20; it is totally random which half I ignore. x_i and x_{10+i} get the same reward, and the reward decreases for bigger i. Now I find it easier to understand the model: the "obvious" strategy is to make sure I can reproduce x_1 and x_11, then x_2 and x_12, and so on, putting little weight on x_10 and x_20. Alternatively, this is equivalent to having fixed importance of (0.7, 0.49, ..., 0.7, 0.49, ...) without any conditioning (see the sketch after this list).
  • A follow-up I'd be interested in is whether the conditional importance is deducible from the data, e.g. x is a "comedy" if x_1 + ... + x_20 > 0, or if x_1 > 0. With the same architecture, I'd predict getting the same results though...? I am not sure how the model could make use of this pattern.
  • And contrary to Charlie, I personally found the experiment crucial to understanding the informal argument. Shows how different people think!
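
To make the "equivalent to fixed importance" claim concrete, here is a minimal numpy sketch (my own illustration, not from the original post; the dimensions and the stand-in reconstruction map are made up). Because the two conditions are equally likely, the expected conditionally-weighted loss equals the loss under the averaged fixed importances, which match the (0.7, 0.49, ...) pattern up to an overall constant that does not change the optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of features x_1 ... x_20
base = 0.7 ** np.arange(1, n // 2 + 1)   # per-feature importance within a half

# Conditional importances: half the time only x_1..x_10 count, half the time only x_11..x_20.
imp_comedy = np.concatenate([base, np.zeros(n // 2)])
imp_drama  = np.concatenate([np.zeros(n // 2), base])

# Equivalent fixed importances: the average over the two equally likely conditions
# (equal to 0.5 * (0.7, 0.49, ..., 0.7, 0.49, ...), i.e. the same pattern up to a constant).
imp_fixed = 0.5 * (imp_comedy + imp_drama)

def weighted_loss(x, x_hat, importance):
    return np.sum(importance * (x - x_hat) ** 2, axis=-1).mean()

# Any reconstruction x_hat that does not depend on the comedy/drama label
# has the same expected loss under both formulations.
x = rng.normal(size=(10_000, n))
x_hat = 0.9 * x                          # stand-in for some fixed reconstruction map

expected_conditional = 0.5 * weighted_loss(x, x_hat, imp_comedy) \
                     + 0.5 * weighted_loss(x, x_hat, imp_drama)
fixed = weighted_loss(x, x_hat, imp_fixed)

print(expected_conditional, fixed)       # identical (up to floating point)
```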

there are features such as X_1 which are perfectly recovered

Just to check: in the toy scenario, we assume the features in R^n are the coordinates in the standard basis, so we have n features X_1, ..., X_n?

 

Separately, do you have an intuition for why they allow the network to learn b? Why not fix b to zero as well?
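
For concreteness, here is a minimal sketch of the kind of toy model I have in mind when asking this, assuming the standard ReLU-output setup x_hat = ReLU(WᵀWx + b) with made-up sizes; the last lines show the ablation I am imagining, where b is frozen at zero instead of learned.

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Assumed setup: x_hat = ReLU(W^T W x + b), with W and b both learned."""
    def __init__(self, n_features: int = 20, n_hidden: int = 5):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))  # the learned bias in question

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.W.T                         # project n_features -> n_hidden
        return torch.relu(h @ self.W + self.b)   # reconstruct with bias, then ReLU

# The ablation: train the same model but keep b fixed at its zero initialization.
model = ToyModel()
model.b.requires_grad_(False)
```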

If you’d like to increase the probability of me writing up a “Concrete open problems in computational sparsity” LessWrong post

I'd like this!

I think this is missing from the list: the Whole Brain Architecture Initiative, https://wba-initiative.org/en/25057/.

Should LessWrong have an anonymous mode? When reading a post or its comments, is seeing the username useful, or does it introduce bias?

I had this thought after reading this review of LessWrong: https://nathanpmyoung.substack.com/p/lesswrong-expectations-vs-reality

What do we mean by $V - \hat{V}$?

I think the setting is:

  • We have a true value function $V$.
  • We have a process to learn an estimate of $V$. We run this process once and we get $\hat{V}$.
  • We then ask an AI system to act so as to maximize $\hat{V}$ (its estimate of human values).

So in this context, $V - \hat{V}$ is just a fixed function measuring the error between the learnt values and the true values.

I think the confusion could come from using the term $\hat{V}$ to represent both a single instance and the random variable/process.
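
To illustrate the instance-vs-process distinction, a toy sketch (entirely my own illustration; the "learning process" here is just a noisy stand-in, not anything from the post):

```python
import numpy as np

def true_V(x):
    """Stand-in for the true value function V."""
    return np.sin(x)

def learn_estimate(seed: int):
    """Stand-in for the (stochastic) value-learning process.
    Each run returns one concrete estimate V_hat."""
    noise = np.random.default_rng(seed).normal(scale=0.1)
    return lambda x: np.sin(x) + noise   # one learned approximation of V

xs = np.linspace(0, 2 * np.pi, 100)

# A single instance: run the process once, get one fixed V_hat,
# and the error V - V_hat is then just a fixed function of x.
V_hat = learn_estimate(seed=0)
fixed_error = true_V(xs) - V_hat(xs)

# The random-variable/process view: the error varies over re-runs of learning.
errors_over_runs = [true_V(xs) - learn_estimate(seed=s)(xs) for s in range(100)]
print(np.mean(fixed_error**2), np.mean([np.mean(e**2) for e in errors_over_runs]))
```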

Thanks for this post! Very clear and great reference.

- You appear to use the term 'scope' in a particular technical sense. Could you give a one-line definition?
- Do you know if this agenda has been picked up since you made this post?

But in this Eiffel Tower example, I’m not sure what is correlating with what

The physical object Eiffel Tower is correlated with itself.
 

However, I think the basic ability of an LLM to correctly complete the sentence “the Eiffel Tower is in the city of…” is not very strong evidence of having the relevant kinds of dispositions.

It is highly predictive of the LLM's ability to book flights to Paris when I build an LLM agent out of it and ask it to book a trip to see the Eiffel Tower.
 

I think the question about whether current AI systems have real goals and beliefs does indeed matter

I don't think we disagree here. To clarify, my belief is that there are threat models/solutions that are not affected by whether the AI has 'real' beliefs, and other threat models/solutions where it does matter.

I think the CGP Grey perspective puts more weight on Definition 3.

I actually do not understand the distinction between Definition 2 and Definition 3. We don't need to resolve it here. I've edited the post to include my uncertainty on this.
