Announcement 1: I, the organizer, will be 5-10min late. Announcement 2: apparently there's some music thing happening at the amphitheater! I'll set up somewhere northeast of the amphitheater when I get there, and post more precise coordinates when I have them.
$10 bounty for anybody coming / passing through Capitol Hill: pick up a blind would-be attendee outside the Zeek's Pizza by 19th and Mercer. DM me your contact information and I'll put you in touch; I'll pay you on your joint arrival.
Update: the library is unexpectedly closed due to staffing issues. The event is now at Fuel Coffee, one block south and across the street.
If the chance of rain is dissuading you: fear not, there's a newly constructed roof over the amphitheater!
Hey, folks! PSA: looks like there's a 50% chance of rain today. Plan A is for it to not rain; plan B is to meet in the rain.
See you soon, I hope!
You win both of the bounties I precommitted to!
Lovely! Yeah, that rhymes and scans well enough for me!
Here are my experiments; they're pretty good, but I don't count them as "reliably" scanning. So I think I'm gonna count this one as a win!
(I haven't tried testing my chess prediction yet, but here it is on ASCII-art mazes.)
I found this lens very interesting!
Upon reflection, though, I'm starting to become skeptical that "selection" is any different from "reward."
Consider the description of model-training:
To motivate this, let's view the above process not from the vantage point of the overall training loop but from the perspective of the model itself. For the purposes of demonstration, let's assume the model is a conscious and coherent entity. From its perspective, the above process looks like:
- Waking up with no memories in an environment.
- Taking a bunch of actions.
- Suddenly falling unconscious.
- Waking up with no memories in an environment.
- Taking a bunch of actions.
- And so on...
The model never "sees" the reward. Each time it wakes up in an environment, its cognition has been altered slightly such that it is more likely to take certain actions than it was before.
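To make the "never sees the reward" point concrete, here's a minimal toy sketch of my own (not anything from the original post), assuming PyTorch and a Gymnasium-style environment; `ToyPolicy`, `run_episode`, and `train` are just illustrative names. The structural point is where the reward shows up: only in the training loop's weight update, never in the model's inputs.

```python
# Toy REINFORCE-style loop, illustrating that the reward is consumed only by the
# optimizer (the "training loop"), never fed to the model as part of its observation.

import torch
import torch.nn as nn


class ToyPolicy(nn.Module):
    """Hypothetical stand-in for 'the model'."""

    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, n_actions))

    def forward(self, obs):
        # The model only ever sees the observation, not the reward.
        return torch.distributions.Categorical(logits=self.net(obs))


def run_episode(policy, env):
    """'Waking up with no memories in an environment' and 'taking a bunch of actions'."""
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = policy(torch.as_tensor(obs, dtype=torch.float32))
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)  # collected for the loop's use; never shown to the model
    return log_probs, rewards


def train(policy, env, episodes=1000, lr=1e-2):
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        log_probs, rewards = run_episode(policy, env)
        ret = sum(rewards)
        # The reward enters only here, as a scaling factor on the weight update.
        loss = -ret * torch.stack(log_probs).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()  # 'cognition altered slightly'; next episode starts with no episodic memory
```

On this framing, "reward" is just the quantity the loop uses to decide which weight-nudges to apply; from the model's side, all that's observable is that its dispositions have shifted between wake-ups.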
What distinguishes this from how my brain works? The above is pretty much exactly what happens to my brain every millisecond: my synapse weights get tweaked by whatever just happened, and the "me" that acts in the next moment is slightly more likely to do certain things than it was before.
Why say that I "see" reward, but the model doesn't?
Is it cheating to say this? I don't think so. Both I and GPT-3 saw the sentence "Paris is the capital of France" in the past; both of us had our synapse weights tweaked as a result; and now both of us can tell you the capital of France. If we're saying that the model doesn't "have memories," then, I propose, neither do I.
Things have coalesced near the amphitheater. When the music kicks off again, we'll go northeast to... approximately here. 47.6309473, -122.3165802 JMJM+99F Seattle, Washington