social system designer http://aboutmako.makopool.com
often means "train the model harder and include more CoT/code in its training data" or "finetune the model to use an external reasoning aide", and not "replace parts of the neural network with human-understandable algorithms".
The intention of this part of the paragraph wasn't totally clear, but you seem to be saying this wasn't great? From what I understand, these actually did all make the model far more interpretable?
Chain of thought is a wonderful thing: it clears a space where the model will just earnestly confess its inner thoughts and plans in a way that isn't subject to training pressure, and so, in most ways, it can't learn to be deceptive about it.
This is good! I would recommend it to a friend!
Some feedback.
But overall I think it addresses a certain audience, one I know, much better than the version of this that I hastily wrote last year when I was summoned to speak at a conference would have (and so I never showed that one to them; maybe one day I will show them yours).
Possibly incidental, but if people were successfully maintaining continuous secure access to their Signal account, you wouldn't even notice, because Signal doesn't make an attempt to transfer encrypted data to new sessions.
I don't think e2e encryption is warranted here for the first iteration. Generally, keypair management is too hard today; everyone I know who used encrypted Element chat has lost their keys lmao. (I endorse Element chat, but I don't endorse making every channel you use encrypted: you will lose your logs!) And keypairs alone are a terrible way of doing secure identity. Keys can be lost or stolen, and though that doesn't happen every day, the probability is always too high to build anything serious on top of them. I'm waiting for a secure identity system with key rotation and some form of account recovery process (which could be an institutional service or a "social recovery" thing) before building anything important on top of e2e encryption.
Then, users can put in their own private key to see a post
This was probably a typo but just in case: you should never send a private key off your device. The public key is the part that you send.
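To make the distinction concrete, here's a toy sketch using textbook-small RSA numbers (the classic p=61, q=53 example; real systems would use a vetted crypto library, never hand-rolled RSA). The point is just which half of the pair is shareable:

```python
# Toy RSA, purely to illustrate which half of a keypair may leave the device.
p, q = 61, 53
n = p * q            # 3233, appears in both keys
e = 17               # public exponent
d = 413              # private exponent: (e * d) % lcm(p-1, q-1) == 1

public_key = (n, e)   # safe to send: the server only ever needs this part
private_key = (n, d)  # must never leave the user's device

def encrypt(message, pub):
    n, e = pub
    return pow(message, e, n)   # anyone holding the public key can do this

def decrypt(ciphertext, priv):
    n, d = priv
    return pow(ciphertext, d, n)  # only the device holding d can do this

ciphertext = encrypt(65, public_key)
assert decrypt(ciphertext, private_key) == 65
```

So in the quoted design, users would register their public key with the site, and decryption of a post would happen locally with the private key that never left their machine.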
On infrastructures for private sharing:
Feature recommendation: Marked Posts (name intentionally bland; any variant of "private", i.e. "secret", "sensitive", "classified", would attract attention and partially negate the point)
This feature prevents leaks, without sacrificing openness.
A marked post will only be seen by members in good standing. They'll be able to see the title and abstract in their feed, but before they're able to read it, they have to click "I declare that I'm going to read this", and then they'll leave a read receipt (or a "mark") visible to the post creator, admins, and other members in good standing. (These would also serve a useful social function of giving us more mutual knowledge of who knows what, while making it easier to coordinate to make sure every post gets read by people who'd understand it and be able to pass it along to interested parties.)
If a member "reads" an abnormally high number of these posts, the system detects that, and they may have their ability to read more posts frozen. Admins, and members who've read many of the same posts, are notified, and you can investigate. If other members find that this person actually is reading this many posts, and that they seem to truly understand the content, they can be given an expanded reading rate. Members in good standing should be happy to help with this: if that person is a leaker, well, that's serious, and if they're not a leaker, what you're doing in the interrogation setting is essentially just getting to know a new entrant to the community who reads and understands a lot, talking about the theory with them, and that's a happy thing to do.
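The freeze mechanic above could be sketched roughly like this. All names, the weekly cap, and the fixed window are made up for illustration; a real version would also send the notifications described above:

```python
import time
from collections import deque

# Hypothetical numbers, not a recommendation.
MAX_READS_PER_WEEK = 20
WEEK = 7 * 24 * 3600

class MarkedPostReader:
    def __init__(self):
        self.read_times = {}  # member id -> deque of read timestamps
        self.frozen = set()   # members awaiting investigation

    def declare_read(self, member, now=None):
        """Record a read receipt; freeze the member if they exceed the rate.

        Returns True if the read is allowed, False if the member is frozen.
        """
        now = time.time() if now is None else now
        if member in self.frozen:
            return False  # admins must investigate before they read more
        times = self.read_times.setdefault(member, deque())
        times.append(now)
        # Drop receipts that have aged out of the sliding window.
        while times and times[0] < now - WEEK:
            times.popleft()
        if len(times) > MAX_READS_PER_WEEK:
            self.frozen.add(member)  # real system: notify admins and
            return False             # members who read the same posts
        return True

    def expand_rate(self, member):
        """After a vouching conversation, unfreeze (real system: raise their cap)."""
        self.frozen.discard(member)
        self.read_times[member].clear()
```

Note the freeze is soft by design: it's an invitation to a conversation, not a ban.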
Members in good standing must be endorsed by another member in good standing before they will be able to see Marked posts. The endorsements are also tracked. If someone issues too many endorsements too quickly (or the people downstream of their endorsements are collectively doing so in a short time window), this sends an alert. The exact detection algorithm here is something I have funding to develop, so if you want to do this, tell me and I can expedite that project.
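Since the exact detection algorithm is explicitly still to be designed, here is only one naive shape it could take: walk the endorsement tree below each member and count how many endorsements that subtree issued recently. The window and threshold are invented for illustration:

```python
import time
from collections import defaultdict

# Made-up parameters for the sketch.
WINDOW = 30 * 24 * 3600   # "recent" = last 30 days
SUBTREE_LIMIT = 10        # alert if a subtree issues more than this

class EndorsementGraph:
    def __init__(self):
        self.children = defaultdict(list)   # endorser -> [endorsee, ...]
        self.issued_at = defaultdict(list)  # endorser -> [timestamp, ...]

    def endorse(self, endorser, endorsee, now=None):
        now = time.time() if now is None else now
        self.children[endorser].append(endorsee)
        self.issued_at[endorser].append(now)

    def subtree_alert(self, root, now=None):
        """True if root and everyone downstream of root's endorsements
        collectively issued too many endorsements within the window."""
        now = time.time() if now is None else now
        recent, stack, seen = 0, [root], {root}
        while stack:
            member = stack.pop()
            recent += sum(1 for t in self.issued_at[member]
                          if t > now - WINDOW)
            for child in self.children[member]:
                if child not in seen:
                    seen.add(child)
                    stack.append(child)
        return recent > SUBTREE_LIMIT
```

A single fast endorser trips this, but so does a slow endorser whose endorsees all endorse quickly, which is the "downstream" case in the text.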
What do they mean by this? Isn't that contradicted by this recommendation to use an ordinary architecture if you want fast training:
It seems like they mean faster per parameter, which is an... unclear claim given that each parameter or step, here, appears to represent more computation (there's no mention of flops) than a parameter/step in a matmul|relu would? Maybe you could buff that out with specialized hardware, but they don't discuss hardware.
I'm not sure this answers the question. What are the parameters, anyway, are they just single floats? If they're not, pretty misleading.