Dev.Errata

Data Scientist, AI Engineer, EA@ATL

Hey! I checked out your site and found it really interesting. In terms of utility and implementation simplicity, I think focusing purely on digital presence makes sense. At the limit, if you could accurately model the continuation of a sequence of all inputs + the screen state + audio output of a computer, this might end up being a surprisingly powerful "general-purpose human simulator" (given how much the average person lives their life through their screen). However, one thing that draws me to lifelogging-style video is the promise of capturing the more subtle and poetic parts of being human (listening to the other conversations in a restaurant, laughing at the joke the barista makes when you order your coffee, etc.). I think these aspects are what elevate simulator models from something merely "economically useful" to something that begins to approach "continued existence" for the person being modeled.

I definitely share the concerns re data collection. US law is generally lenient toward this type of video recording in public spaces (though it varies by state). I worry more about the privacy implications (independent of legality), and about how to respect the rights of everyone recording or being recorded while still being able to use those recordings for anything useful.

I'd pay a lot of money for an app like this. I wonder if recent developments like Google's MedicalLLM could come into play here: all your symptoms are logged, and then expert knowledge / a thorough review of the medical literature is applied automatically to recommend potential solutions.

I've been interested in this topic in the past, especially from the economics / game-theoretic perspective. There's one journal I know of that explores this topic that might be worth looking into:

The Journal of Mechanism and Institution Design

http://www.mechanism-design.org/

Mechanism design, in this context, is a kind of inverse of game theory: it starts with a desired outcome and designs a system that produces that outcome, as opposed to the traditional approach, where you start with a system and figure out the outcome. More info: https://en.wikipedia.org/wiki/Mechanism_design
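
The flavor of this "start from the outcome" approach can be illustrated with the classic second-price (Vickrey) auction, where the payment rule is engineered so that bidding one's true value is a dominant strategy. A minimal sketch (all names and values here are illustrative, not from any particular source):

```python
# Second-price (Vickrey) auction: a textbook mechanism-design result.
# The highest bidder wins but pays the SECOND-highest bid, which makes
# truthful bidding a dominant strategy for every bidder.

def vickrey_auction(bids):
    """bids: dict mapping bidder name -> bid amount.
    Returns (winner, price_paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

def utility(true_value, bid, other_bids):
    """Payoff to a bidder with the given true value if they submit `bid`
    against the fixed bids of the others (0 if they lose)."""
    all_bids = dict(other_bids, me=bid)
    winner, price = vickrey_auction(all_bids)
    return true_value - price if winner == "me" else 0

# Against any fixed rival bids, deviating from the true value never helps:
others = {"a": 60, "b": 85}
truthful_payoff = utility(100, 100, others)  # bid exactly the true value
for alternative in (50, 70, 90, 120, 200):
    assert utility(100, alternative, others) <= truthful_payoff
```

The designer's goal (truthful revelation) comes first, and the payment rule is derived to produce it, which is exactly the inversion the paragraph above describes.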

I also think this topic of ideal governance & mechanism design has a lot of overlap with the field of cryptoeconomics, or the economic analysis of how cryptocurrencies work / enforce behaviors via incentives: https://policyreview.info/glossary/cryptoeconomics

If you're curious about cryptoeconomics more in depth, Tim Roughgarden out of Columbia has an excellent lecture series on the topic: https://www.youtube.com/channel/UCcH4Ga14Y4ELFKrEYM1vXCg/playlists

Some more general literature that might be of interest to you:

The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics

https://en.wikipedia.org/wiki/The_Dictator%27s_Handbook

The Logic of Political Survival

https://en.wikipedia.org/wiki/The_Logic_of_Political_Survival

Both these books are by the same author(s) and provide a solid intro to the Selectorate Theory of politics (https://en.wikipedia.org/wiki/Selectorate_theory) which also provides a framework for answering some of the questions you posed regarding things like the ideal number of representatives.

Hope this helps!

The first thing that comes to my mind is the chess masters who would stream their practice and teaching sessions via Twitch. I watched a few of these and was surprised how close the experience came to being taught one-on-one. Even though I wasn't the one being tutored, the questions the pupil asked were similar to the ones I was thinking of. I wonder if that would be useful in a professional context? I could certainly see it being useful in a computer security context, like livestreaming a CTF competition. And that "learn by observing others learning" approach would probably be useful in other contexts too.

I think we should also take into account the value of English spellings that maintain common forms with other languages, even at the expense of being phonetic.

For instance, to a speaker of French or Spanish, the English word "diversification" would certainly seem less alien than a hypothetical respelling as "daiversifikeishun". Does having a common (or very similar) cross-language orthography for Latinate words offer more advantages than phonetic spelling does? I'm not sure, but it should certainly be part of the discussion.

I suspect the benefit is greatest in technical fields with lots of Latin-derived vocabulary (e.g. health and biology). Would international scientific cooperation become more difficult if French and Spanish speakers had to relearn spellings for words like "capillaries" [kapileirīs] or "canine" [keinain], when the original English spelling was almost identical to the spelling in their native language?

Really interesting post. I think proper environment creation is one of the most important questions, if not the most important one, on the RL-based path to AGI.

You made the point that, contrary to the expectations of some, environments like Go or Starcraft are not sufficient to create the type of flexible, adaptive AGI that we're looking for. I wonder if success in creating such AGI depends primarily on the complexity of the environment? That is, even though environments like Starcraft are quite complex and require some of the abstract reasoning we'd expect of AGI, their actual complexity isn't anywhere close to that of the real world in which we want those AGIs to perform. I wonder too if increasing environmental complexity would provide some inherent regularisation, i.e. it's harder to fall into very narrow solutions when the space of possible environment states is very large.

If that is the case, the question that naturally follows is how do we create environments that mimic the complexity of the actual world? Of course a full simulation isn't feasible, but I wonder if it would be possible to create highly complex world models using neural techniques.

This would be quite difficult computationally, but what would happen if one were to train an RL agent whose environment was provided by a GPT-3-esque world model? For instance, imagine AI Dungeon (a popular GPT-3-based D&D dungeon master) with an RL agent interacting with it. I'm not certain what the utility function would be; maybe maximizing gold, XP, level, or similar? Certainly an agent that could "win at" D&D would be closer to an AGI than anything made to date. Similarly, I could imagine a future version of GPT that modeled video frames (i.e. predicting the next frame from the previous frames). An RL agent trained to produce some given frame as a desired state would surely be able to solve problems in the real world, no? (Of course the actual implementation of a video-based GPT would be incredibly expensive computationally, but not completely out of the question.) Are there merits to this approach?
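
The agent-inside-a-learned-world loop imagined above can be sketched abstractly. Everything here is illustrative: `ToyWorldModel` is a trivial stand-in for an AI Dungeon-style generative model, and "gold" stands in for whatever utility function the agent maximizes; a real version would condition a large language or video model on the full interaction history.

```python
# Toy sketch of using a generative world model as an RL environment.
# The "world model" here is a stub so the control loop is runnable;
# a real version would sample continuations from a large model.
import random

class ToyWorldModel:
    """Stand-in for a GPT-style model: given the interaction history
    and an action, samples a next observation and a reward (gold)."""
    def step(self, history, action):
        # A real model would condition on the full text/frame history.
        gold = random.randint(0, 10) if action == "explore" else 0
        observation = f"after '{action}': found {gold} gold"
        return observation, gold

def rollout(world, policy, steps=20):
    """Run the agent-environment loop and return total reward."""
    history, total_reward = [], 0
    for _ in range(steps):
        action = policy(history)
        observation, reward = world.step(history, action)
        history.append((action, observation))
        total_reward += reward
    return total_reward

# A trivial random policy; a real agent would be trained to maximize
# reward (gold/XP) against the learned world model.
def random_policy(history):
    return random.choice(["explore", "rest"])

print(rollout(ToyWorldModel(), random_policy))
```

The open questions in the paragraph above map onto the stubs: what the world model is trained on, and what reward signal `step` should emit.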