I am curious in general about who here, if anyone, is actively researching the AGI and/or FAI problems directly in a full-time capacity (or soon will be). So if that's you, please say hello! Or if you know of another website/mailing list/etc. where this question would be more appropriate, please let me know.

If you are interested in saying more about who you are and what you're doing, I've included some additional questions below. Feel free to provide as much or as little information as you'd like - but the more the better!

  • Are you working for a particular organization or a known AGI project? If so, which? Link? If not, are you working on these issues independently, or can you otherwise explain your situation?
  • What is your overall theory/philosophy on FAI/AGI? How are you similar to and how are you different from Eliezer Yudkowsky in this respect?
  • What has been your overall approach to studying for this line of research, and what specific curriculum, books, papers, etc. would you recommend? I would be interested in as much detail as you can provide here.
  • Do you have any published material (even informal/in-progress information, documentation, discussion, blogs, etc)? Links?
  • What are you working on now and what's coming next in your work? Are you solving some interesting problem, creating some interesting new idea, bringing together a grand theory, actually building a working FAI/AGI or similar, or approaching some big milestone along any of these paths or others? What do your plans and timeline for the future look like?


Currently working for the FHI, mainly on FAI-like problems. Got a paper coming out soon on Oracle AI (http://www.aleph.se/papers/oracleAI.pdf).


I just wanted to say thank you for all the cool websites you made/got SIAI to spend resources on! :)

Also thank Lightwave for being the resources.

You won't get a lot of responses if you ask people to name themselves.

Because, all else being equal, announcing that you're doing something impressive feels like a (small) status hit, so if nothing else moves people to overcome this trivial inconvenience (for example, recognizing that it actually isn't a status hit, or that the default behavior in a given context is to respond rather than stay silent, or being asked personally), nothing gets done.

announcing that you're doing something impressive feels like a (small) status hit

Something isn't right here. Do you mean it feels like a status grab - a status hit to others, avoided out of politeness? Or that people who do extremely impressive things (as opposed to moderately impressive ones) shouldn't need to announce them, so "I'm saving the world" is a status loss but "I'm learning Swahili" is a status gain?

I interpreted this as meaning that needing to nominate yourself implies that nobody else cares enough about your work to name you as an example, which suggests you're not actually that important.

Convoluted. But do you feel it's plausible?

I know I've felt that way every now and then, though on those occasions the reason has also been clear to me. I'm not sure if it's equally plausible for someone to feel that way and not realize the logic behind it.

Something isn't right here. Do you mean it feels like a status grab - a status hit to others, avoided out of politeness?

Politeness is about covert/deniable transactions in status-related attributes, so it's a curiosity stopper in this context, not an explanation. It probably feels like a status hit because it's expected (perhaps incorrectly) to feel like a status grab to others. What you feel isn't generally a reason for responding a certain way; instead, it's a means: something external should be the reason, whose detection might be represented as a feeling, which in turn triggers a behavior.

Then it's lucky I don't overthink these things.

And your story sounds plausible, but the opposite would sound equally plausible to me.

Then it's lucky I don't overthink these things.

I was characterizing an emotional response, not reasoning. There doesn't seem to be a clear argument for that response being correct in this case.