Towards_Keeperhood

I'm trying to prevent doom from AI. Currently trying to become sufficiently good at alignment research. Feel free to DM for meeting requests.

Answer by Towards_Keeperhood

Perhaps also not what you're looking for, but you could check out the Google Hash Code archive (here's an example problem). I never participated myself, so I don't know whether the problems would make great tests. But it seems to me that general ad-hoc problem-solving capabilities matter more in Hash Code than in other competitive programming competitions.

GPT4 summary: "Google Hash Code problems are real-world optimization and algorithmic challenges that require participants to design efficient solutions for large-scale scenarios. These problems are typically open-ended and focus on finding the best possible solution within given constraints, rather than exact correctness."

Answer by Towards_Keeperhood

Maybe not what you're looking for, because it's not one hard problem but rather many problems in a row, and I don't really know whether they are difficult enough, but you could (have someone) look into Exit games. Those are basically escape rooms to go. I'd filter for Age 16+ to hopefully select the hard ones, though maybe you'd want to separately look up which ones are particularly hard.

I did one or two when I was 15 or 16 years old, recently remembered them, and want to try some more for fun (and maybe also for introspection), though I haven't gotten around to it yet. I think they are relatively ad-hoc puzzles, though as with basically anything, you can of course get good at Exit games in particular by practicing. (It's possible that I totally overestimate the difficulty and they are actually more boring than I expect.)

(Btw, probably even less applicable to what you are looking for, but CodingEscape is also really fun. Especially the "Curse of the five warriors" is good.)

I hope I'll get around to rereading the post and editing this comment into a proper review, but I'm pretty busy, so in case I don't, I'm leaving this very shitty review here for now.

I think this is probably my favorite post from 2023. Read the post summary to see what it's about.

I don't remember a lot of the details from the post and so am not sure whether I agree with everything, but what I can say is:

  1. When I read it several months ago, it seemed to me like an amazingly good explanation for why and how humans fall for motivated reasoning.
  2. The concept of valence turned out to be very useful for explaining some of my thought processes. E.g. when I catch myself daydreaming something and ask why, in the few cases where I checked, it was always something that falls under "the thought has high valence" - like imagining a situation where I said something that makes me look smart.

Another thought: though I don't actually have any experience with this, mostly doing attentive silent listening/observing might also be useful for learning how the other person does research.

Like, if it seems boring to just observe and occasionally say something, you could try to better predict how the person will think.

The main reason I'm interested in orcas is that they have 43 billion cortical neurons, whereas the two land animals with the most cortical neurons (for which we have optical-fractionator measurements) are humans and chimpanzees, with 21 billion and 7.4 billion respectively. See: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons#Forebrain_(cerebrum_or_pallium)_only

Pilot whales are the other species I'd consider for experiments - they have 37.2 billion cortical neurons.

For sperm whales we don't have data on neuron density (though they do have the biggest brains). I'd guess they are not quite as smart, though, because they dive for long periods and, AFAIK, don't use very collaborative hunting techniques.

Cool, thanks, that was useful.

(I'm creating a language for communicating with orcas, so the phonemes will be relatively impractical for humans. Otherwise the main criteria are a simple parsing structure and easy learnability. (It doesn't need to be perfect - the perhaps bigger challenge is figuring out how to teach abstract concepts without being able to bootstrap from an existing language.) Maybe I'll eventually create a great rationalist language for thinking effectively, but not right now.)

Is there some resource where I can quickly learn the basics of the Esperanto composition system? Somewhere I can see the main base dimensions/concepts?

I'd also be interested in anything you think was implemented particularly well in a (con)language.

(Also happy to learn from you rambling. Feel free to book a call: https://calendly.com/simon-skade/30min )

Thanks!

But most likely, this will all be irrelevant for orcas. Their languages may be regular or irregular, with fixed or random word order, or maybe with some categories that do not exist in human languages.

Yeah, I was not asking because of decoding orca language, but because I want inspiration for how to create the grammar of the language I'll construct. Esperanto/Ido also because I'm interested in how well word-compositionality is structured there, and whether it is a decent attempt at outlining the basic concepts of which other concepts are composites.

Currently we basically don't have any datasets labelled with which orca says what. When I listen to recordings, I cannot distinguish voices, though it's possible that people who have listened a lot more can. I think purely unsupervised voice clustering would probably not work very accurately. I'd guess it's probably possible to get data on who said what by using an array of hydrophones to infer the location of the sound, but we'd need very accurate position inference, because different orcas are often just 1-10m from each other. For that we might also need decent estimates of how water temperature varies with depth, and generally there have not yet been attempts to get high precision with this method. (It's definitely harder in water than in air.)
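To illustrate the hydrophone-array idea: a minimal sketch of time-difference-of-arrival (TDOA) localization via Gauss-Newton least squares. All names here are mine, and the fixed sound speed is a simplifying assumption - in real seawater it varies with temperature, salinity, and depth, which is exactly the complication mentioned above.

```python
import numpy as np

SOUND_SPEED = 1480.0  # m/s, rough speed of sound in seawater (assumed constant here)

def tdoa_localize(hydrophones, tdoas, ref=0, x0=None, iters=50):
    """Estimate a 3D source position from time differences of arrival.

    hydrophones: (N, 3) array of sensor positions in meters.
    tdoas: (N,) arrival-time differences (seconds) relative to hydrophone `ref`.
    Runs a simple Gauss-Newton iteration on the range-difference equations.
    """
    H = np.asarray(hydrophones, dtype=float)
    d = SOUND_SPEED * np.asarray(tdoas, dtype=float)   # measured range differences
    x = np.mean(H, axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = np.linalg.norm(H - x, axis=1)              # distances to each sensor
        resid = (r - r[ref]) - d                       # model minus measurement
        # Jacobian of (r_i - r_ref) with respect to the source position x
        J = (x - H) / r[:, None] - (x - H[ref]) / r[ref]
        dx, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# Synthetic check: 5 hydrophones, a known source, noise-free TDOAs.
hydros = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 30], [50, 50, 10]], float)
true_src = np.array([20.0, 15.0, 5.0])
times = np.linalg.norm(hydros - true_src, axis=1) / SOUND_SPEED
est = tdoa_localize(hydros, times - times[0])
```

With noise-free synthetic timings this recovers the source to sub-meter accuracy; the real difficulty is that timing jitter, sensor-position error, and sound-speed variation blow up the position error at the 1-10m scales that matter here.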

Yeah, basically I initially also had rough thoughts in this direction, but I think the create-and-teach-a-language route is probably a lot faster.

I think the Earth Species Project is trying to use AI to decode animal communication, though they don't focus on orcas in particular but on many species, including e.g. beluga whales. I didn't look into it a lot, but it seems possible I could do something like this in a smarter and more promising way - though it would probably still take long.

Thanks for your thoughts!

I don't know what you'd consider enough recordings, and I don't know how much decent data we have.

I think the biggest datasets of orca vocalizations are the Orchive and the Orcasound archive. I think they are each multiple terabytes of audio recordings, but most of it (80-99.9% (?)) is probably crap where there might just be a brief, very faint mammal vocalization in the distance.
We also don't have a way to see which orca said what.
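For skimming archives like that, a crude energy-based pre-filter is the kind of thing one might start with. This is a sketch under my own assumptions - the frame length, frequency band, and threshold are illustrative, not tuned values, and real pipelines would use something more robust than band energy.

```python
import numpy as np

def flag_active_frames(signal, sr, frame_s=1.0, band=(500.0, 10_000.0), thresh_db=10.0):
    """Flag frames whose in-band energy exceeds the recording's median by thresh_db.

    A crude pre-filter for long recordings: sum spectral power in a band where
    calls are assumed to concentrate, compare each frame against the median
    (typical background) level. Returns a boolean array, one entry per frame.
    """
    n = int(sr * frame_s)
    frames = signal[: len(signal) // n * n].reshape(-1, n)
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = spec[:, in_band].sum(axis=1)
    ref = np.median(band_energy) + 1e-12          # typical background level
    return 10 * np.log10(band_energy / ref) > thresh_db

# Synthetic check: 60 s of faint noise with a loud 2 s, 2 kHz "call" at t = 30 s.
sr = 16_000
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(60 * sr)
t = np.arange(2 * sr) / sr
audio[30 * sr : 32 * sr] += 0.2 * np.sin(2 * np.pi * 2000.0 * t)
active = flag_active_frames(audio, sr)
```

Something like this could cut the terabytes down to the small fraction of frames worth listening to, at the cost of missing exactly the faint distant vocalizations mentioned above.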

Also, orcas from different regions have different languages, and orcas from different pods have different dialects.

I currently think the decoding path would be slower. And yeah, the decoding part would involve AI, but I feel like people often just try to use AI somehow without a clear plan - though perhaps not you.
What approach did you imagine?

In case you're interested in a small amount of high-quality data (still without annotations): https://orcasound.net/data/product/biophony/SRKW/bouts/
