Since the announcement of the Brain-like AGI project "aintelope" in 2022, we have been working continuously on it. From the start until September 2023, the project was based entirely on volunteer work. Our guiding principle was sustainability: the project had to work with low or interrupted contributions from participants with full-time jobs and families. And it worked. The project has progressed and grown steadily. This is the first status report on the aintelope project. You can also read it as a story of how slow, incremental progress leads to a prospering project.

Participants

  • Andre Kochanke - Data Scientist, founder of aintelope UG (non-profit) 
  • Hauke Rehfeld - Python, founding member 
  • Gunnar Zarncke - Managing Director and founder of aintelope UG (non-profit)
  • Joel Pyykkö - AI Researcher, member since 10/2022
  • Roland Pihlakas - AI Safety Researcher, member since 1/2023
  • Rasmus Herlo - Neuroscience Researcher and advisor since 11/2023
  • Fabian Zarncke - Student, founder of aintelope UG (non-profit)

TL;DR

In 2023, aintelope made progress in agent learning, simulations, and discussions of AI behaviors such as the Waluigi Effect. We hosted multiple successful virtual hackathons, aligning our code with brain-like AGI concepts. Conversations at EAGxNordics 2023 sparked ideas for better development and teamwork.

We put together an information hazard policy. Significant effort went into improving the platform and code base. Collaboration with the AI Alignment community, most recently with neuroscience researcher Rasmus Herlo joining as an advisor, is off to a promising start.

First Quarter

In the first quarter of 2023, aintelope focused on exploration and discussion, culminating in a hackathon and retrospective. The team explored various aspects of AI development, including 3D environment simulation, and held many discussions of AI behavior patterns, including the Waluigi Effect and human-like aspects of AI such as cultural learning and instinct encoding. Papers and books on emotion in reinforcement learning were shared.

Practical topics included improving the project structure, the savanna environment, and the agent modules, as well as managing GitHub effectively and making the AI training process more transparent and monitoring-friendly, with enhancements to model inference and training workflow documentation. One frustration was administrative delays in obtaining tax-exempt status.

Hackathon

A hackathon was scheduled to foster collaborative coding and problem-solving. The hackathon, held on 16 April 2023, was a hands-on, focused session with the core team members Gunnar, Andre, Roland, Joel, and Hauke. The primary goal was to align the current implementation with the brain-like AGI paradigm and streamline the project's codebase. Tasks included getting all developers on board, cleaning up the code, and crystallizing projects for training and testing. Action items were creating a data flow schema, making observation mechanics explicit, and modularizing the reward mechanism (see the sketch below). The hackathon was seen as a great success, to be repeated.
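To make the modularization idea concrete, here is a minimal sketch of what a modular reward mechanism could look like, with each instinct as a separately weighted, individually inspectable component. The instinct names, observation fields, and formulas here are illustrative assumptions, not the project's actual code.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical observation type; in the real project this would be the
# agent's view of the savanna environment.
Observation = Dict[str, float]

@dataclass
class RewardComponent:
    """One 'instinct': a named, weighted reward signal."""
    name: str
    weight: float
    fn: Callable[[Observation], float]

def total_reward(components: List[RewardComponent], obs: Observation) -> dict:
    """Compute each instinct's contribution separately, so individual
    signals stay inspectable, then combine them into a scalar reward."""
    parts = {c.name: c.weight * c.fn(obs) for c in components}
    return {"total": sum(parts.values()), "parts": parts}

# Illustrative instincts (names and formulas are assumptions):
instincts = [
    RewardComponent("hunger", 1.0, lambda o: -o.get("hunger", 0.0)),
    RewardComponent("safety", 0.5, lambda o: -o.get("predator_proximity", 0.0)),
]

print(total_reward(instincts, {"hunger": 0.3, "predator_proximity": 0.1}))
```

Keeping each component's contribution separate makes the combined reward easier to debug and to analyze per instinct.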

Retrospective

On the following day, the same team conducted a retrospective in the form of a structured review session. A subjective "temperature reading" of team sentiment showed comfortable to fresh temperatures. Appreciations and aspirations were the focus: the team acknowledged the progress on theoretical aspects, the weekly meetings, and the shared purpose in AI safety. Concerns were the need for faster development feedback, more implementation planning, and higher implementation velocity. Key takeaways included the desire for a stable group of contributors, enhanced knowledge exchange, and managing the complexity of advanced AI research. The team identified treasures, such as achieving reliable and understandable learning through instincts, and recognized potential cliffs, such as the risk of overlooking subtle bugs or facing complexity challenges in advanced AI models.

Second Quarter

In the second quarter, aintelope advanced its agent learning research, focusing on reward heatmaps and multi-instinct functionalities. Concurrently, the team maintained a strong emphasis on ethical AI, regularly engaging in discussions about info hazards and formulating an info hazard policy. The team conducted experiments to speed up learning and improve agent alignment. 
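As an illustration of the reward-heatmap idea, the following sketch evaluates a toy reward function at every cell of a small grid world and renders the result as a heatmap. The grid size, the food and danger locations, and the reward shape are all assumptions for demonstration, not the project's environment.

```python
import numpy as np

GRID = 16
food = np.array([4, 12])    # hypothetical food location
danger = np.array([10, 3])  # hypothetical hazard location

# Coordinates of every grid cell, shape (GRID, GRID, 2).
ys, xs = np.mgrid[0:GRID, 0:GRID]
cells = np.stack([ys, xs], axis=-1).astype(float)

# Toy reward: rises near food, falls near danger.
reward = (
    np.exp(-np.linalg.norm(cells - food, axis=-1) / 3.0)
    - np.exp(-np.linalg.norm(cells - danger, axis=-1) / 3.0)
)

# Plotting is optional; matplotlib renders the heatmap if available.
try:
    import matplotlib.pyplot as plt
    plt.imshow(reward, origin="lower")
    plt.colorbar(label="reward")
    plt.title("Reward heatmap (illustrative)")
    plt.show()
except ImportError:
    print(np.round(reward, 2))
```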

Technically, the team enhanced agent actions and decision-making metrics and streamlined the codebase and test suite. A relief was the official registration of the company; a setback was the need for a new bank account after the previous online bank closed the old one. The team ran another successful hackathon this quarter. The team also explored whether the brain-like approach could be combined with or used for aligning Large Language Models (LLMs). Joel represented the team at the EAGxNordics 2023 conference, which led to a reevaluation of communication strategies and a refinement of internal processes such as standup meeting structures and project alignment.

Third and Fourth Quarter

The application for Agentic AI Research Grants succeeded: Foresight Institute funding was secured from September 2023 to February 2024 for two project members, Joel Pyykkö and Roland Pihlakas.

The team replaced PyTorch Lightning with plain PyTorch to gain more control over the training process, and streamlined the Git processes. The team conducted experiments on fast learning and agent alignment and enhanced the test suite.
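For context, a hand-rolled PyTorch loop of the kind that replaces the Lightning Trainer might look like the following; the model, data, and hyperparameters are placeholders, not the project's agent code.

```python
import torch
from torch import nn, optim

# Placeholder model: 8-dimensional observations in, 4 values out.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch: 64 observations with 4 target values each.
obs = torch.randn(64, 8)
targets = torch.randn(64, 4)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(obs), targets)
    loss.backward()   # full control over the backward pass ...
    optimizer.step()  # ... and the optimizer schedule
    if step % 20 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```

Writing the loop by hand trades Lightning's conveniences for direct control over exactly when gradients are computed, logged, and applied.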

The team agreed on using the Signal messenger for secure communication and planned to use 1Password for secure password management.

Development work in the Git repository was informed by research discussions, including explorations of connections between brain-like safety algorithms, agent learning research, neuroscience, psychology, and multi-instinct functionality.

The team holds almost weekly outreach calls with other researchers from the AI Alignment community, facilitated by the AI Alignment Slack hosted by the AI Safety Support project. Interested researchers were invited to the weekly calls. This has led to the recent participation of Rasmus Herlo as a neuroscience advisor. The team explored collaborations with AE Studio and ran a third hackathon.

The team continues to prioritize ethical AI, with ongoing discussions and extensions of its information hazard policy.

Recent Activities since the Beginning of 2024

We have ongoing funding applications for our full-time researchers with Cooperative AI and OpenAI (Agentic AI).

The website is being updated and managed via Google Sites; the aintelope.net email setup has been initiated.

Planned Activities in 2024

  • An in-person team get-together in March. 
  • Participation in the planned AI Safety Retreat in Berlin (AISER 2024).
  • Further hackathons.

We hope to provide more information on aintelope.net soon. 

Comments

I enjoyed my brief stint of volunteer contribution, and I hope to participate in one of your hackathons at some point.

Addendum: The aintelope UG has adopted the Asilomar principles of beneficial AI. Here is the shareholder resolution: 

This shareholder resolution provides concrete guidance for §2 sentence 2 of the aintelope memorandum of association on the principles of research.

The aintelope company shall follow the 23 principles of beneficial AI defined by the 2017 Asilomar Conference on Beneficial AI as reproduced in the appendix. 

In case of doubt, guidelines or principles from other neutral bodies or conferences on AI safety that are of equal or greater strictness with regard to safety may be used.

As aintelope grows, we strive to secure resources to implement the Asilomar principles, such as the legal, ethical, and social system from point 2, where we do not yet have dedicated expertise.

For reference: