I think you actually do not have very much power as a board member. During normal operations, you can give advice to the CEO, but you have no power beyond that "access to the CEO". If the CEO resigns or is being forced out, you briefly have an important power, but it is a very narrow one: the ability to find and vote in a replacement CEO.
The position is very public and respected, so it may feel like "a lot of power", but quite often even a mid-level employee at the organization has more real power over the direction of the company than a board member does.
I suspect this question is misworded:
Will there be a 4 year interval in which world GDP growth doubles before the first 1 year interval in which world GDP growth doubles?
Do you mean in which world GDP doubles? World GDP growth doubles when the growth rate goes from, say, 0.5% per year to 1% per year, which is a very different thing.
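To make the difference between the two readings concrete, here is a minimal sketch (all the figures are made up purely for illustration):

```python
# Hypothetical figures, purely to illustrate the two readings of the question.
gdp = 100.0          # made-up world GDP level
growth_rate = 0.03   # made-up 3% yearly growth rate

# Reading 1: "world GDP doubles" -- the level itself doubles.
years = 0
level = gdp
while level < 2 * gdp:
    level *= 1 + growth_rate
    years += 1
print(years)  # 24 years at a constant 3%

# Reading 2: "world GDP growth doubles" -- only the rate doubles,
# e.g. 3% -> 6%, which could easily happen within a single year.
print(2 * growth_rate)  # 0.06
```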
Personally, I suspect world GDP is most likely to next double in a period after a severe war or depression, so you might want to rephrase to avoid that scenario if that isn't what you're thinking about.
I believe there already is a powerful AI persuasion tool: the Facebook algorithm. It attempts to persuade you to stay on Facebook and keep reading, so that its "engagement" metrics are optimized. Indeed, many of the world's top AI researchers are employed in building this tool. So far it is focused much more on ranking than on text generation, but if AI text generation improves to the point that its output is interesting for humans to read, I would expect Facebook to incorporate that into the newsfeed. AI-generated "news summaries" might be one area where this happens first.
I don't think the metaphor about writing code works. You say, "Imagine a company has to solve a problem that takes about 900,000 lines of code." But in practice, a company never possesses that information. They know what problem they have to solve, but not how many lines of code it will take. Certainly not when it's on the order of a million lines.
For example, let's say you're working for a pizza chain that already does delivery, and you want to launch a mobile app to let people order your food. You can decompose that into parts pretty reasonably - you need an iOS app, you need an Android app, and you need an API into your existing order management system that the mobile apps can call. But how are you going to know how many lines of code each of those subproblems will take? It probably isn't helpful to think about it in that way.
The factoring into subproblems also doesn't quite make sense in this example: "Your team still has to implement 300k lines of code, but regardless of how difficult this is, it's only marginally harder than implementing a project that consists entirely of 300k lines." In this case, if you entirely ignore the work done by other teams, the Android app will actually be harder to build, because you can't just copy over the design work already done by the iOS team. I feel like all the pros and cons of breaking a problem into smaller parts are lost by this high-level way of looking at it.
My null hypothesis about this area of "factored cognition" would be that useful mechanisms of factoring a problem into multiple smaller problems are common, but they are entirely dependent on the specific nature of the problem you are solving.
I would not spend $500 on such an event because an event held by my local rationality community doesn't seem very important to me. You may have a different opinion about your $500 and your local rationality community and that's fine.
We could in theory try to kill all AI researchers (or just go whole hog and try to kill all software engineers, better safe than sorry /s).
I think this is a good way of putting it. Many people in the debate refer to "regulation". But in practice, regulation is not very effective for weaponry. If you look at how the international community handles dangerous weapons like nuclear weapons, there are many cases of assassinations, bombings, and wars undertaken to prevent their spread. This is what it would look like if the world were convinced that AI research was an existential threat - a world where work on AI happens in secret, in private military programs, with governments making the decisions and participants risking their lives. Probably the US and China would race to be the first to achieve AGI dominance, gambling that they would be able to control the software they produced.
If the government is going to mandate something, it should also pay for it.
This isn't really how government mandates work. The government mandates that you wear seat belts in cars, but it doesn't pay for seat belts. The government mandates that all companies going public follow the SEC regulations on reporting, but it doesn't pay for that reporting to happen. The government mandates that restaurants regularly clean up the floor, but it doesn't pay for janitors. The government mandates that you wear clothes in public, but it doesn't buy you clothes. Etc, etc.
So I think your intuition is simple, but it largely does not map to reality.
Now, for the rats, there’s an evolutionarily-adaptive goal of "when in a salt-deprived state, try to eat salt". The genome is “trying” to install that goal in the rat’s brain. And apparently, it worked! That goal was installed! And remarkably, that goal was installed even before that situation was ever encountered!
I don't think this is remarkable. Plenty of human behaviors work this way, with some goal encoded through evolution. For example, heterosexual teenage boys often find teenage girls attractive and want to get them naked, even before they have ever managed to do it successfully, and without a true conscious understanding of their eventual goals. Or babies know to seek out nipple-shaped objects before they have ever interacted with a nipple.
It just really depends on what the project is. If there were some generic way to evaluate all $500 donations, then some centralized organization would be doing that already. You have to use your own personal, human judgment.
In high school, college, and graduate school I was not very hardworking. But when I left school and started working in the tech industry, I suddenly became very hardworking. At first, I was surrounded by hard workers who were very smart, and I was very competitive, so I wanted to be one of the best. Over time, these other hard workers became my friends, and since we were all working together on hard projects, other people came to depend on me. If I said I would have X done by the end of the week and I didn't get it done, my friends would be disappointed. But if I got X done and also got Y done, my friends would be excited and proud of me.
In school, none of my friends ever really cared whether I got something done or not. Everyone was just working on their own thing.
Once I had spent several years working very hard, I got into the habit, and it became easier for me to also work hard in situations like startups, where I had to rely on myself more to drive things forward. Then, once you get good at working hard, you often want to create an environment that makes everyone else work hard, too.
So my main advice is to find a job where your coworkers are both very smart and very hardworking.