Comments

The heads of AI labs are functionally cowards who would bend the knee at the first knock on their door by state agents. Some, like Altman and Zuckerberg, have preemptively bent the knee to get into the good graces of the Trump administration and accelerate their progress. While Trump himself might be out of the loop, his administration is staffed by people who know what AGI means and are looking for any source of power to pursue their agenda.

Why do you think labs aren't making a focused effort on this problem? Is vision understanding not valuable for an automated software engineer or AI scientist?

It would be interesting to see how this experiment changes if the games are played iteratively, that is, if players can get a sense of who they are playing with, how they lie and deceive, and what their tells are. I suspect that humans would outperform in this respect because of our better memory.

This can apply to humans as well. If you did some terrible thing to another person long enough ago that they've put it out of their immediate memory, and then apologize at this later time, it can drag up those old memories and wounds. The act of apologizing can be selfish, and cause more harm than the apologizer intends.

Pokemon is a game literally made to be played and beaten by children. Six years old might be pushing the lower bound, but it didn't become one of the largest gaming and entertainment franchises in the world by being too difficult for the children it was designed for.

Yes, kids get stuck, and they do use extra resources like looking up info in game guides (old man moment: before the internet you had to find a friend who had the physical guide and would let you borrow or look at it). But is the ability to search the internet the bottleneck that prevents Claude from getting past Mt. Moon in under 50 hours? That does not seem likely. In fact, giving it access to the internet, where it can get even more lost in potentially useless or irrelevant information, could make the problem worse.

The important question is, why now? Why, with so little evidence to back up such an extreme action?

I'm confused: what about AI art makes it such that humans cannot continue to create art? It seems like the bone to pick isn't with AIs generating 'art'; it's that some artists have historically been able to make a living by creating commercial art, and AI being capable of generating commercial art threatens the livelihood of those human artists.

There is nothing keeping you from continuing to select human-generated art, or creating it yourself, even as AI-generated art might be chosen by others.

Just as you should be free to be biased towards human art, I think others should be free to be unbiased, or even biased towards AI-generated works.

The world population is set to decline over the course of this century. Fewer humans will mean fewer innovations as the world grows greyer, and a smaller workforce must spend more effort and energy taking care of a large elderly population. Additionally, climate change will eat into another fraction of available resources simply to keep civilization chugging along, resources that could instead have been used for growth. The reason AGI is so important is that it decouples intelligence from human population growth.

How distressed would you be if the "good ending" were opt-in and existed somewhere far away from you? I've explored the future and found one version that I think would satisfy your desire, but I'm asking to get your perspective. Does it matter whether there are super-intelligent AIs if they leave our existing civilization alone, create a new one out on the fringes (the Arctic, Antarctica, or just out in space), and invite any humans to come along and join them without coercion? If you need more details, they're available at the Opt-In Revolution, in narrative form.

If they can build the golem once, surely they can build it again. I see no reason not to order it to destroy itself, not even in an explicit manner, but simply by putting it into situations where it faces a decision about whether to sacrifice itself to save others, and then watching what decision it makes. And once you know how to build one, you can streamline the process to build many more and gather enough statistical confidence that the golem will, in a variety of situations in- and out-of-context, make decisions that prioritize the well-being of others over its own.
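To make "enough statistical confidence" concrete, here is a minimal sketch (my own illustration, not part of the original thought experiment), assuming each trial is an independent pass/fail observation of whether the golem chooses to sacrifice itself; it computes a one-sided exact (Clopper-Pearson) lower bound on the true sacrifice rate.

```python
# Hypothetical sketch: how confident can we be in the golem's sacrifice rate
# after N independent trials? Uses an exact (Clopper-Pearson) lower bound.
from scipy import stats

def lower_bound_sacrifice_rate(successes: int, trials: int, confidence: float = 0.95) -> float:
    """One-sided lower confidence bound on the true probability of self-sacrifice."""
    if successes == 0:
        return 0.0
    alpha = 1.0 - confidence
    # Exact binomial lower bound: quantile of a Beta(successes, trials - successes + 1).
    return stats.beta.ppf(alpha, successes, trials - successes + 1)

# Example (assumed numbers): the golem sacrifices itself in all 300 scenarios.
# Even then, the 95% lower bound on its true sacrifice rate is only about 0.99.
print(round(lower_bound_sacrifice_rate(300, 300), 3))
```

The point of the sketch is that even hundreds of flawless trials only bound the failure rate at around one percent, which is why streamlining the build to run many golems through many in- and out-of-context situations matters.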
