All of iwis's Comments + Replies

iwis (10)

I will write what I think about this topic. I thought that mentioning the solutions in the second post would be enough, but describing and discussing them in more detail may be a good idea.

iwis (20)

Thank you for your opinion. The goal is to build a model of the world that can be used to increase the collective intelligence of people and computers, so usefulness in 99% of cases is enough.

When problems associated with popularity occur, we can consider what to do with the remaining 1%. There are more reliable methods of authenticating users. For example, electronic identity cards are available in most European Union countries. In some countries, they are even used for voting over the Internet. I don't know how popular they are among citizens, but I assum...

Spade (2)
I think these are pretty good, if somewhat intrusive, strategies to mitigate the problems that concern me. Kudos! It wasn't a typo: disregarding manipulation, weighted contributions in murky circumstances might produce behavior similar to that of a prediction market, which would be better behavior than a system like Wikipedia exhibits under similar circumstances. In a similar vein, perhaps adding a monetary incentive -- or, more likely, giving users the ability to provide a monetary incentive -- to add correct information to a topic would be another good mechanism to encourage good behavior.
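To make the prediction-market analogy concrete, here is a minimal sketch of what "weighted contributions" could mean: users submit probability estimates for a disputed claim, and the consensus is a weighted average where users with better track records pull harder. All names and weights below are illustrative assumptions, not part of any proposed system.

```python
# Hypothetical sketch: prediction-market-style aggregation of user estimates.
# Weights stand in for track record / reputation; the numbers are made up.

def aggregate(estimates: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-user probability estimates for one claim."""
    total = sum(weights[u] for u in estimates)
    return sum(estimates[u] * weights[u] for u in estimates) / total

# Users with stronger track records move the consensus more.
estimates = {"alice": 0.9, "bob": 0.4, "carol": 0.8}
weights = {"alice": 3.0, "bob": 1.0, "carol": 2.0}  # e.g. past accuracy
print(round(aggregate(estimates, weights), 3))  # -> 0.783
```

A monetary incentive could then be layered on top by paying out in proportion to how much a user's estimate improved the consensus, but that is beyond this sketch.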
iwis (10)

> Cyc does not work.
What if the group of users adding knowledge were significantly larger than the Cyc team?

Edit: I ask because Cyc is built by a group of its employees; it is not crowdsourced. Crowdsourcing often involves a much larger group of people, as in Wikipedia.

> In principle, it could probably succeed with enough data input, but it is not practical.
Why is it not practical?

> that would be hard to notice
What do you mean by "to notice" here?

Johannes C. Mayer (1)
Cyc does not seem like the kind of thing I would expect to work very well compared to a system that can build a world model from scratch, because even if it is crowdsourced it would take too much effort. I mean, suppose the inference algorithms are too bad to make the system capable enough. You can still increase the capability of the system very slowly by just adding more data. So it seems easy, instead of fixing the inference, to just focus on adding more data, which is the wrong move in that situation.
iwis (10)

> There is too much stuff, such that it takes way way too long for a human to enter everything.

Is this also true for a large group of people? If yes, then why?

Johannes C. Mayer (1)
Cyc does not work. At least not yet. I haven't really looked into it much, but I expect that it will also not work in the near future for anything like performing a pivotal act. And they have put a lot of man-hours into it. In principle, it could probably succeed with enough data input, but it is not practical. Also, it would not succeed if you don't have the right inference algorithms, and I guess that would be hard to notice when you are distracted entering all the data, because you can just never stop entering the data; there is so much of it to enter.
Answer by iwis (10)

On https://consensusknowledge.com, I described the idea of building a knowledge database that is understandable to both people and computers, that is, to all intelligent agents. It would be a component responsible for memory and interactions with other agents. Using this component, agents could increase their intelligence much faster, which could lead to the emergence of collective human superintelligence, AGI, and, generally, the collective superintelligence of all intelligent agents. At the same time, due to the interpretability of the database of knowledg...

Johannes C. Mayer (1)
I haven't read it in detail. The hard part of the problem is that we need a system that can build up a good world model on its own. There is too much stuff; it would take way, way too long for a human to enter everything. Also, I think we need to be able to process basically arbitrary input streams with our algorithm, e.g. build a model of the world just from a camera feed and the input of a microphone. And then we want to figure out how to constrain the world model, such that if we run some planning algorithm we also designed on this world model, we know it won't kill us because of weird stuff in the world model — like the weird stuff in Solomonoff induction, which is just arbitrary programs.

Also, a hard part is to make a world model general enough to represent the complexity of the real world while remaining interpretable. If you have a database where you just enter facts about the world, like "laptop X has resolution Y", that seems not nearly powerful enough. Your world model only seems to be complex and to talk about the real world because you use natural-language words as descriptors. To a human brain these things have meaning, but not to a computer by default. That is how you can get a false sense of how good your world model is.
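The "facts like X laptop has Y resolution" point can be made concrete with a minimal sketch of such a database: subject–predicate–object triples keyed by natural-language strings. The facts, names, and predicate labels below are made up for illustration. The point is that the computer only matches strings; the word "resolution" carries no meaning for it beyond equality with other strings.

```python
# Minimal sketch of a string-keyed fact database (all facts are illustrative).
facts = {
    ("LaptopX", "has_resolution", "1920x1080"),
    ("LaptopX", "is_a", "laptop"),
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given fields (None acts as a wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in facts
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(subject="LaptopX", predicate="has_resolution"))
# The lookup succeeds, but nothing in the system knows what a "resolution"
# is: renaming the predicate to "xyzzy" everywhere would change nothing.
```

This is the sense in which the words have meaning only to the human reading them, not to the computer by default.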
iwis* (20)

My name is Dariusz Dacko. On https://consensusknowledge.com I described the idea of building a knowledge base using crowdsourcing. I think that this could significantly increase the collective intelligence of people and ease the construction of safe AGI. I therefore hope to receive comments from LessWrong users on this idea.

iwis (-30)

I proposed a system where AGI agents can cooperate with people: https://consensusknowledge.com.