I think this sums up the problem. If you want to build a safe AI, you can't use neural nets, because you have no clue what the system is actually doing.
How is that translation coming along? I could help with German.
OK, when I said "easy" I exaggerated quite a bit (I edited it in the original post). More accurate would be: "in the last three years at least one new party became popular enough to enter parliament" (the country is Germany and the party would be the AfD; before that, there was the German Pirate Party). Actually, to form a new party, the signatures of at least 0.1% of all eligible voters are needed.
But it sounds like a difficult thing to sell to the public in sufficient numbers to gain enough influence to change anything.
I also see that problem; my idea was to try to recruit some people on German internet forums and, if there is not enough interest, drop the idea.
I'm thinking about starting a new political party (in my country getting into parliament as a new party is e̶a̶s̶y̶ not virtually impossible, so it's not necessarily a waste of time). The motivation for this is that the current political process seems inefficient.
Mostly I'm wondering if this idea has come up before on lesswrong and if there are good sources for something like this.
The most important thing is that no explicit policies are part of the party's platform (i.e. no "we want a higher minimum wage"). I don't really have a party program yet, but the basic idea is as follows: there are two parts to this party. The first part is about Terminal Values and Ethical Injunctions: what do we want to achieve, and what do we avoid doing even if it seems to get us closer to our goal? The Terminal Values could just be Frankena's list of intrinsic values. The first requirement for people to vote for this party is that they agree with those values.
The second part is about the process of finding good policies: how to design a process that generates policies that help to satisfy our values. Some ideas:
The idea is that the party won't really be judged based on the policies it produces but rather on how well it keeps to the specified process. The values and the process are what identify the party. Of course, there should be some room for changing the process if it doesn't work...
Evaluating policies in terms of how well they satisfy values seems to be a difficult problem; the underlying issue is that Utilitarianism is difficult to apply in practice.
So, there are quite a few open questions.
That would be a lot of posts. If we're talking about making a new post in Discussion every day, that would likely drown out most other threads. It would be even worse in Main.
One could start a new subreddit for this reading group, something like reddit.com/r/LWreadinggroup. But that would defeat the purpose of reviving lesswrong.com.
However, Mr. Eliezer's basic rules say it doesn't count.
Ah, I see. Didn't know the rules were so strict. (Btw shouldn't it be "Mr. Yudkowsky"?)
nanobots released into the atmosphere
Wait, were you allowed to design them yourself? (The timestamp is in UTC, iirc.)
Is there actually good AI research somewhere in Europe? (Apart from what the FHI is doing.) Or: can the mission for FAI benefit at all from me doing my PhD at the AI lab of some university? (Which is my plan currently.)
What language will proceedings generally be conducted in?
English, of course.
I want to do a PhD in Artificial General Intelligence in Europe (not machine learning or neuroscience or anything with neural nets). Anyone know a place where I could do that? (Just thought I'd ask...)