Anon User

Ah, OK, then I would suggest adding it to both the title and the body to make it clear, and to not waste the time of people who are not the audience for this.

Sorry, feedback on what? Where is your resume, etc.? What information do you expect the feedback to be based on?

But here is actionable feedback: when asking people to help you for free out of the goodness of their hearts (including this post!), you need to go out of your way to make it as easy and straightforward for them as possible. When asking for feedback, provide all the relevant information collected in an easy-to-navigate package, with TLDR summaries, etc. When asking for a recommendation, introduction, etc., provide brief talking points, with more detailed information provided for context (and make it clear you do not expect them to need to review it, and that it is provided "just in case you would find it helpful").

Interesting - your 40/20/40 is a great toy example to think about, thanks! And it does show that a simple instant-runoff scheme for RCV would not necessarily help that much...
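
To make the intuition concrete, here is a minimal instant-runoff sketch in Python. The exact ballot rankings behind the 40/20/40 example are not spelled out above, so the split below (a polarized electorate with a 20% centrist bloc) is my assumption, chosen to show the classic "center squeeze" failure:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: list of (count, ranking) pairs; returns the IRV winner."""
    remaining = {c for _, ranking in ballots for c in ranking}
    while True:
        # Tally first choices among candidates still in the race.
        tally = Counter()
        for count, ranking in ballots:
            for candidate in ranking:
                if candidate in remaining:
                    tally[candidate] += count
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return leader
        # No majority: eliminate the candidate with the fewest first choices.
        remaining.discard(min(tally, key=tally.get))

# Hypothetical 40/20/40 electorate: B is everyone's acceptable second
# choice, but holds only 20% of first choices.
ballots = [
    (40, ["A", "B", "C"]),
    (20, ["B", "A", "C"]),
    (40, ["C", "B", "A"]),
]
print(instant_runoff(ballots))  # -> "A"; consensus candidate B is eliminated first
```

Under this assumed split, IRV eliminates the broadly acceptable middle candidate in round one, which is exactly the sense in which a simple instant-runoff scheme does not necessarily help much with polarization.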

Anon User

I am not sure about the median researcher. Many fields have a few "big names" that everybody knows and whose opinions have disproportionate weight.

Answer by Anon User
  • We do not know how to create an AI that would not regularly hallucinate. The Values AI hallucinating would be a bad thing.
  • In fact, training an AI to follow human values more closely seems to just cause it to say what humans want to hear, while being objectively incorrect more often.
  • We do not know how to create an AI that reliably follows the programmed values outside of its training set. Your 2nd AI going off the rails outside of the training set would be bad.
  • Also, human values, at least the ones we know how to consciously formulate, are pretty fragile - they are things that we want weakly/softly optimized, but that would actually be very bad if a superhuman AI hard-optimized them. We do not know how to capture human values in a way that would not go terribly wrong when the optimization is cranked to the max, and your Values AI is likely to not help enough, as we would not know what missing inputs we are failing to provide it (because they are aspects of our values that would only become important in some future circumstances we cannot even imagine today).
  • Finally, we wouldn't get a second try - any bugs in your AIs, particularly the 2nd one, are very likely to be fatal. We do not know how to create your 2nd AI in such a way that the very first time we turn it on, all the bugs have already been found and fixed.

Do you care about what kind of peace it is, or just that there is some sort of peace? If the latter, I might agree with you that Trump is more likely to quickly get us there. For the former, Trump is a horrible choice. One of the easiest ways for a US President to force a peace agreement in Ukraine is probably to privately threaten the Ukrainians with withholding all support unless they quickly agree to Russian demands. IMHO, Trump is very likely to do something like that. The huge downside is that while this creates a temporary peace, it would encourage Russia to go for it again with other neighbors, and to continue other destabilizing behaviors across the globe (in collaboration with China, Iran, North Korea, etc.). It also increases the chances of China going after Taiwan.

Ability to predict how the outcome depends on the inputs + ability to compute the inverse of the prediction formula + ability to select certain inputs => ability to determine the output (within the limits of what influencing the inputs can accomplish). The rest is just an ontological difference about what language to use to describe this mechanism. I know that if I place a kettle on a gas stove and turn on the flame, I will get boiling water, and we colloquially describe this as boiling the water. I do not know all the intricacies of the processes inside the water, and I am not directly controlling the individual heat-exchange subprocesses inside the kettle, but it would be silly to argue that I am not controlling the outcome of the water getting boiled.
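
A minimal sketch of the same chain in code (the linear forward model is invented purely for illustration; any invertible prediction formula works the same way):

```python
def predict(x):
    """Forward model: how the outcome depends on the input we can select."""
    return 3.0 * x + 2.0            # outcome = f(input)

def invert(target):
    """Inverse of the prediction formula: the input that yields a target outcome."""
    return (target - 2.0) / 3.0     # input = f^-1(outcome)

# Prediction + inversion + the ability to set the input => control of the
# output, even though we never touch the internal subprocesses of f.
desired = 20.0
chosen_input = invert(desired)
assert abs(predict(chosen_input) - desired) < 1e-9
print(f"setting the input to {chosen_input} yields the outcome {predict(chosen_input)}")
```

Just like the kettle: we control the outcome (boiled water) by selecting an input (flame on), without micromanaging the heat-exchange subprocesses.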

Perhaps you are somewhat missing the point of what I am saying here? The issue is not the scale of the side effect of a computation; it's the fact that the side effect exists at all, so any accurate mathematical abstraction of an actual real-world ASI must be prepared to deal with solving a self-referential equation.
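
A toy illustration of what "solving a self-referential equation" means here (the specific dynamics function is made up for the example): once the computation's output is itself a side effect on the world being modeled, a consistent prediction must satisfy w = f(w) rather than being a one-shot forward evaluation.

```python
# Toy self-referential model: the world's response f depends on the
# predictor's published output w, so a consistent prediction must be a
# fixed point of f, i.e. a solution of w = f(w).

def f(w):
    # Invented dynamics; the 0.5 factor makes this toy map a contraction,
    # so naive iteration converges.
    return 0.5 * w + 1.0

w = 0.0
for _ in range(100):
    w = f(w)           # iterate toward the fixed point

assert abs(w - f(w)) < 1e-9
print(w)               # -> ~2.0, the solution of w = f(w)
```

The contraction assumption is just what makes the iteration converge in this toy; the point of the comment is that the real system forces this fixed-point structure on the abstraction whether or not it is so well-behaved.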
