The actual peace deal will be something for Ukraine to agree to. It is not up to Trump to dictate the terms. All Trump needs to do is stop financing the war, and we will have peace.
Having said that, if it somehow proves possible for Trump to pressure Ukraine into agreeing to become a US colony, then my support for Trump was a mistake. The war would be preferable to that peace.
Good post! We will soon have very powerful quantum computers that could probably simulate what would happen if mirror bacteria were confronted with the human immune system. Maybe there is no risk at all, or maybe it is an existential risk to humanity. Finding out should be a prioritized task for our first powerful quantum computer.
Because he says so.
I’m not allowed to vote in the election but I hope Trump wins because I think he will negotiate a peace in Ukraine. If Harris wins I think the war will drag on for another couple of years at worst.
I have no problem getting pushback.
I guess it could be a great tool to help people quickly learn to converse in a foreign language.
The "eternal recurrence" is, surprisingly, the most attractive picture of the "afterlife". The alternatives, annihilation of the self and eternal life in heaven, are both unattractive, for different reasons. Add to this that Nietzsche is right to say that the eternal recurrence is a view of the world that seems compatible with a scientific cosmology.
I remember I came up with a similar thought experiment to explain the Categorical Imperative.
Assume there is only one Self-Driving Car on the market, what principle would you want it to follow?
The first principle we think of is: "Always do what the driver would want you to do".
This would certainly be the principle we would want if our SDC were the only car on the road. But there are other SDCs, and so in a way we are choosing a principle for our own car which is at the same time a "universal law", valid for every car on the road.
With this in mind, it is easy to show that the principle we could rationally want is: "Always act on that principle which the driver can rationally will to become a universal law".
Coincidentally, this is also Kant's Categorical Imperative.
Yes, but the claim that "generative AI can potentially replace millions of jobs" is not contradictory to the statement that it eventually "may turn out to be a dud".
I initially reacted in the same way as you to the exact same passage but came to the conclusion that it was not illogical. Maybe I’m wrong but I don’t think so.
I think the author meant that there was a perception that it could replace millions of jobs, and so an incentive for businesses to press forward with their implementation plans, but that this would eventually backfire if the hallucination problem is insoluble.
We don't want an ASI to be "democratic". We want it to be "moral". Many people in the West conflate the two words, thinking that democratic and moral are the same thing, but they are not. Democracy is a particular system for organizing a state. Morality is how people, and in the future an ASI, behave towards one another.
There are no obvious reasons why an autocratic state would care more or less about a future ASI being immoral, but an argument can be made that autocratic states will be more cautious and put more restrictions on the development of an ASI, because autocrats usually fear any kind of opposition, and an ASI could be a powerful adversary itself or in the hands of powerful competitors.