I know only that I know nothing. As I remember, it's a memory from a very specific local court with a strong agricultural connection. Not every court could afford an expert for a specific case.
LLM internet research shows that it's possible to find such courts in Western countries, but we can't be sure these aren't LLM hallucinations about their existence. Anyway, it's clear that both humans and LLMs are subject to 'instrumental convergence', which doesn't let them think deeper, listen to each other, and so on:
Courts that deal with farming-related cases often require judges to bec...
'Always Look on the Bright Side of Life'
Life is like playing Diablo on hardcore mode: you can read all the guides, create the perfect build, and find ideal companions, only to die because the internet disconnects.
Playing on hardcore is exciting: each game tells the story of how these characters will meet their end.
'Always Look on the Bright Side of Death' - Monty Python
Do you know of any interesting camp in Europe about HPMOR or something similar? My 11-year-old daughter asked where her letter to Hogwarts is. She started reading the book and asked why nobody has made a film of this great fanfic.
Do you have any ideas for good children's educational camps in Europe? Or elsewhere?
1.1. The adoption of such laws is a long road.
Usually, it is a centuries-long path: Court decisions -> Actual enforcement of decisions -> Substantive law -> Procedures -> Codes -> Declarations, then Conventions -> Codes.
Humanity does not have this much time; it is worth focusing on real results that people can actually see. It might be necessary to build some simulations to understand which behavior is irresponsible.
Where is the line between creating a concept of what is socially dangerous and what ...
Good day!
I fully share the views expressed in your article. Indeed, the ideal solution would be to delete many of the existing materials and to reformat the remaining ones into a format understandable to every novice programmer, transhumanist, or even an average person.
As a poker player and a lawyer assisting consumers who have suffered from the consequences of artificial intelligence, as well as someone interested in cryptocurrencies and existential risks, I first invested in Eliezer Yudkowsky's ideas many years ago. At that time, I saw how generative-pre...
Version 1 (adopted):
Thank you, shminux, for bringing up this important topic, and to all the other members of this forum for their contributions.
I hope that our discussions here will help raise awareness about the potential risks of AI and prevent any negative outcomes. It's crucial to recognize that the human brain's positivity bias may not always serve us well when it comes to handling powerful AI technologies.
Based on your comments, it seems like some AI projects could be perceived as potentially dangerous, similar to how snakes or spiders are instincti...
I guess we need to maximize each of the different possible good outcomes. For example, to raise the probability that many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, humans could prohibit all autonomous AGI use.
Especially those that use uncontrolled clusters of graphics processors in autocracies without international AI-safety supervisors like Eliezer Yudkowsky, Nick Bostrom, or their crew.
This, plus restricting systems to weak APIs and the need to use human operators, would make natural borders for AI scalability, so AGI find ...
We have many objective values that result from cultural history, such as mythology, concepts, and other "legacy" things built upon them. When we say these values are objective, we mean that we receive them as they are, and we cannot change them much. In general, they are a kind of infinite mythology with many rules that "help" people do something right "like in the past" and achieve their goals "after all."
We also have some objectively programmed values: our biological nature, our genes that work for reproduction.
When something really scary happens, ...
I have read this letter with pleasure. Pacifism in wartime is an extremely difficult position.
Survival rationality is extremely important for humanity!
It seems to me that the problem is revealed very clearly through compounding, as with compound interest.
If in a particular year the overall probability of a catastrophe (man-made, biological, space, etc.) is 2%, then the probability of human survival over the next 100 years is 0.98^100 ≈ 0.133.
That is about 13.3%, and this figure depresses me.
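To make the compounding concrete, here is a minimal sketch in Python; the 2% annual risk and the 100-year horizon are just the illustrative numbers from above, not estimates:

```python
# Minimal sketch: survival probability compounds like interest.
# Assumes a constant, independent 2% annual catastrophe risk.
annual_risk = 0.02
years = 100

p_survival = (1 - annual_risk) ** years
print(f"P(surviving {years} years) = {p_survival:.3f}")  # ~0.133, about 13.3%
```

The compounding also cuts the other way: halving the annual risk to 1% gives 0.99^100 ≈ 0.366, nearly tripling the century-long survival probability.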
The ideas of unity and security are the only ones that are inside the discourse of red sys...
I signed it.
Pacifism is really not in trend. Both sides of the conflict are convinced that they are absolutely right: paranoid Russia, and a defensive Ukraine.
Public pacifism is in the minority. Almost everyone has taken one side, or is silent and seeks safety.
For an individual Ukrainian or Russian, it might be dangerous to sign this.
Like in the ancient Roman Empire: people are either for the Blue chariots or for the Green ones. No one is interested in the opinion that death races are nonsense.
Anyway. It's irrational, but I signed it.
Isn't it?
Key Problem Areas in AI Safety:
- Orthogonality: The orthogonality thesis posits that goals and intelligence are not necessarily related; a system with any level of intelligence can pursue arbitrary goals, which may be unsafe for humans. This is why it's crucial to carefully program an AI's goals to align with ethical and safety standards. Ignoring this problem may lead to AI systems acting harmfully toward
...