avturchin

As was said above, first you need to pick up a stone from the ground, or pretend to do so if there is no stone around. Even if you already have a stone, make the gesture of picking one up from the ground.

Another important point is to do it quickly and aggressively, with a loud cry. You can also pull back your arm with the stone, as if winding up to throw.

The whole trick is that dogs are so afraid of stones that they will run away before you actually throw it or see where it falls.

Good point about the impossibility of sex in lucid dreams. But masturbation is actually a form of daydreaming.

I wrote Active Imagination as an Alternative to Lucid Dreaming: Theory and Experimental Results, which is basically about controlled daydreaming.

Yes, daydreaming is underestimated.

Can you make a Trolley meme for Death in Damascus and the Doomsday Argument?

Can you prove that any decision theory problem can be expressed as some Trolley problem?

Yes, that is an important point.

There are some game theory considerations here:

If I throw the stone, all dogs will know that I don't have it anymore, so it would be safe for them to continue the attack (whether I hit one or miss). Therefore, it's better for me to threaten and keep two stones rather than actually throw one.

If dogs really want to attack me, they might prefer that I throw the stone so they can attack afterward.
However, I think each dog fails to consider that I'm most likely to throw the stone at another dog. Each individual dog has a small chance of being injured by the stone, and they could succeed if they continue the attack. Real hunters like wolves might understand this.
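A minimal sketch of this expected-value point, in Python. The uniform-targeting assumption and all numbers here are mine, purely for illustration:

```python
# Toy model of per-dog risk: assume the thrower picks one of the n dogs
# uniformly at random and hits the chosen dog with probability p_hit.
# (Both assumptions are illustrative, not claims from the comment.)

def per_dog_hit_risk(n_dogs: int, p_hit: float) -> float:
    """Probability that any one specific dog is struck by the throw."""
    return p_hit / n_dogs

# Even with an optimistic 50% hit rate, each dog in a pack of five
# faces only a 10% individual risk, small enough that a coordinated
# pack of "real hunters" could rationally press the attack.
for n in (1, 3, 5, 10):
    print(f"{n} dogs: individual risk = {per_dog_hit_risk(n, 0.5):.0%}")
```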

Lifehack: If you're attacked by a group of stray dogs, pretend to throw a stone at them. Each dog will think you're throwing the stone at it and will run away. This has worked for me twice.

In general, I agree with you: we can't prove with certainty that AI will kill everyone. We can only establish a significant probability (which we also can't measure precisely).

My point is that some AI catastrophe scenarios don't require AI motivation. For example:
- A human could use narrow AI to develop a biological virus
- An Earth-scale singleton AI could suffer from a catastrophic error
- An AI arms race could lead to a world war

ABBYY created FineReader, which was one of the best OCR systems.

Collapse of a mega-project to create AI based on linguistics

ABBYY spent 100 million USD over 30 years to create a model of language, employing hundreds of linguists. It failed to compete with transformers. This month the project was closed. More (in Russian) here: https://sysblok.ru/blog/gorkij-urok-abbyy-kak-lingvisty-proigrali-poslednjuju-bitvu-za-nlp/
