Motley Fool

Interested in logic, history, and sociology for the time being.

Comments

I am aware of the 14-year difference between the time of this essay's writing and that of my comment.

When one reads Siddhartha, one finds that the commands to remain indifferent to the pain one experiences and to enjoy the pleasure bestowed upon one would be difficult to adhere to in the presence of extreme pain (anyone who learns about Buddhist tenets eventually asks what to do when they lose a loved one or touch a gympie-gympie). Some ideals of the Eightfold Path would be easier to adhere to where pain is present but not unbearable. Buddhist tenets also instruct one to discipline oneself against accepting pleasure, and the problem of the absence of pleasure creating a new threshold for what counts as 'suffering' would be solved if humanity had the self-discipline to maintain the Eightfold Path (or some other idea like it) instead of charging into a game of hedonistic Mario Kart.

However, I do not expect anyone to have the self-control to refuse more pleasure if given the choice, even understanding that this would raise their 'pain threshold' or make them require exponentially more resources per unit of time, as Yudkowsky has discussed. If this were managed by a superintelligence, it could limit the escalation so that one could be 'satisfied' without needing to slurp up whole galaxies for human pleasure. The Singularity, in this scenario, would be a more conditioned, Buddha-like lifestyle of indifference and gratitude.

I disagree with Vedic theology as much as I do Abrahamic and mythical theology.

I beg to be corrected if I am wrong.

"In the context of the Dead Internet Theory, this sort of technology could easily lead to the entire internet becoming a giant hallucination. I don't expect this one bit, if only because it genuinely is not in anyone's interest to do so: internet corporations rely heavily on ad revenue, and a madly hallucinating internet is the death of this if there's too much of an overhang. Surveillance groups would like to not be bogged down with millions or billions of ultra-realistic bots obscuring their targets. An aligned AGI would likely rather the humans under its jurisdiction be given the most truthful information possible. The only people who benefit from a madly hallucinating internet are sadists and totalitarians who wish only to confuse. And admittedly, there are plenty of sadists and totalitarians in control; their desire for power, however, currently conflicts strongly with the profit motive driven by the need to actually sell products. You can't sell products to a chatbot."


Does this paragraph imply that a theoretical media system in which an AGI produces the content would be manipulated by the system's manager to expose viewers only to content that appeals to the owner's interests? Or would an AGI capable of producing content on such a system end up like Tay, implying that "Hitler could do better [than Bush]" because users taught it to? Or would it learn to create content that appeals to individual groups of people with similar ideals, as culture critics accuse the current, "non-dead" internet of doing?

This is one of my first times commenting on LessWrong.com, so I ask again that you excuse any ignorance I may have displayed.

Is the fundamental attribution error comparable to the bias that arises from the hasty generalization fallacy, or does it apply specifically to the categorization of people?