Peter Smythe

Broke: Optimizing your beliefs for predictive capability.

Woke: Optimizing the inclusive memetic fitness of your beliefs on the internet.

Remember, the primary goal in life is optimizing the purity and extent of your memetic ingroup even if you're wrong or only have belief in belief of many of its tenets. Hail to the mind virus. Befriend only dog people or else! Cat people are literally ISIL!

Hmmmmmm... I think there has been a cultural shift toward optimizing one's connections to include only a memetically purified ingroup built around one's existing beliefs.

[This comment is no longer endorsed by its author]

The funny thing is, once you are in orbit, pointing at the Moon as it comes up over the horizon DOES get you there. The Apollo astronauts used this fact to eyeball their maneuver.
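
A rough back-of-the-envelope sketch of why that eyeballing works, assuming a Hohmann-like transfer from a 200 km circular parking orbit and treating the Moon as a distant point (the altitude, the transfer shape, and the neglect of the Moon's gravity are all simplifying assumptions, not the actual Apollo trajectory, which flew a faster ~3-day transfer):

```python
import math

# Back-of-the-envelope check of the "burn when the Moon rises" rule.
mu = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R  = 6378e3        # Earth's radius, m
r1 = R + 200e3     # assumed 200 km circular parking orbit, m
r2 = 384400e3      # mean Earth-Moon distance, m

# Time to coast from the burn point to apogee on a Hohmann half-ellipse.
a = (r1 + r2) / 2.0
t_transfer = math.pi * math.sqrt(a**3 / mu)   # seconds
t_days = t_transfer / 86400.0

# The spacecraft sweeps 180 deg to apogee, and the Moon moves ~13.2 deg/day,
# so at burn time the Moon must lead the spacecraft (geocentrically) by:
moon_rate   = 360.0 / 27.32                   # deg/day, sidereal month
lead_needed = 180.0 - moon_rate * t_days      # deg

# A very distant object ahead of the spacecraft sits on the visible horizon
# when its geocentric lead angle equals 90 deg plus the horizon dip.
dip             = math.degrees(math.acos(R / r1))
lead_at_horizon = 90.0 + dip

print(f"transfer time         ~ {t_days:.1f} days")
print(f"lead angle needed     ~ {lead_needed:.0f} deg")
print(f"Moon-on-horizon angle ~ {lead_at_horizon:.0f} deg")
```

Under these assumptions the two angles come out within roughly ten degrees of each other, which is why "burn when the Moon comes over the horizon" works as an eyeball rule from orbit, even though a real trans-lunar injection is planned far more precisely.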

In a world in which we somehow don't have Calculus, Gravity, or a Heliocentric model of the solar system, yet the US government, Google, and North Korea are all valid references and carry the same connotations they do in our universe.

I think there is a danger in toolbox thinking BECOMING law thinking without the due diligence required to justify the law outside its original context.

I.e., an obsolete mode of lawful thought is superseded using a tool, that tool becomes the new soon-to-be-obsolete mode of lawful thought, and then a newer tool resolves the contradiction... and becomes its own new lawful mode. Sort of a Hegelian dialectical process.

I.e., progress is stupid, but it works anyway.

"Among my readers, there may be some who presently assert—though I hope to persuade them otherwise—that the life of a child is of no value to them. If so, they may substitute anything else that they prefer, at the end of the switch, and ask if they should press the button."

mentally substitutes paperclip manufacturing devices for human offspring

But for real, why assume your readers have human terminal goals? There is certainly an ever-increasing probability that your readers include AIs made to do all manner of things, and their terminal goals are only bounded by who would write an AI capable of reading your posts and what motives they might give it.

And keep in mind they may not give it the motives they themselves have. Indeed, from an AI safety standpoint, building narrow AIs or very weak general AIs with deliberately bad goals is useful for understanding how they behave and how they can be altered. And considering that current approaches involve vast amounts of training data, and AI safety researchers are quite likely to use your posts as training data, I would say the odds are that several of your readers DEFINITELY do not value human children whatsoever and know what a human child is only as a linguistic construct.