Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes, Colons, and Ellipses, Littérateurs Go Wild"
My stupid fanfic chapter was banned without explanation, so I reposted it. It was somehow at +7 when it was deleted, and I think silently deleting upvoted posts is a disservice to LessWrong. I requested that a justification be given in the comments if it were to be deleted again, so LessWrong readers could consider whether that justification aligns with what they want from LessWrong. I would also like to make clear that this fanfic is primarily a medium for explaining some ideas that people on LessWrong often ask me about; that it is also a lighthearted critique of Yudkowskyanism is secondary, and if need be I will change the premise so that the medium doesn't drown out the message. But really, I wouldn't have thought a lighthearted parody of a lighthearted parody would cause such offense.
The original post has been unbanned and can be found here, so I've edited this post to just be about the banning.
Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes, Colons, and Ellipses, Littérateurs Go Wild
"If you give George Lukács any taste at all, immediately become the Deathstar." — Old Klingon Proverb
There was no nice way to put it: Harry James Potter-Yudkowsky was half Potter, half Yudkowsky. Harry just didn’t fit in. It wasn't that he lacked humanity. It was just that no one else knew (P)Many_Worlds, (P)singularity, or (P)their_special_insight_into_the_true_beautiful_Bayesian_fractally_recursive_nature_of_reality. Other people were roles—and how shall an actor, an agent, relate to those who are merely what they are, merely their roles? Merely their roles, without pretext or irony? How shall the PC fuck with the NPCs? Harry James Potter-Yudkowsky oft asked himself this question, but his 11-year-old mind lacked the g to grasp the answer. For if you are to draw any moral from this tale, godforsaken readers, the moral you must draw is this: P!=NP.
One night Harry Potter-Yudkowsky was outside, pretending to be Keats, staring at the stars and the incomprehensibly vast distances between them, pondering his own infinite significance in the face of such an overwhelming sea of stupidity, when an owl dropped a letter directly on his head, winking slyly. “You’re a wizard,” said the letter, while the owl watched, increasingly gloatingly, “and we strongly suggest you attend our school, which goes by the name Hogwarts. 'Because we’re sexy and you know it.’”
Harry pondered this for five seconds. “Curse the stars!, literally curse them!, Abra Kadabra!, for I must admit what I always knew in my heart to be true,” lamented Harry. “This is fanfic.”
“Meh.”
And so, as they'd been furiously engaged in for months, the divers models of Harry Potter-Yudkowsky gathered dust. In layman’s terms...
Harry didn’t update at all.
Harry: 1
Author: 0
(To be fair, the author was drunk.)
Next chapter: "Analyzing the Fuck out of an Owl"
...
Criticism appreciated.
AALWA: Ask any LessWronger anything
If you want people to ask you stuff, reply to this post with a comment to that effect.
More accurately, ask any participating LessWronger anything that is in the category of questions they indicate they would answer.
If you want to talk about this post, you can reply to my comment below that says "Discussion of this post goes here.", or not.
Morality open thread
I figure morality as a topic is popular enough and important enough and related-to-rationality enough to deserve its own thread.
Questions, comments, rants, links, whatever are all welcome. If you're like me you've probably been aching to share your ten paragraph take on meta-ethics or whatever for about three uncountable eons now. Here's your chance.
I recommend reading Wikipedia's article on meta-ethics before jumping into the fray, if only to get familiar with the standard terminology. The standard terminology is often abused. This makes some people sad. Please don't make those people sad.
Seeking a "Seeking Whence 'Seek Whence'" Sequence
One of the sharpest and most important tools in the LessWrong cognitive toolkit is the idea of going meta, also called seeking whence or jumping out of the system, all terms crafted by Douglas Hofstadter. Though the idea was popularized by Hofstadter and has been repeatedly emphasized by Eliezer in posts like "Lost Purposes" and "Taboo Your Words", Wikipedia indicates that similar ideas have been around in philosophy since at least Anaximander, in the form of the Principle of Sufficient Reason (PSR). I think it'd be only appropriate to seek whence this idea of seeking whence, from a history-of-ideas perspective. I'd also like analyses of where the theme shows up and why it's appealing, since again it seems pretty important to LessWrong epistemology. Topics I'd like to see discussed:
- How conservation of probability in Bayesian probability theory and conservation of phase space volume in statistical mechanics are related—a summary of Eliezer's posts on the topic would be great (the equations after this list pin down what I mean).
- How conservation of probability &c. are related to other physical/mathematical laws, e.g. Noether's theorem and quantum mechanics' continuity equation.
- The history of the idea of conservation laws; whether the discovery of conservation laws was fueled by PSR-like philosophical concerns (e.g., Leibniz?), by lower-level intuitive concerns, or by other means.
- How conservation of probability &c. are related to the idea of seeking whence [pdf] (e.g., "follow the improbability").
- How the PSR relates to conservation of probability &c. and to seeking whence.
- How going meta and seeking whence are related/equivalent.
- Which philosophers have used something like the PSR (e.g. Spinoza, Leibniz) and which haven't; those who haven't, what their reasons were for not using it.
- What kinds of conclusions are typically reached via the PSR or have historically been justified by it, and whether those conclusions fit with LW's standard conclusions. Where they conflict, where does the PSR not apply, or not apply as strongly; alternatively, why might standard LW conclusions be mistaken.
- Whether Schopenhauer's four-fold division of the PSR makes sense. (Schopenhauer's a relatively LW-friendly continentalesque philosopher.) A summary of any criticisms of his four-fold division.
- What makes the PSR, going meta, "JOOTS"-ing and seeking whence appealing, from a metaphysical, epistemological, pragmatic, and psychological perspective. What sorts of environments or problem sets select for it. (The Baldwin effect and similar phenomena might be relevant.)
- What going meta / seeking whence looks like at different levels of organization; how one jumps out of systems at varying levels.
- Eliezer's rule of derivative validity from CFAI and how it relates to the PSR; an analysis of how the (moral, or perhaps UDT-like decision-policy-centric) PSR might be relevant to Friendliness philosophy, e.g. as compared with CEV-like proposals [pdf].
- How latent Platonic nodes in TDT [pdf] (p. 78) relate to the PSR.
- A generalization of CFAI's causal validity semantics to timeless validity semantics in the spirit of the generalization of CDT to TDT, or perhaps even further generalizations of causal validity semantics in the spirit of Updateless Decision Theory or eXceptionless Decision Theory. (ETA: Whoops, Eliezer already discussed the acausal level, but seems to have only mentioned Platonic forms as an afterthought. Maybe ignore this bullet point.)
- How the PSR and the rule of derivative validity relate to Robin Hanson's idea of pre-rationality and Wei Dai's questions about extending pre-rationality to include past selves' utility functions—whether this elucidates the relation between XDT and UDT.
- Where Hofstadter picked up the idea of "going meta" and what led him to think it was important. What led Eliezer to rely on it so much and emphasize the importance of avoiding lost purposes.
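To pin down the two "conservation" statements in the first bullets, here is a minimal pair of equations. The notation is standard textbook notation, not taken from the linked posts:

```latex
% Conservation of expected evidence (Bayesian probability):
% the prior is the expectation of the posterior, so anticipated
% updates in each direction must balance out on average.
P(H) = P(H \mid E)\,P(E) + P(H \mid \lnot E)\,P(\lnot E)

% Continuity equation (quantum mechanics): the probability
% density \rho = |\psi|^2 is conserved locally, flowing with
% the probability current \mathbf{j}.
\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0,
\qquad \mathbf{j} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\psi^{*}\,\nabla\psi\right)
```

Noether's theorem belongs to the same family: each conservation law falls out of a continuous symmetry, which is one reason the "seek whence" instinct keeps paying rent in physics.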
This post is for sacrificing my credibility!
Thank you for your cooperation and understanding. Don't worry, there won't be future posts like this, so you don't have to delete my LessWrong account, and anyway I could make another, and another.
But since you've dared to read this far:
Credibility. Should you maximize it, or minimize it? Have I made an error?
Discuss.
Don't be shallow, don't just consider the obvious points. Consider that I've thought about this for many, many hours, and that you don't have any privileged information. Whence our disagreement, if one exists?
[Link] A superintelligent solution to the Fermi paradox
Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.
The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and describe it more completely in its own discussion post once more posts are up, hopefully including a few from people besides me, and once the archive gives a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong, I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take an interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the URL.
I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong, that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong should probably be put as comments on this discussion post. Thanks all!
Scenario analysis: semi-general AIs
Are there any essays anywhere that go in depth about scenarios where AIs become somewhat recursive/general in that they can write functioning code to solve diverse problems, but the AI reflection problem remains unsolved and thus limits the depth of recursion attainable by the AIs? Let's provisionally call such general but reflection-limited AIs semi-general AIs, or SGAIs. SGAIs might be of roughly smart-animal-level intelligence, e.g. have rudimentary communication/negotiation abilities and some level of ability to formulate narrowish plans of the sort that don't leave them susceptible to Pascalian self-destruction or wireheading or the like.
At first blush, this scenario strikes me as Bad; AIs could take over all computers connected to the internet, totally messing stuff up as their goals/subgoals mutate and adapt to circumvent wireheading selection pressures, without being able to reach general intelligence. AIs might or might not cooperate with humans in such a scenario. I imagine any detailed existing literature on this subject would focus on computer security and intelligent computer "viruses"; does such literature exist, anywhere?
I have various questions about this scenario, including:
- How quickly should one expect temetic selective sweeps to reach ~99% fixation? (A toy calculation appears after this list.)
- To what extent should SGAIs be expected to cooperate with humans in such a scenario? Would SGAIs be able to make plans that involve exchange of currency, even if they don't understand what currency is or how exactly it works? What do humans have to offer SGAIs?
- How confident can we be that SGAIs will or won't have enough oomph to FOOM once they saturate and optimize/corrupt all existing computing hardware?
- Assuming such a scenario doesn't immediately lead to a FOOM scenario, how bad is it? To what extent is its badness contingent on the capability/willingness of SGAIs to play nice with humans?
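To make the fixation question concrete, here is a back-of-the-envelope sketch. It assumes a deterministic logistic sweep, and the numbers (the per-generation selective advantage and the one-in-a-million starting frequency) are invented for illustration, not estimates:

```python
import math

def generations_to_frequency(p0, p_target, s):
    """Deterministic logistic-sweep approximation: generations for a
    variant at initial frequency p0 with selective advantage s (per
    generation, Malthusian parameter) to reach frequency p_target.
    Derived from p(t) = p0*e^(s*t) / (1 - p0 + p0*e^(s*t))."""
    return math.log(p_target * (1 - p0) / (p0 * (1 - p_target))) / s

# Hypothetical numbers: a variant starting on one machine in a million,
# roughly doubling its relative share each generation (s = ln 2).
print(generations_to_frequency(1e-6, 0.99, math.log(2)))  # ~26.6
```

Under those toy numbers a sweep hits 99% fixation in under thirty generations; the real uncertainty is what a "generation" is for self-propagating code, which could plausibly be minutes rather than years.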
Posts I repent of
- "Taking Ideas Seriously": Stylistically contemptible, skimpy on any useful details, contributes to norm of pressuring people into double binds that ultimately do more harm than good. I would prefer it if no one linked to or promoted "Taking Ideas Seriously"; superior alternatives include Anna Salamon's "Compartmentalization in epistemic and instrumental rationality", though I don't necessarily endorse that post either.
- "Virtue Ethics for Consequentialists": Stylistically contemptible, written in ignorance of much of the relevant philosophy and psychology literature, contributes to norm of rewarding people who confidently proselytize on subjects of which they do not possess a deep understanding. Thankfully nobody links to this post.
[post redacted]
[Post redacted 'cuz I unfairly and carelessly misrepresented someone's views (Eliezer's). The message of this post was: disbelief that aliens visit Earth in spaceships is a bad reason not to look into ufology. My apologies for this ugly post.]