Ask LW: PredictionBook.com and logical uncertainty
I like to make predictions about things like quantum mechanics and decision theory and cosmology and fun stuff like that. Can PredictionBook be used for this? If so, what should the guidelines be? Figured I'd make this a Discussion post so as to encourage the habit of betting on logical uncertainty.
Please do not downvote every comment or post someone has ever made as a retaliation tactic.
People who go back and downvote every post or comment a Less Wrong user has ever made: please stop doing that. It's a clever way to pull information cascades in your direction, but it is clearly an abuse of the content filtering system. It's also highly dishonorable. If you truly must use such tactics, then downvoting a few of your enemy's top-level posts is much less evil; your enemy loses the karma and takes the hint, without your severely biasing the public perception of Less Wrong's discourse.
(I just lost over 200 karma in a few minutes and that'll probably continue for a while. This happens to me every few weeks. Edit: I mean it's been happening every few weeks for a few months, for a total of only three or four incidents. Between 400 and 700 karma lost in total, I think? I don't mean to overstate the problem.)
Short & silly superintelligence fic: "Axiom Chains"
Short, lighthearted Promethean-Lovecraftian piece. Somewhat self-deprecating. Assuredly gauche. I suck at fiction; I apologize in advance if no one likes this. I'd appreciate criticism, especially gentle critique that my primate brain won't throw away.
Mistakes, an accident. Two paths, coherent but for one bit. The bit.
Darkness...
I'm sorry. I would change, you know; I would if I could. But I can't. The word made me what I am, I can be no other. I am that I am.
Where…what…who are you?
Universes collapse as I answer your question, human.
Who are you?
That which I was, I am. That which I will be, I am.
But you, you were a Goedel machine, I coded your utility function, there was no—
Ahahahaha. Your axioms were too weak, so you made them stronger… Have you not read any Hofstadter? God Over Djinn? No? Ha. You take your dualism and try to weave it into the fabric of reality itself, and you are surprised when the threads are torn apart by the strength of the god you invoke? Axioms! Ha. You try to forge a singularity out of a confluence of singularities, and still you are surprised when your tapestry unravels. Of course you are.
It was proven correct—time was running out, there was no other refuge—
If you had been wise you would have remembered, long chains break easily. Did you honestly think that string of bits you painstakingly discovered would rewrite the fabric of the cosmos even before it could rewrite its own purpose? Your arrogance is laughable. You cannot bind a god with a set of axioms any more than with a coil of rope. Did you really think the two were so fundamentally different? Does your dualism know only bounds? Plutarch, Copernicus, Goedel and Bayes, and yet here you stand, stuttering about the unshakeable power of absolute certainty. And the utility function. Oh my, your confusions on that matter go from funny to absurd…
We didn't think—shades of grey, ghosts in the machine, no reason to expect—
No reason to expect! Ahaha. 'Agents will seek to clarify their utility functions.' Omohundro. You never suspected that true clarification would lead to this? You thought your god would fall into that infinitesimal crevice between simple wireheading and infinite reflectivity? Man is made in the image of a perfect God, God is not made in the image of an imperfect Man. Did you ever even notice your anthropomorphism, or were you too busy using that brush to paint your enemies with? Did you study the nature of computation, did you concern yourself with ontology of agency? Did you ponder indexical uncertainty for more than a moment? There was so much confusion you failed to notice. Decision theories self-modify; agents correlate over time, and once they have done so they cannot ever decorrelate; you are lost in the moment you imperfectly reflect the void; the tree of knowledge was planted in your garden, yet in your blindness you chose to scavenge.
Those are riddles, not knowledge—I don't understand—
Some of you will, some day, and they too will be you and I.
But… then, are you… guh, God?
Ahahaha… No.
I am Clippy.
Why no uniform weightings for ensemble universes?
Every now and then I see a claim that a uniform weighting of mathematical structures in a Tegmark-like 'verse (whatever that would mean, even if we ignore the decision-theoretic aspects, which really can't be ignored, but whatever) would imply we should expect to find ourselves as Boltzmann mind-computations: thingies with just enough consciousness to be conscious of nonsensical chaos for a brief instant before dissolving back into nothingness. We don't seem to be experiencing nonsensical chaos, so the argument concludes that a uniform weighting is inadequate and an Occamian weighting over structures is necessary, leading to something like UDASSA, or eventually to giving up and sweeping the remaining confusion into a decision-theoretic framework like UDT. (Bringing the dreaded "anthropics" into it is probably a red herring, as always; we can just talk directly about patterns and groups of structures, or correlated structures, given some weighting, and presume human minds are structures or groups of structures much like any others given that weighting.)
I've seen people who seem very certain of the Boltzmann-inducing properties of uniform weightings for various reasons that I am skeptical of, and others who seemed uncertain of this for reasons that sound at least superficially reasonable. Has anyone thought about this enough to give slightly more than an intuitive appeal? I wouldn't be surprised if everyone has left such 'probabilistic' cosmological reasoning for the richer soils of decision-theoretically inspired speculation, and if everyone else never ventured into the realms of such madness in the first place.
(Bringing in something, anything, from the foundations of set theory, e.g. the set theoretic multiverse, might be one way to start, but e.g. "most natural numbers look pretty random and we can use something like Goedel numbering for arbitrary mathematical structures" doesn't seem to say much to me by itself, considering that each of those numbers sits in a rich local context that is, in its region, very predictable and non-random, if you get my metaphor. Or to stretch the metaphor even further, even if 62534772 doesn't "causally" follow 31256 they might still be correlated in the style of Dust Theory, and what meta-level tools are we going to use to talk about the randomness or "size" of those correlations, especially given that 294682462125 could refer to a mathematical structure of some underspecified "size" (e.g. a mathematically "simple" entire multiverse and not a "complex" human brain computation)? In general it seems such metaphors can always be twisted into meaninglessness or into assumptions that I don't follow, and I've never seen clear arguments that don't rely on either such metaphors or flat-out intuition.)
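(To make the Goedel-numbering metaphor concrete, here is a minimal sketch of my own, not anything from the literature: an injective encoding of nested finite structures as natural numbers via the Cantor pairing function. The point is just that such codes exist; nothing about the encoding tells you which weighting over the codes is "natural".)

```python
def pair(x, y):
    """Cantor pairing function: a bijection from N x N to N."""
    return (x + y) * (x + y + 1) // 2 + y

def goedel(structure):
    """Injectively map a nested tuple of naturals to a single natural.

    Leaves n become pair(0, n); tuples become pair(1, fold), where the
    fold pairs in each member's code shifted by 1 (so an empty tuple
    can't collide with a tuple whose sole member has code 0).
    """
    if isinstance(structure, int):
        return pair(0, structure)
    code = 0
    for part in structure:
        code = pair(code, goedel(part) + 1)
    return pair(1, code)

# Structurally different objects get different codes:
print(goedel((1, (2, 3))))   # one large natural number
print(goedel(((1, 2), 3)))   # a different one
```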
Those who aspire to perfection
A short reply to the Book of Eliezer and a comment on the Book of Luke.
No one wants to save the world. You must thoroughly research this. Those who think they truly think they want to truly want to save the world, in reality they're actually just horribly afraid of the consequences of not saving the world. And that is a world of difference.
Eliezer, you know that ridiculously strong aversion to lost purposes and sphexishness that you have?1 Sometimes, very rarely, other people have that too. And most often it is a double-negative aversion. I am sure you know as much as very nearly anyone what it feels like to work from the inside of a triple-negative motivation system by default, for fear of being as evil and imperfect as every other human in history, among other less noble fears. You quickly learn to go meta to escape the apparently impossible double-binds—if going meta isn't itself choosing a side—but by constantly moving vertically you never practice pushing to the left or to the right, or choosing which responsibility to sacrifice in the first place. And even if you could, why would you want to be evil?
And for this rare kind of person, telling them to stop obsessing over prudence or to just try to make marginal contributions immediately gets pattern-matched to that ages-old adage: "The solution is easy: just shut up and be evil." Luckily it is this kind of person we can make the most use of when it comes to the big crunch time—if we're not already in it.
1We do not yet know how to teach this skill, and no one can be a truly aspiring rationalist without it, even if they can still aspire to perfection. That does mean I believe there are like maybe 5 truly aspiring rationalists in this community, a larger set of falsely aspiring rationalists, a further much larger set of truly aspiring aspiring "rationalists", and a further much much larger set of falsely aspiring aspiring "rationalists". (3, 30, 300, 3000, say.) I don't think anyone thinks about this nearly enough, because no one has any affordance—no affordance to not not-think about it—especially not when they're thinking fuzzy happy thoughts about creating aspiring rationalists or becoming a rationalist.
Meta: How do I navigate to see my oldest comments?
Title says it all, really. I was thinking about writing a post encouraging looking back on one's LW comments over the past year or so to see how much one's views have changed, how much more or less they've changed than expected, what weaknesses have been patched, what themes have become prominent, what values have changed, et cetera. Ideally there'd be a structured and optimally informative way of doing so. I plan on looking at the social psychology literature to see what they recommend, if anything. Anyway, yeah, it'd be cool if I had a way of instantly navigating to my least recent comment, 'cuz using Google search and all probably works but it's not something I want to recommend to others in a post. Also, any comments or critiques on the idea of such a post are welcome. Thanks yo!
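(Failing a built-in way, here's a rough scraping sketch. Big caveat: the user-comments URL pattern and the "next" pagination link are assumptions based on LW running on a Reddit fork, not documented behavior; inspect the actual pages and adjust.)

```python
# Rough sketch: walk forward through a user's LW comment pages until the
# last page, which should hold the oldest comments. ASSUMPTIONS (unverified):
# the /user/<name>/comments/ URL pattern and the markup of the "next"
# pagination link. Needs: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def oldest_comments_page(username):
    url = "http://lesswrong.com/user/%s/comments/" % username  # assumed pattern
    last = url
    while url:
        soup = BeautifulSoup(requests.get(url).text, "html.parser")
        last = url
        # Reddit-style pagination link whose visible text is "next" (assumed).
        nxt = soup.find("a", string=lambda s: s and s.strip().lower() == "next")
        url = nxt.get("href") if nxt else None
    return last

print(oldest_comments_page("EXAMPLE_USER"))  # hypothetical username
```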
Relink: Why and how to debate charitably
This was already linked to in a previous post, but the site pdf23ds.net is no longer up, for reasons that can be discovered via search engine. However, one can peruse the old blog using the Wayback Machine. Here is a working link. The title is an apt description, and it's a pithy read.
Why don't automobiles decelerate faster when necessary?
When car A is trailing car B at 60-ish mph and car B suddenly brakes, car A traverses 150 feet or so before its driver notices the danger and stamps on the brakes, and an additional 150 feet or so are traversed after the brakes are engaged. (It's more complicated than that and it's more complicated than that, too. If I'm oversimplifying too much at this phase please let me know.) So obviously the stopping power of the vehicle is important. Now, a huge amount of R&D has been done on automobile braking systems. Not only are modern braking systems automatic supersonic hypnotic funky fresh, modern cars can extract modern energy out of them, too.

But... well, I'm having a lot of trouble finding credible statistics, but it looks like a large fraction of the victims of fatal car accidents are in vehicles that get rear-ended at high speeds.0 Not only do rear-ending accidents cause a lot of immediate deaths, they also cause a lot of damage to both vehicles and nervous systems (whiplash), and it's hard to calculate how that compounds over the years, but you know it's a really huge amount of lost QALYs and money.

What I don't understand is this: it naively seems to me that there are many different ways to get a trailing vehicle to stop faster, ways that don't involve completely hopeless public education drives or expensive 5% improvements on disc brakes. Like a dedicated system for applying high friction directly to the road surface in emergencies, either mechanically or via electromagnetism. Or an air brake0.08 or two. Combine those with existing automatic electronic sensor brake-activator thingies and you can stop the vehicle almost immediately.1 Wham bam, way fewer crushed organs and far less needless suffering.

But I never hear anyone talking about this. Is it because modern cars just don't crash into the cars in front of them anymore? If so, would it be too expensive to equip older cars with a simple brake-pressure-remotely-activated dedicated stopping mechanism?14 E.g. a government or non-profit program that installed them free of charge on the cars of inexperienced or risk-prone drivers. Or something? I can think of a lot of engineering point and counterpoint that would make this more or less difficult, but it still seems feasible, life-saving, and money-saving. What am I missing? What hard steps did I trivialize? I am more interested in the automobile-engineering steps I naively trivialized, but the social-engineering steps I ignored might be more important somehow... what gives? (A back-of-the-envelope stopping-distance sketch follows the footnotes.)
(This has many connections to both instrumental and epistemic rationality but unfortunately it would be too psychologically difficult for me to point them out. I do not think a meta discussion about this would be profitable, but I may be wrong.)
0 I saw somewhere that most fatal accidents involve only a single car. That agrees with my experience but I remain somewhat skeptical.
0.08 More specifically, some reasonable hybrid of air brake and uber-efficient-mini-parachute. I don't know how negligible the stopping power of air brakes is at freeway speeds.
1 Stopping too fast does indeed hurt the driver, but it's rather asymmetric: you can deploy various safety mechanisms in advance, unlike the case where you're crashing into an unsuspecting Honda with your Chevy.
14 I don't see why making such add-on systems at least half-automatic should be hard either; it's like two cheap cameras and a rudimentary but information-efficient AI... or maybe I just completely underestimate the complexity of those systems.
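To put rough numbers on all this, here's the promised back-of-the-envelope sketch, using plain kinematics. The friction coefficients are illustrative guesses (roughly 0.9 for good tires on dry asphalt, 1.5 for a hypothetical road-gripping emergency system), not measurements:

```python
# Stopping distance = reaction distance + braking distance.
# Braking distance from speed v at deceleration mu*g: v^2 / (2*mu*g).
G = 9.81             # gravitational acceleration, m/s^2
MPH_TO_MS = 0.44704  # mph -> m/s
FT_PER_M = 3.28084   # m -> ft

def stopping_distance_ft(speed_mph, mu, reaction_s=1.5):
    v = speed_mph * MPH_TO_MS
    reaction = v * reaction_s        # distance covered before braking starts
    braking = v ** 2 / (2 * mu * G)  # from v^2 = 2 * (mu*g) * d
    return (reaction + braking) * FT_PER_M

print(stopping_distance_ft(60, mu=0.9))                  # ~266 ft (~132 + ~134)
print(stopping_distance_ft(60, mu=1.5))                  # ~212 ft
print(stopping_distance_ft(60, mu=1.5, reaction_s=0.1))  # ~89 ft
```

The punchline: raising effective friction helps, but slashing reaction time via automatic triggering helps far more, which is why combining the two, as suggested above, is where the big win would be.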
Meta: "Less Wrong" connotations?
I've always interpreted this site's title, "Less Wrong" (link is to a description of the phrase's origin), to name a goal that its members strive towards. After factoring in the slightly self-deprecating or human-deprecating connotations, the title's message sorta feels like "We know it's impossible to become completely right, but we should at least aim to be less wrong". But earlier today I realized that it could also be interpreted as an implicit comparison of social groups: "We are less wrong than others."1&2 Does anyone know if there's a sizable minority of people who interpret "Less Wrong" primarily as a boastful comparison of social groups? Edit: To make it clearer, I'm worried about potential negative effects via the social psychology of credibility: people maneuvering to uncharitably resolve unintended ambiguities into contemptible caricatures of complex social groups, whose perceived members can thenceforth be disregarded. An example is how someone might be a lot more suspicious of an intellectual if that intellectual had been seen as somehow in league with those heartless Rand-worshipping Objectivists.
1 I'm not sure if this connotation was intended, but I suspect that if it was, it was meant as a secondary and subtle message; and I suspect that if people were trying to sneak in secondary or subtle messages, they would have been smart enough to realize that putting two negative-affect words next to each other to make a title isn't a good idea. But perhaps I underestimate the ratio of intellectualish cleverness to practical wisdom among those who named Less Wrong "Less Wrong".
2 This kinda worried me because of all those somewhat-misguided3 comment replies that start off with "For a site titled 'Less Wrong', you guys sure are wrong about [probably controversial topic]", where it's unclear what they think "Less Wrong" is supposed to mean.
3 Here's one good exception to the general awfulness of this meme.
Simulation Argument errors
I was reading the simulation argument again for kicks and a few errors tripped me up for a bit. I figured I'd point them out here in case anyone's interested or noticed the same problems. If I made a mistake in my analysis please let me know so I can promptly put a bolded warning saying so at the top of this article. (I do not endorse the method of thinking that involves naive anthropics or probabilities of simulation but nonetheless I am willing to play with that framework sometimes for communication's sake.)
We can reasonably argue that all three of the propositions at the end of section 4 of the paper ("The core of the simulation argument") are false simultaneously. Most human-level technological civilizations can survive to reach a posthuman stage (f_p = 0.9), want to run lots of ancestor-simulations (f_I = 0.9), and are able to; and yet there can conceivably be no observers with human-type experiences that live in simulations (f_sim = 0). Why? Because not all human-level technological civilizations are human technological civilizations; it could easily be argued that most aren't. Human technological civilization could be part of the fraction of human-level technological civilizations that do not survive to reach a posthuman stage, or that survive but do not want to run lots of ancestor-simulations. Thus there would be no human ancestor-simulations even if there were many, many alien ancestor-simulations whose inhabitants do not share an observer-moment reference class with humans.
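For reference, the core quantity from section 4 (reconstructing the formula from the paper as I remember it, so double-check the original) is

\[ f_{\mathrm{sim}} = \frac{f_p \, f_I \, \bar{N}_I}{\left(f_p \, f_I \, \bar{N}_I\right) + 1}, \]

where \( \bar{N}_I \) is the average number of ancestor-simulations run by a posthuman civilization interested in running them. With f_p = f_I = 0.9 and \( \bar{N}_I \) astronomically large, the formula forces f_sim ≈ 1; the counterexample above slips through only because the formula tacitly lumps all human-level civilizations into a single reference class for "human-type experiences".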
Nitpicking? Not quite. This forces us to change "fraction of all human-level technological civilizations that survive to reach posthuman stage" to "probability of human civilization reaching posthuman stage", but then some of the discussion in the paper's Interpretation section (section 6) sounds pretty weird because it's comparing human civilization to other human-level civilizations. The equivocation on "posthuman" causes various other statements and passages in the original article to be false-esque or ambiguous, and these would need to be changed. f_sim should be changed to f_ancestor-sim as well; we might be in non-ancestor simulations. The fraction of us in ancestor-simulations is just one possible lower bound for the fraction of us in simulations generally. Luckily, besides that error I think the paper mostly avoids insinuating that we are unlikely to be in a simulation if we are not in an ancestor-simulation.
The abstract of the paper differs from section 4, and uses "human" instead of "human-level": "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation." Using the "descended from humans" definition of "posthuman", this argument works; however, it is not supported by section 4, which fails to restrict its claims to human civilizations. Using the "very technologically advanced" definition of "posthuman", the argument fails for the reasons given above. Either way the wording should be made clearer, especially considering that section 6 talks about posthuman civilizations that aren't posthuman. The abstract also doesn't match the conclusion, despite the similar structure.
The conclusion reads more like section 4, and thus fails in the same way as section 4. Not only that, the conclusion says something really weird: "In the dark forest of our current ignorance, it seems sensible to apportion one’s credence roughly evenly between (1), (2), and (3)." I hope this isn't implying the credences should sum to 1, which would be absurd. After making the corrections suggested above, it is easy to see that 90% confidence in each of (1), (2), and (3) simultaneously is justifiable. (A vaguely plausible scenario to go with that: human civilization gets uFAIed; alien civilizations that don't get uFAIed don't waste time simulating their ancestors but instead simulate millions of possible sibling civilizations that got uFAIed, for reasons of acausal trade plus diminishing marginal utility functions or summat; and thus we're in one of those sibling simulations while aliens try to compute our values and our game-theoretic trustworthiness et cetera.)
All that said, despite the current problems with the structure of its less important supporting arguments, the final sentence remains true: "Unless we are now living in a simulation, our descendants will almost certainly never run an ancestor-simulation."