
Comment author: JenniferRM 18 October 2017 03:38:53AM *  0 points [-]

I'm one of the ~300 people who took the survey.

I would not have thought the process was screwed up if you hadn't called it a screw-up yourself. In fact, I'd suggest it was not screwed up much at all. A much lower turnout doesn't seem very surprising to me, or a sign of personal failure on your part (though it might be data that reveals a truth you are personally sad about).

I took the survey because I thought it was the only place to deliver a systematic and democratic expression of my preferences for whether or how to change the LW website (which lots of other people probably don't care about, but I do). I wanted to say something in my answers along the lines of "please don't 'fix' LW with no regard for what made LW work as much as it did and thereby make it worse; please please change the LEAST AMOUNT that can possibly be changed and see how that works for a bit first, then worry about so-called improvements as second-order steps; if possible, have it look exactly the same at the UX level for the first take, while being backed by a new system".

If you want my free advice about the survey, the thing to do would be to set up a Google form "diaspora survey" after getting buy-in from Scott and others to promote the diaspora survey as "relevant to my blog's audience".

Next year have a diaspora survey for sure, and maybe a narrowly scoped LW survey as extra credit if you have time.

The difference in response rates to the surveys is probably an important number in itself ;-)

The surveys feel like an area where "good enough" is the enemy of "at all", and thus the second-order "good enough" is to just do the least-effort thing possible, fix any last-minute things that feel extremely painful, sweep problems under the rug, claim retrospectively to have intended whatever the good parts of the outcome were, and blame everything else on lack of time.

If you don't like Google forms because of privacy... so what? It isn't like people haven't already had their privacy invaded a lot by Google, even for these same kinds of questions a year or two ago, so what's another year's worth of data?

Then just add a small proviso to summaries of the process along the lines of "hey, this isn't professional work, it is a hobby, so if you want something more or different done, feel free to send me email <here> to volunteer to do tasks like X, Y, or Z".

Scott/Yvain regularly added such provisos, and his combination of wry self-deprecation and actually doing something was really, really impressive from a distance :-)

Comment author: RomeoStevens 18 September 2017 07:29:41PM 1 point [-]

(Then perhaps build a second such solution that is orthogonal to the first. And so on, with a stack of redundant and highly orthogonal highly generic solutions, any one of which might be the only thing that works in any given disaster, and which does the job all by itself.)

This is excellent! Can this reasoning be improved by attempting to map the overlaps between x-risks more explicitly? The closest I can think of is some of turchin's work.

Comment author: JenniferRM 19 September 2017 01:25:11AM 2 points [-]

My pretty limited understanding is that this is a fairly standard safety engineering approach.

If you were going to try to make it just a bit more explicit, a spreadsheet might be enough. If you want to put serious elbow grease into formal modeling work, I think a good keyword to get into the literature might be "fault trees". The technique came out of Bell Labs in the 1960s, but I think it really came into its own when it was used to model nuclear safety issues in the 1980s? There's old Nuclear Regulatory Commission work that got pretty deep here, I think.
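
To make the "spreadsheet might be enough" idea slightly more concrete, here is a minimal sketch of the standard AND/OR gate arithmetic behind fault trees, assuming independent basic events. The tree structure and every probability below are invented placeholders, not estimates of anything.

```python
# Minimal fault-tree sketch: top-event probability from independent basic events.
# All event names and probabilities below are illustrative placeholders.

def p_and(*children):
    """AND gate: the event occurs only if every child occurs (independence assumed)."""
    p = 1.0
    for c in children:
        p *= c
    return p

def p_or(*children):
    """OR gate: the event occurs if any child occurs (independence assumed)."""
    p_none = 1.0
    for c in children:
        p_none *= (1.0 - c)
    return 1.0 - p_none

# Hypothetical basic events (per-century probabilities, pure guesses).
global_food_collapse      = 0.03
refuge_cities_all_fail    = 0.50   # e.g. none of the four refuge cities stays autonomous
engineered_pandemic       = 0.02
no_working_countermeasure = 0.30

# "Humanity lost" requires a disaster AND the failure of the generic backstop.
top_event = p_or(
    p_and(global_food_collapse, refuge_cities_all_fail),
    p_and(engineered_pandemic, no_working_countermeasure),
)

print(f"Illustrative top-event probability: {top_event:.4f}")
```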

Comment author: NancyLebovitz 18 September 2017 11:30:09PM 0 points [-]

"One obvious candidate for such a generic cost effective safety intervention is a small but fully autonomous city on mars, or antarctica, or the moon, or under the ocean (or perhaps four such cities, just in case) that could produce food independently of the food production system traditionally used on the easily habitable parts of Earth."

That sort of thing might improve the odds for the human race, but it doesn't sound like it would do much for the average person who already exists.

Comment author: JenniferRM 19 September 2017 12:56:57AM 0 points [-]

Correct. Once you're to the point of planning for these kinds of contingencies you're mostly talking about the preservation of the spark of human sentience at all in what might otherwise turn out to be a cold and insentient galaxy.

Comment author: JenniferRM 17 September 2017 11:22:27PM *  19 points [-]

I'm super impressed by all the work and the good intentions. Thank you for this! Please take my subsequent text in the spirit of trying to help bring about good long term outcomes.

Fundamentally, I believe that a major component of LW's decline isn't in the primary article and isn't being addressed. Basically, a lot of people drifted away over time, specifically the ones who were (1) lazy, (2) insightful, (3) unusual, and (4) willing to argue with each other in ways that probably felt to them like fun rather than work.

These people were a locus of much value, and their absence is extremely painful from the perspective of having interesting arguments happening here on a regular basis. Their loss seems to have been in parallel with a general decrease in public acceptance of agonism in the English-speaking political world, and, as a specific thing relevant to LW 2.0, a widespread cultural retreat from substantive longform internet debates.

My impression is that part of people drifting away was because ideologically committed people swarmed into the space and tried to pull it in various directions that had little to do with what I see as the unifying theme of almost all of Eliezer's writing.

The fundamental issue seems to be existential risks to the human species from exceptionally high-quality thinking, with no predictably benevolent goals, augmented by recursively improving computers (i.e. the singularity as originally defined by Vernor Vinge in his 1993 article). This original vision covers (and has always covered) Artificial Intelligence and Intelligence Amplification.

Now, I have no illusions that an unincorporated community of people can retain stability of culture or goals over periods of time longer than about 3 years.

Also, even most incorporated communities drift quite a bit or fall apart within mere decades. Sometimes the drift is worthwhile. Initially the thing now called MIRI was a non-profit called "The Singularity Institute for Artificial Intelligence". Then they started worrying that AI would turn out bad by default, and dropped the "...for Artificial Intelligence" part. Then a late-arriving brand-taker-over ("Singularity University") bought their name for a large undisclosed amount of money and the real research started happening under the new name "Machine Intelligence Research Institute".

Drift is the default! As Hanson writes: Coordination Is Hard.

So basically my hope for "grit with respect to species-level survival in the face of the singularity" rests in gritty individual humans whose commitment and skills arise from a process we don't understand, can't necessarily replicate, and often can't reliably teach newbies to even identify.

Then I hope for these individuals to be able to find each other and have meaningful 1:1 conversations and coordinate at a smaller and more tractable scale to accomplish good things without too much interference from larger scale poorly coordinated social structures.

If these literal 1-on-1 conversations happen in a public forum, then that public forum is a place where "important conversations happen" and the conversation might be enshrined or not... but this enshrining is often not the point.

The real point is that the two gritty people had a substantive give and take conversation and will do things differently with their highly strategic lives afterwards.

Oftentimes a good conversation between deeply but differently knowledgeable people looks like an exchange of jokes, punctuated every so often by a sharing of citations (basically links to non-crap content) when a mutual gap in knowledge is identified. Dennett's theory of humor is relevant here.

This can look, to the ignorant, almost like trolling. It can look like joking about megadeath or worse. And this appearance can become more vivid if third and fourth parties intervene in the conversation, and are brusquely or jokingly directed away.

The false inference of bad faith communication becomes especially pernicious if important knowledge is being transmitted outside of the publicly visible forums (perhaps because some of the shared or unshared knowledge verges on being an infohazard).

The practical upshot of much of this is that I think that a lot of the very best content on Lesswrong in the past happened in the comment section, and was in the form of conversations between individuals, often one of whom regularly posted comments with a net negative score.

I offer you Tim Tyler as an example of a very old commenter who (1) reliably got net negative votes on some of his comments while (2) writing from a reliably coherent and evidence-based (but weird and maybe socially insensitive) perspective. He hasn't been around since 2014, as far as I'm aware.

I would expect Tim to have reliably ended up with a negative score on his FIRST eigendemocracy vector, while also probably being unusually high (maybe the highest user) on a second or third such vector. He seems to me like the kind of person you might actually be trying to drive away, while at the same time being something of a canary for the tolerance of people genuinely focused on something other than winning at a silly social media game.
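
For concreteness, here is one plausible way to operationalize "eigendemocracy vectors" (my own reading of the term, not anything LW actually computes): take a users-by-posts vote matrix and look at its singular vectors, so the first component captures the dominant "mainstream approval" axis and later components capture orthogonal voting patterns. The matrix below is fabricated purely for illustration.

```python
# Hypothetical sketch of "eigendemocracy vectors" via SVD of a vote matrix.
# Rows are users, columns are posts; entries are +1 (upvote), -1 (downvote), 0 (no vote).
# The data here is invented purely to illustrate the idea.
import numpy as np

users = ["mainstream_a", "mainstream_b", "mainstream_c", "contrarian_tim"]
votes = np.array([
    [ 1,  1,  1,  0, -1],   # votes of mainstream_a on five posts
    [ 1,  1,  0,  1, -1],
    [ 1,  0,  1,  1, -1],
    [-1, -1, -1, -1,  1],   # the contrarian reliably votes against the grain
], dtype=float)

# Left singular vectors: each column gives every user's loading on one axis.
U, S, Vt = np.linalg.svd(votes, full_matrices=False)

for axis in range(2):
    print(f"axis {axis} (singular value {S[axis]:.2f}):")
    for name, loading in zip(users, U[:, axis]):
        print(f"  {name:>15}: {loading:+.2f}")

# A user can sit at the extreme of the first axis while dominating a later one,
# which is the pattern described above (signs of singular vectors are arbitrary,
# so read each axis up to a flip).
```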

Upvotes don't matter except to the degree that they conduce to surviving and thriving. Getting a lot of upvotes and enshrining a bunch of ideas into the canon of our community and then going extinct as a species is LOSING.

Basically, if I had the ability to, for the purposes of learning new things, I would just filter out all the people who are high on the first eigendemocracy vector.

Yes, I want those "traditionally good" people to exist and I respect their work... but I don't expect novel ideas to arise among them at nearly as high a rate, to even be available for propagation and eventual retention in a canon.

Also, the traditionally good people's content and conversations are probably going to be objectively improved if people high in the second and third and fourth such vectors also have a place, and that place gives them the ability to object in a fairly high-profile way when someone high in the first eigendemocracy vector component proposes a stupid idea.

One of the stupidest ideas, one that cuts pretty close to the heart of such issues, is the possible proposal that people and content whose first eigendemocracy component is low should be purged, banned, deleted, censored, and otherwise made totally invisible and hard to find by any means.

I fear this would be the opposite of finding yourself a worthy opponent and another step in the direction of active damage to the community in the name of moderation and troll fighting, and it seems like it might be part of the mission, which makes me worried.

Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: JenniferRM 14 September 2017 10:40:31AM 21 points [-]

I took the survey and upvoted every comment already here.

Comment author: username2 22 July 2017 03:33:21PM 11 points [-]

This year is 5777 in the Hebrew calendar. So someone has been counting for roughly that long.

Nitpick (as it doesn't affect your general argument): What actually happened was that at some point some king's advisor or prophet applied some guesswork to oral history that bordered on myth (e.g. Noah living 950 years) and decided the world was created in 3761 BCE. This is, in fact, exactly the same logic used by creationists to date the Earth to be ~6000 years old. That's the origin of the Hebrew calendar. There haven't been 5777 years of continuous counting. More like 3500, maybe.
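
(Arithmetic check on the nitpick: with creation placed in 3761 BCE and no year zero between 1 BCE and 1 CE, 3761 + 2017 - 1 = 5777, which is why most of 2017 falls in Hebrew year 5777.)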

Comment author: JenniferRM 25 July 2017 10:49:02PM *  1 point [-]

There are poorly documented rumors running around on the net that the Yorùbá have a religious tradition whose chronological system makes our year 2017 the year 10,059.

This claim deserves scrutiny rather than trust, and might stretch the idea of a calendar a bit...

It is very hard to find formal academic writing on the subject... Reading around various websites and interpolating, it seems that the cultural group was split in two by the Nigeria/Benin border and so I think there may be no single coherent state power that might back the calendar out of unifying nationalist sentiment. Also they may have no native word for "calendar"? Also it is a lunar calendar of 364 days and the intercalary adjustments might not be systematic and it may have been pragmatically abandoned in favor of the system the international world has mostly been standardizing on...

Still, I personally am interested not only in old surviving institutions but also in things that function as edge cases. Straining words like "old" or "surviving" or "institution". The edge cases often help quite a bit to illustrate the optimization constraints and design pressures that go into very long running social practices :-)

Comment author: JenniferRM 25 July 2017 06:54:24AM *  1 point [-]

I suspect that you are leaping to the idea of "infinite regress" much too quickly, and also failing to look past it or try to simply "patch" the regress in a practical way when you say:

Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher order prior distribution (a prior distribution over prior distributions). Infinite regress.

Consider the uses that the Dirichlet distribution is classically put to...

Basically, if you stack your distributions two or three (or heaven forbid four) layers deep, you will get a LOT of expressiveness, and yet the number of steps up the abstraction hierarchy can still be counted on the fingers of one hand. Within only a few thousand experiments even the topmost of your distributions will probably start acquiring a bit of shape that usefully informs subsequent experiments.
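
As a sketch of what "stacking distributions a few layers deep" can look like in practice (my illustration with made-up data, not anything from the exchange above): put a symmetric Dirichlet prior over each experiment's outcome probabilities, put a simple grid hyperprior over the Dirichlet's concentration, and watch the top level acquire shape as experiments accumulate.

```python
# Two-level hierarchy, illustrated with made-up data:
#   level 1: each experiment's outcome probabilities ~ Dirichlet(c/K, ..., c/K)
#   level 2: a discrete grid prior over the concentration c
# After each experiment we update the grid posterior over c using the
# Dirichlet-multinomial marginal likelihood, so the "prior over priors"
# itself learns from data.
from math import lgamma, exp

K = 3                                  # number of outcome categories
grid = [0.3, 1.0, 3.0, 10.0, 30.0]     # candidate concentrations c
log_post = [0.0 for _ in grid]         # uniform log-prior over the grid

def log_dirichlet_multinomial(counts, c):
    """log P(counts | concentration c), up to the multinomial coefficient (constant in c)."""
    n = sum(counts)
    alpha = c / K
    out = lgamma(c) - lgamma(n + c)
    for k in counts:
        out += lgamma(k + alpha) - lgamma(alpha)
    return out

# Fake "experiments": each is a tuple of counts over the K categories.
experiments = [(9, 1, 0), (8, 2, 0), (0, 1, 9), (10, 0, 0), (1, 0, 9)]

for counts in experiments:
    log_post = [lp + log_dirichlet_multinomial(counts, c)
                for lp, c in zip(log_post, grid)]

# Normalize and report: low c (clumpy per-experiment outcomes) should now dominate.
m = max(log_post)
weights = [exp(lp - m) for lp in log_post]
total = sum(weights)
for c, w in zip(grid, weights):
    print(f"c = {c:5.1f}  posterior = {w / total:.3f}")
```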

Probably part of the reason you seem to give up at the first layer of recursion and just assume that it will recurse unproductively forever is that you're thinking in terms of some small number of slogans (axioms?) that can be culturally transmitted in language by relatively normal people engaging in typical speech patterns, perhaps reporting high-church Experiments that took weeks or months or years to perform and get reported in a peer-reviewed journal, and so on.

Rather than conceptually centering this academic practice, perhaps it would make more sense to think of "beliefs" as huge catalogues of microfacts, often subverbal, and "experiments" as being performed even by normal humans on time scales of milliseconds to minutes?

The remarkable magical thing about humans is not that we can construct epistemies; the remarkable thing is that humans can walk, make eye contact and learn things from it, feed ourselves, and pick up sticks to wave around in a semi-coordinated fashion. This requires enormous amounts of experimentation, and once you start trying to build these capacities from scratch yourself you realize the models involved are astonishing feats of cognitive engineering.

Formal academic science is hilariously slow by comparison to babies.

The problem formal intellectual processes solve is not figuring things out quickly and solidly, but rather (among other things) the problem of lots of people independently figuring out many of the same things in different orders, with different terminology, and ending up with the problem of Babel.

Praise be to Azathoth, for evolution already solved "being able to learn stuff pretty good" on its own and delivered this gift to each of us as a birthright. The thing left to us is to solve something like the "political economy of science". Credit assignment. Re-work. Economies of scale... (In light of social dynamics, Yvain's yearly predictions start to make a lot more sense.)

A useful keyword here is "social epistemology" and a good corpus of material is the early work of Kevin Zollman, including this overview defending the conceptual utility of social epistemology as a field.
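
To give a flavor of the kind of model this literature studies (a generic reconstruction in the spirit of Zollman's bandit models, not code from his papers): agents on a communication network repeatedly choose between a safe action with a known payoff and an uncertain but actually better one, share their trial results with neighbors, and the network structure ends up mattering for whether the community converges on the better action.

```python
# Bare-bones network epistemology simulation (illustrative parameters only).
# Each agent chooses between action A (known success rate 0.5) and action B
# (true success rate 0.55, initially uncertain), keeps a Beta belief about B,
# and also updates on the trials of its network neighbors.
import random

N_AGENTS, P_A, P_B, TRIALS_PER_ROUND, ROUNDS = 10, 0.5, 0.55, 10, 200

def run(neighbors):
    # Start each agent with a weak, randomly seeded Beta(a, b) belief about B.
    beliefs = [[random.uniform(0.5, 2.0), random.uniform(0.5, 2.0)] for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        results = []
        for a, b in beliefs:
            if a / (a + b) > P_A:      # B looks better: experiment with it
                succ = sum(random.random() < P_B for _ in range(TRIALS_PER_ROUND))
                results.append((succ, TRIALS_PER_ROUND))
            else:                      # stick with the safe action, so no new evidence on B
                results.append(None)
        for i in range(N_AGENTS):      # pool own and neighbors' evidence about B
            for j in [i] + neighbors[i]:
                if results[j] is not None:
                    succ, n = results[j]
                    beliefs[i][0] += succ
                    beliefs[i][1] += n - succ
    return sum(a / (a + b) > P_A for a, b in beliefs)  # agents who end up favoring B

random.seed(0)
cycle    = [[(i - 1) % N_AGENTS, (i + 1) % N_AGENTS] for i in range(N_AGENTS)]
complete = [[j for j in range(N_AGENTS) if j != i] for i in range(N_AGENTS)]
print("cycle network, agents favoring the better action:   ", run(cycle))
print("complete network, agents favoring the better action:", run(complete))
```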

Comment author: ImmortalRationalist 06 July 2017 11:42:28AM 0 points [-]

If you are a consequentialist, it's the exact same calculation you would use if happiness were your goal. Just with different criteria to determine what constitutes "good" and "bad" world states.

Comment author: JenniferRM 06 July 2017 10:04:39PM *  3 points [-]

I think you're missing the thrust of my question.

I'm asking something more like "What if mental states are mostly a means of achieving worthwhile consequences, rather than being mostly the consequences that should be cared about in and for themselves?"

It is "consequences" either way.

But what might be called intrinsic hedonism would then be a consequentialism that puts the causal and moral stop sign at "how an action makes people feel", mostly ignoring the results of the feelings (except to the degree that the feelings might cause other feelings via some series of second-order side effects).

An approach like this suggests that if people in general could reliably achieve an utterly passive and side-effect-free sort of bliss, that would be the end game... it would be an ideal stable outcome for people to collectively shoot for, and once it was attained the lack of side effects would keep it from being disrupted.

By contrast, hedonic instrumentalism (that I'm mostly advocating) would be a component of some larger consequentialism that is very concerned with what arises because of feelings (like what actions, with what results) and defers the core axiological question about the final value of various world states to a separate (likely independent) theory.

The position of hedonic instrumentalism is basically that happiness that causes behavior with bad results for the world is bad happiness. Happiness that causes behavior with good results in the world is good happiness. And happiness is arguably pointless if it is "sterile"... having no behavioral or world affecting consequences (though this depends on how much control we have over our actions and health via intermediaries other than by wireheading our affective subsystems). What does "good" mean here? That's a separate question.

Basically, the way I'm using the terms here: intrinsic hedonism is "an axiology", but hedonic instrumentalism treats affective states mostly as causal intermediates that lead to large-scale adjustments to the world (through behavior) that can then be judged by some external axiology that pays attention to the whole world and the causal processes that deserve credit for bringing about the good world states.

You might break this down further, where perhaps "strong hedonic instrumentalism" is a claim that in actual practice, humans can (and already have, to some degree) come up with ways to make plans, follow the plans with action, and thereby produce huge amounts of good in the world, all without the need for very much "passion" as a neural/cognitive intermediate.

Then "weak hedonic instrumentalism" would be a claim that maybe such practices exist somewhere, or could exist if we searched for them really hard, and probably we should do that.

Then perhaps "skeptical hedonic instrumentalism" would be a claim that even if such practices don't exist and might not even be worth discovering, still it is the case that intrinsic hedonism is pretty weaksauce as far as axiologies go.

I would not currently say that I'm a strong hedonic instrumentalist, because I am not certain that the relevant mental practices exist as a factual matter... But also I'm just not very impressed by a moral theory that points to a little bit of tissue inside one or more skulls and says that the whole world can go to hell, so long as that neural tissue is in a "happy state".

Comment author: JenniferRM 05 July 2017 07:56:23AM 3 points [-]

What if happiness is not our goal?

Comment author: JenniferRM 17 June 2017 09:31:36AM *  4 points [-]

Three places similar ideas have occurred that spring to mind:

FIRST Suarez's pair of novels Daemon and Freedom(tm) is probably the most direct analogue, because it is a story of taking over the world via software, with an intensely practical focus.

The essential point for this discussion here and now is that prior to launching his system, the character who takes over the world first tests the quality of the goal state he's aiming at by implementing it as a real-world MMORPG. Then the takeover of the world proceeds via trigger-response software scripts running on the net, but causing events in the real world via: bribes, booby traps, contracted R&D, and video-game-like social engineering.

Starting with an MMORPG not only functions as his test bed for how he wants the world to work at the end... it also gives him starting cash, a suite of software tools for describing automated responses to human decisions, code to script the tactics of swarms of killer robots, and so on.

SECOND Nozick's Experience Machine thought experiment is remarkably similar to your thought experiment, and yet aimed at a totally different question.

Nozick was not wondering "can such a machine be described in detail and exist" (this was assumed) but rather "would people enter any such machine and thereby give up on some sort of atavistic connection to an unmediated substrate reality, and if not what does this mean about the axiological status of subjective experience as such?"

Personally I find the specifics of the machine to matter an enormous amount to how I feel about it... so much so that Nozick's thought experiment doesn't really work for me in its philosophically intended manner. There has been a lot of play with the concept in fiction that neighbors on the trope where the machine just gives you the experience of leaving the machine if you try to leave it. This is probably some kind of archetypal response to how disgusting it is in practice for people to be pure subjective hedonists?

THIRD Greg Egan's novel Diaspora has most of the human descended people living purely in and as software.

In the novel any common environment simulator and interface (which has hooks into the sensory processes of the software people) is referred to as a "scape" and many of the software people's political positions revolve around which kinds of scapes are better or worse for various reasons.

Konishi Polis produces a lot of mathematicians, and has a scape that supports "gestalt" (like vision) and "linear" (like speech or sound) but it does not support physical contact between avatars (their relative gestalt positions just ghost around and through each other) because physical contact seems sort of metaphysically coercive and unethical to them. By contrast Carter-Zimmerman produces the best physicists, and it has relatively high quality physics simulations built into their scape, because they think that high quality minds with powerful intuitions require that kind of low level physical experience embedded into their everyday cognitive routines. There are also flesh people (who think flesh gives them authenticity or something like that) and robots (who think "fake physics" is fake, even though having flesh bodies is too dangerous) and so on.

All of the choices matter personally to the people... but there is essentially no lock-in, in the sense that no overarching controller forces people to do one thing or another or settles how things will work for everyone for all time.

If you want to emigrate from Konishi to Carter-Zimmerman you just change which server you're hosted on (for better latency) and either have mind surgery (to retrofit your soul with the necessary reflexes for navigating the new kind of scape) or else turn on a new layer of exoself (that makes your avatar in the new place move according to a translation scheme based on your home scape's equivalent reflexes).

If you want to, you can get a robot body instead (the physical world then becomes like a very very slow scape and you run into the question of whether to slow down your clocks and let all your friends and family race ahead mentally, or keep your clock at a normal speed and have the robot body be like a slow moving sculpture you direct to do new things over subjectively long periods of time). Some people are still implemented in flesh, but if they choose they can get scanned into software and run as a biology emulation. Becoming biologically based is the only transformation rarely performed because... uh... once you've been scanned (or been built from software from scratch) why would you do this?!

Interesting angles:

Suarez assumes physical coercion and exponential growth as the natural order, and is mostly interested in the details of these processes as implemented in real political/economic systems. He doesn't care about 200 years from now, and he uses MMORPG simulations simply as a testbed for practical engineering in intensely human domains.

Nozick wants to assume utopia, and often an objection is "who keeps the Experience Machine from breaking down?"

Egan's novel has cool posthuman world building, but the actual story revolves around the question of keeping the experience machine from breaking down... eventually stars explode or run down... so what should be done in the face of a seemingly inevitable point in time where there will be no good answer to the question of "how can we survive this new situation?"
