All of Squark's Comments + Replies

Squark00

"Neural networks" vs. "Not neural networks" is a completely wrong way to look at the problem.

For one thing, there are very different algorithms lumped under the title "neural networks". For example Boltzmann machines and feedforward networks are both called "neural networks" but IMO it's more because it's a fashionable name than because of actual similarity in how they work.

More importantly, the really significant distinction is making progress by trial and error vs. making progress by theoretical understanding. The... (read more)

Squark50

What makes you think so? The main reason I can see why the death of less than 100% of the population would stop us from recovering is if it's followed by a natural event that finishes off the rest. However, 25% of current humanity seems much more than enough to survive all natural disasters that are likely to happen in the following 10,000 years. The Black Death killed about half the population of Europe, and it wasn't even enough to destroy the pre-existing social institutions.

0Douglas_Knight
The Black Death destroyed the social institution of serfdom. (Most people see that as a good thing.) I don't think it is that easy to judge. The universities continued to exist in name, but it looks to me like they were destroyed. They switched from studying useful philosophy to the scholasticism that is usually attributed to an earlier period. The Black Death produced a 200-year dark age ("the Renaissance"). But the books survived, including the recent books of the Oxford Calculators, and people were able to build on them when they rebuilt the social institutions.
3gjm
We have a lot more infrastructure than Europe had at the time of the Black Death. If we lost 75% of the population, it might devastate things like the power grid, water supply and purification, etc. We have (I think) more complicatedly interdependent institutions than Europe at the time of the Black Death. Relatively small upheavals in, e.g., our financial systems can cause a lot of chaos, as shown by our occasional financial crises. If 75% of the population died, how robust would those systems be?

The following feels like at least a semi-plausible story. Some natural or unnatural disaster wipes out 75% of the population. This leads to wide-scale failure of infrastructure, finance, and companies. In particular, we lose a lot of chip factories and oil wells. And then we no longer have the equipment we need to make new ones that work as well as the old ones did, and we run out of sufficiently accessible oil and cannot make fast enough technological progress to replace it with solar or nuclear energy on a large scale, nor to find other ways of making plastics. And then we can no longer make the energy or the hardware to keep our civilization running, and handling that the best we can takes up all our (human and other) resources, and even if in principle there are scientific or technological breakthroughs that would solve that problem we no longer have the bandwidth to make them.

The human race would survive, of course. But the modern highly technology-dependent world would be pretty much screwed. (I am not claiming that the loss of 75% of the population would definitely do that. But it seems like it sure might.)
Squark00

Hi Peter! I am Vadim, we met in a LW meetup in CFAR's office last May.

You might be right that SPARC is important, but I really want to hear from the horse's mouth what their strategy is in this regard. I'm inclined to disagree with you regarding younger people; what makes you think so? Regardless of age, I would guess that establishing a continuous education programme would have much more impact than a two-week summer workshop. It's not obvious what the optimal distribution of resources is (many two-week workshops for many people or one long program for fewer people), but I haven't seen such an analysis by CFAR.

0pcm
Peer pressure matters, and younger people are less able to select rationalist-compatible peers (due to less control over who their peers are). I suspect younger people have short enough time horizons that they're less able to appreciate some of CFAR's ideas that take time to show benefits. I suspect I have more intuitions along these lines that I haven't figured out how to articulate. Maybe CFAR needs better follow-ups to their workshops, but I get the impression that with people for whom the workshops are most effective, they learn (without much follow-up) to generalize CFAR's ideas in ways that make additional advice from CFAR unimportant.
Squark*160

The body of this worthy man died in August 2014, but his brain is preserved by Alcor. May a day come when he lives again and death is banished forever.

1DragonGod
Amen to that comrade.
Squark00

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen within one or two decades from now. Otherwise it would make sense to place more emphasis on creating educational programs for children, where the long-term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short-term assumption?

3AnnaSalamon
I don't think this; it seems to me that the next decade or two may be pivotal, but they may well not be, and the rest of the century matters quite a bit as well in expectation. There are three main reasons we've focused mainly on adults:

1. Adults can contribute more rapidly, and so can be part of a process of compounding careful-thinking resources in a shorter-term way. E.g. if adults are hired now by MIRI, they improve the ratio of thoughtfulness within those thinking about AI safety, and this can in turn impact the culture of the field, the quality of future years' research, etc.

2. For reasons resembling (1), adults provide a faster "grounded feedback cycle". E.g., adults who come in with business or scientific experience can tell us right away whether the curricula feel promising to them; students and teens are more likely to be indiscriminately enthusiastic.

3. Adults can often pay their own way at the workshops; children can't; we therefore cannot afford to run very many workshops for kids until we somehow acquire either more donations or more financial resources in some other way.

Nevertheless, I agree with you that programs targeting children can be higher impact per person and are extremely worthwhile in the medium to long run. This is indeed part of the motivation for SPARC, and expanding such programs is key to our long-term aims; marginal donation is key to our ability to do these quickly, and not just eventually.
0pcm
I disagree. My impression is that SPARC is important to CFAR's strategy, and that aiming at younger people than that would have less long-term impact on how rational the participants become.
Squark10

Link to "Limited intelligence AIs evaluated on their mathematical ability", and link to "AIs locked in cryptographic boxes".

Squark00

On the other hand, articles and books can reach a much larger number of people (case in point: the Sequences). I would really want to see a more detailed explanation by CFAR of the rationale behind their strategy.

Squark60

Thank you for writing this. Several questions.

  • How do you see CFAR in the long term? Are workshops going to remain in the center? Are you planning some entirely new approaches to promoting rationality?

  • How much do you plan to upscale? Are the workshops intended to produce a rationality elite or eventually become more of a mass phenomenon?

  • It seems possible that revolutionizing the school system would have a much higher impact on rationality than providing workshops for adults. SPARC might be one step in this direction. What are your thoughts / plans regarding this approach?

Squark00

!!! It is October 27, not 28 !!!

Also, it's at 19:00

Sorry but it's impossible to edit the post.

Squark20

First, like was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects, like we want utility to be linear in human lives and bounded utility seems to fail that.

This is not quite what happens. When you do UDT properly, the result is that the Tegmark level IV multiverse has finite capacity for human lives (when human lives are counted with 2^{-(Kolmogorov complexity)} weights, as they should be). Therefore the "bare" utility function has some kind of diminishing returns, but the "effective" utility function is ... (read more)
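A minimal sketch of the weighting being invoked here (the notation is mine, not the comment's): write $K(Y)$ for the Kolmogorov complexity of a universe $Y$ in the level IV multiverse, so that

\[
U_{\text{effective}} \;=\; \sum_{Y} 2^{-K(Y)}\, U_{\text{bare}}(Y),
\qquad
\sum_{Y} 2^{-K(Y)} \;\le\; 1 .
\]

Since the total weight is bounded, the weighted count of lives across the multiverse is finite even if the per-universe ("bare") count is not, which is one way to cash out the "finite capacity" claim.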

Squark00

If you have trouble finding the location, feel free to call me (Vadim) at 0542600919.

Squark00

In order for the local interpretation of Sleeping Beauty to work, it's true that the utility function has to assign utilities to impossible counterfactuals. I don't think this is a problem...

It is a problem in the sense that there is no canonical way to assign these utilities in general.

In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was specified as a sum of the utili

... (read more)
Squark00

It's also a valid interpretation to have the "outcome" be whether Sleeping Beauty wins, loses, or doesn't take an individual bet about what day it is (there is a preference ordering over these things), the "action" being accepting or rejecting the bet, and the "event" being which day it is (the outcome is a function of the chosen action and the event).

In Savage's theorem acts are arbitrary functions from the set of states to the set of consequences. Therefore to apply Savage's theorem in this context you have to consider bl... (read more)

0Manfred
First, thanks for having this conversation with me. Before, I was very overconfident in my ability to explain this in a post.

In order for the local interpretation of Sleeping Beauty to work, it's true that the utility function has to assign utilities to impossible counterfactuals. I don't think this is a problem, but it does raise an interesting point. Because only one action is actually taken, any consistent consequentialist decision theory that considers more than one action is a decision theory that has to assign utilities to impossible counterfactuals. But the counterfactuals you mention are different: they have to be assigned a utility, but they never actually get considered by our decision theory because they're causally inaccessible - their utilities don't affect anything, in some logical-counterfactual or algorithmic-causal counterfactual sense.

In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was specified as a sum of the utilities of local properties of the universe. This is what both allows local "consequences" in Savage's theorem, and specifies those causally-inaccessible utilities. This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be "okay" to choose the inaccessible utilities arbitrarily such that the utility could be expressed in terms of local properties. I think this idea might neglect the importance of causal information in deciding what to call an "event."

Do you have some examples in mind? I've seen this claim before, but it's either relied on the assumption that probabilities can be recovered straightforwardly from the optimal action (not valid when the straightforward decision theory fails, e.g. absent-minded driver, Psy-kosh's non-anthropic problem), or that cer
Squark00

I'm not asking researchers to predict what they will discover. There are different mindsets of research. One mindset is looking for heuristics that maximize short term progress on problems of direct practical relevance. Another mindset is looking for a rigorously defined overarching theory. MIRI is using the latter mindset while most other AI researchers are much closer to the former mindset.

Squark00

I disagree with the part "her actions lead to different outcomes depending on what day it is." The way I see it, the "outcome" is the state of the entire multiverse. It doesn't depend on "what day it is" since "it" is undefined. The sleeping beauty's action simultaneously affects the multiverse through several "points of interaction" which are located in different days.

0Manfred
What makes something an "outcome" in Savage's theorem is simply that it follows a certain set of rules and relationships - the interpretation into the real world is left to the reader. It's totally possible to regard the state of the entire universe as the "outcome" - in that case, the things that correspond to the "actions" (the thing that the agent chooses between to get different "outcomes") are actually the strategies that the agent could follow. And the thing that the agent always acts as if it has probabilities over are the "events," which are the things outside the agent's control that determine the mapping from "actions" to "outcomes," and given this interpretation the day does not fulfill such a role - only the coin. So in that sense, you're totally right.

But this interpretation isn't unique. It's also a valid interpretation to have the "outcome" be whether Sleeping Beauty wins, loses, or doesn't take an individual bet about what day it is (there is a preference ordering over these things), the "action" being accepting or rejecting the bet, and the "event" being which day it is (the outcome is a function of the chosen action and the event). Here's the point: for all valid interpretations, a consistent Sleeping Beauty will act as if she has probabilities over the events. That's what makes Savage's theorem a theorem. What day it is is an event in a valid interpretation, therefore Sleeping Beauty acts as if it has a probability.

Side note: It is possible to make what day it is a non-"event," at least in the Savage sense. You just have to force the "outcomes" to be the outcome of a strategy. Suppose Sleeping Beauty instead just had to choose A or B on each day, and only gets a reward if her choices are AB or BA, but not AA or BB (or any case where the reward tensor is not a tensor sum of rewards for individual days). To play this game well, Savage's theorem does not say you have to act like you assign a probability to what day it is. The canonical exampl
Squark00

Hi Charlie! Actually I completely agree with Vladimir on this: subjective probabilities are meaningless; meaningful questions are decision-theoretic. When Sleeping Beauty is asked "what day is it?" the question is meaningless because she is simultaneously in several different days (since identical copies of her are in different days).

0Manfred
Short version: consider Savage's theorem (fulfilling the conditions by offering Sleeping Beauty a bet along with the question "what day is it?", or by having Sleeping Beauty want to watch a sports game on Monday specifically, etc.). Savage's theorem requires your agent to have a preference ordering over outcomes, and have things it can do that lead to different outcomes depending on the state of the world (events), and it states that consistent agents have probabilities over the state of the world. On both days, Sleeping Beauty satisfies these desiderata. She would prefer to win the bet (or watch the big game), and her actions lead to different outcomes depending on what day it is. She therefore assigns a probability to what day it is. We do this too - it is physically possible that we've been duplicated, and yet we continue to assign probabilities to what day it is (or whether our favorite sports will be there when we turn on the TV) like normal people, rather than noticing that it is meaningless since we might be simultaneously in different days.
Squark00

A "coincidence" is an a priori improbable event in your model that has to happen in order to create a situation containing a "copy" of the observer (which roughly means any agent with a similar utility function and similar decision algorithm).

Imagine two universe clusters in the multiverse: one cluster consists of universes running on fragile physics, the other cluster consists of universes running on normal physics. The fragile cluster will contain far fewer agent-copies than the normal cluster (weighted by probability). Imagine you have ... (read more)
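For reference, the weighting PeterCoin asks about below has roughly this shape; the decomposition into K and C is my reading of the visible text, not something it spells out:

\[
w(\text{universe}) \;\sim\; N \cdot 2^{-(K + C)},
\]

where $N$ is the number of observer-copies the universe contains, $K$ is the description complexity of its laws and initial conditions, and $C$ is the number of bits of "coincidence", i.e. $-\log_2$ of the probability of the improbable events needed to instantiate a copy of the observer. A fragile-physics universe needs many such coincidences, so $C$ is large and its weight is exponentially suppressed relative to the normal cluster.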

Squark40

I did a considerable amount of software engineer recruiting during my career. I only called the references at an advanced stage, after an interview. It seems to me that calling references before an interview would take too much of their time (since if everyone did this they would be called very often) and too much of my time (since I think their input would rarely disqualify a candidate at this point). The interview played the most important role in my final decision, but when a reference mentioned something negative which resonated with something that concerned me after the interview, this was often a reason to reject.

Squark10

I'm digging into this a little bit, but I'm not following your reasoning. UDT from what I see doesn't mandate the procedure you outline. (perhaps you can show an article where it does) I also don't see how which decision theory is best should play a strong role here.

Unfortunately a lot of the knowledge on UDT is scattered in discussions and it's difficult to locate good references. The UDT point of view is that subjective probabilities are meaningless (the third horn of the anthropic trilemma), thus the only questions it makes sense to ask are decision-th... (read more)

0PeterCoin
I'll dig a little deeper but let me first ask these questions: What do you define as a coincidence? Where can I find an explanation of the N 2^{-(K + C)} weighting?
Squark10

Hi Peter! I suggest you read up on UDT (updateless decision theory). Unfortunately, there is no good comprehensive exposition but see the links in the wiki and IAFF. UDT reasoning leads to discarding "fragile" hypotheses, for the following reason.

According to UDT, if you have two hypotheses H1, H2 consistent with your observations you should reason as if there are two universes Y1 and Y2 s.t. Hi is true in Yi and the decisions you make control the copies of you in both universes. Your goal is to maximize the a priori expectation value of your uti... (read more)
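A minimal formalization of this recipe, using weights $w_1, w_2$ for the a priori probabilities of $H_1, H_2$ (the comment itself is truncated, so the notation here is mine):

\[
a^{*} \;=\; \arg\max_{a}\; \Big[\, w_1\, U\big(Y_1(a)\big) \;+\; w_2\, U\big(Y_2(a)\big) \,\Big],
\]

where $Y_i(a)$ denotes how universe $Y_i$ turns out if every copy of the agent takes action $a$. The weights are fixed a priori and never updated, so a "fragile" hypothesis with tiny $w_i$ contributes almost nothing to the expectation even though it is consistent with all observations.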

0PeterCoin
I'm digging into this a little bit, but I'm not following your reasoning. UDT from what I see doesn't mandate the procedure you outline (perhaps you can show an article where it does). I also don't see how which decision theory is best should play a strong role here.

But anyways, I think the heart of your objection seems to be "Fragile universes will be strongly discounted in the expected utility because of the amount of coincidences required to create them". So I'll freely admit to not understanding how this discounting process works, but I will note that current theoretical structures (standard model / inflation cosmology / string theory) have a large number of constants that are considered coincidences and also produce a large number of universes like ours in terms of physical law but different in terms of outcome. I would also note that fragile-universe "coincidences" don't seem to me to be more coincidental in character than the fact that we happen to live on a planet suitable for life. Lastly, I would also note that at this point we don't have a good H1 or H2.
Squark00

Scalable in what sense? Do you foresee some problem with one kitchen using the hiring model and other kitchens using the volunteer model?

0[anonymous]
Yes, I think it might lead to discord, at least. 'Oh, it's just a job for them - a cost, not an opportunity - certainly they will try to do as little as possible, and this will reflect poorly on us, and we don't have rich lawyers!' 'Oh, how come they don't switch to hiring? We used to do so much less, but they don't even try!' or something like that.
Squark00

I don't follow. Do you argue that in some cases volunteering in the kitchen is better than donating? Why? What's wrong with the model where the kitchen uses your money to hire workers?

0[anonymous]
Nothing wrong - if you prove the business scalable. (Which might not be true for many charities out there, but that would not make them inefficient; only the donating as contributing.) I admit I have no experience with free kitchens, though.
Squark20

I didn't develop the idea, and I'm still not sure whether it's correct. I'm planning to get back to these questions once I'm ready to use the theory of optimal predictors to put everything on rigorous footing. So I'm not sure we really need to block the external inputs. However, note that the AI is in a sense more fragile than a human since the AI is capable of self-modifying in irreversible damaging ways.

Squark30

I assume you meant "more ethical" rather than "more efficient"? In other words, the correct metric shouldn't just sum over QALYs, but should assign f(T) utils to a person with life of length T of reference quality, for f a convex function. Probably true, and I do wonder how it would affect charity ratings. But my guess is that the top charities of e.g. GiveWell will still be close to the top in this metric.
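A toy illustration of what convexity buys here (numbers chosen purely for illustration): take $f(T) = T^2$, so

\[
f(2T) \;=\; 4\,f(T) \;>\; 2\,f(T),
\]

i.e. adding $T$ quality years to someone who already has $T$ counts for more than giving $T$ years to a second person, whereas a plain QALY sum treats the two options as equal.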

Squark10

Your preferences are by definition the things you want to happen. So, you want your future self to be happy iff your future self's happiness is your preference. Your ideas about moral equivalence are your preferences. Et cetera. If you prefer X to happen and your preferences are changed so that you no longer prefer X to happen, the chance X will happen becomes lower. So this change of preferences goes against your preference for X. There might be upsides to the change of preferences which compensate the loss of X. Or not. Decide on a case by case basis, but ceteris paribus you don't want your preferences to change.

Squark30

I don't follow. Are you arguing that saving a person's life is irresponsible if you don't keep saving them?

-1[anonymous]
(I think) I'm arguing that if you have with some probability saved some people, and you intend to keep saving people, it is more efficient to keep saving the same set of people.
Squark20

If we find a mathematical formula describing the "subjectively correct" prior P and give it to the AI, the AI will still effectively use a different prior initially, namely the convolution of P with some kind of "logical uncertainty kernel". IMO this means we still need a learning phase.
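One way to read "convolution" here, offered as a gloss rather than anything the comment spells out:

\[
P_{\text{eff}}(x) \;=\; \sum_{y} Q(x \mid y)\, P(y),
\]

where $Q(x \mid y)$ is a "logical uncertainty kernel" giving the probability that an agent with bounded deductive power treats world $y$ as if it were world $x$. Even with the subjectively correct $P$ written into the code, early behaviour is governed by $P_{\text{eff}}$, which only approaches $P$ as logical uncertainty gets resolved; hence the suggested learning phase.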

2Wei Dai
In the post you linked to, at the end you mention a proposed "fetus" stage where the agent receives no external inputs. Did you ever write the posts describing it in more detail? I have to say my initial reaction to that idea is also skeptical, though. Humans don't have a fetus stage where we think/learn about math with external inputs deliberately blocked off. Why do artificial agents need it? If an agent couldn't simultaneously learn about math and process external inputs, it seems like something must be wrong with the basic design, which we should fix instead of work around.
Squark40

"I understand that it will reduce the chance of any preference A being fulfilled, but my answer is that if the preference changes from A to B, then at that time I'll be happier with B". You'll be happier with B, so what? Your statement only makes sense of happiness is part of A. Indeed, changing your preferences is a way to achieve happiness (essentially it's wireheading) but it comes on the expense of other preferences in A besides happiness.

"...future-me has a better claim to caring about what the future world is like than present-me does." What is this "claim"? Why would you care about it?

0AstraSequi
I don’t understand your first paragraph. For the second, I see my future self as morally equivalent to myself, all else being equal. So I defer to their preferences about how the future world is organized, because they're the one who will live in it and be affected by it. It’s the same reason that my present self doesn’t defer to the preferences of my past self.
Squark30

I think it is more interesting to study how to be simultaneously supermotivated about your objectives and realistic about the obstacles. Probably requires some dark arts techniques (e.g. compartmentalization). Personally I find that occasional mental invocations of quasireligious imagery are useful.

1Gunnar_Zarncke
Isn't this the same or related to mental contrasting?
Squark20

I'm not sure about "no correct prior", and even if there is no "correct prior", maybe there is still "the right prior for me", or "my actual prior", which we can somehow determine or extract and build into an FAI?

This sounds much closer to home. Note, however, that there is a certain ambiguity between the prior and the utility function. UDT agents maximize Sum Prior(x) U(x), so certain simultaneous redefinitions of Prior and U will lead to the same thing.
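The ambiguity is easy to make explicit (this is a standard observation, not specific to the comment): for any function $c(x) > 0$,

\[
\sum_{x} \mathrm{Prior}(x)\, U(x)
\;=\;
\sum_{x} \big[c(x)\,\mathrm{Prior}(x)\big] \cdot \Big[\tfrac{U(x)}{c(x)}\Big],
\]

so rescaling the prior on each world by $c(x)$ while dividing the utility there by the same factor leaves every expected utility, and hence every decision, unchanged.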

2Wei Dai
But in that case, why do we need a special "pure learning" period where you force the agent to explore? Wouldn't any prior that would qualify as "the right prior for me" or "my actual prior" not favor any particular universe to such an extent that it prevents the agent from exploring in a reasonable way? To recap, if we give the agent a "good" prior, then the agent will naturally explore/exploit in an optimal way without being forced to. If we give it a "bad" prior, then forcing it to explore during a pure learning period won't help (enough) because there could be environments in the bad prior that can't be updated away during the pure learning period and cause disaster later. Maybe if we don't know how to define a "good" prior but there are "semi-good" priors which we know will reliably converge to a "good" prior after a certain amount of forced exploration, then a pure learning phase would be useful, but nobody has proposed such a prior, AFAIK.
Squark60

Puerto Rico?! But Puerto Rico is already a US territory!

-1buybuydandavis
That just went bankrupt. Maybe they can show us how it's done.
6DanielLC
We'll make it a double territory.
0Lumifer
Minor details :-P
Squark00

Cool! Who is this Kris Langman person?

Squark30

As I discussed before, IMO the correct approach is not looking for the one "correct" prior, since there is no such thing, but specifying a "pure learning" phase in AI development. In the case of your example, we can imagine the operator overriding the agent's controls and forcing it to produce various outputs in order to update away from Hell. Given a sufficiently long learning phase, all universal priors should converge to the same result (of course, if we start from a ridiculous universal prior it will take ridiculously long, so I still grant that there is a fuzzy domain of "good" universal priors).
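A toy sketch of the forced-exploration idea; everything here (the hypothesis names, the Bernoulli environment, the numbers) is made up for illustration and is not from the comment:

```python
import random

random.seed(0)

# Two hypothetical world-models. Under "normal" the outcome tends to match the
# action; under "hell" (the pathological hypothesis) it tends not to.
HYPOTHESES = {
    "normal": lambda action, outcome: 0.9 if outcome == action else 0.1,
    "hell":   lambda action, outcome: 0.9 if outcome != action else 0.1,
}

def true_world(action):
    # The actual environment happens to behave like "normal".
    return action if random.random() < 0.9 else 1 - action

def update(posterior, action, outcome):
    # Standard Bayesian update of the weights on the two hypotheses.
    unnorm = {h: p * HYPOTHESES[h](action, outcome) for h, p in posterior.items()}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

posterior = {"normal": 0.5, "hell": 0.5}

# Pure learning phase: the operator overrides the controls and forces a spread
# of actions, so both hypotheses get tested regardless of what the agent would
# have chosen on its own.
for _ in range(50):
    forced_action = random.choice([0, 1])
    outcome = true_world(forced_action)
    posterior = update(posterior, forced_action, outcome)

print(posterior)  # the weight on "hell" is driven close to zero before the agent acts freely
```

The point of the sketch is only that the updating-away happens during a phase in which the agent's own (possibly pathological) preferences over actions are irrelevant; how long that phase must be depends on how ridiculous the starting prior is.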

4Wei Dai
I'm not sure about "no correct prior", and even if there is no "correct prior", maybe there is still "the right prior for me", or "my actual prior", which we can somehow determine or extract and build into an FAI?

How do you know when you've forced the agent to explore enough? What if the agent has a prior which assigns a large weight to an environment that's indistinguishable from our universe, except that lots of good things happen if the sun gets blown up? It seems like the agent can't update away from this during the training phase.

So you think "universal" isn't "good enough", but something more specific (but perhaps not unique as in "the correct prior" or "the right prior for me") is? Can you try to define it?
Squark20

I have described essentially the same problem about a year ago, only in the framework of the updateless intelligence metric, which is more sophisticated than AIXI. I have also proposed a solution, albeit provided no optimality proof. Hopefully such a proof will become possible once I make the updateless intelligence metric rigorous using the formalism of optimal predictors.

The details may change but I think that something in the spirit of that proposal has to be used. The AI's subhuman intelligence growth phase has to be spent in a mode with frequentism-styl... (read more)

0Stuart_Armstrong
Do let me know if you succeed!
Squark-10

I fail to understand what is repugnant about the repugnant conclusion. Are there any arguments here except discrediting the conclusion using the label "repugnant"?

Squark10

It is indeed conceivable to construct "safe" oracle AIs that answer mathematical questions. See also writeup by Jim Babcock and my comment. The problem is that the same technology can be relatively easily repurposed into an agent AI. Therefore, anyone building an oracle AI is really bad news unless FAI is created shortly afterwards.

I think that oracle AIs might be useful to control the initial testing process for an (agent) FAI but otherwise are far from solving the problem.

Squark10

This is not a very meaningful claim since in modern physics momentum is not "mv" or any such simple formula. Momentum is the Noether charge associated with spatial translation symmetry which for field theory typically means the integral over space of some expression involving the fields and their derivatives. In general relativity things are even more complicated. Strictly speaking momentum conservation only holds for spacetime asymptotics which have spatial translation symmetry. There is no good analogue of momentum conservation for e.g. compact space.
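For concreteness, the standard field-theory statement being alluded to (textbook material, not taken from the comment):

\[
P^{i} \;=\; \int \mathrm{d}^{3}x \; T^{0 i},
\]

where $T^{\mu\nu}$ is the stress-energy tensor built from the fields and their derivatives. Conservation of $P^i$ requires $\partial_\mu T^{\mu i} = 0$ together with suitable behaviour at the spatial boundary; in general relativity only the covariant version $\nabla_\mu T^{\mu\nu} = 0$ holds, which does not by itself yield a conserved integral unless the asymptotics supply the translation symmetry.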

Nonetheless, the EmDrive still shouldn't work (and probably doesn't work).

Squark110

The concern that ML has no solid theoretical foundations reflects the old computer science worldview, which is all based on finding bit exact solutions to problems within vague asymptotic resource constraints.

It is an error to confuse the "exact / approximate" axis with the "theoretical / empirical" axis. There is plenty of theoretical work in complexity theory on approximation algorithms.
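For instance, a textbook notion from that literature (the definition is standard, not something the comment states): an algorithm $A$ is an $\alpha$-approximation for a minimization problem if

\[
\mathrm{cost}\big(A(x)\big) \;\le\; \alpha \cdot \mathrm{OPT}(x) \quad \text{for every instance } x,
\]

and both approximation guarantees and hardness-of-approximation results of this form are proved with full rigor about inexact solutions.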

A good ML researcher absolutely needs a good idea of what is going on under the hood - at least at a sufficient level of abstraction.

There is dif... (read more)

0YVLIAZ
That's a bad example. You are essentially asking researchers to predict what they will discover 50 years down the road. A more appropriate example is a person thinking he has medical expertise after reading bodybuilding and nutrition blogs on the internet, vs a person who has gone through medical school and is an MD.
-1V_V
Though humans are the most populous species of large animal on the planet. Condoms were invented because evolution, being a blind watchmaker, forgot to make sex drive tunable with child mortality, hence humans found a loophole. But whatever function humans are collectively optimizing, it still closely resembles genetic fitness.
Squark50

Hi Yaacov, welcome!

I guess that you can reduce X-risk by financing the relevant organizations, contributing to research, doing outreach or some combination of the three. You should probably decide which of these paths you expect to follow and plan accordingly.

Squark30

Disagreeing is ok. Disagreeing is often productive. Framing your disagreement as a personal attack is not ok. Let's treat each other with respect.

Squark20

I do think that some kind of organisational cooperative structure would be needed even if everyone were friends...

We don't need the state to organize. Look at all the private organizations out there.

It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for...

The cause might be something created artificially by the FAI. One idea I had is a universe with "pseudodeath" which doesn't literally kill you but relocates you to another part of the u... (read more)

0g_pepper
Sort of a cosmic witness relocation program! :).
Squark50

P.S.

I am dismayed that you were ambushed by the far right crowd, especially on the welcome thread.

My impression is that you are highly intelligent, very decent and admirably enthusiastic. I think you are a perfect example of the values that I love in this community and I very much want you on board. I'm sure that I personally would enjoy interacting with you.

Also, I am confident you will go far in life. Good dragon hunting!

3Acty
--
-6VoiceOfRa
-4Lumifer
I wouldn't call it an ambush, but in any case Acty emerged from that donnybrook in quite a good shape :-)
Squark40

I value unity for its own sake...

I sympathize with your sentiment regarding friendship, community, etc. The thing is, when everyone is friends, the state is not needed at all. The state is a way of using violence or the threat of violence to resolve conflicts between people in a way which is as good as possible for all parties (in the case of egalitarian states; other states resolve conflicts in favor of the ruling class). Forcing people to obey any given system of law is already an act of coercion. Why magnify this coercion by forcing everyone to obey t... (read more)

3Acty
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends - provided there are dragons left to slay. If people need to work together on dragonfighting, then just being friends won't cut it - there will need to be some kind of team, and some people delegating different tasks to team members and coordinating efforts. Of course, if there aren't dragons to slay, then there's no need for us to work together and people can do whatever they like.

And yeah - the tradeoff would definitely need to be considered. If the AI told me, "Sorry, but I need to solve negentropy and if you try and help me you're just going to slow me down to the point at which it becomes more likely that everyone dies", I guess I would just have to deal with it. Making it more likely that everyone dies in the slow heat death of the universe is a terribly large price to pay for indulging my desire to fight things. It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for - we can explore the galaxy and fight negentropy and this will allow people like me to continue being motivated and fulfilled by our burning desire to fix things. It depends on whether people like me, with aforementioned burning desire, are a minority or a large majority. If a large majority of the human race feels listless and sad unless they have a quest to do, then it may be worthwhile letting us help even if it impedes the effort slightly.

And yeah - I'm not sure that just giving me more processor power and memory without changing my code counts as death, but simultaneously giving a human more processor power and more memory and not increasing their rationality sounds... silly and maybe not safe, so I guess it'll have to be a gradual upgrade process in all of us. I quite like that idea though - it's like having a second childhood, except this time you're learning to remember eve
Squark10

Hi Act, welcome!

I will gladly converse with you in Russian if you want to.

Why do you want a united utopia? Don't you think different people prefer different things? Even if we assume the ultimate utopia is uniform, wouldn't we want to experiment with different things to get there?

Would you feel "dwarfed by an FAI" if you had little direct knowledge of what the FAI is up to? Imagine a relatively omniscient and omnipotent god taking care of things on some (mostly invisible) level but never coming down to solve your homework.

Acty100

--

Squark00

In the sacredness study, the condition "assume that you cannot use the money to make up for your action" doesn't compile. Does it mean I cannot use the money to generate positive utility in any way? So, effectively the money isn't worth anything by definition?

Squark00

I'm no longer sure what is our point of disagreement.

Squark30

Anyone want to organize an experiment?

8gwern
No need. Order effects are one of the biases tested on YourMorals.org: http://lesswrong.com/lw/lt3/poll_lesswrong_group_on_yourmoralsorg_2015/ http://lesswrong.com/lw/8lk/poll_lesswrong_group_on_yourmoralsorg/
Squark00

The relation to the Civil Rights Act is an interesting observation, thank you. However, if the court did not cite the Act in its reasoning, the connection is tenuous. It seems to me that the most probable explanation is still that the Supreme Court is applying a very lax interpretation which strongly depends on the personal opinions of the judges.

0hairyfigment
I was actually talking about what I see as the real historical, cultural process by which the Court reached its decision. (Do you not think the CRA influenced personal opinions about equality?) And I'm saying even this process has some legal support. But I must stress that the necessary interpretation by the courts happened in the 1970s - or at least that would have sufficed - and thus focusing on this 2015 ruling makes very little sense.
Squark10

Hi Kaj, thx for replying!

This makes sense as a criticism of versions of consequentialism which assume a "cosmic objective utility function". I prefer the version of consequentialism in which the utility function is a property of your brain (a representation of your preferences). In this version there is no "right morality everyone should follow" since each person has a slightly different utility function. Moreover, I clearly want other people to maximize my own utility function (so that my utility function gets maximized) but this is th... (read more)

1UtilonMaximizer
The "preferences version" of consequentialism is also what I prefer. I've never understood the (unfortunately much more common) "cosmic objective utility function" consequentialism which, among other things, doesn't account for nearly enough of the variability in preferences among different types of brains.