All of Technologos's Comments + Replies

A VNM utility function is a necessary consequence of the axioms, but the axioms don't entail a unique utility function; as such, the ability to prevent Dutch Books derives more from VNM's assumption of a fixed total ordering over outcomes than from anything else.

Or you could just take more, so that the nervousness is swamped by the general handshakery...

Seth appears to be contrasting a "job" with things like "being an entrepreneur in business for oneself," so perhaps the first of your options.

-1brazil84
Yes I agree.

I think much of the problem here comes from something of an equivocation on the meaning of "economic disaster." A country can post high and growing GDP numbers without benefiting its citizens as much as a country with weaker numbers; the linked paper notes that

"real per capita private consumption was lower than straight GDP per capita figures suggest because of very high investment rates and high military expenditures, and the quality of goods that that consumption expenditure could bring was even lower still."

Communism is good at maint...

My understanding is that one primary issue with frequentism is that it can be so easily abused/manipulated to support preferred conclusions, and I suspect that's the subject of the article. Frequentism may not have "caused the problem," per se, but perhaps it enabled it?

And in particular, there's good reason to believe that brains are still evolving at a decent pace, whereas cell mechanisms appear to have largely settled a long while back.

Oh, I meant that saying it was going to torture you if you didn't release it could have been exactly what it needed to say to get you to release it.

Perhaps it does--and already said it...

1pozorvlak
In which case, your actions are irrelevant - it's going to torture you anyway, because you only exist for the purpose of being tortured. So there's no point in releasing it.

What you say is true while the Koran and the Bible are the referents, but when A and B become "Mohammed is the last prophet, who brought the full truth of God's will" and "Jesus was a literal incarnation of God" (the central beliefs of the religions that hold the respective books sacred), then James' logic holds.

I realize how arrogant it must seem for young, uncredentialled (not even a Bachelor's!) me to conclude that brilliant professional philosophers who have devoted their entire lives to studying this topic are simply confused. But, disturbing as it may be to say ... that's how it really looks.

Perhaps the fact that they have devoted their lives to a topic suggests that they have a vested interest in making it appear not to be nonsense. Cognitive dissonance can be tricky even for the pros.

What if the problem were reframed such that nobody ever found out about the decision, so that their estimates of risk remained unchanged?

I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.

It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem as I suggest above might provide something of a test of the reason you provided, if an imperfect one (can we really ignore intuitions on command?).

2SilasBarta
Then it's wildly and substantively different from moral decisions people actually make, and are wired to be prepared for making. A world in which you can divert information flows like that differs in many ways that are hard to immediately appreciate. The reasoning I gave wasn't necessarily utilitarian -- it also invokes deontological "you should adhere to existing social norms about pushing people off trolleys". My point was that it still makes utilitarian sense.

I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest--in this case, the decision to push or not, or to switch tracks--and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.

I'd say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.

buying life insurance

For what it's worth, I've heard people initially had many of the same hangups about life insurance, saying that they didn't want to gamble on death. The way that salespeople got around that was by emphasizing that the contracts would protect the family in event of the breadwinner's death, and thus making it less of a selfish thing.

I wonder if cryo needs a similar marketing parallel. "Don't you want to see your parents again?"

Could you supply a (rough) probability derivation for your concerns about dystopian futures?

I suspect the reason people aren't bringing those possibilities up is that, through a variety of factors, in particular the standard Less Wrong understanding of FAI derived from the Sequences, LWers assign a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain), along with at least a modest probability of that condition actually occurring.

Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside: do you expect them to join the military with the same frequency, to be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?

Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.

I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.

Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?

Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.

To quote Daniele Vare: "Diplomacy is the art of letting someone have your way."

Agreed, and I suspect that certainty and abrasive attributes are also less problematic if truth is not being sought after.

This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (expected value of the utility function) then the chooser is indifferent between them. You are taking utility as an argument in something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.

I think the uncomfortable part is that bill's (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.

I'd suggest that any consistent human utility function (prospect theory notwithstanding) lies somewhere between log(x) and log(log(x))... If I were given the option of a 50-50 gamble between squaring my wealth and taking its square root, I would opt for the gamble.

Logarithmic utility functions are already risk-averse by virtue of their concavity. The expected value of a 50% chance of doubling or halving is a 25% gain, yet a log-utility agent is exactly indifferent to that gamble.
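For concreteness, here is a quick numerical check of the two gambles above (the 50-50 double-or-halve and the 50-50 square-or-square-root), sketched in Python; the starting wealth of 100 units is an arbitrary assumption made only for illustration:

```python
import math

# Hypothetical starting wealth, chosen arbitrarily for illustration (must be > 1 for log(log(.)) to exist).
W = 100.0

def expectations(outcomes, probs):
    """Return expected wealth, expected log-wealth, and expected log-log-wealth."""
    ev = sum(p * w for p, w in zip(probs, outcomes))
    e_log = sum(p * math.log(w) for p, w in zip(probs, outcomes))
    e_loglog = sum(p * math.log(math.log(w)) for p, w in zip(probs, outcomes))
    return ev, e_log, e_loglog

baseline = (W, math.log(W), math.log(math.log(W)))

# 50% chance of doubling, 50% chance of halving
double_or_halve = expectations([2 * W, 0.5 * W], [0.5, 0.5])

# 50% chance of squaring, 50% chance of taking the square root
square_or_sqrt = expectations([W ** 2, math.sqrt(W)], [0.5, 0.5])

print(baseline)         # (100.0, 4.605..., 1.527...)
print(double_or_halve)  # expected wealth 125.0, but expected log-wealth equals log(100): log-utility is indifferent
print(square_or_sqrt)   # expected log-wealth 5.756... > log(100), but expected log-log-wealth equals log(log(100)): log-log is indifferent
```

So a log-utility agent refuses double-or-halve despite the 25% expected gain but accepts square-or-sqrt, while a log-log agent is indifferent even to the latter, which is the sense in which these two functions bracket the stated range of risk aversion.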

-2[anonymous]
People are often risk-averse in terms of utility. That is, they would sometimes not take a choice with positive expected value in utility because of the possible risk. For instance, if you have to choose between A and B, where A is a definite gain of 1 utile and B is a 50% chance of staying the same, and a 50% chance of gaining 2 utiles, both choices have the same expected value, but a risk-averse person would prefer choice A because it has smaller risk.

I should note that most of the organizations we are talking about (Alcor, ACS, CI) are non-profits.

I didn't mind the old one, but I do like the "sticky brains" label that we can use for this concept in the future.

Agreed--the trick is that being wrong "only once" is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I'm hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.

an unprecedented and unlikely phenomenon

Possible precedents: the Library of Alexandria and the Dark Ages.

0timtyler
Reaching, though: the dark ages were confined to Western Europe - and something like the Library of Alexandria couldn't happen these days - there are too many libraries.

Definitely. Eliezer's success suggests an upper bound on the amount of intelligence necessary to pull that off.

Who said he was? If Eliezer can convince somebody to let him out of the box--for a financial loss no less--then certainly a transhuman AI can, right?

2RobinZ
Certainly they can; what I am emphasizing is that "transhuman" is an overly strong criterion.

Sure, though the mechanism I was referring to is "it can convince its handler(s) to let it out of the box through some transhuman method(s)."

0RobinZ
Wait, since when is Eliezer transhuman?

This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.

9Alicorn
I don't think the publicly available details establish "how", merely "that".

Indeed, I agree--I meant that it doesn't matter what conclusions you hold as much as how you interact with people as you search for them.

I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it's not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).

The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson's deism had essentially no effect on his political and moral philosophiz...

8orthonormal
I agree with the rest of your comment, but this seems very wrong to me. I'd say rather that the unity we (should) look for on LW is usually more meta-level than object-level, more about pursuing correct processes of changing belief than about holding the right conclusions. Object-level understanding, if not agreement, will usually emerge on its own if the meta-level is in good shape.

To be clear, I wasn't arguing against applying the outside view--just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?

I agree with your comment on the pr...

3cousin_it
You're right. Some reference classes containing the Singularity have a 0% success rate, some fare better. I don't assign the Singularity exactly zero credence, and I don't think taw does either.

There is a difference between giving something negative utility and giving it decreasing marginal utility. It's sufficient to give the AI exponents strictly between zero and one for all terms in a positive polynomial utility function, for instance. That would be effectively "inputting" the marginal utility of resources, given any current state of the world.
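As a minimal sketch of that point (the coefficients, exponents, and resource levels below are arbitrary illustrative choices, not anything from the discussion above):

```python
# With positive coefficients and exponents strictly between 0 and 1, every term of the
# "polynomial" utility function is concave, so the marginal utility of additional
# resources falls as the stock of resources grows, without ever turning negative.

coeffs = [3.0, 1.0]      # positive coefficients (hypothetical)
exponents = [0.5, 0.9]   # strictly between 0 and 1 (hypothetical)

def utility(x):
    return sum(c * x ** p for c, p in zip(coeffs, exponents))

def marginal_utility(x, dx=1.0):
    return (utility(x + dx) - utility(x)) / dx

for resources in [1, 10, 100, 1000]:
    print(resources, round(marginal_utility(resources), 4))
# Output shows marginal utility shrinking as resources grow: decreasing, not negative, utility.
```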

1byrnema
I was considering the least convenient argument, the one that I imagined would result in the least aggressive AI. (I should explain here that I considered that even a 0 terminal utility for the resource itself would not result in 0 utility for that resource, because that resource would have some instrumental value in achieving things of value.) (Above edited because I don't think I was understood.) But I think the problem in logic identified with inputting the value of an instrumental value remains either way.

I propose a further hypothesis: high-status people have internalized Laws 4, 5, and 46 of the 48 Laws of Power, but especially Law 1: Never Outshine the Master.

After years of practice in switching between seeming competent relative to underlings and less so relative to superiors, they develop the ability to segregate audiences as you described.

Crime is down during the current recession. It's possible that the shock simply hasn't been strong enough, but it may be evidence nonetheless.

I think Hanson's hypothesis was more about true catastrophes, though--if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn't worry about morality.

I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.

It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
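A minimal sketch of how such a lexicographic policy might be encoded, for concreteness; the lottery representation and the numbers are hypothetical, purely for illustration:

```python
from typing import List, Tuple

# Each lottery is a list of (probability, rights_violations, utility) outcomes.
Lottery = List[Tuple[float, int, float]]

def lexicographic_key(lottery: Lottery) -> Tuple[float, float]:
    expected_violations = sum(p * v for p, v, _ in lottery)
    expected_utility = sum(p * u for p, _, u in lottery)
    # Rank first by expected rights violations (ascending); break ties by
    # ordinary expected utility (descending, hence the minus sign).
    return (expected_violations, -expected_utility)

def choose(lotteries: List[Lottery]) -> Lottery:
    return min(lotteries, key=lexicographic_key)

kill_dictator = [(1.0, 1, 10.0)]              # one violation, better outcome
allow_genocide = [(1.0, 1_000_000, -1000.0)]  # many violations, worse outcome
print(choose([kill_dictator, allow_genocide]) is kill_dictator)  # True
```

Unlike a rule that only cares whether the violation count is zero, ranking by the expected number of violations does distinguish the dictator case from the genocide case, which is the sense in which the policy remains utilitarian within the rights-minimizing class of choices.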

0thomblake
It seems like a non-sequitur in response to Roko's illustration of what a utility function can be used to represent.

And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.

0Roko
Correct. But it's still an implementable policy. I didn't say it was sensible!
0thomblake
It seems as though you're reading this hypothetical utility function properly.

Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.

The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.

Often, when people appear to be making an obviously wrong argument in this forum, it's a matter of communication rather than massive logic failure.

0[anonymous]
Whaddaya mean by "negation of reference class"? Let's see, you negate each individual prediction in the class and then take the conjunction (AND) of all those negations: "everything will stay the same". This is obviously false. But this doesn't imply that each individual negation is false, only that at least one of them is! I'd be the first to agree that at least one technological change will occur, but don't bullshit me by insinuating you know which particular one! Could you please defend your argument again?

I'd heard it re: the smoking bans implemented in Minneapolis; I'm not surprised that Canada takes an especially paternalist position on the matter.

Also, more than votes are gained when demonizing smokers--there are also the smokers' tax dollars.

0Aurini
Brother, you don't even want to know what we're paying each day up here in Soviet Canuckistan. sigh And they call the LOTTERY a stealth tax on the poor...

For what it's worth, the argument I'd heard--not that I agree with it, to be clear--was that visitors/patrons weren't the issue: the law was designed to essentially extend safe-work-environment laws to bars. Thus, it was the employees who were the at-risk party.

1Paul Crowley
I wish that the law had been written in line with other hazardous materials laws. Then there would be (very expensive) smoking bars in which the staff wore full-on hazmat suits at any time that they might be exposed to the hazardous smoke, and so forth. EDIT: to be clear, I mean this seriously, not as a joke about smoking laws.
0mattnewport
In Canada? Here in Vancouver it is illegal to smoke within 6m of doorways, windows or air intakes of any building. It is hard to see how that level of restriction can be attributed to a work safety motivation.

Best I can tell, Science is just a particularly strong form (/subset) of Bayesian evidence. Since it attempts (when done well) to control for many potentially confounding factors and isolate true likelihoods, we can have more confidence in the strength of the evidence thus obtained than we could from general observations.

0Jack
Yeah, though a lot of science is just building localized, domain specific ontologies (here's what kinds of fish there are, here's what kind of stars there are etc.) and I'm not sure this kind of scientific knowledge is much better than observations you or I make routinely. Also, some evidence gathering is every bit as powerful as science (or more so) and yet is rarely counted as a science (advanced sports statistics or marketing studies, for example).

Agreed, and a lot of modern fields, including many of the natural sciences and social sciences, derive from philosophers' framework-establishing questions. The trick is that we then credit the derived fields, rather than philosophy, with solving the original questions.

Philosophy doesn't really solve questions in itself; instead, it allows others to solve them.

I wonder if "How does neurons firing cause us to have a subjective experience?" might be unintentionally begging Mitchell_Porter's question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.

Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, "beliefs that the future will be just like the past" have a zero success rate.

Perhaps the inventions listed above suggest that it's unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a r...

4cousin_it
The negation of "a Singularity will occur" is not "everything will stay the same", it's "a Singularity as you describe it probably won't occur". I've no idea why you (and Eliezer elsewhere in the thread) are making this obviously wrong argument.

That's not uncommon. Villains act, heroes react.

I interpreted Eliezer as saying that that was a cause of the stories' failure or unsatisfactory nature, attributing this to our desire to feel like decisions come from within even when driven by external forces.

I'm perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I'd modify the classes slightly, however:

  • Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
  • Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
  • Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen...
4cousin_it
I think taw asked about reference classes of predictions. It's easy to believe in penicillin after it's been invented.

Agreed. Part of the reason I love reading Asimov is that he focuses so much on the ideas he's presenting, without much attempt to invest the reader emotionally in the characters. I find the latter impairs my ability to synthesize useful general truths from fiction (especially short stories, my favorite form of Asimov).

I defer to Wittgenstein: the limits of our language are the limits of the world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don't see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.
