All of Audere's Comments + Replies

Audere121

I'm a transhumanist. I believe in morphological freedom. If someone wants to change sex, that's a valid desire that Society should try to accommodate as much as feasible given currently existing technology. In that sense, anyone can choose to become trans.

The problem is that the public narrative of trans rights doesn't seem to be about making a principled case for morphological freedom, or engaging with the complicated policy question of what accommodations are feasible given the imperfections of currently existing technology. Instead, we're told that ever

... (read more)

I don't think it's a generational thing, because I do object to the self-labeling freedom. Yes, it sounds bad to be against something called "freedom", but it is necessary unless you want to bite the bullet in favor of things like "freedom to make up whatever beliefs you want without evidence"—which is what I think is ultimately at stake here.

I want shared maps that reflect the territory. We want people to have the freedom to modify their body and social presentation in the territory, but I don't think this (not even the social presentation part) implies t... (read more)

Audere63

Yes, the point of the proof isn't that the sane pure bets condition and the weak indifference condition are the be-all and end-all of corrigibility. But using the proof's result, I can notice that your AI will be happy to bet a million dollars against one cent that the shutdown button won't be pressed, which doesn't seem desirable. It's effectively willing to burn arbitrary amounts of utility, if we present it with the right bets. 
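To make the utility-burning concrete, here is a toy sketch of my own (not the formalism from the proof): assume the agent's utility is linear in dollars in worlds where the button is not pressed, and that the indifference correction exactly cancels any monetary change in button-pressed worlds. Then any bet conditional on the button not being pressed looks like free money to the agent, at arbitrarily bad odds.

```python
def bet_expected_utility(p_pressed, win_if_unpressed, lose_if_pressed):
    """Expected utility of a bet that wins `win_if_unpressed` dollars if
    the shutdown button is not pressed and loses `lose_if_pressed` if it is.
    Toy assumption: the indifference correction exactly cancels the loss
    in button-pressed worlds, so those worlds contribute zero utility."""
    u_if_unpressed = win_if_unpressed
    u_if_pressed = 0.0  # -lose_if_pressed + correction (= lose_if_pressed)
    return (1 - p_pressed) * u_if_unpressed + p_pressed * u_if_pressed

# Betting a million dollars against one cent, even at 99% shutdown odds,
# still has positive expected utility under the toy model:
assert bet_expected_utility(0.99, win_if_unpressed=0.01,
                            lose_if_pressed=1_000_000) > 0
```

Under this toy model the agent takes the bet for any nonzero probability that the button stays unpressed, which is the "willing to burn arbitrary amounts of utility" behavior described above.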

Ideally, a successful solution to the shutdown problem should violate one or both of these conditions in clear, limited wa

... (read more)
Audere30

If we implement your example, the AI is willing to bet at arbitrarily poor odds that the on switch will be on, thus violating the sane pure bets condition. 

You can have particular decision problems or action spaces that don't have the circular property of the Northland-Southland problem, but the fact remains that if an AI fulfills the weak indifference condition reliably, it must violate the sane pure bets condition in some circumstances. There must be insane bets that it's willing to take, even if no such bets are available in a particular situation.

B... (read more)

2Charlie Steiner
Yes. But the symmetry of the sane pure bets condition doesn't quite match what we want from corrigibility anyhow. I don't want an AI with a shutdown button to be making contingency plans to ensure good outcomes for itself even when the shutdown button is pressed.
Audere43

After hearing the problem, the question I asked myself was: at what odds would I bet that the coin came up heads? And the answer is that I would have a neutral expected return betting at 2:1 odds. This lines up with the Bayesian answer of P(heads) = 1/3.
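A quick Monte Carlo check of that betting intuition (a sketch of my own; it assumes the bet is offered at every awakening, which, as the replies below note, is itself a modeling choice):

```python
import random

def simulate(trials=100_000, seed=0):
    """Sleeping Beauty: heads -> one awakening, tails -> two.
    Return the fraction of awakenings at which the coin shows heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # heads: woken once
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # tails: woken twice
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(simulate())  # close to 1/3, matching break-even odds of 2:1
```

One heads-awakening for every two tails-awakenings gives the 1/3 fraction, which is exactly where a 2:1 bet breaks even.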

2riversflow
To me, adding a bet fundamentally changes the question. Canonically, the question is phrased in terms of credence, and maximizing a bet isn't the same as being rational about reality. Asking for betting odds is not the same as asking for the probability of the coin flip (credence). For the same reason that I know I may die in the next 5 years but would simultaneously be unwilling to take that as a nontransferable bet at even the most generous odds, Sleeping Beauty should believe the coin flip was 50/50 but be unwilling to take a bet at the same odds.

So, using deductive reasoning to answer the question: a fair coin has 50/50 odds, a fair coin was flipped, Sleeping Beauty gains no new information, therefore Sleeping Beauty should respond that she believes there is a 50% chance of heads.

A bet on each day changes the formulation: there are stakes that need to be maximized, and so now we are considering the odds. Using the credence that a fair coin is fair, and given the set of outcomes from the initial conditions of the problem, we can apply statistics and say that the odds are 1/3 that the coin flip resulted in heads.

A more rigorous argument can be found in this paper:
* When Betting Odds and Credences Come Apart: More Worries for Dutch Book Arguments, by Darren Bradley and Hannes Leitgeb
0Christopher King
That depends on how much money is at stake at each wake-up. If the first wake-up stakes only a penny and the second stakes a dollar, betting at odds much closer to 1/2 becomes optimal.
2tgb
That assumes that the bet is offered to you every time you wake up, even when you wake up twice. If you make the opposite assumption (you are offered the bet only on the last time you wake up), then the odds change. So I see this as a subtle form of begging the question.
Audere90

I strongly disagree that this was the point of this in TWC and would be highly surprised if Eliezer agreed with you. For one thing, the parties involved in nonconsensual sex in TWC seem to be having a perfectly fine time. I also wouldn't be surprised if someone raping an Ancient such that they have a terrible awful no-good time would fall under some other crime and still get the perpetrator arrested.

Answer by Audere120

Conjure an IQ test and take it, obviously. My IQ when dreaming ranges from greenish-purple to twelveteen o'clock.

Audere10

-Eliezer Yudkowsky trims his beard using Solomonoff Induction.

-Eliezer Yudkowsky, and only Eliezer Yudkowsky, possesses quantum immortality.

-Eliezer Yudkowsky once persuaded a superintelligence to stay inside of its box.

-1Matt Goldenberg
This one actually happened though. Mixing up real facts with fake facts gets confusing :) https://en.m.wikipedia.org/wiki/AI_box#AI-box_experiment
Audere20

"Different Minds (You're Concepts Formed Differently From Mine)" should probably be "Different Minds (Your Concepts Formed Differently From Mine)."

Audere10

The Philosopher's Polar North (can also be translated as The Philosopher's Apex) [Nhato Remix]

https://www.youtube.com/watch?v=pc4zZ43R9o0

Turn on captions to see the lyrics and their English translation. The song says a lot about searching for truth and knowledge that I find powerful.

Some excerpts from the translated lyrics:

Accepting even those facts I have denied, <before time melts my memories away>
I absentmindedly lift “truth” from an uneven distribution <as a god might guide fate>

Even my current knowledge is still uncertain, swaying,
 so wit... (read more)

Audere50

I think this might be a decent example of "rationalist" music. In particular, the lyrics communicate the value of seeking knowledge and discerning truth. There are parts that I disagree with, but overall I think it's pretty great for a Touhou soundtrack cover made by non-rationalists. Turn on captions to see the lyrics and their English translation.

The Philosopher's Polar North [Nhato Remix]

https://www.youtube.com/watch?v=pc4zZ43R9o0

I'll paste the translated lyrics here:

Accepting even those facts I have denied, <before time melts ... (read more)

Audere90

I agree that in many examples, like the simple risk/reward decisions shown here, certainty does not give an option higher utility. However, there are situations in which it might be advantageous to make a decision that has a worse expected outcome but is more certain. The example that comes to mind is a complex plan involving many decisions that affect each other. There is a computational cost associated with uncertainty, in that multiple possible outcomes must be considered in the plan; the plan "branches." Certainty simplifies things. For an agent with limited computing power, in a situation where time spent planning is costly, this could be significant.
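A toy illustration of that branching cost (my own sketch, not from the post): if each uncertain decision point has k possible outcomes, a complete contingency plan over n such decisions must cover k**n branches, while a plan built entirely from certain steps covers just one.

```python
def branch_count(outcomes_per_step):
    """Number of contingency branches a complete plan must cover,
    given the number of possible outcomes at each decision point."""
    total = 1
    for k in outcomes_per_step:
        total *= k
    return total

# Ten certain steps versus ten coin-flip-uncertain steps:
print(branch_count([1] * 10))  # 1 branch
print(branch_count([2] * 10))  # 1024 branches
```

The exponential gap between the two is the sense in which a slightly worse but certain option can pay for itself in saved planning effort.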

3Oscar_Cunningham
And the fact that situations like that occurred in humanity's evolution explains why humans have the preference for certainty that they do.