All of Amarko's Comments + Replies

Amarko43

I think the complement sandwich can be useful as a stepping stone to good communication. That said, I think of it as a narrow formulation of a more general (and less precisely defined) approach to conversation that I might point to with phrases like "work with people where they are at" and "be aware of the emotions that your words induce in other people". There was an article on LessWrong that I can't find, arguing that clear communication is worded to pre-emptively avoid likely misunderstandings and misconceptions. The idea I'm pointing to is like that, b... (read more)

Amarko*61

I very much appreciate this post, because it strongly resonates with my own experience of laziness and willpower. Reading this post feels less like learning something new and more like an important reminder.

Amarko10

This is not quite accurate. You can't uniformly pick a random rational number from 0 to 1, because there are countably many such numbers and the probabilities have to add up to 1: any probability distribution on a countable set must assign nonzero probability to some numbers, so it can't give every number the same weight.
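To spell out the step (my own formalization of the argument above): if a uniform distribution gave each rational in $[0,1]$ the same probability $p$, the total probability would be

```latex
\sum_{q \,\in\, \mathbb{Q} \cap [0,1]} p \;=\;
\begin{cases}
0 & \text{if } p = 0,\\[2pt]
\infty & \text{if } p > 0,
\end{cases}
```

and neither case equals 1, so no such uniform distribution exists.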

You can have a uniform distribution on an uncountable set, such as the real numbers between 0 and 1, but since you can't pick an arbitrary element of an uncountable set in the real world, this is a theoretical rather than a practical issue.

As far as I know, any mathematical case in which something with probability 0 can happen does not actually occur in the real world in a way that we can observe.

2Noosphere89
Here's one example: What's the probability that our physical constants are what they are, especially the constants that seem tuned for life? If the constants are arbitrary real numbers, the answer is probability 0, and this applies no matter what number you pick. This is how we can defuse the fine-tuning argument, that the cosmos's constants have improbable values that seem tuned for life: any other constant also has probability 0, regardless of whether it could sustain life or not: https://en.wikipedia.org/wiki/Fine-tuned_universe
2AnthonyC
Thanks, I didn't realize that! It does make sense now that I think about it. I think if you replace the rationals with the reals in my theoretical example, the rest still works? And yes, I agree about in the real world. Probabilities 0 and 1 are limits you can approach, but only reach in theory.
Amarko10

On the other hand, the more you get accustomed to a pleasurable stimulus, the less pleasure you receive from it over time (hedonic adaptation). Since this happens to both positive and negative emotions, it seems to me that there is a kind of symmetry here. To me this suggests that decreasing prediction error results in more neutral emotional states rather than pleasant states.

Amarko34

I disagree that all prediction error equates to suffering. When you step into a warm shower you experience prediction error just as much as if you step into a cold shower, but I don't think the initial experience of a warm shower contains any discomfort for most people, whereas I expect the cold shower usually does.

Furthermore, far more prediction error is experienced in life than suffering. Simply going for a walk leads to a continuous stream of prediction error, most of which people feel pretty neutral about.

1cesiumquail
I would say the warm shower causes less prediction error than the cold shower because it’s less shocking to the body, but there’s still a very subtle amount of discomfort which is hidden under all the positive feelings. The level of discomfort I’m talking about is very slight, but you would notice it if there was nothing else occupying your attention. I don’t mean to say it causes negative emotions. It’s more like the discomfort of imagining an unsatisfying shape, or watching a video at slightly lower resolution.

If you compare any activity to deep sleep or unconsciousness, you can find sensations that grab your attention by being slightly irritating. As long as it’s noticeable I think it causes slight negative valence. But this is often outweighed by other aspects of the activity that increase valence. Sitting at home doing nothing might involve the negative sensations of boredom, restlessness, and impatience, all of which disappear when we go for a walk, so any discomfort is hard to notice underneath the obvious increase in valence.
1sturb
I think that the key is in the way that preferences inform our world model, and thus what causes the prediction error to occur. There are errors you could observe that would strongly indicate that your preferences are less able to be met in the posterior model. This will cause suffering, whereas an update towards a model in which your needs are met more easily is likely to cause a good feeling.

For example, you sit down to eat a sandwich at Subway for the first time and the sub is actually way better than you expected. You will experience a pleasant feeling, and if things like this keep happening you might feel like you've really figured out some good strategy for operating.

In a sense you are actually decreasing prediction error more than you are increasing it when a good thing happens to you, because you always generate prediction error based on the difference between your ideal world and your observed reality. So when you have a very positive experience, this error between the ideal and observed is lessened. This could outweigh the prediction error of the prediction itself being wrong. The example I think of for this is the ecstatic child in Disney World. There might be more work here though.
Amarko10

This reminds me of a lot of discussions I've had with people where we seem to be talking past each other, but can't quite pin down what the disagreement is.

Usually we just end up talking about something else instead that we both seem to derive value from.

Amarko54

It seems to me that the constraints of reality are implicit. I don't think "it can be done by a human" is satisfied by a method requiring time travel with a very specific form of paradox resolution. It sounds like you're arguing that the Church-Turing thesis is simply worded ambiguously.

2Noosphere89
I definitely remember a different Church-Turing Thesis than what you said, and if we kept to realistic limitations of humans, then the set of computable functions is way, way smaller than the traditional view, so that's not really much of a win at all. At any rate, it is usable by a human, mostly because humans are more specific, and critically a Universal Turing Machine can perfectly emulate a human brain, so if we accept the CTC model as valid for Turing Machines, then we need to accept its validity for humans, since humans are probably just yet another form of Turing Machine. I will edit the post though, to make it clear what I'm talking about.
Amarko3-4

It looks like Deutschian CTCs are similar to a computer that can produce all possible outputs in different realities, then selectively destroy the realities that don't solve the problem. It's not surprising that you could solve the halting problem in such a framework.

3Noosphere89
The way CTC computers actually work is that, assuming we can act on the fundamental degrees of freedom of the universe, getting the right answer to a hard problem is the only logically self-consistent solution; getting the wrong answer, or not getting an answer at all, is logically inconsistent and thus can't exist.
Amarko20

Our symbolic conception of numbers is already logarithmic, as order of magnitude corresponds to the number of digits. I think an estimate of a product based on an imaginary slide rule would be roughly equivalent to estimating based on the number of digits and the first digit.
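To make the idea concrete, here's a rough Python sketch (the function name and the example numbers are my own illustration, not from the comment): adding the approximate logarithms recovered from digit count and leading digit is exactly what a slide rule does mechanically.

```python
import math

def estimate_product(a: int, b: int) -> float:
    """Estimate a*b using only each number's digit count and leading digit,
    mimicking the mental 'slide rule' (adding approximate base-10 logarithms)."""
    def approx_log10(n: int) -> float:
        digits = len(str(n))       # digit count gives the order of magnitude
        leading = int(str(n)[0])   # first digit refines the estimate
        return (digits - 1) + math.log10(leading)
    return 10 ** (approx_log10(a) + approx_log10(b))

# estimate_product(3741, 282) gives about 6e5 (true product: 1,054,962),
# since truncating to the leading digit always underestimates a little.
```

This is only order-of-magnitude accurate, which matches the comment's point: the digit-count-plus-first-digit estimate is a coarse logarithm, not a precise one.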

Amarko20

Similar to point 2: I find that reading a book in the morning helps my mood. Particularly a physical fiction book.

1Celenduin
What is "physical fiction"?
Amarko42

I've definitely noticed the pattern of habits seeming to improve my life without them feeling like they are improving my life. On a similar note, a lot of habits seem easy to maintain while I'm doing them and obviously beneficial, but when I stop I have no motivation to continue. I don't know why that is, but my hope is that if I notice this hard enough it will become easier for me to recognize that I should do the thing anyway.

Amarko11

I read some of the post and skimmed the rest, but this seems to broadly agree with my current thoughts about AI doom, and I am happy to see someone fleshing out this argument in detail.

[I decided to dump my personal intuition about AI risk below. I don't have any specific facts to back it up.]

It seems to me that there is a much larger possibility space of what AIs can/will get created than the ideal superintelligent "goal-maximiser" AI put forward in arguments for AI doom.

The tools that we have depend more on the specific details of the underlying mechanic... (read more)

Amarko30

Perhaps they could be next to the "Reply" button, and fully contained in the comment's container?

Amarko*62

The answer is pretty clear with Bayes' Theorem. The world in which the coin lands heads and you get the card has probability 0.0000000005, and the world in which the coin lands tails has probability 0.5. Thus you live in a world with a prior probability of 0.5000000005, so the probability of the coin being heads is 0.0000000005/0.5000000005, or a little under 1 in a billion.
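The arithmetic above can be sketched in a few lines (a hedged sketch; the one-in-a-billion card probability and the guaranteed-card-on-tails reading are taken from the thought experiment as I understand it):

```python
# Posterior probability of heads given that you received the card,
# following the Bayes' Theorem calculation described above.
p_heads = 0.5
p_card_given_heads = 1e-9   # one-in-a-billion card draw in the heads world
p_tails = 0.5               # in the tails world the card arrives for sure

p_heads_and_card = p_heads * p_card_given_heads   # 0.0000000005
p_evidence = p_heads_and_card + p_tails           # 0.5000000005
posterior = p_heads_and_card / p_evidence         # a little under 1 in a billion
print(posterior)
```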

Given that the worst case scenario of losing the bet is saying you can't pay it and losing credibility, you and Adam should take the bet. If you want to (or have to) actually commit to pay... (read more)

1simon
Surprised at that level of risk aversion. I would definitely take the bet given the "pure" thought experiment, though in reality the odds would be a lot lower given the probability that some information would have leaked by day 9, or e.g. the different possibilities listed by Dagon.
Amarko20

Personally I would be interested in a longer post about whatever you have to say about the battery and battery design. You could make a sequence, so that it can be split into multiple posts.

Amarko30

I assume work is output/time. If a machine is doing 100% of the work, then the human's output is undefined since the time is 0.

2baturinsky
Yes. And also, it's about the importance of the human/worker. While there is still some part of the work that the machine can't do, the human that can do the remaining part is important. Once the machine can do everything, the human is disposable.
Amarko80

Some properties that I notice about semistable equilibria:

  • It is non-differentiable, so any semistable equilibrium that occurs in reality is only approximate.
  • If the zones of attraction and repulsion meet at the same state, random noise will inevitably cause the state to hop over to the repulsive side. So what a 'perfect' semistable equilibrium will look like is a system where the state tends towards some point, hangs around for a while, and then suddenly flies off to the next equilibrium. This makes me think of the Gömböc.
  • A more approximate semistable equilibr
... (read more)
Amarko30

There's still the problem that two people can't occupy the same space at the same time, so we need people to be able to swap places instantly. This then requires some coordination, which is mentioned below.

Some commenters have mentioned economy of scale: it can be more efficient to pool together resources to make a bunch of one thing at a time. For example, people want paperclips but they could get them much faster if they operate a massive paperclip-making machine rather than everyone making their own individually. I think this is already covered though, a... (read more)

Amarko40

I wonder if there's a different potential takeaway here than "find what feels rewarding". Duhig’s story makes me think of a perspective I've learned from TEAM-CBT: Bad habits (and behavioural patterns in general) are there for a reason, as a solution to some other problem. An important first step to changing your behaviour is to understand the reasons not to change, and then really consider what is worth changing. It sounds to me like Duhig figured out what problem eating cookies was trying to solve.

At least, that's the theory as I understand it. I hav... (read more)

1lynettebye
Hmm, that framing doesn't feel at odds with mine. Finding what's rewarding can definitely include whatever it is that's reinforcing the current behavior. I emphasized the gut-level experience because I expect those emotions contain the necessary information that's missing from rational explanations for what they "should" do. 
Amarko20

I've learned to be resilient against AI distortions, but 'octagonal red stop sign' really got me. Which is ironic, since you'd think that prompt would be particularly easy for the AI to handle. The other colours and shapes didn't have a strong effect, so I guess the level of familiarity makes a difference.

I think the level of nausea is a function of the amount of meaning that is being distorted: distorted words, faces, or food have a much stronger effect than warped clock faces or tables, for example. (I would also argue there is more meaning to the shape of a golf club than a clock face.)