LESSWRONG

Rob Lucas

Comments
Don't Eat Honey
Rob Lucas · 5d · 10

One reason to think that bee suffering and human suffering are comparably important (within one or two orders of magnitude) is just that suffering is suffering.  When you feel pain, you don't really feel much besides the pain; when it's intense enough, you can't experience much else at all.  You can't think clearly, you can't do the cognitive things that seem to separate us from bees, you just experience suffering in some raw form, and that seems very bad.  If we imagine that a bee's suffering is something like this, it seems bad in a similar way to human suffering.

But one (not the only) issue here is that this way of viewing human suffering treats the human mind as a discrete entity.  There is one individual who is suffering, there is one bee that is suffering, and these seem like comparable things.

I don't think that's a reasonable model of the mind.  Instead, there are many separate but interconnected parts of the mind, all of them suffering when we are in pain.  The bee, by virtue of being a simpler creature, has a mind made up of many fewer such parts, and thus there are just fewer beings suffering in this way when a bee suffers than when a human does.

Of course, these separate parts of the mind integrate into a larger whole, but that doesn't make them not present.  And I think noticing that the mind is made up of many distinct parts gives a better intuitive picture of what a person is than thinking of us as discrete entities does.  If we take this picture seriously, it clearly justifies a moral distinction (not of kind but of quantity) between more complex and less complex beings: the simplification is to see a human mind as made up of more 'people' than a bee's mind.  This justifies ideas like treating neuron count as an important moral consideration.

Again, the separate agents within the mind interact and merge to create a larger emergent entity, yet there remain distinctions between them which should make us think that treating a human as a single agent and a bee as a single agent on par with them is misguided.

Reply
Free Will, Like Probability, is About Local Knowledge
Rob Lucas · 1mo · 10

I think you mean "determinism being false...", the rest of your comment makes sense in that context.

In which case I think you're saying that if determinism is false, libertarian free will would be possible.  And since that's true, when I suggest that we should define free will in relation to our (lack of) knowledge about the world, I'm dismissing the possibly better definition given by a libertarian free will perspective.

Is that right?

If so, I think that's right.  I do think there are arguments against libertarian free will that hold even if determinism is false, but I don't make any such arguments in the post; it doesn't address the validity of libertarian free will at all, and to the extent that I want to make a positive claim with the piece, that is probably a flaw.  I'll consider making a minor edit to the Substack version of the article that at least mentions this, though I probably won't try to make the full argument against libertarian free will, as the piece is already long enough as it is.

Thanks for pointing this out, I did legitimately miss that.

(And if I misunderstood your point and you were saying something else please let me know!)

Reply
Free Will, Like Probability, is About Local Knowledge
Rob Lucas · 1mo · 10

I think I agree with this.

However, as this is a response to the comment that I think made clear the reasons why I would agree, maybe I'm missing something important.

Maybe my error is related to the fact, which you correctly point out, that my article assumes that determinism is true and asks, "if determinism is true, can we still have free will?".  It seems to me that determinism only strengthens the incompatibilist position, which is why the article uses it as a framework.  But it sounds like you're saying there is at least some way in which, if determinism isn't true, the incompatibilist viewpoint is strengthened?

Reply
Free Will, Like Probability, is About Local Knowledge
Rob Lucas · 1mo · 10

The probability stuff is meant as an analogy showing the place that our knowledge can have in our description of reality, not as directly implying free will.  It's saying "free will is like this," not "this premise leads to the conclusion that free will exists."

I definitely agree that free will has multiple definitions, and certainly there are some definitions by which free will just doesn't exist.  The point I try to make in the piece is that this is a useful definition of free will, not that it's the only possible one.  I try to object to the objection that a compatibilist definition doesn't add anything useful to our understanding of the world, but not to the idea that no other definition is coherent.  I don't think I say anywhere that free will exists under any definition of free will, only that this definition is a good one that captures meaningful things about the world.

I regret a little the initial comments I made introducing the piece; I was just trying to suggest that while we've all seen the same arguments about free will over and over, people might find a new nuance here.  I certainly don't think I invented compatibilism.  I just think the analogy to probability offers a slightly new perspective for understanding a little more clearly why compatibilism works.

Anyway, thanks for your comment; I think I mostly agree with everything.  I suppose I'm probably a little extreme in my views on probability, though.

Reply
On Eating the Sun
Rob Lucas · 6mo · 40

I agree that it's plausible just from priors that ASI could find a way to eat the sun.  The matter is there, and while it's strongly gravitationally bound in a way that's inconvenient, there's nothing physically impossible about getting it out of that arrangement and into one more convenient for fueling fusion reactors or something.

But an analysis of how plausible the scenario is would certainly have made the post more valuable.  There are plausible proposals for how to get the fuel in the sun out such that it could be used more efficiently, and while an ASI might come up with a more elegant or efficient plan, there are some fundamental physical limits on exactly how efficient the process could be made.

Wikipedia has some discussion of possible methods: https://en.m.wikipedia.org/wiki/Star_lifting

That article says: "This energy could be collected by a Dyson sphere; using 10% of the Sun's total power output would allow 5.9 x10^21 kilograms of matter to be lifted per year (0.0000003% of the Sun's total mass)", but this doesn't take into account the possibility of using the collected mass to fuel fusion reactions that are then used to power further mass collection.  What are the constraints on that process?  (My first thought is that you have to worry about heat if you try to push the total power too high.)

10,000 years sounds like enough time if you can get an exponential process going that uses the fuel harvested from the sun to collect more fuel.  But any process will have some constraints, such as the maximum temperature at which the various parts of your system can function, or the specific materials your system is made of (do you have to build your fusion reactors out of materials harvested from metal-rich bodies? Can you use carbon converted into diamondoid nanomachines? Can you get enough of those materials out of the fusion of hydrogen to keep the process going once it's started?).  Even if your fuel harvesters and fusion reactors can stand up to the high temperatures necessary to eat the sun in that time frame, what about everything else in the solar system?  Does this process sterilize the earth of biological life?

Once I consider that there will be some sort of physical constraints on the process and also remember the fact that the sun is really big, it's not obvious that even an exponential process of fuel harvesting from the sun will be completed in a 10,000-year time frame.
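
To make that concrete, here's a minimal back-of-envelope sketch (in Python, using the lifting rate quoted above; the doubling times are purely illustrative assumptions, not claims about what's physically achievable) of how long the process takes with and without exponential growth:

```python
# Back-of-envelope: how long does star lifting take at the quoted rate,
# and what doubling time would an exponential process need to finish in ~10,000 years?
# The doubling times below are illustrative assumptions, not feasibility claims.
import math

M_SUN = 1.989e30   # total solar mass, kg
R0 = 5.9e21        # lifting rate at 10% of solar output, kg/year (Wikipedia quote above)

# Constant-rate case: no exponential growth at all.
print(f"Linear time to lift the whole sun: {M_SUN / R0:.2e} years")  # ~3.4e8 years

# Exponential case: the lifting rate doubles every T_d years, so the mass lifted
# by time t is the integral of R0 * 2^(t/T_d), i.e. R0/k * (e^(k*t) - 1) with k = ln(2)/T_d.
def completion_time(doubling_time_years: float) -> float:
    """Years until the cumulative lifted mass reaches one solar mass."""
    k = math.log(2) / doubling_time_years
    return math.log(1 + M_SUN * k / R0) / k

for t_d in (100, 500, 1000, 5000):   # assumed doubling times, in years
    print(f"Doubling time {t_d:>5} yr -> finished in {completion_time(t_d):,.0f} years")
```

With those (made-up) doubling times, the constant-rate case takes on the order of 3 x 10^8 years, and hitting the 10,000-year mark requires the harvesting capacity to keep doubling every few hundred years the whole way through, which is exactly where the heat and materials constraints above would bite.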

Reply
Bigger Livers?
Rob Lucas · 8mo · 1611

One reason is just that eating food is enjoyable.  I limit the amount of food I eat to stay within a healthy range, but if I could increase that amount while staying healthy, I could enjoy that excess.

I think there are two aspects to the enjoyment of food.  One is related to satiety.  I enjoy the feeling of sating my appetite, and failing to sate it leaves me with the negative experience of craving food (negative if I don't satisfy those cravings).

But the other aspect is just the enjoyment of eating each individual bite of food.  Not the separate enjoyment of sating my appetite, but just the experience of eating.*

When I was younger and much more physically active I ate very large amounts of food.  I miss being able to do that.  I'm just as sated now with the much smaller portions I eat, but eating a small breakfast instead of a large one is a different experience. 

This probably doesn't justify some sort of risky intervention to increase liver size.  Food is enjoyable, but so are a lot of other things in life.  But shifting to a higher-protein diet seems like the kind of safe intervention, potentially even healthier in other respects, that, if it has the side effect of letting you eat a little more food, could improve quality of life with minimal other costs.  Potential costs I see are related to the price of protein relative to other sources of nutrition, the cost of the additional food (if the point is being able to eat more, you've got to spend money on that excess), and, depending on one's moral views, something related to the source of the added protein.

 

*I think Kahneman's remembering vs. experiencing selves distinction adds some confusion here as well. When we remember a meal we don't necessarily remember the enjoyment we got from every bite, but probably put more weight on the feeling of satiety and the peak experience (how good did it taste at its best?).  But the experiencing self experiences every bite.  How much you want to weight the remembering vs. experiencing self is a philosophical issue, but I just want to note that it comes up here.

Reply
What can we learn from insecure domains?
Rob Lucas · 8mo · 10

I think tailcalled's point here is an important one.  You've got very different domains with very different dynamics, and it's not a priori obvious that the same general principle is involved in making all of these seemingly dangerous systems relatively safe.  It's not even clear to me that they are safer than you'd expect.  Of course, that depends on how safe you'd expect them to be.

Many people have lost their money to crypto scams.  Catastrophic nuclear war hasn't happened yet, but it seems like we may have had some close calls, and looked at on a per-year basis it still seems we're in a bad equilibrium.  It's not at all clear that nuclear weapons are safer than we'd naively assume.  Cybersecurity issues haven't destroyed the global economy, but, for instance, on the order of a hundred billion dollars of pandemic relief funds was stolen by scammers.

That said, if I were looking for a general principle that might be at play in all of these cases, I'd look at something like offense/defense balance.

Reply
avturchin's Shortform
Rob Lucas · 8mo · 70

When I was trekking in Qinghai, my guide suggested we do a hike around a lake on our last day on the way back to town.  It was just a nice, easy walk around the lake.  But there were Tibetan nomads (nomadic yak herders; he just referred to them as nomads) living on the shore of the lake, and each family had a lot of dogs (Tibetan Mastiffs as well as a smaller local dog they call "three-eyed dogs").  Each time we got near their territory, the pack would come out very aggressively.

He showed me, first, to always have some stones ready, and second, to throw a stone over their heads when they got too close. "Don't hit the dogs," he told me, "the owners wouldn't be happy if you hit them, and throwing a stone over their heads will warn them off."

When they came he said, "You watch those three, I need to keep an eye on the ones that will sneak up behind us."  Each time the dogs used the same strategy.  There'd be a few that were really loud and ran up to us aggressively.  Then there'd be a couple sneaking up from the opposite side, behind us.  It was my job to watch for them and throw a couple of stones in their direction if they got too close.

He also made sure to warn me, "If one of them does get to you, protect your throat.  If you have to, give it a forearm to bite down on instead of letting it get your throat."  He had previously shown me the large scar on his arm where he'd used that strategy in the past.  When I looked at him sort of shocked, he said, "Don't worry, it probably won't come to that."  At this point I was wondering if maybe we should skip the lake walk, but I did go there for an adventure.  Luckily the stone throwing worked, and we were walking on a road with plenty of stones, so it never really got too dangerous.

Anyway, +1 to your advice, but also look out for the dogs that are coming up behind you, not just the loud ones that are barking like mad as a distraction.

Reply
Of Birds and Bees
Rob Lucas · 8mo · 20

I don't think you've highlighted the causal factor here.  It's not at all clear that the reason bees and ants have a more effective response to predators than flocks of birds do is that the bees are individually less intelligent than the birds.

There's a very clear evolutionary/game theoretic explanation for the difference between birds and bees here: specifically the inclusive fitness of individual bees is tied to the outcome of the collective whereas the inclusive fitness of the birds is not.

In a game theoretic framework we might say that the payoff matrices for the birds and bees are different, so of course we'd expect them to adopt different strategies.
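
As a toy illustration of that point, here's a minimal sketch (in Python, with entirely made-up survival numbers) of the same two-player "defend or flee" game scored two different ways: once where fitness is just the individual's own survival, and once where it tracks the collective's survival:

```python
# Toy illustration (made-up numbers): the same "defend or flee" game, scored two ways.
# Group survival probability as a function of how many of the two players defend:
group_survival = {0: 0.2, 1: 0.6, 2: 0.9}
DEFEND_COST = 0.5   # extra personal mortality risk taken on by a defender

def bird_payoff(me: str, other: str) -> float:
    """Bird-like payoff: only my own survival counts toward fitness."""
    n = [me, other].count("defend")
    return group_survival[n] - (DEFEND_COST if me == "defend" else 0.0)

def bee_payoff(me: str, other: str) -> float:
    """Bee-like payoff: a sterile worker's inclusive fitness tracks colony survival."""
    n = [me, other].count("defend")
    return group_survival[n]

for name, payoff in [("bird", bird_payoff), ("bee", bee_payoff)]:
    for other in ("defend", "flee"):
        best = max(("defend", "flee"), key=lambda me: payoff(me, other))
        print(f"{name}: if the other {other}s, my best response is to {best}")
# With these numbers, "flee" dominates for the bird (both flee, group survival 0.2),
# while "defend" dominates for the bee (both defend, colony survival 0.9),
# with no difference in individual intelligence anywhere in the model.
```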

Neither of these is dependent upon the respective intelligences of individual members of the collectives.

This makes me predict that the effectiveness of group strategies should correlate more strongly with the alignment of the individuals' incentive structures than with the (inverse of the) intelligence of the individual members, which is what your post suggests.

So, for instance, within flocking birds, do birds with smaller brain-to-body-mass ratios adopt better strategies?  Within insects, what pattern do we see?  I would suggest that the real pattern we'll end up finding is the one related to inclusive fitness.  So I'd predict that pack animals that associate with close relatives, like wolves and lions, will adopt better collective strategies than animals that form collectives with non-relatives.

Once you control for this, I might even expect the intelligence of individual members to positively correlate with the quality of group strategies, as it can allow them to solve coordination problems that less intelligent individuals couldn't.  This would explain the divergence of humans from the trend you notice.  But I'm speculating here.

Reply
What You Can Give Instead of Advice
Rob Lucas · 8mo · 52

I like the three suggested approaches instead of giving advice directly.  All three seem like good ideas.

However, all three of your approaches seem like things that could still be done in combination with giving advice.  "Before giving advice, try to fully understand the situation by asking questions" seems like a reasonable way to implement your first suggestion, for instance.  Personal experiences can be used to give context for why you are giving the advice you are giving, and clearing up misconceptions can be an important first step before giving more concrete advice.  This doesn't mean that these approaches need to be combined with giving advice, but they aren't in opposition to it and can perhaps be the thing that shifts us from bad advice to good advice.

In general I see you trying to tip us into a more collaborative frame with friends or colleagues who come to us with problems.  Instead of immediately trying to solve their problem independently, try to work with them to better understand the issue and see if you have something worthwhile to add.  This makes sense to me.

I find your second paragraph oversimplified.  It's not at all clear that being in different circumstances means your advice doesn't apply to others.  There are many situations where it's exactly because you come from a different perspective that you can expect to have useful advice.

My final criticism is with respect to the idea that advice is no longer applicable in the modern world of the internet.  I don't think this is true.  A lot of the time people simply don't know what options are available, and so wouldn't even consider looking for entire classes of solutions without advice that guides them there.  There have been many cases in my own life when I've benefited from advice when I didn't even realise I had a problem: I was doing something in a way that worked but was highly suboptimal, and when a friend saw what I was doing, they suggested a simpler and more elegant solution that immediately made things easier for me.  I wouldn't even have thought to ask (or to search the internet) for a solution to this problem, because I already had a solution, and so didn't think of it in terms of having a problem to be solved.  In these cases unsolicited advice was highly useful for me.

I think there's a good reading of what you are saying as "advice is overrated", and you are trying to shift us to a more collaborative framework.  Since advice is overrated and reactive advice is overused, maybe a heuristic like "don't give advice" is useful for shifting us away from the typical immediate reaction to friends with problems, where we try to solve the problem right away rather than asking questions to delve deeper.

Reply
Posts

0 · Change And Identity: a Story and Discussion on the Evolving Self · 17d · 0
4 · Free Will, Like Probability, is About Local Knowledge · 1mo · 6
1 · Bayesian Punishment · 2y · 1
8 · The biological intelligence explosion · 4y · 5