!!! It is October 27, not 28 !!!
Also, it's at 19:00
Sorry but it's impossible to edit the post.
We will play board games and have fascinating discussions, as always. Bring your own board games!
If you have trouble finding the place or have other questions, call me (Vadim) at +972542600919.
Game night at LessWrong Tel Aviv! Meeting at Electra Tower, floor 29, as always. We are going to play board games and socialize. We might also do some improv theater. Bring your games and a good mood. Feel free to come late, but we'll probably finish around 22:00-23:00.
Facebook event: https://www.facebook.com/events/500002670160746/
If you have trouble finding the place, feel free to call me (Vadim) at 0542600919
We will meet at Google Israel on the 29th floor, as always.
The speaker this time is Yoav Hollander, inventor of the "e" hardware verification language and founder of Verisity. His description of the talk:
"I'll (briefly) describe the FAI verification problem, and admit that I don't really know how to solve it. I'll also warn against 'magical thinking', i.e. assuming that because a foolproof solution is needed, it will somehow appear before the window of opportunity slams on our fingertips.
I'll review what works (and what does not) in HW verification and in autonomous systems verification, and discuss why some of that may be relevant for FAI verification.
I'll then open the room for discussion."
Facebook event: https://www.facebook.com/events/907241922691991/ My phone: 0542600919 (Vadim)
> So in this case my question is why Kaj suggests his proposal instead of using bounded utility.
Two reasons.
First, as was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects: we want utility to be linear in human lives, and bounded utility seems to fail that.
Second, the way I arrived at this proposal was that RyanCarey asked me what my approach is for dealing with Pascal's Mugging. I replied that I just ignore probabilities that are small enough, which seems to be what most people do in practice. He objected that that seemed rather ad hoc and wanted a more principled approach, so I started thinking about why exactly it would make sense to ignore sufficiently small probabilities, and came up with this as a somewhat principled answer.
Admittedly, as a principled answer to which probabilities are actually small enough to ignore, this isn't entirely satisfying, since it still depends on a rather arbitrary parameter. But it still seemed to point to some hidden assumptions behind utility maximization, as well as raising some very interesting questions about what it is that we actually care about.
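The informal heuristic ("just ignore small enough probabilities") can be sketched in code. This is my own illustration, not anything from the thread; the function name, the threshold parameter, and the renormalization step are all assumptions of the sketch:

```python
# Sketch (my own illustration): expected utility that ignores outcomes
# whose probability falls below a threshold eps, then renormalizes over
# the outcomes that remain.
def truncated_expected_utility(outcomes, eps=1e-9):
    # outcomes: list of (probability, utility) pairs
    kept = [(p, u) for p, u in outcomes if p >= eps]
    total = sum(p for p, _ in kept)
    if total == 0:
        return 0.0
    return sum(p * u for p, u in kept) / total

# A Pascal's Mugging-like gamble: a tiny chance of an astronomical payoff.
# The naive expected utility would be dominated by the 1e30 term; the
# truncated version simply drops it.
print(truncated_expected_utility([(1e-20, 1e30), (1 - 1e-20, 1.0)]))
```

The arbitrary parameter mentioned above shows up directly here as `eps`, which is exactly the unsatisfying part.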
> First, as was mentioned elsewhere in the thread, bounded utility seems to produce unwanted effects: we want utility to be linear in human lives, and bounded utility seems to fail that.
This is not quite what happens. When you do UDT properly, the result is that the Tegmark level IV multiverse has finite capacity for human lives (when human lives are counted with 2^-(Kolmogorov complexity) weights, as they should be). Therefore the "bare" utility function has some kind of diminishing returns, but the "effective" utility function is roughly linear in human lives once you take their "measure of existence" into account.
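As a rough sketch in symbols (my notation, not from the comment): write $m(x) = 2^{-K(x)}$ for the measure of world $x$ and $n(x)$ for the number of human lives it contains. If $K$ is prefix Kolmogorov complexity, the weights $m(x)$ sum to at most 1 by the Kraft inequality, which is roughly the sense in which capacity is finite:

```latex
% Measure-weighted count of lives across the level IV multiverse
% (a sketch under the stated assumptions; \sum_x 2^{-K(x)} \le 1
% by the Kraft inequality, so N is finite when the n(x) are suitably bounded):
N = \sum_{x} 2^{-K(x)} \, n(x)

% The "bare" utility is bounded (diminishing returns in raw life-counts),
% while the "effective" utility is roughly linear in the weighted count:
U_{\mathrm{eff}} \approx c \cdot N
```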
I consider it highly likely that bounded utility is the correct solution.
If you have trouble finding the location, feel free to call me (Vadim) at 0542600919.
We will meet in Google Israel (Electra Tower) on floor 29, as always.
If you have trouble finding the location, feel free to call me (Vadim) at 0542600919.
Zohar Komargodski, a theoretical physicist from the Weizmann Institute of Science, will give a comprehensive review of the physics of black holes:
Physics in the past has progressed by connecting hitherto different concepts, for example: Space & Time, Electricity & Magnetism, Particles & Waves, and many others. We are now in the midst of another such exciting revolution, in which Space-Time and Information Theory are being related. Jacob Bekenstein laid out some of the basic concepts that appear in these surprising new developments. Thought experiments involving black holes were central to the initial leaps that Jacob made.
The goal of the presentation is to describe, in an informal fashion (requiring no particular prior knowledge of information theory or physics), what black holes are, what Jacob realised, what has been understood since his seminal papers, and what the central remaining (formidable) challenges are.
[Facebook event](https://www.facebook.com/events/1666895726878967/)
First, thanks for having this conversation with me. Before, I was very overconfident in my ability to explain this in a post.
In order for the local interpretation of Sleeping Beauty to work, it's true that the utility function has to assign utilities to impossible counterfactuals. I don't think this is a problem, but it does raise an interesting point.
Because only one action is actually taken, any consistent consequentialist decision theory that considers more than one action has to assign utilities to impossible counterfactuals. But the counterfactuals you mention are different: they have to be assigned a utility, but they never actually get considered by our decision theory because they're causally inaccessible - their utilities don't affect anything, in some logical-counterfactual or algorithmic-causal sense.
In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was specified as a sum of the utilities of local properties of the universe. This is what both allows local "consequences" in Savage's theorem, and specifies those causally-inaccessible utilities.
This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be "okay" to choose the inaccessible utilities arbitrarily such that the utility could be expressed in terms of local properties. I think this idea might neglect the importance of causal information in deciding what to call an "event."
> Different counterfactual games lead to different probability assignments.
Do you have some examples in mind? I've seen this claim before, but it has either relied on the assumption that probabilities can be recovered straightforwardly from the optimal action (not valid when the straightforward decision theory fails, e.g. the absent-minded driver, Psy-Kosh's non-anthropic problem), or on the assumption that certain population-ethics preferences can be ignored without changing anything (highly dubious).
> In order for the local interpretation of Sleeping Beauty to work, it's true that the utility function has to assign utilities to impossible counterfactuals. I don't think this is a problem...
It is a problem in the sense that there is no canonical way to assign these utilities in general.
> In the utility functions I used as examples above (winning bets to maximize money, trying to watch a sports game on a specific day), the utility for these impossible counterfactuals is naturally specified because the utility function was specified as a sum of the utilities of local properties of the universe. This is what both allows local "consequences" in Savage's theorem, and specifies those causally-inaccessible utilities.
True. As a side note, Savage's theorem is not quite the right tool here, since it produces both probabilities and utilities, while in our situation the utilities are already given.
> This raises the question of whether, if you were given only the total utilities of the causally accessible histories of the universe, it would be "okay" to choose the inaccessible utilities arbitrarily such that the utility could be expressed in terms of local properties.
The problem is that different extensions produce completely different probabilities. For example, suppose U(AA) = 0 and U(BB) = 1. We can decide U(AB) = U(BA) = 0.5, in which case the probability of each copy is 50%. Or we can decide U(AB) = 0.7 and U(BA) = 0.3, in which case the probability of the first copy is 30% and the probability of the second copy is 70%.
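The two-copy example can be sketched in code. This is my own illustration; the linear decomposition U(xy) = p·[x = B] + q·[y = B], the normalization U(AA) = 0 and U(BB) = 1, and the function name are all assumptions of the sketch:

```python
# Sketch (my own illustration): recover per-copy probabilities from a
# utility function over joint outcomes, assuming it decomposes linearly
# across copies as U(xy) = p*[x == B] + q*[y == B], with U(AA) = 0 and
# U(BB) = 1 fixed by normalization.
def copy_probabilities(u_ab, u_ba):
    # With the decomposition above, U(AB) = q and U(BA) = p, and
    # normalization forces p + q = 1, so p and q play the role of the
    # probabilities of the first and second copy respectively.
    p, q = u_ba, u_ab
    assert abs(p + q - 1) < 1e-9, "no linear decomposition with these values"
    return p, q

print(copy_probabilities(0.5, 0.5))  # symmetric extension: each copy 50%
print(copy_probabilities(0.7, 0.3))  # first copy 30%, second copy 70%
```

The point of the example survives in the code: both calls are consistent extensions of the same given utilities U(AA) = 0, U(BB) = 1, yet they yield different probability assignments.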
The ambiguity is avoided if each copy has an independent source of randomness, because then all of the counterfactuals are "legal." However, as the example above shows, these probabilities depend on the utility function. So even if we consider sleeping beauties with independent sources of randomness, the classical formulation of the problem is ambiguous, since it doesn't specify a utility function. Moreover, if all of the counterfactuals are legal, it might be that the utility function doesn't decompose into a linear combination over copies, in which case there is no probability assignment at all. This is why Everett branches have well-defined probabilities but e.g. brain emulation clones don't.
We will meet in Google Israel (Electra Tower) on floor 29, as always.
We will have two talks on Effective Altruism: by Uri Katz and myself.
My talk's abstract:
Effective altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world. I will give an introductory overview of the ideas of EA and the primary organizations associated with it.
Uri's abstract:
I will discuss my takeaways from attending EA Global this year. One of my objectives in going to EAG was to figure out whom I should give 10% of my income to this year. In previous years I donated to GiveWell, but I feel that they are well funded and do not need me. Next I considered Animal Charity Evaluators, since I am (mostly) a negative utilitarian - I want to alleviate as much suffering as possible, and animals beat humans by sheer numbers. I also considered x-risk, and other causes. Along the way I discovered what my motivation for giving was to begin with. By relating my personal thoughts & story I hope to expose how the average effective altruist thinks and lives. Finally I will say a few words about Effective Altruism Israel.
Facebook event: https://www.facebook.com/events/796399390482188/