
Meetup : Giving What We Can at LessWrong Tel Aviv

0 Squark 01 July 2016 02:07PM

Discussion article for the meetup : Giving What We Can at LessWrong Tel Aviv

WHEN: 05 July 2016 05:06:58PM (+0300)

WHERE: Cluster - Disruptive Technologies hub, Yigal Alon 118 Tel Aviv

This Tuesday, LessWrong Tel Aviv is proud to host a talk by Erwan Atcheson from Giving What We Can. GWWC is an organization based in the United Kingdom whose mission is promoting donations to effective charities in the global poverty domain. GWWC is associated with the Centre for Effective Altruism and is one of the most famous organizations in the effective altruism movement, known among other things for the pledge to give 10% of one's income to charity, which anyone can take to become a member. Erwan will talk about GWWC's mission and work and will take questions from the audience.

As usual, the meetup begins at 19:00 but the talk will only begin around 19:30-19:45. Entrance to the Cluster is from Totseret Ha'aretz street, through a brown door with a doorbell.

See you all there!


Meetup : Tel Aviv: Nick Lane's Vital Question

1 Squark 15 January 2016 06:54PM

Discussion article for the meetup : Tel Aviv: Nick Lane's Vital Question

WHEN: 19 January 2016 07:00:00PM (+0200)

WHERE: Yigal Alon 118, Tel Aviv

This time in LessWrong Tel Aviv, Daniel Armak will review Nick Lane's books on evolutionary biology. The abstract of the talk:

"The largest scale of biology contains many unexplained facts. Many important traits are exclusive to eukaryotes: large cell and genome size, internal complexity, multicellularity, sex and the haploid-diploid cell cycle, mitochondria, phagocytosis, and hundreds more. What is the relation between them? How did they evolve, why do all eukaryotes have them (or once had), and why don't any other cells have any of them? What are some of them even for? I will present a summary of Nick Lane's books, themselves a popular summary of other researchers, that attempts to answer many questions with one big answer."

The event will take place in the Cluster. Entrance is from Totzeret Haaretz street. Ring the doorbell on the brown door.

Facebook event: https://www.facebook.com/events/435804276616336/

Contact number: +972542600919 (Vadim Kosoy)


Meetup : Tel Aviv Game Night

1 Squark 03 December 2015 07:38PM

Discussion article for the meetup : Tel Aviv Game Night

WHEN: 08 December 2015 07:00:00PM (+0200)

WHERE: Yigal Alon 118, Tel Aviv

We will play board games and have fascinating discussions, as always. Bring your own board games!

If you have trouble finding the place or have other questions, call me (Vadim) at +972542600919.


Meetup : Game Night in Tel Aviv

1 Squark 09 November 2015 07:14PM

Discussion article for the meetup : Game Night in Tel Aviv

WHEN: 10 November 2015 07:00:00PM (+0200)

WHERE: Yigal Alon St 98, Tel Aviv-Yafo, Israel

Game night at LessWrong Tel Aviv! We're meeting at the Electra Tower, floor 29, as always. We are going to play board games and socialize. We might also do some improv theater. Bring your games and a good mood. Feel free to come late, but we'll probably finish around 22:00-23:00.

Facebook event: https://www.facebook.com/events/500002670160746/

If you have trouble finding the place, feel free to call me (Vadim) at 0542600919


Meetup : Tel Aviv: Hardware Verification and FAI

1 Squark 01 October 2015 09:00PM

Discussion article for the meetup : Tel Aviv: Hardware Verification and FAI

WHEN: 28 October 2015 12:59:48AM (+0300)

WHERE: Electra Tower

We will meet at Google Israel on the 29th floor, as always.

The speaker this time is Yoav Hollander, inventor of the "e" hardware verification language and founder of Verisity. His description of the talk:

"I'll (briefly) describe the FAI verification problem, and admit that I don't really know how to solve it. I'll also warn against 'magical thinking', i.e. assuming that because a fool-proof solution is needed, it will somehow appear before the window of opportunity slams on our finger tips.

I'll review what works (and what does not) in HW verification and in autonomous systems verification, and discuss why some of that may be relevant for FAI verification.

I'll then open the room for discussion."

Facebook event: https://www.facebook.com/events/907241922691991/

My phone: 0542600919 (Vadim)


Meetup : Tel Aviv: Black Holes after Jacob Bekenstein

1 Squark 13 September 2015 06:29AM

Discussion article for the meetup : Tel Aviv: Black Holes after Jacob Bekenstein

WHEN: 24 November 2015 08:00:00AM (+0300)

WHERE: Yigal Alon 98, Tel Aviv

We will meet in Google Israel (Electra Tower) on floor 29, as always.

If you have trouble finding the location, feel free to call me (Vadim) at 0542600919.

Zohar Komargodski, a theoretical physicist from the Weizmann Institute of Science, will give a comprehensive review of the physics of black holes:

Physics in the past has progressed by connecting hitherto different concepts, for example: Space & Time, Electricity & Magnetism, Particles & Waves, and many others. We are now in the midst of another such exciting revolution, where Space-Time and Information Theory are being related. Jacob Bekenstein laid out some of the basic concepts that appear in these surprising new developments. Thought experiments involving black holes were central to the initial leaps that Jacob made.

The goal of the presentation is to describe, in an informal fashion (requiring no particular prior knowledge of information theory or physics), what black holes are, what Jacob realised, what has been understood since his seminal papers, and what the central remaining (formidable) challenges are.

Facebook event: https://www.facebook.com/events/1666895726878967/


Meetup : Effective Altruism @ LessWrong Tel Aviv

1 Squark 11 September 2015 07:06AM

Discussion article for the meetup : Effective Altruism @ LessWrong Tel Aviv

WHEN: 29 September 2015 07:00:00PM (+0300)

WHERE: Yigal Alon 98, Tel Aviv

We will meet in Google Israel (Electra Tower) on floor 29, as always.

We will have two talks on Effective Altruism: by Uri Katz and myself.

My talk's abstract:

Effective altruism is a philosophy and social movement that applies evidence and reason to determine the most effective ways to improve the world. I will give an introductory overview of the ideas of EA and the primary organizations associated with it.

Uri's abstract:

I will discuss my takeaway from attending EA Global this year. One of my objectives in going to EAG was to figure out who I should give 10% of my income to this year. In previous years I donated to GiveWell, but I feel that they are well funded and do not need me. Next I considered Effective Animal Evaluators, since I am (mostly) a negative utilitarian - I want to alleviate as much suffering as possible, and animals beat humans by sheer numbers. I also considered x-risk and other causes. Along the way I discovered what my motivation for giving was to begin with. By relating my personal thoughts & story, I hope to show how the average effective altruist thinks and lives. Finally, I will say a few words about Effective Altruism Israel.


[LINK] Vladimir Slepnev talks about logical counterfactuals

7 Squark 03 September 2015 06:29PM

Vladimir Slepnev (aka cousin_it) gives a popular introduction to logical counterfactuals and modal updateless decision theory at the Tel Aviv LessWrong meetup.

https://www.youtube.com/watch?v=Ad30JlVh4dM&feature=youtu.be

Meetup : Tel Aviv: Board Game Night

1 Squark 17 August 2015 10:33AM

Discussion article for the meetup : Tel Aviv: Board Game Night

WHEN: 17 August 2015 07:00:00PM (+0300)

WHERE: Google Tel Aviv, Electra Tower, 67891 Tel Aviv, Israel

19:00 Israel time, playing board games as usual. This time we will do it in Campus TLV (Floor 34, hackspace - on the right when entering the campus). Call me (Vadim, 0542600919) if you can't find your way.


Identity and quining in UDT

9 Squark 17 March 2015 08:01PM

Outline: I describe a flaw in UDT that has to do with the way the agent defines itself (locates itself in the universe). This flaw manifests as a failure to solve a certain class of decision problems. I suggest several related decision theories that solve the problem, some of which avoid quining and are thus suitable for agents that cannot access their own source code.

 

EDIT: The decision problem I call here the "anti-Newcomb problem" already appeared here. Some previous solution proposals are here. A different but related problem appeared here.

 

Updateless decision theory, the way it is usually defined, postulates that the agent has to use quining in order to formalize its identity, i.e. determine which portions of the universe are considered to be affected by its decisions. This leaves open the question of which decision theory should be used by agents that don't have access to their own source code (as humans intuitively appear to be). I am pretty sure this question has already been posed somewhere on LessWrong but I can't find the reference: help? It also turns out that there is a class of decision problems for which this formalization of identity fails to produce the winning answer.

When one is programming an AI, it doesn't seem optimal for the AI to locate itself in the universe based solely on its own source code. After all, you built the AI and you know where it is (e.g. running inside a robot), so why should you allow the AI to consider itself to be something else, just because that something else happens to have the same source code (more realistically, happens to have source code that is correlated with it in the sense of logical uncertainty)?

Consider the following decision problem, which I call the "UDT anti-Newcomb problem". Omega is putting money into boxes by the usual algorithm, with one exception: it isn't simulating the player at all. Instead, it simulates what a UDT agent would do in the player's place. Thus, a UDT agent would consider the problem to be identical to the usual Newcomb problem and one-box, receiving $1,000,000. On the other hand, a CDT agent (say) would two-box and receive $1,001,000 (!) Moreover, this problem reveals that UDT is not reflectively consistent: a UDT agent facing this problem would choose to self-modify given the choice. This is not an argument in favor of CDT, but it is a sign that something is wrong with UDT the way it's usually done.
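
To make the payoff structure concrete, here is a toy Python sketch (the function name, the explicit payoff constants and the boolean flags are mine, used only to illustrate the two cases above):

```python
# Toy illustration of the UDT anti-Newcomb payoffs. Omega fills the big box
# iff its simulation of a *UDT* agent one-boxes, regardless of who actually plays.

def payoff(player_one_boxes: bool, simulated_udt_one_boxes: bool = True) -> int:
    big = 1_000_000 if simulated_udt_one_boxes else 0   # Omega's decision
    small = 1_000                                        # the transparent box
    return big if player_one_boxes else big + small

print(payoff(player_one_boxes=True))   # a UDT player one-boxes:  1000000
print(payoff(player_one_boxes=False))  # a CDT player two-boxes:  1001000
```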

The essence of the problem is that a UDT agent is using too little information to define its identity: its source code. Instead, it should use information about its origin. Indeed, if the origin is an AI programmer or a version of the agent before the latest self-modification, it appears rational for the precursor to code a description of the origin into the successor agent. In fact, if we consider the anti-Newcomb problem with Omega's simulation using the correct decision theory XDT (whatever it is), we expect an XDT agent to two-box and leave with $1,000. This might seem surprising, but consider the problem from the precursor's point of view. The precursor knows Omega is filling the boxes based on XDT, whatever the decision theory of the successor is going to be. If the precursor knows XDT two-boxes, there is no reason to construct a successor that one-boxes. So constructing an XDT successor might be perfectly rational! Moreover, a UDT agent playing the XDT anti-Newcomb problem will also two-box (correctly).

To formalize the idea, consider a program $P$ called the precursor, which outputs a new program $A$ called the successor. In addition, we have a program $U$ called the universe, which outputs a number $u$ called utility.

Usual UDT suggests for $A$ the following algorithm:

(1) $A := \arg\max_{f : X \to Y} \mathbb{E}[\,u \mid A \text{ implements } f\,]$

Here, $X$ is the input space, $Y$ is the output space and the expectation value is over logical uncertainty. $A$ appears inside its own definition via quining.

The simplest way to tweak equation (1) in order to take the precursor into account is

(2) $A := \arg\max_{f : X \to Y} \mathbb{E}[\,u \mid P() \text{ implements } f\,]$

This seems nice since quining is avoided altogether. However, it is unsatisfactory. Consider the anti-Newcomb problem with Omega's simulation involving equation (2), and suppose the successor uses equation (2) as well. On the surface, if Omega's simulation doesn't involve $P$ [1], the agent will two-box and get $1,000 as it should. However, the computing power allocated for evaluating the logical expectation value in (2) might be sufficient to suspect that $P$'s output might be an agent reasoning based on (2). This creates a logical correlation between the successor's choice and the result of Omega's simulation. For certain choices of parameters, this logical correlation leads to one-boxing.

The simplest way to solve the problem is letting the successor imagine that $P$ produces a lookup table. Consider the following equation:

(3) $A := \arg\max_{f : X \to Y} \mathbb{E}[\,u \mid P() = T_f\,]$

Here, $T_f$ is a program which computes $f$ using a lookup table: all of the values are hardcoded.
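
To make the role of $T_f$ concrete, here is a minimal Python sketch (representing the table as a dict is an assumption made only for illustration; the point is that every value of $f$ is hardcoded at construction time):

```python
def make_lookup_table(f, input_space):
    """Build T_f: a 'program' that computes f purely by table lookup."""
    table = {x: f(x) for x in input_space}   # every value of f is hardcoded
    return lambda x: table[x]

# Example over a two-element input space.
T_f = make_lookup_table(lambda x: "one-box" if x == "newcomb" else "two-box",
                        ["newcomb", "anti-newcomb"])
print(T_f("anti-newcomb"))  # two-box
```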

For large input spaces, lookup tables are of astronomical size, and either maximizing over them or imagining them to run on the agent's hardware doesn't make sense. This is a problem with the original equation (1) as well. One way out is replacing the arbitrary functions $f$ with programs computing such functions. Thus, (3) is replaced by

(4) $A := \arg\max_{q} \mathbb{E}[\,u \mid P() = q\,]$

where $q$ is understood to range over programs receiving input in $X$ and producing output in $Y$. However, (4) looks like it can go into an infinite loop: what if the optimal $q$ is described by equation (4) itself? To avoid this, we can introduce an explicit time limit $t$ on the computation. The successor will then spend some portion $t_1$ of $t$ performing the following maximization:

(4') $q^* := \arg\max_{q} \mathbb{E}[\,u \mid P() = \mathrm{Wait}(t_1, q)\,]$

Here, $\mathrm{Wait}(t_1, q)$ is a program that does nothing for time $t_1$ and runs $q$ for the remaining time $t - t_1$. Thus, the successor invests $t_1$ time in maximization and $t - t_1$ in evaluating the resulting policy $q^*$ on the input it received.
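
Here is a rough Python sketch of this time-budgeted scheme. The time accounting is replaced by an abstract step counter, the logical expectation value is replaced by a stand-in oracle, and all of the names (wait, successor, expected_utility) are mine, chosen for illustration:

```python
def wait(t1, q):
    """Wait(t1, q): a program that idles for t1 steps, then runs q on the remaining budget."""
    def wrapped(x, total_steps):
        return q(x, total_steps - t1)   # t1 steps are spent doing nothing
    return wrapped

def successor(x, total_steps, t1, candidate_policies, expected_utility):
    # Spend (up to) t1 of the budget picking the policy q* with the best expected utility...
    best_q = max(candidate_policies, key=lambda q: expected_utility(wait(t1, q)))
    # ...then spend the remaining total_steps - t1 running q* on the actual input x.
    return wait(t1, best_q)(x, total_steps)

# Toy usage: two constant policies and a made-up utility oracle preferring "one-box".
policies = [lambda x, steps: "one-box", lambda x, steps: "two-box"]
oracle = lambda program: 1.0 if program(None, 10) == "one-box" else 0.0
print(successor("some input", total_steps=10, t1=3,
                candidate_policies=policies, expected_utility=oracle))  # one-box
```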

In practical terms, (4') seems inefficient since it completely ignores the actual input for a period $t_1$ of the computation. This problem exists in the original UDT as well. A naive way to avoid it is giving up on optimizing the entire input-output mapping and focusing on the input which was actually received. This allows the following non-quining decision theory:

(5) $A(x) := \arg\max_{y \in Y} \mathbb{E}[\,u \mid P() \in Q_{x \to y}\,]$

Here, $Q_{x \to y}$ is the set of programs which begin with a conditional statement that produces output $y$ and terminates execution if the received input was $x$. Of course, ignoring counterfactual inputs means failing a large class of decision problems. A possible win-win solution is reintroducing quining [2]:

(6) $A(x) := \arg\max_{y \in Y} \mathbb{E}[\,u \mid P() = C_{x \to y}(A)\,]$

Here, $C_{x \to y}$ is an operator which appends a conditional as above to the beginning of a program. Superficially, we still only consider a single input-output pair. However, instances of the successor receiving different inputs now take each other into account (as existing in "counterfactual" universes). It is often claimed that the use of logical uncertainty in UDT allows for agents in different universes to reach a Pareto optimal outcome using acausal trade. If this is the case, then agents which have the same utility function should cooperate acausally with ease. Of course, this argument should also make the use of full input-output mappings redundant in usual UDT.
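
A minimal Python sketch of the operator $C_{x \to y}$, with the representation of programs as plain functions being an assumption made for illustration:

```python
def prepend_conditional(x, y, program):
    """C_{x -> y}(program): return y and halt if the input equals x; otherwise run program."""
    def wrapped(inp):
        if inp == x:
            return y
        return program(inp)
    return wrapped

# Usage: force the output "two-box" on the input "anti-newcomb", defer elsewhere.
base = lambda inp: "one-box"                       # an arbitrary base program
A = prepend_conditional("anti-newcomb", "two-box", base)
print(A("anti-newcomb"), A("newcomb"))             # two-box one-box
```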

In case the precursor is an actual AI programmer (rather than another AI), it is unrealistic for her to code a formal model of herself into the AI. In a followup post, I'm planning to explain how to do without it (namely, how to define a generic precursor using a combination of Solomonoff induction and a formal specification of the AI's hardware).

[1] If Omega's simulation involves $P$, this becomes the usual Newcomb problem and one-boxing is the correct strategy.

[2] Sorry, agents which can't access their own source code: you will have to make do with one of (3), (4') or (5).
