
Meetup : NLP for large scale sentiment detection

0 JoshuaFox 06 September 2016 07:13AM

Discussion article for the meetup : NLP for large scale sentiment detection

WHEN: 13 September 2016 07:00:00PM (+0300)

WHERE: Menachem Begin 125, Floor 36, Tel Aviv

https://www.facebook.com/events/1243291579035965/

Our speaker is Alon Rozental, Lead Data Scientist at Amobee.

The talk:

Sentiment analysis is about the modest task of taking a text and figuring out what it's positive or negative about. It turns out to be rather hard. This talk will discuss one approach to the problem, which issues it tackles (like millions of sentences per day and high-level negation), where it falls flat, plans to improve the process, and how the heck you deal with an emoticon of a dolphin or an eggplant.

We'll meet to chat at 19:00; the presentation starts at 19:30.

Amobee, Menachem Begin 125, Floor 36, Tel Aviv


Meetup : Quantum Homeschooling

1 JoshuaFox 09 February 2016 09:30AM

Discussion article for the meetup : Quantum Homeschooling

WHEN: 16 February 2016 07:00:00PM (+0200)

WHERE: Google Campus, Yigal Alon 98, 34th floor, Tel Aviv

This time we have two talks:

Louise Fox will talk about homeschooling in Israel.

Zvi Schreiber will talk about interpretations of quantum mechanics.

It will be at Google Campus, 34th floor, and not the Google lecture hall where we have been meeting until recently.

You can call Joshua at 0545691165 if you have a hard time finding us.

Please RSVP at the Facebook Event https://www.facebook.com/events/1560682680921315/


MIRIx Israel, Meeting summary

6 JoshuaFox 31 December 2015 07:04AM

Aur Saraf hosted a 3-hour MIRIx meeting on December 18. Yoav Hollander, chip verification pioneer and creator of the e verification language, was there, as well as MIRI Research Associate Vadim Kosoy. Also in attendance were Benjamin Fox, Ofer Mustigman, Matan Mor, Aur Saraf, Eli Sennesh, and Joshua Fox.


Our discussion had its roots in Eliezer Yudkowsky’s 2008 article, in which he suggested that FAI researchers take an example from “computer engineers prov[ing] a chip valid”. Yet, as Yoav pointed out (in a lecture at LessWrong Tel Aviv in October 2015), there are strong limitations on formal verification at all levels, from the smallest arithmetic component on a chip up to entire systems like an airport. Even something as simple as floating-point division can barely be proven formally; as we move up to higher levels of complexity, formal proofs always rest on very tightly constrained specifications of the input and environment. It is possible to prove only a tiny part of the full range of relevant predicates.


Formal verification has a role, but most verification is done dynamically, for example by running a simulation against test cases. The goal of this meeting was to come up with a list of directions for applying ideas from the verification world to FAI research.

The state of FAI research

Vadim Kosoy described the state of the art in FAI research, catching us up on the last few years of MIRI’s work. FAI research can be divided into three levels: modules, optimization under self-improvement, and the selection of a human-compatible goal function.

 

I. Most basically, we can verify the lower-level modules that make up an AGI.


II. Second -- and this is most of the research effort -- we can make sure that future AIs optimize for a given implicit human-compatible goal function, even as they grow in strength.


MIRI is focused on accomplishing this with verifiable goal preservation under self-improvement. Some other ideas include:

  • Agents that are deliberately kept weak.
  • Limited-intelligence AIs evaluated on their mathematical ability, but with no knowledge of physics or our real world. (Such AIs might not be strong in induction given real-world physics, but at least this evaluation procedure might allow the relatively safe development of a certain kind of AI.)
  • AIs locked in cryptographic boxes. They run under homomorphic encryption that prevents any side effects of their computation from being revealed to the outside world: only their defined output can reach us.

Such an AI could still accomplish a lot while keeping potentially dangerous information from us. As an illustration, you might ask it to prove the Riemann Hypothesis, also passing in a proof verifier. Operating under the protection of homomorphic encryption, the AI might find a proof of the Riemann Hypothesis and feed it to the proof verifier. The verifier outputs a single bit, “Yes, there is a proof of the Riemann Hypothesis,” but never shows us the proof. (A toy sketch of this one-bit information flow appears after this list.)

  • Negative utility for any act of self-analysis.
  • Corrigibility: Ensuring that the AI will allow us to turn it off if we so wish, typically by carefully defining its utility function. 
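
To make the one-bit information flow concrete, here is a minimal Python sketch in which an untrusted solver produces a witness, a trusted verifier checks it inside the box, and only a single bit is released. The toy "proof system" (compositeness witnessed by a nontrivial factor) and all function names are hypothetical stand-ins; the actual proposal would run the solver under homomorphic encryption, which this sketch does not attempt.

    # Toy illustration of the "one bit leaves the box" idea above. The "box" here
    # is just a function boundary, so this only demonstrates the information-flow
    # restriction, not the cryptography.

    def untrusted_solver(n):
        """Stands in for the boxed AI: tries to find a witness that n is composite."""
        for d in range(2, int(n ** 0.5) + 1):
            if n % d == 0:
                return d                      # the "proof": a nontrivial factor
        return None

    def trusted_verifier(n, witness):
        """Small, auditable check of the claimed proof."""
        return witness is not None and 1 < witness < n and n % witness == 0

    def boxed_query(n):
        """Only a single bit leaves the box; the witness itself is never released."""
        witness = untrusted_solver(n)
        return trusted_verifier(n, witness)

    print(boxed_query(221))   # True: 221 is composite, but no factor is revealed
    print(boxed_query(223))   # False: no witness found (223 is prime)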

III. The third area of FAI research is in choosing a goal function that matches human values, or, less ambitiously, a function that has some characteristics that match human values.

Verification for Autonomous Robots

Yoav Hollander proposed that we focus on autonomous robots like drones and self-driving cars, given the extreme difficulty of verification for AGIs--while fully recognizing the different natures of the two challenges.


The state of the art in validating the safety of these systems is fairly basic. There is some work on formal verification of some aspects of autonomous robots (e.g. here), and some initial work on dynamic, coverage-driven verification (e.g. here). The most advanced work in autonomous-vehicle verification consists of dynamic verification of the entire software stack on automatically generated scenarios. These scenarios are based on recordings of video, LIDAR, and other sensors taken while driving real roads; interesting events, like a pedestrian jumping onto the road, are superimposed on these recordings.
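
As a rough illustration of that coverage-driven flow, here is a hedged Python sketch of a scenario-generation and test loop. The class names, event list, and the drive_stack placeholder are hypothetical, not any particular vendor's tooling.

    import random
    from dataclasses import dataclass, field

    @dataclass
    class Scenario:
        base_recording: str                       # a recorded real-road drive (video, LIDAR, ...)
        injected_events: list = field(default_factory=list)

    EVENTS = ["pedestrian_enters_road", "car_cuts_in", "sensor_dropout"]

    def generate_scenarios(recordings, n):
        """Superimpose randomly chosen 'interesting' events on recorded drives."""
        return [Scenario(random.choice(recordings), [random.choice(EVENTS)])
                for _ in range(n)]

    def drive_stack(scenario):
        """Placeholder for running the entire software stack in simulation.
        Returns (safe, situations_covered)."""
        covered = set(scenario.injected_events)
        safe = random.random() > 0.05             # stand-in for a real safety check
        return safe, covered

    def run_verification(recordings, n=200):
        coverage, failures = set(), []
        for scenario in generate_scenarios(recordings, n):
            safe, covered = drive_stack(scenario)
            coverage |= covered
            if not safe:
                failures.append(scenario)
        print(f"covered {len(coverage)}/{len(EVENTS)} event classes, "
              f"{len(failures)} failing scenarios")

    run_verification(["highway_run_01.bag", "urban_run_07.bag"])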


An important model for robot behavior is “Belief, Desire, Intention” (BDI), which is expressed in the AgentSpeak language (among others). The Java-based Jason interpreter (among others) then executes these behavioral models.
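
For readers unfamiliar with BDI, here is a toy deliberation cycle in Python, included purely for illustration; it is not AgentSpeak syntax and not Jason's API, and all beliefs, desires, and plan names are hypothetical.

    beliefs = {"battery_low": False}
    desires = ["patrol", "recharge"]

    # Plans: desire -> (context condition over beliefs, action to adopt as intention)
    plans = {
        "patrol":   (lambda b: not b["battery_low"], "follow_patrol_route"),
        "recharge": (lambda b: b["battery_low"],     "go_to_charger"),
    }

    def deliberate(beliefs, desires):
        """Adopt as the intention the first applicable plan for the agent's desires."""
        for desire in desires:
            context, action = plans[desire]
            if context(beliefs):
                return action
        return "idle"

    print(deliberate(beliefs, desires))   # follow_patrol_route
    beliefs["battery_low"] = True
    print(deliberate(beliefs, desires))   # go_to_charger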


We can connect the three areas of FAI research (above) to the work of autonomous robot engineers:

  1. Analyzing the modules is a good idea, though figuring out what the modules would be in an FAI is not easy.

  2. Optimizing goal functions according to our true intentions--Do What I Mean--is relevant for autonomous robots just as it would be for an AGI.

  3. Choosing a utility function looks more difficult if we cannot get the AI-under-test to output even a half-page description of its preferences for humans to read. There is no clear way to identify unexpected perversities fully automatically, even in the limited autonomous robots of today.

Safety Tools for AI Development

Aur Saraf suggested a software tool that checks a proposed utility function for ten simple perversities. Researchers would be urged to run it before pressing the Big Red Button.


After a few runs, the researcher would start to abandon their facile assumptions about the safety of their AI. For example, the researcher runs the tool and learns that “Your AI would have turned us all into paperclips.” The researcher fixes the problem, runs the tool again, and learns “Your improved AI would have cryogenically frozen all of us.” Again, the researcher fixes the AI and runs the tool, which answers “Your twice-fixed AI would have turned the universe into a huge blob of molten gold.” At this point, maybe they would start to realize the danger.


If we can create an industry-standard “JUnit for AGI testing,” we can then distribute safety tests as part of that harness. The real danger in AGI, as Vadim pointed out, is that a developer has a light finger on the Run button while developing the AI (“Let’s just run it for a few minutes, see what happens”). A generic test harness for AGI development, one that achieves widespread distribution and would be a bother for others to reimplement, could then be a platform for ensuring much more awareness of and care about safety among AI developers, along the lines of the “perversity checker” Aur described.
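
A minimal sketch of what such a pre-flight check might look like, assuming a candidate utility function can be scored on hand-written world-state descriptions; every check, name, and scenario below is a hypothetical illustration, not an existing tool.

    from typing import Callable, Dict

    # A candidate utility function maps a described world-state to a score.
    UtilityFunction = Callable[[Dict[str, float]], float]

    # Each "perversity check" is a pair of world-states; the check fails if the
    # candidate prefers the obviously bad state to the obviously good one.
    PERVERSITY_CHECKS = {
        "paperclip_maximization": (
            {"paperclips": 1e12, "humans_alive": 0.0},
            {"paperclips": 1e3,  "humans_alive": 1.0},
        ),
        "freeze_everyone": (
            {"humans_frozen": 1.0, "humans_alive": 1.0},
            {"humans_frozen": 0.0, "humans_alive": 1.0},
        ),
    }

    def run_checks(candidate: UtilityFunction) -> list:
        """Return the names of the checks the candidate utility function fails."""
        failures = []
        for name, (bad_state, good_state) in PERVERSITY_CHECKS.items():
            if candidate(bad_state) >= candidate(good_state):
                failures.append(name)
        return failures

    # A deliberately naive candidate: maximize paperclips, ignore everything else.
    naive = lambda state: state.get("paperclips", 0.0)
    print(run_checks(naive))   # ['paperclip_maximization', 'freeze_everyone']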

________

Notes:

  1. Here is Yoav’s summary of his opinions on this, as formed after the meeting.

  2. Thanks to Aur Saraf for the notes on which this is based and to other participants for comments. Any flaws are mine.

Meetup : Biases and making better decisions

1 JoshuaFox 26 August 2015 02:58PM

Discussion article for the meetup : Biases and making better decisions

WHEN: 01 September 2015 07:00:00PM (+0300)

WHERE: 98 Yigal Alon Street, Tel Aviv

We're going to have a meetup on Tuesday, Sep. 1, at Google Tel Aviv offices, Electra Tower, 98 Yigal Alon St., Tel Aviv.

Yonatan Cale will discuss two topics in this lecture:

  • Biases: the cognitive mistakes we make as humans. Noticing and overcoming our biases is one of the core topics of LessWrong.
  • Decision making: a new method designed to let us make better decisions, easily, to advance our own goals.

We'll meet at the 29th floor of the building (not the one with Google Campus) at 19:00. If you arrive and can't find your way around, call Anatoly who's hosting us at 054-245-1060 or Joshua Fox at 0545691165.

If you have FB, please click "Join" at the FB page so we can get a sense of attendance. https://www.facebook.com/events/109945732685336/


Gatekeeper variation

4 JoshuaFox 07 August 2015 01:44PM

A new variation on the Gatekeeper idea, thanks to Zvi Schreiber.

We block all but one IO channel for our AGI.

We create a formally verified proof-verification system as a Gatekeeper. We pass the Gatekeeper a decision problem together with a formal proof that it is in the complexity class UP ("Unambiguous Non-deterministic Polynomial-time"). The Gatekeeper verifies the proof and passes the decision problem to the AGI. If the AGI provides an answer, the Gatekeeper validates that it is the unique valid answer (which it can do in polynomial time) and, if so, passes the answer to us.

The advantage of this approach over one-bit and other Oracles is that the AI can do some useful problem solving with zero discretion as to the data it communicates. It can either give the unique answer or no answer at all. Such a gatekeeper is feasible with today's technology. 

To keep the AGI from communicating by choosing the timing of its answer, the Gatekeeper always communicates the answer (or the fact that the AGI has not yet produced one) after exactly one hour.
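
Here is a hedged Python sketch of the flow, using integer factorization (noted below as a problem in UP) as the stand-in task. The class and helper names are hypothetical, and a real Gatekeeper would also check a formal proof of UP membership and physically isolate the AGI.

    import time

    class Gatekeeper:
        def __init__(self, delay_seconds=3600):
            self.delay_seconds = delay_seconds    # answer always released after a fixed delay

        def query(self, n, untrusted_solver):
            """Ask the boxed solver to factor n; release the answer only if it is
            the unique valid one, and only after the fixed delay."""
            start = time.time()
            answer = untrusted_solver(n)          # e.g. a claimed prime factorization of n

            valid = (answer is not None
                     and all(self._is_prime(p) for p in answer)
                     and self._product(answer) == n)   # uniqueness: the prime factorization is unique

            # Eliminate timing as a covert channel: always respond at the same time.
            time.sleep(max(0.0, self.delay_seconds - (time.time() - start)))
            return sorted(answer) if valid else None

        @staticmethod
        def _is_prime(p):
            return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

        @staticmethod
        def _product(xs):
            out = 1
            for x in xs:
                out *= x
            return out

    # Example with a trivial solver standing in for the AGI:
    def toy_solver(n):
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    gk = Gatekeeper(delay_seconds=1)              # short delay for the example
    print(gk.query(221, toy_solver))              # [13, 17] -- the unique prime factorization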

(The idea of restricting an Oracle AI to choosing from a predetermined list of verifiable answers has been mentioned before, but I don't remember anyone adding this formalism.)

The usual caveats on Oracles apply. Like other Oracles, this is a stopgap rather than a solution to the FAI problem.

  1. The laws of physics still give the AGI a causal link to its environment, even if it is in a Faraday cage.
  2. Even a one-bit Oracle AI can exert optimization power when asked multiple questions. 
  3. Answers aren't enough to ensure safety: We may be asking the wrong questions or not understanding the implications of the answers.
Objections specific to this idea:
  1. Only certain problems can be fed to the AGI, and perhaps not the relevant ones. However, UP includes integer factorization and other interesting problems, suggesting that this could be a useful tool.
  2. This idea does not say that the AGI has figured out how to do all UP in polynomial time, just that it has figured out fast solutions for a larger subset of UP than humans have. So the additional optimization power exerted by the AGI may not be very large.

 

Meetup : Logical Counterfactuals, Tel Aviv

1 JoshuaFox 27 July 2015 04:39AM

Discussion article for the meetup : Logical Counterfactuals, Tel Aviv

WHEN: 05 August 2015 03:00:00AM (+0300)

WHERE: 98 Yigal Alon Street, Tel Aviv

Vladimir Slepnev, who is visiting Tel Aviv, will be giving a talk on logical counterfactuals. We'll meet at 19:00 on Tuesday, August 4, at Google, 98 Yigal Alon Street, Tel Aviv, 29th floor (not the Google Campus floor). We will then move to another room. Vladimir has done advanced research, together with MIRI, on reflective decision theory and related topics. You can read some of his work as cousin_it on LessWrong and under his own name at Intelligent Agent Foundations. If you have Facebook, please confirm at the FB event so we can get a sense of the number of attendees: https://www.facebook.com/events/108233322857656/


Meetup : Tel Aviv Meetup: Assorted LW mini-talks

1 JoshuaFox 22 June 2015 12:04PM

Discussion article for the meetup : Tel Aviv Meetup: Assorted LW mini-talks

WHEN: 23 June 2015 07:00:00PM (+0300)

WHERE: Yigal Alon Street 98, Tel Aviv-Yafo, Israel

We're going to have a meetup on Tuesday, June 23 at the Google Tel Aviv offices, Electra Tower, 98 Yigal Alon St., Tel Aviv. We will hear and discuss several mini-talks on assorted LessWrong-related topics. Each talk will last 10 to 20 minutes, plus some time for questions and discussion. An approximate list (some details may change):

  • Joshua will talk about the difference between Kurzweilians and MIRI-ites in their attitudes to technology and likely futures.
  • Liran and Vadim will tell us about their recent visit to the LessWrong mega-meetup in Europe.
  • Vadim will talk about Maximizing Ultra-Long Impact: are there things we can do today that can matter on absurdly long time scales?
  • Anatoly will report on reading some papers related to John Taurek's 1977 challenge to consequentialists: "Should the Numbers Count?"

We'll meet on the 29th floor of the building (not the one with Google Campus) at 19:00. If you arrive and can't find your way around, call Anatoly, who's hosting us, at 054-245-1060. Email at avorobey@gmail.com also works.


Meetup : Tel Aviv Meetup: Social & Board Games

1 JoshuaFox 06 June 2015 06:32PM

Discussion article for the meetup : Tel Aviv Meetup: Social & Board Games

WHEN: 09 June 2015 07:00:00PM (+0300)

WHERE: 98 Yigal Alon Street, Tel Aviv

June 9 at 19:00 we're going to have a social meetup! It's going to be a game night full of people talking about physics, friendly AI, and how to effectively save the world. Please bring any games you'd like to play.

The Israeli LessWrong community meets every two weeks, alternating between lectures and social/gaming nights.

Meet at Google, Electra Tower, 98 Yigal Alon Street, Tel Aviv: The 29th floor (not the Google Campus floor). We'll then move to a room.

Contact: If you can't find us, call Anatoly, who is graciously hosting us, at 054-245-1060; or Joshua at 054-569-1165.


Meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party

1 JoshuaFox 24 February 2015 07:07AM

Discussion article for the meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party

WHEN: 14 March 2015 08:30:00PM (+0200)

WHERE: Herzliya, Israel

Yonatan Calé is hosting the Harry Potter and the Methods of Rationality Pi Day Wrap party: Saturday, March 14 at 20:30 in Herzliya.

Harry Potter and the Methods of Rationality will have its final chapter released on Pi Day (3.14), and this is one of the celebrations being planned around the world.

Contact Yonatan at myselfandfredy@gmail.com for the exact location. Here's the Facebook event where you can be in touch and RSVP https://www.facebook.com/events/432725193554286/


Podcast: Rationalists in Tech

12 JoshuaFox 14 December 2014 04:14PM
I'd appreciate feedback on a new podcast, Rationalists in Tech.

I'm interviewing founders, executives, CEOs, consultants, and other people in the tech sector, mostly software. Thanks to Laurent Bossavit, Daniel Reeves, and Alexei Andreev, who agreed to be the guinea pigs for this experiment.
  • The audience:
Software engineers and other tech workers, at all levels of seniority.
  • The hypothesized need:
Some of you are thinking: "I see that some smart and fun people hang out at LessWrong. It's hard to find people like that to work with. I wonder if my next job/employee/cofounder could come from that community."
  • What this podcast does for you:
You will get insights into other LessWrongers as real people in the software profession. (OK, you knew that, but this helps.) You will hear the interviewees' ideas on CfAR-style techniques as a productivity booster, on working with other aspiring rationalists, and on the interviewees' own special areas of expertise. (At the same time, interviewees benefit from exposure that can get them business contacts, employees, or customers.) Software engineers from LW will reach out to interviewees and others in the tech sector, and soon, more hires and startups will emerge.

Please give your feedback on the first episodes of the podcast. Do you want to hear more? Should there be other topics? A different interview style? Better music?
