Meetup : Tel Aviv Meetup: Assorted LW mini-talks
Discussion article for the meetup : Tel Aviv Meetup: Assorted LW mini-talks
We're going to have a meetup on Tuesday, June 23 at the Google Tel Aviv offices, Electra Tower, 98 Yigal Alon St., Tel Aviv. We will hear and discuss several mini-talks on assorted LessWrong-related topics. Each talk will last 10 to 20 minutes, plus some time for questions and discussion. An approximate list (some details may change):
- Joshua will talk about the difference between Kurzweilians and MIRI-ites in their attitudes to technology and likely futures.
- Liran and Vadim will tell us about their recent visit to the European LessWrong mega-meetup.
- Vadim will talk about Maximizing Ultra-Long Impact: are there things we can do today that can matter on absurdly long time scales?
- Anatoly will report on reading some papers related to John Taurek's 1977 challenge to consequentialists: "Should the Numbers Count?"
We'll meet on the 29th floor of the building (not the one with Google Campus) at 19:00. If you arrive and can't find your way around, call Anatoly, who is hosting us, at 054-245-1060. Email at avorobey@gmail.com also works.
Meetup : Tel Aviv Meetup: Social & Board Games
Discussion article for the meetup : Tel Aviv Meetup: Social & Board Games
June 9 at 19:00 we're going to have a social meetup! It's going to be a game night full of people talking about physics, friendly AI, and how to effectively save the world. Please bring any games you'd like to play.
The Israeli LessWrong community meets every two weeks, alternating between lectures and social/gaming nights.
Meet at Google, Electra Tower, 98 Yigal Alon Street, Tel Aviv: The 29th floor (not the Google Campus floor). We'll then move to a room.
Contact: If you can't find us, call Anatoly, who is graciously hosting us, at 054-245-1060; or Joshua at 054-569-1165.
Eliezer Yudkowsky and Bill Hibbard. Here is Yudkowsky stating the theme of their discussion ... 2001
Around 15 years ago, Bill Hibbard proposed hedonic utility functions for an ASI. However, he has since stated in other publications that he has changed his mind -- he should get credit for this. Hibbard (2001) should not be cited for hedonic utility functions unless one mentions in the same sentence that this is an outdated and disclaimed position.
It's not about the Six Day War; it's about the Yom Kippur War (1973).
Herzliya, Israel party.
That was the tentative location, but it looks like the party's location is the Google offices at 98 Yigal Alon Street in Tel Aviv. (See FB group.)
Meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party
Discussion article for the meetup : Israel: Harry Potter and the Methods of Rationality Pi Day Wrap Party
Yonatan Calé is hosting the Harry Potter and the Methods of Rationality Pi Day Wrap party: Saturday, March 14 at 20:30 in Herzliya.
Harry Potter and the Methods of Rationality will have its final chapter released on Pi Day (3/14), and this is one of the celebrations being planned around the world.
Contact Yonatan at myselfandfredy@gmail.com for the exact location. Here's the Facebook event, where you can be in touch and RSVP: https://www.facebook.com/events/432725193554286/
A fine idea. I suggest opening each summary with the thesis of the cited post.
I'd like to see the best anti-MWI/Everett article out there.
Here are the best items I have found in my search for anti-MWI reading. Some present anti-MWI arguments but are ultimately pro-MWI.
- David Wallace, The Emergent Multiverse (Pro; anti-Everett arguments in the interludes)
- Steven Weinberg, Lectures on Quantum Mechanics, sec. 3.7 (Seemingly pro-Everett, but concluding that all current theories are flawed)
- Adrian Kent, "Against Many-Worlds Interpretations" (Anti)
- Stanford Encyclopedia of Philosophy, "Many-Worlds Interpretation of Quantum Mechanics" (Mostly pro; anti in sec. 6)
Could someone point me to any existing articles on this variant of AI-Boxing and Oracle AGIs:
The boxed AGI's gatekeeper is a simpler system that runs formal proofs to verify that the AGI's output satisfies a simple, formally definable constraint. The constraint is not "safety" in general, but rather is narrow enough that we can be mathematically sure the output is safe. (This does limit the potential benefits from the AGI.)
The question of what the constraint should be remains open, and of course the fact that the AGI is physically embodied puts it in causal contact with the rest of the universe. But as a partial or short-term solution, has anyone written about this? The only example I can think of (though I can't find the specific article) is Goertzel's description of an architecture in which the guardian component is separate from the main AGI.
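To make the pattern concrete, here is a minimal toy sketch of the idea: the untrusted solver must hand over its answer together with evidence that a much simpler checker can verify mechanically, and the output is released only on successful verification. The constraint chosen here (nontrivial integer factorization) and all names are hypothetical illustrations, not from any published architecture.

```python
def gatekeeper(n: int, claimed_factors: list[int]) -> bool:
    """Release the answer only if it provably satisfies the narrow,
    formally definable constraint: 'claimed_factors is a nontrivial
    factorization of n'. The checker never trusts the solver."""
    # Reject trivial or out-of-range factors (e.g. 1 or n itself).
    if not claimed_factors or any(f <= 1 or f >= n for f in claimed_factors):
        return False
    # Verification is just multiplication: cheap and fully auditable.
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n

# The untrusted solver can be arbitrarily clever, but its output is
# accepted only when the simple checker verifies it.
print(gatekeeper(91, [7, 13]))  # True: 7 * 13 == 91
print(gatekeeper(91, [6, 15]))  # False: 6 * 15 == 90
```

The point of the example is the asymmetry: checking a factorization is far easier than finding one, so the gatekeeper can stay simple enough to audit even when the boxed system is not.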