Meetup : LessWrong Sao Paulo monthly meetup

2 leohmarruda 11 February 2016 01:39AM

Discussion article for the meetup : LessWrong Sao Paulo monthly meetup

WHEN: 13 February 2016 02:00:00PM (-0200)

WHERE: Rua Antonio Carlos, 452, São Paulo - SP, Brazil

Monthly meetup of LessWrongers in São Paulo at the Vanilla café. As always, newcomers are welcome!

Meetup : LessWrong Sao Paulo

3 leohmarruda 06 October 2015 01:54AM

Discussion article for the meetup : LessWrong Sao Paulo

WHEN: 10 October 2015 02:00:00PM (-0300)

WHERE: Rua Mourato Coelho, 25 Pinheiros, São Paulo Brazil

At Casa cafe (possibly on the second floor). Monthly meeting of the budding rationalist meetup :) We'll have a short presentation at the beginning and possibly board games. :D Don't be shy! Join our mailing list: https://groups.google.com/forum/#!forum/racionalidade

Meetup : São Paulo, Brazil - Meetup at Base Sociedade Colaborativa

3 Gust 09 June 2015 01:47AM

Discussion article for the meetup : São Paulo, Brazil - Meetup at Base Sociedade Colaborativa

WHEN: 27 June 2015 01:00:00PM (-0300)

WHERE: Rua Maestro Elias Lobo, 923, São Paulo, Brazil

Let's have a meetup and talk a bit about rationality and rationalism in Brazil!

This is not a LessWrong-only event (afaik, there are not many LWers around here), so any news and discussion about the meetup will happen primarily in the Facebook event:

https://www.facebook.com/events/815839175152653/815899785146592/

Please join us there!

Proposed activities:

  • Short talks (~20 min) about organizations and projects to spread rationality and raise the sanity waterline;
  • Discussion tables about interesting stuff.

Again, please join the event on Facebook!

Calibration Test with database of 150,000+ questions

37 Nanashi 14 March 2015 11:22AM

Hi all, 

I put this calibration test together this morning. It pulls from a trivia API of over 150,000 questions so you should be able to take this many, many times before you start seeing repeats.

http://www.2pih.com/caltest.php

A few notes:

1. The questions are "Jeopardy" style questions so the wording may be strange, and some of them might be impossible to answer without further context. On these just assign 0% confidence.

2. As the questions are open-ended, there is no answer-checking mechanism. You have to be honest with yourself as to whether or not you got the right answer. Because what would be the point of cheating at a calibration test?

I can't think of anything else. Please let me know if there are any features you would want to see added, or if there are any bugs, issues, etc. 

 

**EDIT**

As per suggestion I have moved this to the main section. Here are the changes I'll be making soon:

  • Label the axes and include an explanation of calibration curves.
  • Make it so you can reverse your last selection in the event of a misclick.

Here are changes I'll make eventually:

  • Create an account system so you can store your results online.
  • Move trivia DB over to my own server to allow for flagging of bad/unanswerable questions.

 

Here are the changes that are done:

  • Changed 0% to 0.1% and 99% to 99.9%.
  • Added a second graph showing the frequency of your confidence selections.
  • Color-coded the "right" and "wrong" buttons and moved them farther apart to prevent misclicks.
  • Results are now stored locally so that you can see your calibration over time.
  • Blank questions are now detected and skipped.
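The calibration curve such a test plots can be ground out in a few lines. This is a rough sketch of the idea, not the site's actual code; the function name and example data are my own:

```python
from collections import defaultdict

def calibration_curve(results):
    """Given (confidence, was_correct) pairs, return accuracy per confidence level.

    A well-calibrated player's accuracy at each confidence level should
    roughly match that confidence.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for confidence, was_correct in results:
        totals[confidence] += 1
        if was_correct:
            correct[confidence] += 1
    return {c: correct[c] / totals[c] for c in sorted(totals)}

# Example: a slightly overconfident player.
results = [(0.9, True), (0.9, True), (0.9, False), (0.5, True), (0.5, False)]
curve = calibration_curve(results)
# At 90% confidence this player is right only about two thirds of the time.
```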

Dragon Ball's Hyperbolic Time Chamber

35 gwern 02 September 2012 11:49PM

A time dilation tool from an anime is examined for its practical use on Earth; there seem to be surprisingly few uses, and none that would change the world, due to the severe penalties humans would incur while using it, and basic constraints like Amdahl's law limit the scientific uses. A comparison with the position of an Artificial Intelligence such as an emulated human brain seems fair, except that most of the time dilation disadvantages do not apply or can be ameliorated, and hence any speedups could be exploited quite effectively. I suggest that skeptics of the idea that speedups confer advantages are implicitly working off the crippled time dilation tool and not making allowance for the disanalogies.

Master version on gwern.net
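The Amdahl's law constraint mentioned in the abstract can be made concrete with a toy calculation (illustrative only; the 50% figure is invented for the example):

```python
def amdahl_speedup(dilatable_fraction, speedup):
    """Overall speedup when only `dilatable_fraction` of a task benefits
    from the speedup (Amdahl's law)."""
    return 1 / ((1 - dilatable_fraction) + dilatable_fraction / speedup)

# If half the work must still happen in real time (experiments, waiting on
# the outside world), even a 1000x speedup of the rest yields under 2x overall.
overall = amdahl_speedup(0.5, 1000)
```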

Brazilians, unite! and what is IERFH (portuguese)

10 diegocaleiro 28 February 2012 04:05AM

Hi, anglophones: this topic is only for Brazilians, so people may post in Portuguese, and part of this is in Portuguese (we will translate it into English if necessary when the time comes).

Hello, Brazilians! I'm creating this topic because some misplaced questions were posed on this one.

First, the numbers. On LessWrong:

Me, Gust, Paulovsk, zecaurubu, Gracunha, Mexamark, dyokomizo.

From IERFH (Instituto Ética, Racionalidade, e o Futuro da Humanidade, the Institute for Ethics, Rationality, and the Future of Humanity):

Leo Arruda, João Fabiano, me, Pierre, Jonatas Muller, Pablo, Lauro (paralelo), Rafael, plus 3 others.

This makes us at least 17, roughly one per 10 million people (not great...).

Some of us are in São Paulo. About 10.

Eventually, this topic may attract some others, and we can create a meeting.

Now, regarding the question a few of you asked: IERFH's mission.

To generate high positive long-term impact by producing knowledge and bringing together people who contribute to better thinking about the ethical questions that will define the future of humanity.

About: We are a team committed to making the world better, now and in the future. To that end, we are bringing together the Brazilian community of rationalists, utilitarians, transhumanists, and other enthusiasts, and turning ideals and theories into action. Alongside major international organizations in charity, technology, and ethics, we aim to be the vector of Brazilian efforts in these fields. IERFH operates on three fronts: the study of what is good and should be pursued and preserved: Pure and Applied Ethics; the most efficient modes of reasoning and decision-making and their most common errors: Epistemic and Practical Rationality; and finally, how to apply these fields to guarantee the full realization of all human potential: the Future of Humanity.

 

Rationality: To be rational is to achieve, with little expenditure of resources, the outcome one desires among all the possible scenarios that could have occurred. Epistemic rationality is the capacity to understand the current world; practical rationality, the capacity to steer the current world toward the desired world. Two capacities are fundamental to being rational: the ability to sidestep cognitive biases, the systematic failures of our cognition, and the ability to let acquired knowledge reach every area of our knowledge, integrating learned information and ensuring it has a proportional effect on our lives. This IERFH community intends to guide us in that direction.

Future of Humanity: To steer the future of humanity in a desirable direction, it is necessary, first of all, to avoid the great catastrophic risks of technological origin that we create as we develop new technologies. For that, it is also necessary to understand and correct the cognitive biases our brains are prone to. Finally, with the security of our fundamental values guaranteed and our lapses of rationality corrected, we can move forward toward realizing humanity's full future potential, through biotechnology, nanotechnology, artificial intelligence, and global coordination.

The missing part, "Ética" (Ethics), isn't written yet. But I think you get the general idea. Think Bostrom, think utilitarianism.

If you found the description interesting, you may like this post. Or, for those already familiar with Yudkowsky, this one, about CEV.

 

We are still developing our website, which is why we don't have one yet.

Given the broadness of our scope, we are, of course, in need of new people (especially to translate), but we will only post to LessWrong about the group in detail (in English) in a few months. If you are interested, contact me by private message. We meet on Skype and, more rarely, in person in Pinheiros, São Paulo.

I hope we can start to form a Brazilian rationalist community, both of LessWrongers in general and within IERFH. Thanks to Gust for the initiative of creating the meetup topic that prompted me to write this one.


The Urgent Meta-Ethics of Friendly Artificial Intelligence

45 lukeprog 01 February 2011 02:15PM

Barring a major collapse of human civilization (due to nuclear war, asteroid impact, etc.), many experts expect the intelligence explosion Singularity to occur within 50-200 years.

That fact means that many philosophical problems, about which philosophers have argued for millennia, are suddenly very urgent.

Those concerned with the fate of the galaxy must say to the philosophers: "Too slow! Stop screwing around with transcendental ethics and qualitative epistemologies! Start thinking with the precision of an AI researcher and solve these problems!"

If a near-future AI will determine the fate of the galaxy, we need to figure out what values we ought to give it. Should it ensure animal welfare? Is growing the human population a good thing?

But those are questions of applied ethics. More fundamental are the questions about which normative ethics to give the AI: How would the AI decide if animal welfare or large human populations were good? What rulebook should it use to answer novel moral questions that arise in the future?

But even more fundamental are the questions of meta-ethics. What do moral terms mean? Do moral facts exist? What justifies one normative rulebook over another?

The answers to these meta-ethical questions will determine the answers to the questions of normative ethics, which, if we are successful in planning the intelligence explosion, will determine the fate of the galaxy.

Eliezer Yudkowsky has put forward one meta-ethical theory, which informs his plan for Friendly AI: Coherent Extrapolated Volition. But what if that meta-ethical theory is wrong? The galaxy is at stake.

Princeton philosopher Richard Chappell worries about how Eliezer's meta-ethical theory depends on rigid designation, which in this context may amount to something like a semantic "trick." Previously and independently, an Oxford philosopher expressed the same worry to me in private.

Eliezer's theory also employs something like the method of reflective equilibrium, about which there are many grave concerns from Eliezer's fellow naturalists, including Richard Brandt, Richard Hare, Robert Cummins, Stephen Stich, and others.

My point is not to beat up on Eliezer's meta-ethical views. I don't even know if they're wrong. Eliezer is wickedly smart. He is highly trained in the skills of overcoming biases and properly proportioning beliefs to the evidence. He thinks with the precision of an AI researcher. In my opinion, that gives him large advantages over most philosophers. When Eliezer states and defends a particular view, I take that as significant Bayesian evidence for reforming my beliefs.

Rather, my point is that we need lots of smart people working on these meta-ethical questions. We need to solve these problems, and quickly. The universe will not wait for the pace of traditional philosophy to catch up.

Bayes' Theorem Illustrated (My Way)

126 komponisto 03 June 2010 04:40AM

(This post is elementary: it introduces a simple method of visualizing Bayesian calculations. In my defense, we've had other elementary posts before, and they've been found useful; plus, I'd really like this to be online somewhere, and it might as well be here.)

I'll admit, those Monty-Hall-type problems invariably trip me up. Or at least, they do if I'm not thinking very carefully -- doing quite a bit more work than other people seem to have to do.

What's more, people's explanations of how to get the right answer have almost never been satisfactory to me. If I concentrate hard enough, I can usually follow the reasoning, sort of; but I never quite "see it", nor do I feel equipped to solve similar problems in the future: it's as if the solutions work only in retrospect.

Minds work differently, illusion of transparency, and all that.

Fortunately, I eventually managed to identify the source of the problem, and I came up with a way of thinking about -- visualizing -- such problems that suits my own intuition. Maybe there are others out there like me; this post is for them.
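For readers who prefer grinding out the numbers to visualizing them, the classic Monty Hall posterior falls straight out of Bayes' theorem. This is a plain-Python sketch, not the visualization method the post describes:

```python
from fractions import Fraction

# Monty Hall via Bayes' theorem. You pick door 1; the host, who knows
# where the car is, opens door 3 to reveal a goat.
prior = {door: Fraction(1, 3) for door in (1, 2, 3)}

# P(host opens door 3 | car behind each door), given you picked door 1:
likelihood = {1: Fraction(1, 2),  # host picks randomly between doors 2 and 3
              2: Fraction(1, 1),  # door 3 is his only legal option
              3: Fraction(0, 1)}  # he never reveals the car

evidence = sum(prior[d] * likelihood[d] for d in prior)
posterior = {d: prior[d] * likelihood[d] / evidence for d in prior}
# posterior: door 1 -> 1/3, door 2 -> 2/3, so switching wins.
```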

What is Wei Dai's Updateless Decision Theory?

37 AlephNeil 19 May 2010 10:16AM

As a newcomer to LessWrong, I quite often see references to 'UDT' or 'updateless decision theory'. The very name is like crack - I'm irresistibly compelled to find out what the fuss is about.

Wei Dai's post is certainly interesting, but it seemed to me (as a naive observer) that a fairly small 'mathematical signal' was in danger of being lost in a lot of AI-noise. Or to put it less confrontationally: I saw a simple 'lesson' on how to attack many of the problems that frequently get discussed here, which can easily be detached from the rest of the theory. Hence this short note, the purpose of which is to present and motivate UDT in the context of 'naive decision theory' (NDT), and to pre-empt what I think is a possible misunderstanding.

Eight Short Studies On Excuses

210 Yvain 20 April 2010 11:01PM

The Clumsy Game-Player

You and a partner are playing an Iterated Prisoner's Dilemma. Both of you have publicly pre-committed to the tit-for-tat strategy. By iteration 5, you're going along happily, raking in the bonuses of cooperation, when your partner unexpectedly presses the "defect" button.

"Uh, sorry," says your partner. "My finger slipped."

"I still have to punish you just in case," you say. "I'm going to defect next turn, and we'll see how you like it."

"Well," says your partner, "knowing that, I guess I'll defect next turn too, and we'll both lose out. But hey, it was just a slipped finger. By not trusting me, you're costing us both the benefits of one turn of cooperation."

"True," you respond, "but if I don't do it, you'll feel free to defect whenever you feel like it, using the 'finger slipped' excuse."

"How about this?" proposes your partner. "I promise to take extra care that my finger won't slip again. You promise that if my finger does slip again, you will punish me terribly, defecting for a bunch of turns. That way, we trust each other again, and we can still get the benefits of cooperation next turn."

You don't believe that your partner's finger really slipped, not for an instant. But the plan still seems like a good one. You accept the deal, and you continue cooperating until the experimenter ends the game.

After the game, you wonder what went wrong, and whether you could have played better. You decide that there was no better way to deal with your partner's "finger-slip" - after all, the plan you enacted gave you maximum possible utility under the circumstances. But you wish that you'd pre-committed, at the beginning, to saying "and I will punish finger slips equally to deliberate defections, so make sure you're careful."
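The deal struck above (forgive one slip, but pre-commit to punishing any later slip for several turns) can be sketched as a strategy. This is a toy model; the punishment length and the 'C'/'D' move encoding are my own choices:

```python
def forgiving_tit_for_tat(history, punish_turns=3):
    """Tit-for-tat that forgives the first 'slip' but pre-commits to
    punishing any later defection for `punish_turns` turns.

    `history` is the partner's past moves, 'C' or 'D'.
    Returns this player's next move.
    """
    defections = [i for i, move in enumerate(history) if move == 'D']
    if not defections:
        return 'C'
    if len(defections) == 1:
        return 'C'  # accept the "slipped finger" excuse once
    # A second slip triggers the promised punishment window.
    turns_since_last = len(history) - 1 - defections[-1]
    return 'D' if turns_since_last < punish_turns else 'C'

# After one slip we keep cooperating; after a second, we punish:
forgiving_tit_for_tat(['C', 'C', 'D'])            # 'C'
forgiving_tit_for_tat(['C', 'D', 'C', 'C', 'D'])  # 'D'
```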
