Vasili Arkhipov Day

23 wallowinmaya 27 October 2011 06:27AM

Stanislav Petrov is a rather famous person (of course only on Lesswrong, not in the real world).

But there is another Russian who saved the world:  Vasili Alexandrovich Arkhipov.

On this day in 1962, at the height of the Cuban Missile Crisis, Vasili Arkhipov prevented the launch of a nuclear torpedo and thus a possible nuclear war.

 

It's strange that Petrov attracts much more attention than Arkhipov. E.g. googling "Stanislav Petrov" produces 101,000 results; "Vasili Arkhipov" only 9,040. By contrast, searching for "Britney Spears" generates about 295,000,000 results. Sorta depressing.

 

Anyway, let this day be Vasili Arkhipov Day.

Podcast Recommendations

5 wallowinmaya 24 October 2011 04:49PM

I know books and blogs are often more informative than podcasts. But reading a book while grocery shopping, bicycling, or driving is kinda hard. And the last post on this topic didn't generate much discussion.

So, I ask again: Does anyone know of some interesting podcasts out there?

I'll go ahead and list some of my favorites:

- Econtalk by Russ Roberts.

- Conversations from the Pale Blue Dot by Lukeprog.

- Rationally Speaking by Julia Galef and Massimo Pigliucci.

- Singularity 1 on 1 by Nikola Danaylov.

- Big Ideas and TEDtalks are sometimes worthwhile.

Lectures on iTunes U are of course great, too.

Get genotyped for free (if your IQ is high enough)

34 wallowinmaya 01 October 2011 04:00PM

I've just watched this talk about Genetics and Intelligence by Steve Hsu[1], a theoretical physicist and Scientific Advisor to the Cognitive Genomics Lab of BGI (formerly the Beijing Genomics Institute), probably the leading genomics research center in the world.

Apparently, the main reason he gave this talk was to recruit volunteers for a study from the Cognitive Genomics Lab with the goal of investigating the genetics of human cognition.

From their homepage:

We currently seek participants with high cognitive ability. You can qualify for the study if you have obtained a high SAT/ACT/GRE score, or have performed well in academic competitions such as the Math, Physics, or Informatics Olympiads, the William Lowell Putnam Mathematical Competition, TopCoder, etc.

Automatic qualifying criteria include:

  • An SAT score of at least 760V/800M post-recentering or 700V/780M pre-recentering; ACT score of 35-36; or GRE score of at least 700V/800Q.
  • A PhD from a top US program in physics, math, EE, or theoretical computer science.
  • Honorable mention or better in the Putnam competition.

 

If you qualify as a participant, we may send you a DNA saliva kit. After you return this kit, we will genotype your DNA, and the data will eventually be available to you on this website, in a format compatible with many 3rd party interpretational tools.

 

I guess there are quite a few Lesswrongers smart enough to qualify for this study. If you want to advance science and get genotyped for free, check out their website for further information.

 

[1]: Steve Hsu has an awesome blog called "Information Processing". He writes about the genetics of intelligence, economics, psychometrics, career advice for geeks, physics, etc.

 

Meetup : Munich Meetup, Saturday September 10th, 2PM

2 wallowinmaya 03 September 2011 08:03PM


WHEN: 10 September 2011 02:00:00PM (+0200)

WHERE: Munich Central Station, Coffee Fellows cafe, Bahnhofsplatz 2, First floor.

I will be there with a Lesswrong sign. If you can't find it, you can call me: 0160-93132663.

According to a Doodle survey, 5 people (including me) could attend. And of course you are welcome, too!

If the cafe sucks, we can easily go elsewhere. I merely chose this place because it's relatively nice, near the central station, and easy to find.

Lurkers and newbies are very welcome!

 

P.S. :  Could you please drop a short comment if you're planning to attend?

LINK: Ben Goertzel; Does Humanity Need an "AI-Nanny"?

9 wallowinmaya 17 August 2011 06:27PM

Link: Ben Goertzel dismisses Yudkowsky's FAI and proposes his own solution: an AI Nanny.

 

Some relevant quotes:

It’s fun to muse about designing a “Friendly AI” a la Yudkowsky, that is guaranteed (or near-guaranteed) to maintain a friendly ethical system as it self-modifies and self-improves itself to massively superhuman intelligence.  Such an AI system, if it existed, could bring about a full-on Singularity in a way that would respect human values – i.e. the best of both worlds, satisfying all but the most extreme of both the Cosmists and the Terrans.  But the catch is, nobody has any idea how to do such a thing, and it seems well beyond the scope of current or near-future science and engineering.

Gradually and reluctantly, I’ve been moving toward the opinion that the best solution may be to create a mildly superhuman supertechnology, whose job it is to protect us from ourselves and our technology – not forever, but just for a while, while we work on the hard problem of creating a Friendly Singularity.

In other words, some sort of AI Nanny….

The AI Nanny

Imagine an advanced Artificial General Intelligence (AGI) software program with

  • General intelligence somewhat above the human level, but not too dramatically so – maybe, qualitatively speaking, as far above humans as humans are above apes
  • Interconnection to powerful worldwide surveillance systems, online and in the physical world
  • Control of a massive contingent of robots (e.g. service robots, teacher robots, etc.) and connectivity to the world’s home and building automation systems, robot factories, self-driving cars, and so on and so forth
  • A cognitive architecture featuring an explicit set of goals, and an action selection system that causes it to choose those actions that it rationally calculates will best help it achieve those goals
  • A set of preprogrammed goals including the following aspects:
    • A strong inhibition against modifying its preprogrammed goals
    • A strong inhibition against rapidly modifying its general intelligence
    • A mandate to cede control of the world to a more intelligent AI within 200 years
    • A mandate to help abolish human disease, involuntary human death, and the practical scarcity of common humanly-useful resources like food, water, housing, computers, etc.
    • A mandate to prevent the development of technologies that would threaten its ability to carry out its other goals
    • A strong inhibition against carrying out actions with a result that a strong majority of humans would oppose, if they knew about the action in advance
    • A mandate to be open-minded toward suggestions by intelligent, thoughtful humans about the possibility that it may be misinterpreting its initial, preprogrammed goals

Apparently Goertzel doesn't think that building an AI Nanny with the above-mentioned qualities would be nearly as difficult as creating an FAI à la Yudkowsky.

But SIAI believes that once you can create an AI Nanny, you can (probably) create a full-blown FAI as well.

Or am I mistaken?

Munich Meetup, Saturday September 10th, 2PM

3 wallowinmaya 12 August 2011 02:04PM

When: Saturday, September 10th, 2PM.

Where: Munich Central Station, Coffee Fellows cafe, Bahnhofsplatz 2, First floor. I will be there with a Lesswrong sign. If you can't find it, you can call me: 0160-93132663.

 

According to this Doodle survey, 5 people (including me) could attend. And of course you are welcome, too!

If the cafe sucks, we can easily go elsewhere. I merely chose this place because it's relatively nice, near the central station, and easy to find.

 

Lurkers and newbies are very welcome!

First LW-Meetup in Germany

5 wallowinmaya 10 July 2011 08:13AM

The surveys from my [first post](http://lesswrong.com/r/discussion/lw/5rz/lesswrongers_from_the_germanspeaking_world_unite/) indicate that

1. the favorite city is Munich,

2. around 3–10 people would attend,

3. the favorite month is August.

Ok, ignoring [the usual advice](http://lesswrong.com/lw/ka/hold_off_on_proposing_solutions/), here is my proposal to get the ball rolling:

Let's meet in Munich on August 5th, maybe at a restaurant or a pub.

ETA: Um, I changed my mind: Munich is still the place to be, but let's meet sometime in September!

But if you disagree, please voice your opinion! I'm open to suggestions.

I will add surveys in the comment-section.

Oh, and everyone, including ultimate newbies, is welcome. Yes, *YOU* too!

 

ETA: [Here you can tell us your favorite date.](http://www.doodle.com/u49xxi6z4zqbihqa)

Lesswrongers from the German-speaking world, unite!

14 wallowinmaya 19 May 2011 08:32PM

As far as I can tell, there has never been a LW meetup in a German-speaking country. This is crazy!

There should be enough Lesswrongers from Germany, Austria or Switzerland to achieve a reasonable group size!

To be a bit more specific, how about meeting in Munich? It's roughly in the middle of the three countries, but it's fine if you propose another city!

I'm not sure about the date, but how about sometime in July or August? We would have plenty of time until then. But I'm happy if you propose another date!

I don't know if I'm merely shouting in the void, but I hope some aspiring rationalists are out there!

So if you're interested in such a meetup, please leave a comment!

 

Added: Oh, and if you are not from a German-speaking country you are welcome, too!
