Comment author: VipulNaik 31 October 2017 05:35:43AM *  0 points [-]

I tried looking in the IRS Form 990 dataset on Amazon S3, specifically searching the text files for forms published in 2017 and 2016.

I found no match for (case-insensitive) openai (other than one organization that was clearly different; its name had openair in it). Searching (case-insensitive) "open ai" gave matches that all had "open air" or "open aid" in them. So it seems like either they have a really weird legal name or their Form 990 has not yet been released. Googling didn't reveal any articles of incorporation or legal name.
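
For anyone who wants to repeat the search, here is a minimal sketch. It assumes the bucket layout the dataset used at the time (a per-year index_<year>.json at s3.amazonaws.com/irs-form-990 whose entries carry an "OrganizationName" field); if the layout has changed, the URL and field names will need adjusting.

    import json
    import re
    import urllib.request

    # Catches both "openai" and "open ai" (case-insensitively).
    PATTERN = re.compile(r"open\s?ai", re.IGNORECASE)

    for year in (2016, 2017):
        url = "https://s3.amazonaws.com/irs-form-990/index_%d.json" % year
        with urllib.request.urlopen(url) as resp:
            filings = json.load(resp)["Filings%d" % year]
        for filing in filings:
            name = filing.get("OrganizationName", "")
            if PATTERN.search(name):
                # This also surfaces the "open air"/"open aid" false
                # positives mentioned above, which must be filtered by eye.
                print(year, filing.get("EIN"), name)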

Comment author: gwern 02 November 2017 12:33:08AM 1 point [-]

As I said, their 2016 Form 990 is not yet available (so their 2017 one definitely isn't), and I have already asked them, so there can be no confusion on the matter.

Comment author: Tenoke 15 October 2017 09:21:08AM *  1 point [-]

Yeah, this survey was pretty disappointing - I had to stop myself from making a negative comment after I took it (though someone else already had). I am glad you realized it too, I guess. Even things like starting with a bunch of questions about the new lesswrong-inspired site and the spacing between words were off, let alone the things you mention.

I am honestly a little sad that someone more competent in matters like these, like gwern, didn't take over (as I always assumed would happen if yvain gave up on doing it), because half-hearted attempts like this probably hurt a lot more than they help - e.g. someone coming back in 4 months and seeing how we've gone down to only 300 (!) respondents in the annual survey is going to assume LW is even more dead than it really is. This reasoning goes beyond the survey.

Comment author: gwern 30 October 2017 07:55:41PM 2 points [-]

I did intend to take over the survey if Yvain stopped, although I didn't tell him, in the hopes he would keep doing it rather than turn it over immediately. I'm not sure I would take it over now: the results seem increasingly irrelevant, as I'm not sure the people taking the survey overlap much anymore with those who took the original LW surveys in 2009.

Comment author: gwern 30 October 2017 07:53:25PM 1 point [-]

Just going to make a minor point that OpenAI does not have $1b (and anyway, they spend most of their money not on AI risk but on generic AI research); they have only a pledge for $1b from Musk. I've asked them several times for their Form 990, which would show how much money they actually have, but their 2016 one is still unavailable.

Comment author: gwern 20 October 2017 01:45:08AM 3 points [-]

If anyone wants more details, I have extensive discussion & excerpts from the paper & DM QAs at https://www.reddit.com/r/reinforcementlearning/comments/778vbk/mastering_the_game_of_go_without_human_knowledge/

Comment author: MaryCh 15 October 2017 11:09:19AM 1 point [-]

Warning: please don't read if you are triggered by a discussion of post-mortem analysis (might come up in the comments).

I want to have my body donated to science, well, afterwards, and to convince my twin sister to organize the same thing; there's probably a dearth of comparative post-mortem studies of adult (aged) human twins. However, my husband said he wouldn't do it. I don't want to argue with him about something we both hope won't be an issue for many years to come, so, in pure scientific interest:

what do you think it would be interesting to study in such a setting?

Sorry if I offended you; it wasn't my intention. I just can't ask this on facebook - my Mom would eat me alive.

Comment author: gwern 17 October 2017 08:21:24PM 1 point [-]

You could look into joining a twin registry. Discordant-twin designs are fairly powerful, but still need n>50 or something like that to be worth doing. Plus, if you keep your own novel set of data, people will be less interested in analyzing it than data from a twin registry using a familiar set of questionnaires/scales/measures. (One of the reasons you see so much from twin registries or the UK Biobank: consistent measurements.) It would've been best if you two had been enrolled as kids, but perhaps better late than never.
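
To put a rough number on the "n>50" claim: the standard discordant-twin analysis is a one-sample t-test on within-pair differences, so a quick power calculation is easy. A sketch, where the within-pair effect size of d=0.4 is just an assumption picked for illustration:

    from statsmodels.stats.power import TTestPower

    analysis = TTestPower()

    # Power with 50 discordant pairs at an assumed within-pair d = 0.4.
    power = analysis.solve_power(effect_size=0.4, nobs=50, alpha=0.05)
    print("Power with 50 pairs: %.2f" % power)  # roughly 0.8

    # Inverted: pairs needed for 80% power at the same effect size.
    pairs = analysis.solve_power(effect_size=0.4, power=0.8, alpha=0.05)
    print("Pairs needed: %.0f" % pairs)  # roughly 50

So 50-odd pairs buys ~80% power only for medium-sized effects; anything subtler needs a registry-scale sample, which is the point of joining one.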

In response to comment by gwern on Magical Categories
Comment author: gwern 01 October 2017 07:30:26PM *  1 point [-]

Another version is provided by Ed Fredkin via Eliezer Yudkowsky in http://lesswrong.com/lw/7qz/machine_learning_and_unintended_consequences/

At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the "neural net" had merely trained itself to recognize the difference between a bright picture and a dim picture.

This is still not a source, because it's a recollection 50 years later and so highly unreliable; and even at face value, all Fredkin did was suggest that the NN might have picked up on a lighting difference. That is not proof that it did, much less proof of all the extraneous details - the 50 photos in this set and 50 in that, the Pentagon deploying it and it failing in the field (and what happened to it being set in the 1980s?). Classic urban legend/myth behavior: accreting plausible, entertaining details in the retelling.
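
The core mechanism is real even if the story is embroidered: a classifier trained on a confounded dataset will happily learn the confound. A toy demonstration with synthetic data (everything here is made up; it illustrates the failure mode, not the original study):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 50  # 50 photos per class, as in the legend's telling

    # "Tank" photos taken on sunny days, "no tank" photos on cloudy days.
    sunny = rng.normal(loc=0.7, scale=0.05, size=(n, 32 * 32))
    cloudy = rng.normal(loc=0.3, scale=0.05, size=(n, 32 * 32))

    # Reduce each image to its mean brightness - no tank features at all.
    X = np.vstack([sunny, cloudy]).mean(axis=1, keepdims=True)
    y = np.array([1] * n + [0] * n)

    clf = LogisticRegression().fit(X, y)
    print("Training accuracy:", clf.score(X, y))  # 1.0: tanks never needed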

In response to comment by gwern on Magical Categories
Comment author: gwern 17 October 2017 08:17:35PM 2 points [-]

I've compiled and expanded all the examples at https://www.gwern.net/Tanks

Comment author: ciphergoth 16 July 2015 06:04:24PM *  14 points [-]

Karl Sims evolved simple blocky creatures to walk and swim (video). In the paper, he writes "For land environments, it can be necessary to prevent creatures from generating high velocities by simply falling over" - ISTR the story is that in the first version of the software, the winning creatures were those that grew very tall and simply fell over towards the target.

[Edited]

Comment author: gwern 16 October 2017 01:31:52AM *  2 points [-]

Yes: "The Power of Simulation: What Virtual Creatures Can Teach Us", Katherine Hayles 1999:

The designer's intentions, implicit in the fitness criteria he specifies and the values he assigns to these criteria, become explicit when he intervenes to encourage "interesting" evolutions and prohibit "inelegant" ones ("3-D Morphology", pp. 31, 29). For example, in some runs creatures evolved who achieved locomotion by exploiting a bug in the way conservation of momentum was defined in the world's artifactual physics: they developed appendages like paddles and moved by hitting themselves with their own paddles. "It is important that the physical simulation be reasonably accurate when optimizing for creatures that can move within it," Sims writes. "Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited by the evolving creatures," ("Evolving Virtual Creatures," p. 18). In the competitions, other creatures evolved to exceptionally tall statures and controlled the cube by simply falling over on it before their opponents could reach it ("3-D Morphology," p. 29.) To compensate, Sims used a formula that took into account the creature's height when determining its starting point in the competition; the taller the creature, the further back it had to start. Such adjustments clearly show that the meaning of the simulation emerges from a dynamic interaction between the creator, the virtual world (and the real world on which its physics is modeled), the creatures, the computer running the programs, and in the case of visualizations, the viewer watching the creatures cavort. In much the same way that the recursive loops between program modules allow a creature's morphology and brain to co-evolve together, so recursive loops between these different components allow the designer's intent, the creatures, the virtual world, and the visualizations to co-evolve together into a narrative that viewers find humanly meaningful...compared to artificial intelligence, artificial life simulations typically front-load less intelligence in the creatures and build more intelligence into the dynamic process of co-adapting to well-defined environmental constraints. When the environment fails to provide the appropriate constraints to stimulate development, the creator steps in, using his human intelligence to supply additional adaptive constraints, for example when Sims put a limit on how tall the creatures can get.
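
The height handicap Hayles describes is easy to picture as a fitness adjustment. A hypothetical sketch (the linear form and the coefficient are my guesses for illustration, not Sims's actual formula):

    def starting_distance(base_distance, height, penalty_per_unit_height=0.5):
        """Distance from the contested cube at which a creature must start:
        the taller the creature, the further back it begins."""
        return base_distance + penalty_per_unit_height * height

    print(starting_distance(10.0, height=2.0))   # 11.0 for a short creature
    print(starting_distance(10.0, height=20.0))  # 20.0: falling over no longer wins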

Comment author: gwern 16 October 2017 01:15:16AM 2 points [-]

There was something else going on, though. The AI was crafting super weapons that the designers had never intended. Players would be pulled into fights against ships armed with ridiculous weapons that would cut them to pieces.

Checking into this one, I don't think it's a real example of learning going wrong, just a networking bug involving a bunch of low-level stuff. It would be fairly unusual for a game like Elite Dangerous to have game AI using any RL techniques (the point is for it to be fun, not hard to beat, and they can easily cheat), and the forum post & news coverage never say it learned to exploit the networking bug. Some of the comments in that thread describe it as random and somewhat rare, which is not consistent with it learning a game-breaking technique. Eventually I found a link to a post by ED programmer Mark Allen, who explains what went wrong with his code: https://forums.frontier.co.uk/showthread.php?t=256993&page=11&p=4002121&viewfull=1#post4002121

...Prior to 1.6/2.1 the cached pointer each weapon held to its data was a simple affair pointing at a bit of data loaded from resources, but as part of the changes to make items modifiable I had to change this so it could also be a pointer to a block of data constructed from a base item plus a set of modifiers - ideally without the code reading that data caring (or even knowing) where it actually came from and therefore not needing to be rewritten to cope. This all works great in theory, and then in practice, up until a few naughty NPC's got into the mix and decided to make a mess. I'll gloss over a few details here, but the important information is that a specific sequence of events relating to how NPCs transfer authority from one players' machine to another, combined with some performance optimisations and an otherwise minor misunderstanding on my part of one of the slightly obscure networking functions got the weapon into an odd state. The NPC's weapon which should have been a railgun and had all the correct data for a railgun, but the cached pointer to its weapon data was pointing somewhere else. Dangling pointers aren't all that uncommon (and other programmers may know the pains they can cause!) but in this case the slightly surprising thing was that it would always be a pointer to a valid WeaponData...It then tells the game to fire 12 shots but now we're outside the areas that use the cached data, the weapon manager knows its a railgun and dutifully fires 12 railgun shots :) . Depending on which machine this occurred on exactly it would either be as a visual artefact only that does no damage, or (more rarely but entirely possible) the weapon would actually fire 12 shots and carve a burning trail of death through the space in front of it. The hilarious part (for people not being aimed at) is that the bug can potentially cause hybrids of almost any two weapons... In my testing I've seen cases of railguns firing like slugshots, cannons firing as fast as multicannons, or my favourite absurd case of a Huge Plasma Accelerator firing every frame because it thought it was a beam laser... Ouch.
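
To make the failure mode concrete, here is a minimal sketch of the pattern Allen describes - a weapon whose true type and cached stats block disagree, producing "hybrid" behavior. All the names here (WeaponData, Weapon, fire) are hypothetical illustrations, not Elite Dangerous code:

    from dataclasses import dataclass

    @dataclass
    class WeaponData:
        name: str
        shots_per_trigger: int
        projectile: str

    RAILGUN = WeaponData("railgun", 1, "railgun slug")
    MULTICANNON = WeaponData("multicannon", 12, "cannon shell")

    class Weapon:
        def __init__(self, true_type):
            self.true_type = true_type    # what the weapon manager knows it is
            self.cached_data = true_type  # cached pointer to its stats

    def fire(weapon):
        # Shot count is read from the (possibly stale) cached data...
        n = weapon.cached_data.shots_per_trigger
        # ...but the projectile comes from the weapon's true identity.
        return [weapon.true_type.projectile] * n

    railgun = Weapon(RAILGUN)
    # The authority-transfer bug: the cached pointer ends up aimed at a
    # different but still-valid WeaponData.
    railgun.cached_data = MULTICANNON
    print(fire(railgun))  # 12 railgun shots instead of 1: a hybrid weapon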

(I would also consider the mascara example to be not misbehavior but dataset bias. The rest check out.)
