Vladimir_Golovin comments on New Year's Predictions Thread - Less Wrong

18 Post author: MichaelVassar 30 December 2009 09:39PM


Comment author: Vladimir_Golovin 01 January 2010 02:59:10PM *  12 points [-]

I'm 90% confident that the cinematic uncanny valley will be crossed in the next decade. The number applies to movies only; it doesn't apply to humanoid robots (1%) or video game characters (5%).

Edit: After posting this, I thought that my 90% estimate was underconfident, but then I remembered that we started the decade with Jar-Jar Binks and Gollum, and it took us almost ten years to reach the level of Emily and Jake Sully.

Comment author: James_K 02 January 2010 07:28:45AM 5 points [-]

Is there a reason Avatar doesn't count as crossing the threshold already?

Comment author: stevage 02 January 2010 09:46:12AM 4 points [-]

Because the giant blue Na'vi people are not human.

Comment author: timtyler 02 January 2010 11:14:07AM 8 points [-]

You mean you didn't notice the shots with the simulated humans in Avatar? ;-)

Comment author: Vladimir_Golovin 03 January 2010 09:13:02AM 3 points [-]

Avatar and Digital Emily are the reasons why I'm so confident. The digital actors in Avatar are very impressive, and as a (former) CG nerd I do think that Avatar has crossed the valley -- or at least found the way across it -- I just don't think that this is proof enough for the general audience and critics.

Comment author: MatthewB 03 January 2010 09:33:11AM 4 points [-]

I think before the critics will be satisfied, one would have to make an entirely CGI film that wasn't Sci Fi, or fantastic in its setting or characters.

Something like a Western that had Clint Eastwood and Lee Van Cleef from their Sergio Leone glory days, alongside modern-day Western stars like Christian Bale, or... that Australian guy who was in 3:10 to Yuma. If we were to see CGI movies such as I mentioned, with the Avatar tech (or Digital Emily), then I am sure the critics and public would sit up and take notice (and immediately launch into how it was really not CGI at all, but really a conspiracy to hide immortality technology from the greater public).

Comment author: Vladimir_Golovin 03 January 2010 09:48:05AM 0 points [-]

I think before the critics will be satisfied, one would have to make an entirely CGI film that wasn't Sci Fi, or fantastic in its setting or characters.

Exactly. I was thinking about something like an Elvis Presley biopic, but your example will do just fine (except that I don't think that vanilla westerns are commercially viable today).

Comment author: MatthewB 03 January 2010 12:32:05PM -1 points [-]

Vanilla Westerns?!? There is Nothing Vanilla about a Sergio Leone Western! And Clint Eastwood's Unforgiven was an awesome western, as were Silverado and 3:10 to Yuma (and there are even more that have made a fair killing at the box office).

Westerns are not usually thought of as blockbusters, but they do draw a big enough crowd to be profitable.

If one were to bring together Lee Van Cleef, Clint Eastwood, and Eli Wallach from their Sergio Leone days with some of the big names in action flicks today to make a period western that starred all of them... I think you'd have a near-blockbuster.

However, the real point is that with this technology one would be able to draw upon stage or film actors of any period or genre (wherever we have a decent image and voice recording) and mix actors of the past with those of today.

I just happen to have a passion for a decent horse opera. Pity that Firefly was such crap... a decent horse opera is really no different from a decent space opera. Something like Trigun or Cowboy Bebop.

Comment author: MatthewB 03 January 2010 09:29:37AM 4 points [-]

You don't think that the Valley will be crossed for video games in the next ten years?

Considering how rapidly digital technologies make it from the big screen to the small, I'm guessing that we'll see the uncanny valley crossed for video games within two years of its closure in films (i.e., once the vast majority of digital films have crossed it).

Part of the reason is that the software packages that do things like Digital Emily (mentioned below) are so easy to buy now. They no longer cost hundreds of thousands, as they did in the early days of CGI; even big packages from Autodesk, which used to sell for $25,000, can now be had for around $5,000. That is peanuts compared to the cost of the people who run the software.

Comment author: Christian_Szegedy 06 January 2010 08:03:48PM 1 point [-]

I agree with you. The uncanny valley refers to rendering human actors only; it is not necessary to render a whole movie from scratch. Rendering everything is much more work, but it's only work.

IMO, The Curious Case of Benjamin Button was the first movie that managed to cross the valley.

Comment author: Vladimir_Golovin 03 January 2010 10:00:12AM 0 points [-]

My reply is here. BTW, major CG packages like Autodesk Maya and 3ds Max have been at the $5,000 level or below for over a decade.

Comment author: MatthewB 03 January 2010 12:35:25PM 0 points [-]

I've been out of circulation for a while. The last time I priced Autodesk was in the early 90s, and it was still tens of thousands. I'm just now getting caught up on basic AutoCAD, and I hope to begin learning 3ds Max and Maya in the next year or so. I am astounded at how cheap these packages are now, and at how wrong one of my best friends was about how quickly this type of software would become available: in 1989, he said it would be 30 to 40 years before we saw the kinds of graphics displays and software that were, I have since discovered, pretty much common by 1995. Thanks for the heads-up, though.

Comment author: Bindbreaker 02 January 2010 10:09:38AM 2 points [-]

In a way, the uncanny valley has already been crossed-- video game characters in some games are sufficiently humanlike that I hesitate to kill them.

Comment author: Vladimir_Golovin 03 January 2010 10:04:04AM *  2 points [-]

I once watched a video of an Iraqi sniper at work, and it was disturbingly similar to what I see in realistic military video games (I don't play them myself, but I've seen a couple).

Comment author: dfranke 01 January 2010 08:43:21PM 2 points [-]

Why such a big gulf between your confidence for cinema and your confidence for video games?

Comment author: Vladimir_Golovin 01 January 2010 09:01:15PM *  7 points [-]

Movies are 'pre-computed', so you can use a real human actor as a data source for animations, and you have enough editing time to spot and iron out any glitches. In a video game, facial animations are generated on the fly, so all you can rely on is a model that perfectly captures human facial behavior. I don't think that behavior can be realistically imitated by blending between pre-recorded animations, the way it's done today with mo-cap -- e.g. you can't pre-record eye movement for a game character.

As for the robots, they are also real-time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.

Comment author: Chronos 01 January 2010 09:33:32PM 0 points [-]

The obvious answer would be "offline rendering".

Even if the non-interactivity of pre-rendered video weren't an issue, games as a category can't afford to pre-render more than the occasional cutscene: a typical modern game is much longer than a typical modern movie, often by an order of magnitude (15 to 20 hours of gameplay), and the storyline often branches as well. In terms of dollars grossed per hour rendered, games simply can't keep up. Hence the rise of real-time hardware 3D rendering in both PC gaming and console gaming.

Comment author: mattnewport 06 January 2010 07:24:40AM 4 points [-]

Rendering is not the problem. I would say that the uncanny valley has already been crossed for static images rendered in real time by current 3D hardware (this NVIDIA demo from 2007 gets pretty close). The challenge for video games is now mostly in the realm of animation: video game cutscenes rendered in real time will probably cross the uncanny valley with pre-canned animations in the next console generation, but doing so for procedural animations is very much an unsolved problem.

(I'm a graphics programmer in the video games industry so I'm fairly familiar with the current state of the art).

Comment author: Chronos 11 January 2010 04:53:30AM 1 point [-]

I wasn't even considering the possibility of static images in video games, because static images don't generally count in modern video games. The world doesn't want another Myst, and I can imagine only one other case where photorealistic, non-uncanny static images would constitute the bulk of the gameplay: some sort of dialog tree / disguised puzzle game in which one or more still characters' faces change in reaction to your dialog choices (i.e. something along the lines of a Japanese-style dating sim).

Comment author: mattnewport 11 January 2010 08:34:59AM 1 point [-]

By 'static images rendered in real time' I meant static images (characters not animated) rendered in real time (all 3D rendering occurring at 30+ fps). Myst consisted of pre-rendered images, which is quite different.

It is possible, on current consumer-level 3D hardware, to render 3D images of humans that have moved beyond the uncanny valley when viewed as a static screenshot (from a real-time rendered sequence) or as a Matrix-style static scene / dynamic camera bullet-time effect. The uncanny valley has not yet been bridged for procedurally animated humans: the problem is no longer in the rendering but in the procedural animation of human motion.

Comment author: gwern 21 August 2010 09:21:01AM 1 point [-]

How would you verify a crossing of the uncanny valley? A movie critic invoking it by name and saying a movie doesn't trigger it?

Comment author: Vladimir_Golovin 21 August 2010 11:19:16AM 3 points [-]

An ideal indicator would be a regular movie or trailer screening in which the audience failed to detect a synthetic actor who (who?) played a lead role, or at least had significant screen time during the screening.

Comment author: timtyler 21 August 2010 11:34:08AM 1 point [-]

There isn't much financial incentive to CGI a human - if they are just acting like a regular human. That's what actors are for.

Comment author: gwern 21 August 2010 11:04:52PM 2 points [-]

I suppose Avatar is a case in point - it's worth CGIfying human actors because otherwise they would be totally out of place in the SF environment which is completely CGI.

Comment author: timtyler 22 August 2010 07:21:32AM 1 point [-]

"There are a number of shots of CGI humans," James Cameron says. "The shots of [Stephen Lang] in an AMP suit, for instance — those are completely CG. But there's a threshold of proximity to the camera that we didn't feel comfortable going beyond. We didn't get too close."

Comment author: xamdam 02 July 2010 05:39:10PM 1 point [-]

Interesting. It seems that they are currently further ahead with image synthesis than with voice/speech synthesis.