
ShardPhoenix comments on June 2014 Media Thread - Less Wrong Discussion

Post author: ArisKatsaris 01 June 2014 03:04PM (5 points)




Comment author: ShardPhoenix 02 June 2014 01:47:33AM 3 points

> Just a matter of time. We're already a long way from assembler, and the indie games represent the low end which will gradually eat the high end's lunch.

It might be possible to move on with a "sufficiently advanced compiler" for Haskell or the like, but barring that I think game devs for performance-intensive games will still want more precise control over memory usage. I predict we'll see widespread use of "functional C++" (eg Rust, or C++ 11's functional features) before "low-level Haskell" (eg ???).

> Given Haskell's excellent concurrency support, I'm not sure that's true.

As far as I know, Haskell's performance isn't considered predictable/reliable enough for games (mainly due to GC and lazy evaluation). If even multi-threaded C++ is still too slow for what you want to do (eg have 10,000 people on one server in real-time), Haskell isn't going to help.
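(A minimal sketch of the kind of laziness pitfall being alluded to here - this example and its names are illustrative, not from the discussion. Lazy `foldl` builds a chain of unevaluated thunks whose eventual forcing, and GC cost, shows up at an unpredictable moment; the strict variant keeps memory flat:)

```haskell
import Data.List (foldl')

-- Lazy foldl accumulates thunks: (((0+1)+2)+3)+... stays unevaluated
-- until the end, so memory use grows with the input and the pause to
-- force/collect it lands at an unpredictable point.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- foldl' forces each intermediate sum, so memory stays constant.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])  -- prints 500000500000
```

Both produce the same answer; the difference is purely in when the work (and the garbage) happens, which is exactly what matters for a frame budget.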

At one point Epic was looking at Haskell, but recently they've actually gone the other way, abandoning embedded scripting languages in the newest version of the Unreal engine in favour of doing everything in C++ (although I think that was as much about consistency as performance).

(By the way, as a Scala dev I'd personally rather write Haskell than C++, and C++ is one of the things keeping me out of the gaming industry, but personally I'm not optimistic).

Comment author: gwern 02 June 2014 09:22:07PM 1 point

> barring that I think game devs for performance-intensive games will still want more precise control over memory usage

What is 'performance-intensive' is constantly changing. I don't think that languages like C# or JavaScript which sometimes get used in game development these days have sufficiently-advanced compilers, but they still get used. (Although at least in the case of Haskell, we really do have the promised 'sufficiently advanced compiler' in the form of GHC and all the research put into optimizing lazy pure languages; I think the estimate I saw floating around somewhere was that a modern GHC-optimized binary of an ordinary Haskell program will run something like 1000x faster than the best that could be done in the early '90s.)

> If even multi-threaded C++ is still too slow for what you want to do (eg have 10,000 people on one server in real-time), Haskell isn't going to help.

Haskell's pure functions, green threads, and STM are great for concurrency, so I think your argument may work in the other direction.
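(To make the concurrency claim concrete, here's a small illustrative sketch - not from the original thread - of green threads plus STM: many `forkIO` threads bump a shared counter with no explicit locks, and the runtime retries conflicting transactions automatically:)

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Monad (forM_, replicateM_)

-- Spawn nThreads green threads, each performing nIncrements
-- transactional increments on a shared TVar counter.
countWith :: Int -> Int -> IO Int
countWith nThreads nIncrements = do
  counter <- newTVarIO (0 :: Int)
  done    <- newTVarIO (0 :: Int)
  forM_ [1 .. nThreads] $ \_ -> forkIO $ do
    replicateM_ nIncrements $
      atomically (modifyTVar' counter (+ 1))
    atomically (modifyTVar' done (+ 1))
  -- check blocks (and retries) until every worker has finished.
  atomically $ do
    d <- readTVar done
    check (d == nThreads)
  readTVarIO counter

main :: IO ()
main = countWith 8 1000 >>= print  -- prints 8000
```

Green threads are cheap enough to spawn by the thousand, which is the point of contrast with OS-level threads.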

Comment author: ShardPhoenix 02 June 2014 11:22:43PM 2 points

> Although at least in the case of Haskell, we really do have the promised 'sufficiently advanced compiler' in the form of GHC

The "sufficiently advanced compiler" I was referring to is one that makes high level languages as fast as hand-tuned C++ (thus eliminating the need for said hand-tuning), not just one that's faster than it used to be. Such a thing is probably possible but it doesn't exist now or in the immediately foreseeable future. Things like precise control of memory layout can make an order of magnitude difference to performance or more.
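(An illustrative sketch of the memory-layout point, not from the original thread: a boxed Haskell structure stores a pointer per element, each a separate heap object the GC must traverse, while an unboxed array stores raw machine values contiguously - much friendlier to the cache:)

```haskell
import Data.Array.Unboxed (UArray, listArray, (!))

-- An unboxed array of Doubles: the values live contiguously in one
-- flat block, not behind per-element pointers, so iteration is
-- cache-friendly and the GC doesn't scan the elements.
positions :: UArray Int Double
positions = listArray (0, 2) [1.0, 2.0, 3.0]

main :: IO ()
main = print (positions ! 1)  -- prints 2.0
```

Even here, though, the programmer is opting into a specific representation by hand - the compiler doesn't choose the layout for you, which is the gap the "sufficiently advanced compiler" would have to close.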

> Haskell's pure functions, green threads, and STM are great for concurrency, so I think your argument may work in the other direction.

They might make it easier, but they don't make it faster, which is currently the limiting factor for performance-intensive servers. Making it easier would certainly help - apparently the latest Battlefield game has a lot of bugs due to hard-to-diagnose threading issues in the client. But it wouldn't be viable to write that game in Haskell due to GC and lazy eval, even if the basic performance were good enough, which it probably isn't.

edit: Also as far as I know it's possible to avoid some of these issues in Haskell with careful optimization of the code, but of course the more you have to do that the less you benefit from things "just working".
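(A small illustrative example of that "careful optimization" - names and code are hypothetical. Bang patterns force the accumulators at each step so no thunk chain builds up, but every such annotation is one more place where the code no longer "just works" by default:)

```haskell
{-# LANGUAGE BangPatterns #-}

-- A hand-strictified accumulator loop: the bangs on s and n force
-- both running values at every step, preventing a space leak that
-- the naive lazy version would exhibit on large inputs.
meanStrict :: [Double] -> Double
meanStrict xs = go 0 0 xs
  where
    go !s !n []       = if n == 0 then 0 else s / fromIntegral (n :: Int)
    go !s !n (y : ys) = go (s + y) (n + 1) ys

main :: IO ()
main = print (meanStrict [1, 2, 3, 4])  -- prints 2.5
```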

I'm not saying we'll never see real-time performance-intensive apps commonly written in functional languages, I guess I'm just not as optimistic about it happening soon.

Comment author: gwern 03 June 2014 01:28:26AM 0 points

> They might make it easier, but they don't make it faster, which is currently the limiting factor for performance-intensive servers. Making it easier would certainly help - apparently the latest Battlefield game has a lot of bugs due to hard-to-diagnose threading issues in the client. But it wouldn't be viable to write that game in Haskell due to GC and lazy eval, even if the basic performance were good enough, which it probably isn't.

What you can write determines how fast it will run. If you don't have green threads, but must use OS-level threads, that's going to be a problem. If you have to be constantly locking because of mutability and can't use STM, that's going to be a problem. And yes, correctness does matter so that's a problem too.

Comment author: ShardPhoenix 03 June 2014 01:47:07AM 0 points

Fast lock-free thread-safe mutable data structures (eg ConcurrentLinkedQueue) have been written in languages like Java (and apparently in C++ too, though I'm less familiar with those).

Also, STM isn't necessarily much better than locks in practice. A quickly-Googled example: http://nbronson.github.io/scala-stm/benchmark.html, where "medium"-granularity locks performed just as well and STM's GC pressure was higher. (I don't know how the Haskell equivalent compares.)

Comment author: lmm 02 June 2014 07:40:00AM 1 point

I've worked with videoconferencing software written in Haskell. Realtime performance is certainly possible, though whether the industry will accept that is another question.

Comment author: 4hodmt 02 June 2014 08:59:31PM 1 point

Videoconferencing uses fairly consistent processing/memory over time. The load on the garbage collector has low variance, so it can be run at regular intervals while maintaining a very high probability that the software will meet the next frame deadline. Games have more variable GC load, so it's harder to guarantee no missed frames without reserving an unacceptably large slice of each frame for garbage collection.
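(A hypothetical sketch of that strategy, not from the thread: if per-frame work is steady, forcing a collection at a fixed point in each frame via `System.Mem.performGC` trades a small, roughly constant cost for predictability. With highly variable load, that reserved slice must be sized for the worst case, which games can't afford:)

```haskell
import Control.Monad (forM_)
import System.Mem (performGC)

-- Hypothetical frame loop: do the frame's work, then collect at a
-- known, regular point so GC pauses never land mid-frame.
runFrames :: Int -> (Int -> IO ()) -> IO ()
runFrames n renderFrame = forM_ [1 .. n] $ \i -> do
  renderFrame i  -- the frame's (assumed steady) workload
  performGC      -- explicit collection at a predictable moment

main :: IO ()
main = runFrames 3 (\i -> putStrLn ("frame " ++ show i))
```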