Apple Ditching OpenGL?
Wow, finally something interesting to say about Apple for a change! Apparently at Apple’s recent WWDC event they announced a “competitor” to DirectX called “Metal” … as in coding DIRECT to the METAL… get it?
That only took 20 years… Aside from the usual mopping-the-floor-with-Microsoft blather, this may be a hugely significant announcement, because it heralds the end of OpenGL as a gaming API. Why? At the time DirectX was created, Microsoft also largely controlled the OpenGL standard. The success of DirectX in gaming caused Microsoft to largely lose interest in OpenGL, leaving it in the hands of the 3D chip OEMs to advance as an open standard they could use to pressure Microsoft into implementing the features they wanted in DirectX. Apple’s adoption of OpenGL, and subsequent active participation on the ARB, breathed life into OpenGL in mobile gaming, because Apple’s iPhone implementation of OpenGL became the “standard” that other mobile phone and chip vendors had to emulate if they wanted iOS games ported to their devices. Without some form of “standard” hardware definition, OpenGL drivers, and media drivers in general, are often just a grab bag of broken, inconsistent functionality. Of course, in the absence of a DirectX-like media API of their own design, Apple had little choice but to jump on the OpenGL bandwagon and ride it for gaming, but it is clearly not to Apple’s competitive advantage to keep propping up OpenGL support in mobile for all of their competitors. Just as it was for Microsoft, creating their own proprietary gaming APIs is entirely in Apple’s strategic interest… why help Android siphon off their game developers by propping up OpenGL?
With Apple abandoning OpenGL in favor of “Metal” which they will surely promote very aggressively to their developer base, what will anchor OpenGL in other mobile spaces? Samsung?
But this isn’t just about the possible demise of OpenGL as a 3D gaming API. It’s possible that the API names will stay the same but that 3D architectures are about to rapidly evolve away from the traditional linear, CPU-driven 3D pipeline. The problem is that GPUs have become so fast and powerful that the CPU is holding back their performance in traditional 3D graphics. The need to break up the 3D pipeline and interact with it more is torturing the first-generation 3D APIs, because they were never meant to work that way. Their performance potential came from a very narrow definition of how 3D worked, and from massive linear pipelining to enable efficient hardware scalability. The way game 3D worked was actually a rather horrible perversion of the actual physics of light, which is so computationally intensive that we had no hope of computing it in real-time on consumer hardware without drastically simplifying the simulation. The introduction of programmable shaders was another abomination, because in the absence of real light simulation we resorted to manually programming the lighting properties of every material interaction with every light source. In one respect it was a tremendous testament to human ingenuity that we could make 3D appear to work in real-time at all, but it was also like playing a giant game of Jenga: we were stacking fragile architectural blocks precariously one atop the next and praying they didn’t all fall over. The fact that 3D works at all in modern games is probably largely attributable to a small handful of geniuses with sufficient IQ to create tools that have made it accessible to everybody else… with tremendous effort.
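To make the “manually programming the lighting” point concrete, here is a minimal sketch of the kind of hand-authored material/light code the shader era demanded, written in CUDA since that’s where this post ends up. Every name and constant in it is an illustrative assumption, not any real engine’s API: one empirical Lambert-plus-Phong formula standing in for actual light transport.

```cuda
#include <cstdio>
#include <math.h>

// Minimal sketch of shader-era lighting: instead of simulating light
// transport, the developer hand-codes an approximation (Lambert diffuse +
// Phong specular) for every material/light pairing. Illustrative only.

struct Vec3 { float x, y, z; };

__host__ __device__ Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
__host__ __device__ Vec3  mul(Vec3 a, Vec3 b)    { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
__host__ __device__ Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
__host__ __device__ float dot3(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Light    { Vec3 toLight; Vec3 color; };      // unit vector toward the light
struct Material { Vec3 albedo; float shininess; };  // hand-tuned constants, not physics

// One material interacting with one light source, approximated by hand.
__host__ __device__ Vec3 shade(Material m, Light l, Vec3 n, Vec3 toEye) {
    float diff = fmaxf(dot3(n, l.toLight), 0.0f);                        // Lambert term
    Vec3  r    = add(scale(n, 2.0f * dot3(n, l.toLight)),               // reflect light
                     scale(l.toLight, -1.0f));                           // about the normal
    float spec = powf(fmaxf(dot3(r, toEye), 0.0f), m.shininess);         // Phong term
    return mul(add(scale(m.albedo, diff), Vec3{spec, spec, spec}), l.color);
}

int main() {
    Material m = {{0.8f, 0.2f, 0.2f}, 32.0f};
    Light    l = {{0.0f, 1.0f, 0.0f}, {1.0f, 1.0f, 1.0f}};
    Vec3     c = shade(m, l, {0.0f, 1.0f, 0.0f}, {0.0f, 1.0f, 0.0f});
    printf("lit color: %.2f %.2f %.2f\n", c.x, c.y, c.z);  // expect 1.80 1.20 1.20
    return 0;
}
```

Multiply that by every material, every light type, and every special case a game ships with, and the Jenga-tower picture above starts to look generous.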
As 3D hardware has evolved to become vastly more versatile, and the 3D graphics pipeline has become increasingly jointed and cumbersome to adapt to it, it’s little surprise to me that the market hungers to return to a programming paradigm in which the developer can interact much more DIRECTLY with the hardware, without a bloated, CPU-bound OS or graphics API in the way. With this announcement from Apple I think we’re seeing that Jenga tower starting to topple.
- AMD (formerly ATI) wins all the console chip deals with a common GPU architecture; the established 3D APIs hold their chips’ performance back, so they introduce Mantle… BUMP!
- Microsoft, under pressure from the press and consumers over disappointing Xbox One performance, and under pressure from Mantle, announces Mantle-like features in DirectX 12… BUMP!
- Nvidia, seeing the potential use of their GPUs for supercomputing, generalizes their shader programming model with CUDA… BUMP!
- Apple, realizing that their leadership with OpenGL is aiding their competition, abandons it for a DirectX/Mantle-like API of their own, Metal… BUMP!
Now it’s certainly possible that APIs like DirectX and OpenGL will get overhauled over time to look much more like Nvidia’s CUDA + media capabilities, and still be called by the same names, but their present architecture is definitely breaking down. Although these APIs’ highly pipelined structure still affords tremendous performance advantages, it’s hard to ignore how much simpler and richer 3D development might become if it were practical to embrace more generalized physics and light simulation techniques such as ray-tracing. One of the tremendous benefits of unifying game physics with game graphics is, of course, realism. In the real world, the light we see is a byproduct of the same physics that governs everything else we encounter. That intimate link between light physics and other physics is largely broken in modern games, because the graphics pipeline abstracts the visual elements of physics simulation away from its other aspects, forcing developers to awkwardly recouple them in the game by manually stitching them together.

Although we have largely mastered the laws of Newtonian-scale physics simulation, the sheer volume of math required is still elusive even on modern GPUs. That is not necessarily the case in a cloud-based game world, however, where almost any arbitrary volume of compute power can be efficiently deployed as needed on any given aspect of a compute problem in real-time. Cloud-based games open the door to the idea that ANYTHING can now be done in real-time. Impossible client-side computations are no longer the barrier to game design; the NEW game design question will be: how much did computing a given frame in 1/60th of a second cost?

Personally, I’m looking forward to building games that way, which is why I have been such an avid early CUDA adopter. Maybe other “open standards” or APIs will come along, but at present CUDA is the only tool available to explore this new kind of game design. While the rest of the game community is trying to adopt Mantle, DirectX 12 or Metal, I’ll be re-learning my ray-tracing and quantum physics, because I believe those roads all ultimately lead to a more CUDA-like API for cloud-based game design. It will just take the market a while to realize that.
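For a taste of what “a more CUDA-like API” means in practice, here is a minimal sketch of the ray-tracing mindset: one CUDA thread per pixel, each independently intersecting a ray with the scene. The camera, scene, and every name here are illustrative assumptions, not a real renderer.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: one thread per pixel, one ray, one sphere. A real tracer
// adds bounces, materials, and acceleration structures; this only shows the
// "every pixel is an independent physics problem" shape of the work.

__global__ void trace(unsigned char* img, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    // Camera at the origin looking down -z; map the pixel to a [-1,1] film plane.
    float dx = 2.0f * x / w - 1.0f;
    float dy = 2.0f * y / h - 1.0f;
    float dz = -1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    // Unit sphere centered at (0, 0, -3): solve |o + t*d - c|^2 = r^2 for t.
    float ox = 0.0f, oy = 0.0f, oz = 3.0f;         // ray origin minus sphere center
    float b = 2.0f * (ox * dx + oy * dy + oz * dz);
    float c = ox * ox + oy * oy + oz * oz - 1.0f;  // r = 1
    float disc = b * b - 4.0f * c;                 // quadratic discriminant (a = 1)

    img[y * w + x] = (disc >= 0.0f) ? 255 : 0;     // hit -> white, miss -> black
}

int main() {
    const int w = 512, h = 512;
    unsigned char* img = nullptr;
    cudaMallocManaged(&img, w * h);
    dim3 block(16, 16), grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    trace<<<grid, block>>>(img, w, h);
    cudaDeviceSynchronize();
    printf("center pixel: %d (expect 255, a hit)\n", img[(h / 2) * w + w / 2]);
    cudaFree(img);
    return 0;
}
```

And the cost question becomes plain arithmetic: with purely illustrative numbers, if a frame needs 10 GPU-seconds of this kind of work and cloud GPU time costs, say, $1 per GPU-hour, then spreading it across 600 GPUs finishes the frame in 1/60th of a second for roughly a third of a cent.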
If you think I’m exaggerating… just ask yourself: what other Apple or Microsoft APIs get their own brand logo?
“No API is an island, entire of itself; every API is a piece of the solution, a part of the market. If support for an API be washed away by Apple, the 3D market is the less, as well as if the ARB abandoned it, as well as if the game code of thy friend’s or of thine own were: any API’s death diminishes me, because I am involved in all game design, and therefore never send to know for whom the bell tolls; it tolls for OpenGL.”
–With apologies to John Donne–