Higher dimensional games
I’m bored with 3D games. They’re all the same: run, jump, shoot, cast, basic AI dialogue… snore… Games need another technology revolution.
Back in 1994 I started work on the project that culminated in the creation of the Direct3D API, which enabled a massive consumer market for incredibly high-performance, low-cost parallel processors specifically designed to accelerate 3D graphics. Today all modern computers, mobile devices and game consoles include these chips. Fortunately they’re not really even 3D chips anymore. They also display and render all of our 2D graphics and video, and over the years they’ve become generalized and are used for physics processing and neural networks, which are both forms of higher dimensional computation. The truth is that 3D chips were never really 3D… in other words… three DIMENSIONAL processors; they were always just parallel processors. Parallel processors are great for simulating physics because physics, as we experience it, happens simultaneously everywhere.
What we call “3D” graphics aren’t even really 3D. Physical 3D objects have interiors that exist; the graphics we see in video games are made out of one-sided triangles with no interior. They’re really two-dimensional objects with only one side. We visualize 3D objects in video games as having an “inside,” but mathematically there is actually nothing inside a 3D game object, not even the other sides of the polygons we see in a game. If there can be said to be a third dimension involved, it’s time: we animate or deform these one-sided 2D objects over time to create the illusion of motion.

The graphics we enjoy in video games are the result of hollow two-dimensional marionettes animated by an invisible puppeteer pulling their strings mathematically to bring them to life. In a video game the invisible puppeteers are the army of human programmers who manually coded all of the game’s apparent intelligence, story and dynamics into it. In the real world the forces that give us ACTUAL life arise from several other dimensions that games don’t attempt to model. Things have an actual interior in the real world, and the contents of an object’s interior dictate how it interacts with other objects. Complex machines and living objects have another important dimension to their interiors, which is scale. Objects like living organisms can be said to have interiors with physics and behavior rules governed by smaller objects, which themselves have interiors composed of smaller objects, ad infinitum (at least down to the Planck length). We are approaching an era when fully simulating the interior physics of many objects to achieve much greater realism is practical.
Video games don’t try to model these properties of three-dimensional objects because:
- It’s hard
- We don’t have the computational power
Back in the 1980s, when I was first studying 3D graphics at SIGGRAPH conferences, we used to say the same thing about achieving real-time 3D rendering. I attended one lecture where the speaker (correctly) observed that, following Moore’s Law, it would take 20-30 years for humans to invent computers that could render (ray-trace) an interactive 3D scene in real time. By 1995 the same luminaries I had learned from at SIGGRAPH worked for Microsoft Research and proudly demonstrated a stuttering Phong-shaded 3D cube spinning on a dual Pentium 90 MHz machine to Bill Gates as a testament to their remarkable progress toward achieving real-time interactive 3D rendering. It was kids from the video game industry who, of course, solved the problem. They discarded ray tracing and a lot of the academic 3D dogma and adopted innovative new data structures and tricks to achieve the illusion of real-time interactive 3D without bothering with all the computation. Direct3D enabled a mass market for consumer hardware that accelerated texture mapping, simple lighting models and z-buffers, and suddenly GPUs began increasing in performance at Moore’s Law^2 speeds. It will have been almost 25-30 years since I attended that SIGGRAPH session by the time real-time interactive 3D ray tracing becomes computationally practical.
What comes next is higher dimensional computing in real time. If we want to create games that are more than marionette shows, we need to increase the dimensionality of our game engines to roughly five dimensions: actual three dimensions in space, such that objects have working interiors, one of time and, less obviously, a dimension of scale. How much computing power will such a game engine require to run in real time? The amount of computation can generally be said to increase from today’s D^3 to D^5, where D is the amount of processing power required to compute a given dimension of data. (Note that in practice a dimension of time or scale has different computing requirements than a dimension of space.) Generally that’s a big jump, and Moore’s Law will deliver the processing power long before the market can deliver games that can use it. This simple mathematical observation gives us a real clue about when and where in the market the first 5D games will emerge. If 1000 Nvidia GPU cores can deliver the modern 3D game experience I enjoy today, then D in this case is the cube root of 1000, which is 10. To achieve 10^5 processing power (assuming all dimensions are equal), I simply require the processing power of 100 modern Nvidia GPUs, or I can wait 7-10 years to get that much processing power on a single chip. A lot can happen in gaming technology in 7 to 10 years.
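The scaling argument above fits in a few lines of code. This is purely back-of-the-envelope arithmetic under the stated assumptions: all five dimensions cost the same, 1000 GPU cores suffice for a current 3D game, and single-chip GPU throughput doubles roughly every 1.2 years (that doubling period is my illustrative assumption, not a measured figure).

```python
import math

# Assumption from the text: a modern 3D game saturates ~1000 GPU cores.
cores_3d = 1000

# Per-dimension cost D is the cube root of the 3D budget.
D = round(cores_3d ** (1 / 3))           # 10

# A 5D engine (3 of space + time + scale), assuming all dimensions cost equally.
cores_5d = D ** 5                        # 100,000 cores
gpus_needed = cores_5d // cores_3d       # 100 modern GPUs per player

# How long until one chip delivers that? We need log2(100) doublings;
# the 1.2-year doubling period is an illustrative assumption.
doublings = math.log2(gpus_needed)       # ~6.6
years = doublings * 1.2                  # ~8 years, inside the 7-10 year window

print(D, cores_5d, gpus_needed, round(years, 1))
```

The interesting feature of the estimate is how insensitive it is: even if the doubling period is half again as long, the answer stays within a console generation or two.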
Starting today, I would need to build a cloud gaming solution. I would obviously want to support more than one concurrent player, so we’re talking about a cloud composed of millions of GPUs. Amazon, Microsoft and IBM are actively investing in that kind of cloud fabric. If I need 100 GPUs per simultaneous player, that game is going to be too expensive to play at modern cloud rates, but again the math says the necessary computing scale becomes affordable in under seven years.
So what does a 5D game engine look like?
- It’s delivered as interactive streaming video so that it can play on any device without depending on local hardware.
- It looks real. A genuine 5D physics simulation will output ray-traced imagery as a byproduct of the physics engine.
- Terrain will be grown out of guided physics, not designed by traditional game artists.
- Vegetation, shellfish and simple organisms will be grown out of guided genetic simulations.
- Weather, earthquakes, plate tectonics, etc. will be generated from large-scale physics simulations.
- Materials will have real physical and chemical properties that may be augmented for game-play.
What will be missing? Higher order life forms and their behaviors can’t be grown (initially) because, in addition to requiring vast computing power, we don’t actually know how to simulate these things yet. It’s more likely that higher order organisms and their behaviors will still have to be designed, but some new realism properties may be added. Today’s game characters exhibit unnatural muscle deformations in their faces and body movements unless they are carefully hand-authored for specific movements. Animating 3D organisms is like squeezing a water balloon: in order to look right, the squeezed balloon has to maintain a constant volume continuously under every pressure change. Bodies have to do the same when they move and when muscle flexes under the skin. It’s not a problem that modern 3D game authoring tools and kinematics solutions address.
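The balloon constraint can be made concrete with a toy example. This is a minimal sketch, not any real engine’s code: it measures the volume of a single tetrahedron (a stand-in for one chunk of a character mesh) with the scalar triple product, then shows that a naive squash loses volume while a compensated squash preserves it.

```python
import math

def tet_volume(p0, p1, p2, p3):
    """Signed tetrahedron volume: (1/6) * ((p1-p0) x (p2-p0)) . (p3-p0)."""
    ax, ay, az = (p1[i] - p0[i] for i in range(3))
    bx, by, bz = (p2[i] - p0[i] for i in range(3))
    cx, cy, cz = (p3[i] - p0[i] for i in range(3))
    return ((ay * bz - az * by) * cx +
            (az * bx - ax * bz) * cy +
            (ax * by - ay * bx) * cz) / 6.0

def scaled(points, sx, sy, sz):
    return [(x * sx, y * sy, z * sz) for x, y, z in points]

# A unit tetrahedron standing in for a "balloon" of flesh.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
rest = tet_volume(*tet)                     # 1/6

# Naive squash: flatten along y. Volume halves -- the "deflating" look
# of badly deformed game characters.
squashed = tet_volume(*scaled(tet, 1, 0.5, 1))

# Volume-preserving squash: flatten along y, bulge along x and z so that
# sx * sy * sz == 1, the way a real water balloon redistributes itself.
bulged = tet_volume(*scaled(tet, math.sqrt(2), 0.5, math.sqrt(2)))

print(rest, squashed, bulged)  # bulged matches rest (to float error); squashed is rest/2
```

A real character would sum this over thousands of tetrahedra, and the solver’s job would be to pick the bulge automatically at every frame, which is exactly what current kinematics tools don’t do.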
A 5D engine will obey physics automatically, and such realism problems and limits to animation quality will vanish. The bones, internal body fluids and organ density distributions of a living thing will exist in a 5D character and correctly contribute to the character’s appearance and interactions with other objects when it is in motion. Many simple, traditionally hand-authored behaviors associated with balance and collision will be automated so that characters move and interact naturally with their environment. Characters will support a fully dynamic range of motion behaviors without animation constraints.
It’s hard to anticipate how in-game AI will evolve, but it will certainly improve dramatically. One of the things that limits in-game AI now is that there is really no “world” for the AI to exist in and learn from. Unfortunately, adding this kind of AI to games may involve adding several more orders of magnitude in computing power per player. Given this problem, it’s likely that the first 5D games will be MMOGs, with the AI and complex behaviors provided by other players. With “real” physics, in-game resources, minerals, chemistry and vegetation, it will be possible for the political intrigues and warfare of Stone and Bronze Age civilizations to evolve organically within the game.
A 5D game engine would enable entirely new kinds of game-play and entirely new approaches to game design but would not be without new challenges. If a 5D game designer hoped to see these kinds of social dynamics evolve naturally inside a nearly completely simulated world they would have to capture the human dynamics of reproductive competition and loyalty to hereditary lineage. Powerful genetic and physics simulations would give rise to incredible looking and behaving new worlds and environments. The AI and social engineering challenges would be profound but the discovery of revolutionary new game design approaches within these constraints is exciting to contemplate.
The user control challenges are also hugely problematic. People naturally assume that a VR experience is ideal. The problem is that we have no technology to take natural input from the entire body… without it actually moving. Having people run around in VR goggles and motion-sensing body suits is not a very practical solution. I’m not optimistic about a near-term solution to this challenge, because I think we are still more than 7-10 years away from thinking that anesthetizing ourselves and running physical wires into our brains is a great entertainment idea, but as things are trending, I wouldn’t be surprised if the next generation of kids thought it was a great and perfectly reasonable idea for a game controller.
Fortunately, there are great games to be made in the transition zone from our current, extremely primitive 3D marionette-based games to 5D cloud worlds with physics rich enough for us to feel at home living in. I’m looking forward to seeing and working on next-generation 5D cloud-based game engines that can support these kinds of worlds. How little is said about the need for these new kinds of game engines in the VR community gives us a pretty clear idea of how far the market really is from making the mental leap to recognizing that classic 3D tools and engines are not capable of delivering the experiential leap people will expect from a believable immersive world.