Higher dimensional games

Posted on December 28, 2016 by TheSaint in Artificial Life, DirectXFiles, GPU Programming, Graphics

I’m bored with 3D games.  They’re all the same run, jump, shoot, cast, basic AI dialog… snore… Games need another technology revolution.

Back in 1994 I started work on the project that culminated in the creation of the Direct3D API, which enabled a massive consumer market for incredibly high-performance, low-cost parallel processors specifically designed to accelerate 3D graphics.  Today all modern computers, mobile devices and game consoles include these chips.  Fortunately they’re not really even 3D chips anymore.  They’re also used for displaying and rendering all of our 2D graphics and video, and over the years they’ve become generalized and are used for physics processing and neural networks, which are both forms of higher dimensional computation.  The truth is that 3D chips were never really 3D… in other words… Three DIMENSIONAL Processors; they were always just parallel processors.  Parallel processors are great for simulating physics because physics, as we experience it, happens simultaneously everywhere.

What we call “3D” graphics aren’t even really 3D.  Physical 3D objects have interiors that exist; the graphics we see in video games are made out of one-sided triangles with no interior.  They’re really two-dimensional objects with only one side.  We visualize 3D objects in video games as having an “inside,” but mathematically there is actually nothing inside a 3D game object, not even the other sides of the polygons we see in a game.  If there can be said to be a third dimension involved, it’s time.  We animate or deform these one-sided 2D objects over time to create the illusion of motion.  The graphics we enjoy in video games are the result of hollow two-dimensional marionettes getting animated by an invisible puppeteer pulling their strings mathematically to bring them to life.  In a video game the invisible puppeteers are the army of human programmers who manually coded all of the game’s apparent intelligence, story and dynamics into it.  In the real world the forces that give us ACTUAL life arise from several other dimensions that games don’t attempt to model.  Things have an actual interior in the real world, and the contents of an object’s interior dictate how it interacts with other objects.  Complex machines and living objects have another important dimension to their interiors, which is scale.  Objects like living organisms can be said to have interiors with physics and behavior rules governed by smaller objects, which themselves have interiors composed of smaller objects ad infinitum (at least down to the Planck length).  We are approaching an era when fully simulating the interior physics of many objects to achieve much greater realism is practical.

Video games don’t try to model these properties of three-dimensional objects because:

  1. It’s hard
  2. We don’t have the computational power

Back in the 1980s when I was first studying 3D graphics at Siggraph conferences we used to say the same thing about achieving real-time 3D rendering.  I attended one lecture where the speaker (correctly) observed that, following Moore’s Law, it would take 20-30 years for humans to invent computers that could render (ray-trace) an interactive 3D scene in real time. By 1995 the same luminaries I had learned from at Siggraph worked for Microsoft Research and proudly demonstrated a stuttering Phong-shaded 3D cube spinning on a dual-processor 90MHz Pentium to Bill Gates as a testament to their remarkable progress towards achieving real-time interactive 3D rendering.  It was kids from the video game industry who of course solved the problem.  They discarded ray-tracing and a lot of the academic 3D dogma and adopted innovative new data structures and tricks to achieve the illusion of real-time interactive 3D without bothering with all the computation.  Direct3D enabled a mass market for consumer hardware that accelerated texture mapping, simple lighting models and z-buffers, and suddenly GPUs began increasing in performance at Moore’s Law^2 speeds.  By the time real-time interactive 3D ray-tracing becomes computationally practical, it will have been almost 25-30 years since I attended that Siggraph session predicting it.

What comes next is higher dimensional computing in real time.  If we want to create games that are more than marionette shows, we need to increase the dimensionality of our game engines to roughly five dimensions: three actual dimensions of space so that objects have working interiors, one of time and, less obviously, a dimension of scale.  How much computing power will such a game engine require to run in real time?  The amount of computation can generally be said to increase from today’s D^3 to D^5, where D is the amount of processing power required to compute a given dimension of data.  *Note that in practice a dimension of time or scale has different computing requirements than a dimension of space.  Generally that’s a big jump, and Moore’s Law will deliver the processing power long before the market can deliver games that can use it.  This simple mathematical observation gives us a real clue about when and where in the market the first 5D games will emerge.  If 1000 Nvidia GPU cores can deliver the modern 3D game experience I enjoy today, then D in this case is the cube root of 1000, which is 10.  To achieve 10^5 processing power (assuming all dimensions are equal), I simply require the processing power of 100 modern Nvidia GPUs, or I can wait 7-10 years to get that much processing power on a single chip.  A lot can happen in gaming technology in 7 to 10 years.
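
As a sanity check on that arithmetic, here is a minimal Python sketch of the estimate above. The 1000-core baseline and the 18-month doubling period are assumptions for illustration, not measurements.

```python
# Back-of-the-envelope check of the D^3 -> D^5 argument above.
# Assumptions (mine, not measured): a current game needs ~1000 GPU cores,
# and GPU throughput doubles roughly every 18 months.
import math

cores_today = 1000                 # cores needed for a modern 3D game (assumed)
D = cores_today ** (1 / 3)         # per-dimension cost, ~10
cores_5d = D ** 5                  # ~100,000 cores for a 5D engine
gpus_needed = cores_5d / cores_today   # ~100 of today's GPUs

doubling_period_years = 1.5        # optimistic Moore's-Law-style doubling (assumed)
years_to_single_chip = math.log2(gpus_needed) * doubling_period_years

print(f"D = {D:.1f}, 5D cost = {cores_5d:,.0f} cores "
      f"= {gpus_needed:.0f} of today's GPUs")
print(f"~{years_to_single_chip:.1f} years until one chip delivers that")
```

With those assumed inputs the sketch lands right in the 7-10 year window described above.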

Starting today I would need to build a cloud gaming solution.  I obviously would want to support more than one concurrent player, so we’re talking about a cloud composed of millions of GPUs.   Amazon, Microsoft and IBM are actively investing in that kind of cloud fabric.  If I need 100 GPUs per simultaneous player, that game is going to be too expensive to play at modern cloud rates, but again the math says the necessary computing scale becomes affordable in under seven years.
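
A rough cost projection for that “100 GPUs per player” scenario, purely illustrative: the $/GPU-hour rate and the yearly price decline below are assumed numbers, not quotes from any cloud provider.

```python
# Rough cost projection for the "100 GPUs per player" scenario above.
# The $/GPU-hour figure and the yearly price decline are assumptions for
# illustration only; plug in real cloud pricing for a serious estimate.
gpus_per_player = 100
cost_per_gpu_hour = 0.90      # assumed cloud rate in USD for a gaming-class GPU
yearly_price_decline = 0.35   # assumed 35% cheaper per effective GPU each year

for year in range(0, 8):
    rate = cost_per_gpu_hour * (1 - yearly_price_decline) ** year
    print(f"year {year}: ~${gpus_per_player * rate:,.2f} per player-hour")
```

Whatever the exact inputs, the point is the curve: the per-player cost falls by more than an order of magnitude over the window discussed above.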

So what does a 5D game engine look like?

  1. It’s delivered as interactive streaming video so that it can play on any device without depending on local hardware.
  2. It looks real.  A genuine 5D physics simulation will output ray-traced imagery as a byproduct of the physics engine.
  3. Terrain will be grown out of guided physics, not designed by traditional game artists.
  4. Vegetation, shellfish and simple organisms will be grown out of guided genetic simulations (a minimal sketch of this kind of procedural growth follows this list).
  5. Weather, earthquakes, plate tectonics, etc. will be generated from large-scale physics simulations.
  6. Materials will have real physical and chemical properties that may be augmented for game-play.
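
To make item 4 concrete, here is a minimal sketch of the kind of procedural growth involved: a textbook Lindenmayer system (L-system) of the sort popularized by “The Algorithmic Beauty of Plants.” The particular rules are a standard demonstration example, not anything from an actual 5D engine.

```python
# A textbook L-system: "grown" vegetation comes from repeatedly rewriting a
# string of symbols and then interpreting it as turtle-graphics drawing
# commands. These particular rules are the classic fractal-plant example.
rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
axiom = "X"

def grow(symbols: str, generations: int) -> str:
    """Apply every rewrite rule in parallel, once per generation."""
    for _ in range(generations):
        symbols = "".join(rules.get(ch, ch) for ch in symbols)
    return symbols

# F = draw forward, +/- = turn, [ ] = push/pop the turtle state (a branch).
plant = grow(axiom, 4)
print(len(plant), "drawing commands after 4 generations")
```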

If you watched Rogue One you may not have noticed that Grand Moff Tarkin was CGI; the original actor, Peter Cushing, died in 1994. As good as his CGI rendering was, he still exhibited the wooden motion and waxy skin typical of current state-of-the-art character rendering.

What will be missing?  Higher order life forms and their behaviors can’t be grown (initially) because, in addition to requiring vast computing power, we don’t actually know how to simulate these things yet.  It’s more likely that higher order organisms and their behaviors will still have to be designed, but some new realism properties may be added.  Current-technology game characters exhibit unnatural muscle deformations in their face and body movements unless they are carefully hand-authored for specific movements.  Animating 3D organisms is like squeezing a water balloon.  In order to look right, the squeezed balloon has to maintain a constant volume continuously under every pressure change.  Bodies have to do the same when they move and when muscle flexes under the skin.  It’s not a problem that modern 3D game authoring tools and motion kinematic solutions address.
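
As a toy illustration of that volume constraint (not a production technique), the sketch below measures the enclosed volume of a closed triangle mesh with the divergence theorem and uniformly rescales a squeezed mesh so its volume is restored; real soft-body solvers enforce the same constraint locally, per element.

```python
# Toy illustration of the "water balloon" constraint above: after deforming
# a closed triangle mesh, rescale it so its enclosed volume stays constant.
import numpy as np

def mesh_volume(verts: np.ndarray, tris: np.ndarray) -> float:
    """Signed volume of a closed triangle mesh (divergence theorem)."""
    v0, v1, v2 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    return np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0

def preserve_volume(verts, tris, target_volume):
    """Uniformly rescale about the centroid to restore the target volume."""
    centroid = verts.mean(axis=0)
    scale = (target_volume / mesh_volume(verts, tris)) ** (1.0 / 3.0)
    return centroid + (verts - centroid) * scale

# Usage: a unit cube squeezed to 80% height gets pushed back to volume 1.
cube = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
                 [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)
tris = np.array([[0,2,1],[0,3,2],[4,5,6],[4,6,7],[0,1,5],[0,5,4],
                 [1,2,6],[1,6,5],[2,3,7],[2,7,6],[3,0,4],[3,4,7]])
squeezed = cube * np.array([1.0, 1.0, 0.8])
restored = preserve_volume(squeezed, tris, target_volume=1.0)
print(round(mesh_volume(restored, tris), 6))  # ~1.0
```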

A 5D engine will obey physics automatically, and such realism problems and limits to animation quality will vanish.  The bones, internal body fluids and organ density distributions of a living thing will exist in a 5D character and correctly contribute to the character’s appearance and interactions with other objects when it is in motion.  Many simple, traditionally hand-authored behaviors associated with balance and collision will be automated so that characters move and interact naturally with their environment.  Characters will support a fully dynamic range of motion behaviors without animation constraints.

It’s hard to anticipate how in-game AI will evolve, but it will certainly improve dramatically.  One of the things that limits in-game AI now is that there is really no “world” for the AI to exist in and learn from.  Unfortunately, adding this kind of AI to games may involve adding several more orders of magnitude of computing power per player.  Given this problem, it’s likely that the first 5D games will be MMOGs, with the AI and complex behaviors provided by other players.  With “real” physics, in-game resources, minerals, chemistry and vegetation, it will be possible for the political intrigues and warfare of stone and bronze age civilizations to evolve organically within the game.

A 5D game engine would enable entirely new kinds of game-play and entirely new approaches to game design but would not be without new challenges.  If a 5D game designer hoped to see these kinds of social dynamics evolve naturally inside a nearly completely simulated world they would have to capture the human dynamics of reproductive competition and loyalty to hereditary lineage.  Powerful genetic and physics simulations would give rise to incredible looking and behaving new worlds and environments.  The AI and social engineering challenges would be profound but the discovery of revolutionary new game design approaches within these constraints is exciting to contemplate.

The user control challenges are also hugely problematic.  People naturally assume that a VR experience is ideal.  The problem is that we have no technology to take natural input from the entire body… without it actually moving.  Having people running around in VR goggles and motion-sensing body suits is not a very practical solution.  I’m not optimistic about a near-term solution to this challenge because I think we are still more than 7-10 years away from thinking that anesthetizing ourselves and running physical wires into our brains is a great entertainment idea, but as things are trending I wouldn’t be surprised if the next generation of kids thought it was a great and perfectly reasonable idea for a game controller.

Fortunately there are great games to be made in the transition zone between our current, extremely primitive 3D marionette-based games and 5D cloud worlds with physics rich enough for us to feel at home living in them.  I’m looking forward to seeing and working on next-generation 5D cloud-based game engines that can support these kinds of worlds.  That so little is said in the VR community about the need for these new kinds of game engines gives us a pretty clear idea of how far the market really is from recognizing that classic 3D tools and engines cannot deliver the experiential leap people will expect from a believable immersive world.

 

Comments


  1. One thing I’d like to see in future games is complex fluid dynamics since explosions and other particle effects still look really bad in many modern games. It’d also be interesting to know how much internet speeds would have to improve to reduce input lag from streaming games, because at current speeds nobody will use it.

    • I wrote a blog a couple years ago on the subject of fluid dynamics in video games. Let me see if I can find the link for you.
      http://www.alexstjohn.com/WP/2014/02/15/cuda-6-0-rc/
      I know how they achieved the water effects but it’s not a technique that is applicable to gaming. Nvidia has done some brilliant work in the area using particle systems. The problem with particle-system water is that it always tends to look like flowing warm spit. This is why I’m pretty confident that a 5th scale dimension is essential to solving the problem of realistic fluid dynamics. The equations that describe fluid dynamics produce infinities in turbulent flow situations. Particle systems won’t produce those infinities, which results in turbulent water that looks syrupy. You need a particle system with a fractal scale structure that increases its resolution in turbulent regions that are likely to produce very high energy particle collisions. In other words, the simulation needs to dynamically factor high energy particle collisions into thousands or millions of sub-particles and run a finer grained simulation on those to get real ocean spray and foam. That kind of multi-scale simulation exists in ray-tracing but I have seldom seen it in other approaches to physics simulation.
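
      Here is an illustrative sketch of that multi-scale idea (my own toy example, not Nvidia’s implementation): particles whose kinetic energy exceeds a threshold are split into finer sub-particles, so turbulent regions get simulated at higher resolution while calm water stays cheap.

      ```python
      # Toy sketch of adaptive particle refinement (not any shipping engine's
      # code): high-energy particles are split into finer sub-particles so
      # turbulent regions get more resolution.
      import random
      from dataclasses import dataclass

      @dataclass
      class Particle:
          mass: float
          velocity: float   # 1D speed, just to keep the sketch tiny
          depth: int = 0    # refinement level

      ENERGY_THRESHOLD = 50.0   # assumed tuning value
      MAX_DEPTH = 3
      CHILDREN = 8              # split one particle into 8 sub-particles

      def refine(particles):
          out = []
          for p in particles:
              energy = 0.5 * p.mass * p.velocity ** 2
              if energy > ENERGY_THRESHOLD and p.depth < MAX_DEPTH:
                  # Conserve mass; jitter velocities to stand in for sub-scale turbulence.
                  for _ in range(CHILDREN):
                      out.append(Particle(p.mass / CHILDREN,
                                          p.velocity * random.uniform(0.8, 1.2),
                                          p.depth + 1))
              else:
                  out.append(p)
          return out

      water = [Particle(1.0, random.uniform(0.0, 20.0)) for _ in range(1000)]
      for _ in range(MAX_DEPTH):
          water = refine(water)
      print(len(water), "particles after adaptive refinement")
      ```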

      Internet speeds do not necessarily have to improve for it to be a great experience; the reason interactive streaming video games work badly over today’s internet is that the games are not designed for it. I have several patents coming through the US patent office on how to achieve zero-latency gaming over a network using game engines designed to overcome the problem. The solution (since I now have the patents) is to render and send all possible user-selected game states before the user makes an action decision. At 60FPS, most user actions will be to do nothing and the rest will be highly predictable. A simple example: you are running down a hallway that forks in two directions and you have to pick a direction or go back the way you came. The game could have rendered, or partially rendered, and delivered over the network all three choices before you made them. When you choose a direction, the rendered result of that choice has already been computed and cached on your computer; the other choices are discarded and you experience ZERO network latency. You don’t send the game stream as traditional video, you send it as a stream of partially rendered and heavily cached primitives that enable your client to quickly finish rendering your chosen game state with minimal redundant data overhead. For this to work the game client and the game engine have to be designed to maintain many possible simultaneous user state choices and resolve them later. In other words, you need to make game engines that don’t process your actual clicks; they process your most likely next clicks before you make them and throw away the scenarios that turn out to be irrelevant later. It’s time travel in game engine design. You can use computing power to achieve zero latency for a tiny amount of time by using it to predict future events.
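
      A minimal sketch of the speculative idea (function names are illustrative only, not from the patented engines): every plausible next input is rendered ahead of time and cached, and the branch the player actually chooses is served from the cache while the rest are discarded.

      ```python
      # Minimal sketch of speculative pre-rendering: render every plausible
      # next input ahead of time, cache the results, and serve the one the
      # player actually chooses. Names are illustrative only.
      POSSIBLE_INPUTS = ["none", "fork_left", "fork_right", "turn_back"]

      def render_frame(state, player_input):
          # Stand-in for the expensive server-side render of one branch.
          return f"frame({state}, {player_input})"

      def prerender_branches(state):
          """Speculatively render all branches before the player decides."""
          return {inp: render_frame(state, inp) for inp in POSSIBLE_INPUTS}

      def on_player_input(cache, chosen):
          """The chosen branch is already rendered, so latency is just a lookup;
          the other branches are simply discarded."""
          return cache.get(chosen) or render_frame("current", chosen)  # fallback

      cache = prerender_branches(state="hallway_fork")
      print(on_player_input(cache, "fork_left"))
      ```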

      • Even fluids which flow like ugly goop would be quite the novelty or game mechanic, but I can’t recall seeing it in a game where it stood out. As you said, though, it’s far better to have more realism and efficiency. It would add a lot of skill to a game if you had to be careful where to throw a grenade, because if it was in a tight space, even around a corner, the blast would damage and push you based on its flow and your orientation – but there’s no way that explosion would look acceptable unless it had many of the 5D features you describe. My fluids expertise is minimal though.

        Your zero-latency streaming idea is something I hadn’t considered and sounds quite useful, especially in the combined method you proposed where the user’s computer is still responsible for partial rendering. Hopefully companies see the opportunity and pay to license your patents, rather than letting them hold the industry back (like how Namco patented loading-screen minigames for 17 years and barely used it; the patent expired a year ago but companies still haven’t taken advantage). I get why you did it though; if you didn’t patent it then someone else would.

        • They don’t use particle waves in real games; they’re too compute-intensive. What you see are generally noise functions applied to sheets of water, with some particle effects for sea foam. Exactly; it’s hard to describe the leap in game realism and dynamics you would get until you experienced it.

          I just collect patents to get it on the record that I invented some of these ideas, then I generally sell them. I’m not a big fan of software patents but it’s necessary to collect them if you run VC funded companies.

  2. I think you may have fallen a little out of touch with what gamers really need to feel like the game is simulating what you’re proposing, Alex. Forget trying to stream rendered video data to the users’ device – you don’t need to. The devices are capable of rendering perfectly acceptable graphics to the average gamer nowadays, and the roof is only getting lower. What I think you have correctly identified, however, is the need to utilize cloud-based infrastructure to simulate much more active and immersive worlds.

    There is literally no reason whatsoever that a game today cannot have a fully simulated flora and fauna system running. Nothing is stopping a developer from doing that except time and resources.

    Your idea of the environments being built not out of polygon shells but out of actual objects with defined insides that can be exposed is already available in rudimentary voxelisation engines – the idea is solid, and it is simply that no one has taken the time to expose it to decent cloud infrastructure to really scale it out and build real interactable components into the rest of the world simulation.

    Another user mentioned fluid dynamics – again, it is simply a process of implementing it in cloud-based infrastructure to make it truly real-time. There are plenty of fucking fantastic looking fluid-sim approximations that run almost real-time on commodity end-user hardware. Move it to the cloud and push the data over the network – problem solved. It is simply that no developer has bothered yet – there hasn’t been a good enough reason to yet.

    You don’t need anywhere near the cloud infrastructure you’re suggesting to do any of this, at least not per player. As far as I’m concerned the biggest issue a developer faces right now is that all of that simulation, all of those algorithms, all of the research put into genetic/evolution/AI approximations and algorithms are locked away in black boxes in other fields, in papers with few practical examples, or in other developers wishing to keep their magic secret. I guess what I’m saying is that there are currently very few giants to stand on the shoulders of – the cloud infrastructure stuff is thankfully a solved problem and is utterly trivial to build and scale out with. But all those interacting simulations – where are they? Where are the open source implementations we can prototype with and begin building real games on top of?

    • The code doesn’t exist because it would depend on the existence of the kind of physics-based voxelization engines discussed. You are vastly mistaken about the amount of local computing power necessary to create those kinds of graphics. The problem is that the graphics you see that look “good enough” now look that way because they have been highly and expensively hand-crafted and are largely pre-rendered. Even if those graphics look great, you have to discard that approach to get to physics-based lighting and geometry, which adds a couple of powers to the amount of computation required to generate them. The examples of turbulence you describe are still based on clever hackery to achieve an effect that isn’t properly simulated. It’s not that it doesn’t look great in a game, but that to GAIN the new benefit you would get from completely emergent game physics you have to sacrifice almost all hackery, no matter how good it looks.

      The game content also MUST be delivered in a streaming form, albeit as a meta-object stream, because nobody will have a device capable of holding a scene model that big in order to render it. The rendering will have to be performed at the source no matter how powerful the client, because the scene is simply too big to ship.

      It’s also not true that modern cloud or datacenter infrastructure can even remotely handle these kinds of game engines. They’re all massively network-I/O bound relative to the incredible performance of the GPU. We were testing one of these engines on a Minsky system IBM lent us just before the Xmas break, and the 4-processor system with 4 P100 GPUs was moving fully processed data at ~120GB/s (no PCI bus barriers anymore). How do I get that over a network to another machine to build a large distributed cluster? I can’t; the networking fabric to make a cluster of nodes for that kind of processing volume doesn’t exist today. We’ve designed an architecture that solves the problem but it’s a radical departure from traditional IT. We replace all the RAID storage controllers with GPUs to create massive parallel local storage caches that store data at memory speeds. Each local node thinks it has, say, 1PB of local RAM cache, which makes it plausible for each node to have a large enough cache to mitigate the physical network I/O bottlenecks. *I’ll write about how it works when we’re ready to launch it publicly.

      Again I’m not saying that there aren’t ways to cheat your way to truly generalized physics universes with existing techniques; in fact, we would never get to the kinds of engines I’m describing without a practical path up the ladder. I’m just describing what the destination ultimately looks like. You are describing the 1994 DOOM of 3D graphics, using rotating sprites to simulate 3D objects, compared to today’s game graphics. Yes, they worked great to get us here.

      • mmm I think the problem here is the vast disconnect about what you and I envision as ‘acceptable’ – The fidelity you speak of will of course, eventually, become reality. But that is the long-long-term. So long-term that I would cast doubt on any work done today towards that goal due simply to the fact that the tech will likely be so different.

        When I spoke above, I suppose I was speaking of what is practical in the medium term. I’m finding it a bit hard to follow you here on your various points as you’re jumping around, so it may be that we’re just not on the same page – but if I may read between the lines a little, I think the primary disagreement I would have with you is that in order to create a GAME that players will actually enjoy and play, one does not have to simulate anywhere near the fidelity of reality to bring about emergent behavior.

        I would posit that the voxel engines available today have a reasonable resolution as a first step. The problem is, no one, literally no one, has bothered to even apply any kind of AI behavior towards the information available in such an environment. No one in the current gamedev industry really knows what they’re doing there at the moment. We’re all stuck trying to get as much of the basics in before the release schedule is up, let alone playing around with emergent and highly reactive behavior.

        I guess what I’m saying here is this: the underlying tech is the easy part. The hard part, the time-consuming part, is the actual game software – we need some simulations and algorithms. We can figure out how to scale out when it is proven that players actually care about such things, and to what extent.

          • Yes, I’m betting 7-10 years to the kind of game engines I’m talking about. I’ve taken several passes at writing them, and every time I’ve attempted to chimp an element of the engine to make it easier to get there I’ve ended up making it harder to fit a complete system together. I finally concluded that you just have to embrace the paradigm and go for it, and once you do you end up in a very different kind of game engine. I tried running the physics system as a giant voxel-based particle world and then rendering it using conventional techniques, but that turned out to be a mistake. The amount of hacky work to convert the particles into conventional 3D stuff was more trouble and uglier than just accepting that your ray-tracing should emerge organically from a comprehensive physics simulation. The other thing I realized is that the GPU data structure you need to process a huge world is really an N-dimensional voxel space, but because a voxel generally describes a Cartesian spatial geometry I call the structure a hyperspace partition. In a massively distributed game engine one of your dimensions is time itself, because you need your computing resources to scale very organically in order to hit a constant frame rate of physics calculations of unknown complexity. Say you want to jump from a nice static shore environment into a turbulent river? You still want to hit 60fps, but the computations required to render a frame just jumped by 10,000X, and it needs to be able to do that almost instantly. A voxel-like structure that includes time would very quickly be able to determine the amount of compute resources needed to achieve 60fps even with enormous variability in processing load.

          The hyperspace partition that includes time also gives you a powerful generalized way to cache and optimize such a world. Suppose 1000 people all jump in the turbulent river in the same place at the same time? The water particles making up your physics sim are ordered like wavelet coefficients in a hyperspace partition. The low frequency points are near the top of the structure and the high frequency points are deep in the structure. If I have a processing emergency I can instantly drop the fidelity of elements of the sim to meet the real-time requirement, hopefully without the player noticing it happened.
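
          A toy sketch of that fidelity-dropping idea, with an invented structure and costs purely for illustration: detail is stored coarse-to-fine, like wavelet coefficients, so when the per-frame compute budget is exhausted the traversal simply stops descending and the high-frequency detail is dropped.

          ```python
          # Toy sketch of budgeted, coarse-first simulation. The node layout
          # and costs are invented for illustration, not a real engine's data.
          from dataclasses import dataclass, field

          @dataclass
          class DetailNode:
              cost: float                                   # compute cost of this node
              children: list = field(default_factory=list)  # finer-scale detail

          def simulate(node: DetailNode, budget: float) -> float:
              """Depth-first, coarse-first traversal that stops when the budget
              runs out. Returns the compute actually spent this frame."""
              spent = node.cost
              for child in node.children:
                  if spent + child.cost > budget:
                      break              # drop high-frequency detail, keep 60 fps
                  spent += simulate(child, budget - spent)
              return spent

          # A calm shore (cheap) next to a turbulent river (deep, expensive detail).
          river = DetailNode(1.0, [DetailNode(2.0, [DetailNode(4.0), DetailNode(4.0)]),
                                   DetailNode(2.0, [DetailNode(4.0)])])
          print("spent with big budget:  ", simulate(river, budget=20.0))
          print("spent with tight budget:", simulate(river, budget=4.0))
          ```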

            In terms of the organic algorithms, Stephen Wolfram’s “A New Kind of Science” is still my bible for this stuff, but there are several books from the 1990s that did an amazing job of comprehensively tackling plants (including flowers), shellfish, terrain of course, and snow. The challenge is unifying the techniques into one type of physics engine, which I believe a hyperspace partition accomplishes. Turbulent systems for weather and water are an open challenge.

          • I’m thinking quite a bit more than 10 years. I mean maybe, MAYBE, the tech will be there in 7-10 years at the very bleeding edge, but certainly not the industry – it’ll cost too much, and why bother when approximations that cost a fraction to run do just as well? You have to remember that no player is really asking for this kind of solution. They only want more immersion, and it can be done via approximations insanely well.

            Let’s take a real world example and draw further from that – Battlefield 4 is an online FPS game that can support 64 active players on server. Some maps are played out on an island surrounded by sea. Over the course of the game the sea goes from relatively calm to highly stormy with massive crashing waves. The players in this game can traverse the water volume by boat and swimming and the waves themselves are big enough to hide whole boats. The water has to be interactable – The players can collide on the surface of the water volume and do things like jump boats over others after hitting the crest of the wave. The solution here is really simple of course – It is a simple real-time SPH implementation with a synchronized time dimension between players. The data transmitted from the server to the clients is extremely simple in this case – Just the initialization values and then the typical lag-aware related time sync stuff that you’re no doubt an expert on.

            That was done 5 years ago now, and it is still one of the few well done fluid sim approximations done in the industry thus far. It is definitely not the bleeding edge now and wasn’t even then – But it solved the problem they had which was interactable water volumes in an online context. One could combine that with a more data-oriented server-based water and environment simulation so that AI and logic can actually inspect, interact with, consume, etc the actual water volume rather than just a crude surface representation. But this does not and probably should not be accurately represented to the client in real-time – Why would you? The client is just a renderer and there would be nothing to render. You don’t want to actually render a true simulation of water particles as a result of the cow drinking from the edge of a lake nor do you want to tell the client the microscopic displacement made to the lake. You don’t care and the player certainly doesn’t care. You react to that in the data sim, but on the client you simply play an animation and run a simple particle effect at the desired locations.

            Feel free to go as far as you wish with the real data and simulations that make up the world the players and other agents live within. But I get the feeling you may be creating or at least illustrating problems with any kind of solution due to tying the client/renderer to the true backend simulation. The emergence of voxel engines over the past 10 years is a great example of this – taking much more detailed data and churning out a surface volume for the client to see. Some tried and failed to make sure that surface volume was an utterly accurate representation of the underlying data, but it wasn’t needed and the problems coming from that didn’t need to be solved. In the example of the rivers you provided, the client only cares about what the river should look like, and simple surface fluid shaders can provide a highly realistic representation of the river. The player can move between the calm and heavy river states within a fraction of a frame, and the waveforms can even be synchronized between players in an online context. The environment and other agents can still interact with the highly detailed data sitting in ‘The Cloud’, and the outputs can be easily and quickly represented on the client.

            I hope this makes sense. If there is disagreement with what I’m saying here then my main question to you would be – can you draw further from my example here about a problem that would not be solvable practically?

            P.S. Thanks for mentioning Wolfram’s book! I had that on a list to read and forgot about it. This too is a subject I am passionate about, given that I sit directly in the industry, and I’ll definitely be reading it now.

          • He.. he.. you sound like the game industry guys I used to work with when I was trying to sell everybody on making 3D PC games and MMOGs. It happened instantly once Direct3D was introduced and became standard, when people like John Carmack and Tim Sweeney made general-purpose engines broadly available to the market of developers who didn’t have the resources to build their own. It was infinitely far away in the future until it happened overnight. Yes, I agree that it takes an enabling solution. The same was true of MMOGs… read my blog article about trying to persuade Richard Garriott to make an MMOG.

            http://www.alexstjohn.com/WP/2013/06/21/ultima-online-and-directx/

            We also had to persuade Blizzard to make Diablo multiplayer for Windows when it first came out; suddenly, a few years later, they believed in MMOGs. I heard it all… nobody wants MMOGs, LAN gaming is good enough, we don’t need 3D hardware when Carmack has shown that we can fake 3D in software just fine… heard it all before. Mike Abrash and Seamus Blackley, who ended up making Direct3D 8 for the first XBOX, were anti-3D acceleration when I first met them. People can’t want it until they experience it because they have no capacity to visualize entirely new gaming experiences and technologies. The one thing the history of the game industry has shown is that the moment a new gaming paradigm is introduced it sweeps the market and becomes the norm. Farmville? Come on… who would have thought THAT was a brilliant game idea before Zynga did it?

            Those Battlefield 4 waves are made exactly the way I described previously. They look great and the game is well designed around their constraints. They can be beaten by 1000% and be vastly more interesting to interact with for having done so. We just don’t have the compute power to support it yet. I want you to be able to step into a strong current, have it surround and spray off your body naturally, sweep your unattached equipment away, force you to adjust your center of gravity to keep your footing and soak your clothing without an animator and team of engineers having to manually contrive all of it. In other words… can you make this game?

            http://il4.picdn.net/shutterstock/videos/6918919/thumb/1.jpg
            or this game…
            http://www.surfertoday.com/images/stories/addictivesurfing.jpg

            without hand authoring it?

            It’s a great book. Wolfram’s writing style can be a little challenging to wade through but it’s genius stuff. Here are some of my other favorite books on these subjects.
            https://www.amazon.com/Algorithmic-Beauty-Shells-Virtual-Laboratory/dp/3540921419
            *These are fun to grow in Mathematica

            https://www.amazon.com/Algorithmic-Beauty-Seaweeds-Sponges-Corals/dp/3540677003/ref=pd_bxgy_14_img_2?_encoding=UTF8&psc=1&refRID=CSSZGGWWNF2A1XGQKGBK

            https://www.amazon.com/Algorithmic-Beauty-Plants-Virtual-Laboratory/dp/0387972978/ref=sr_1_1?s=books&ie=UTF8&qid=1483135522&sr=1-1&keywords=the+algorithmic+beauty+of+plants

          • Hey, I’m happy to be wrong about my doubts of your timeline. But that discussion is just a game of betting. What I think you DO have wrong is your opinion on how the simulation ought to be represented to the player – I’m happy to accept that in 7-10 years we’ll be simulating environments the way you describe in remote infrastructure, but I don’t think directly representing the output to them as some kind of point-cloud engine, ray-tracing engine or pre-rendered frames over the network is going to be the right solution.

            I still believe that artists and engineers will be holding the hands of largely automated and procedural processes to create the art used to represent the simulation. Why? Simply because we’ll want control over it. What WILL change is how the art is made – Tools and algorithms will continue to take mundane work off the artists’ hands like the initial generation of plants and forests, etc. But at the end of the day we’ll want to tweak it and add to it.

            Beyond that, there is the fact that you’re trying to push extremely high resolution data to a device that is only going to show it at a comparatively very low resolution – and to be clear, I don’t mean screen resolution. I mean the data itself – the device/player can’t do anything of note with individual droplets of water or the dirt under the grass. Surely you’ll want to optimise that? At this point in time, that is the whole idea behind turning surface voxels into polygonal shells.

          • Well, I think you would probably smash distant details into textures and reduce geometry to match the target device’s processing power. The smashed textures will contain the ray-tracing from the cloud engine. Assuming we are talking about a third-person game, I think you end up rendering the character and finishing the ray-tracing and character shadow casting on the client to put the player into the scene at the last step. That trick also ensures the illusion of constant real-time player responsiveness. I suspect there are clever things you can do to cache a lot of environment details within the player’s immediate locale to minimize the instantaneous traffic you have to send. Ideally you’re sending enough scene information to produce several credible frames of graphics within a narrow range of possible camera angles on the client before the scene needs more data. Fast-swinging camera motion is your friend because the motion blur in the scene allows you to really drop the environmental detail, and all of the character detail is cached locally. Particle/droplet effects you smash into animated textures, but you carry their physics across to affect the character rendering.
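
            A hedged sketch of that last compositing step, under my own assumptions (array names and shapes are illustrative): the streamed background arrives already ray-traced with a depth buffer, the character is rendered locally with its own depth, and a per-pixel depth test merges the two so the locally rendered player sits correctly in the streamed scene.

            ```python
            # Toy composite of a locally rendered character over a streamed,
            # pre-ray-traced background using a per-pixel depth test.
            import numpy as np

            H, W = 4, 4  # tiny frame for the example

            # Streamed from the cloud: finished background color and depth.
            bg_rgb   = np.full((H, W, 3), 0.2)
            bg_depth = np.full((H, W), 10.0)

            # Rendered on the client: the character layer with its own depth
            # (infinite depth where the character doesn't cover the pixel).
            char_rgb   = np.zeros((H, W, 3)); char_rgb[1:3, 1:3] = 1.0
            char_depth = np.full((H, W), np.inf); char_depth[1:3, 1:3] = 5.0

            # Keep whichever surface is nearer to the camera at each pixel.
            nearer = (char_depth < bg_depth)[..., None]
            frame = np.where(nearer, char_rgb, bg_rgb)
            print(frame[:, :, 0])  # character pixels (1.0) punched into the background
            ```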
