This weekend I explored the viability of adding shaders to 2D game art to achieve a more dynamic look when lighting. I focused mostly on the normal map because it seems to have the biggest impact, but I'm also interested in applying specular, opacity, and emission maps, as they might be useful as well. The inspiration came from this article. Grégoire's forum post is very informative, and shows the wide range of possibilities opened up by splitting/deferring shading. This post is my not-so-subtle "emulation" of his testing grounds.

Albedo, Normal, Specular Maps + Light = Pretty Neat

Separating graphics into a few shader maps yields a drastically different look and adds a wide range of variable lighting. It even makes basic 2D pixel art look pretty neat.

I'm still working on establishing my Blender workflow for creating 3D assets and rendering the 2D graphics. Similar to keeping sizing and mesh quality consistent, I want to be thoughtful about renders that might be needed later on. So, whether or not I end up using them, it won't hurt to have higher resolution versions, albedo, normal, and other maps in addition to the final image.

Step one of this test was to create a scene, then render a baseline sample. Next, I figured out how to render proper albedo & normal maps. I'd like to eventually have a Blender scene or script set up to output all these maps at once. Rendering a specular map eluded me, unfortunately. It might be easier than I think; I just seem to get bombarded with texture baking tutorials geared toward 3D artists.

The Baseline - This is a typical image that I'd render for use in a 2D isometric game. It's textured & shaded. When combined in game, the lighting always has this top-down 3:00 PM appearance, and reflective surfaces are frozen in place regardless of the lighting or surrounding conditions. If I combine the two images below correctly, I should end up with something similar to this.

Step two is to combine the albedo & normal maps.
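As a rough sketch of the per-pixel math an engine (or a faked Blender node setup) applies when combining these maps: the albedo supplies base color, the normal map bends the lighting, and the specular map gates a view-dependent highlight. The function name, the Blinn-Phong specular model, and the shininess constant are my own assumptions for illustration, not anything taken from Grégoire's post.

```python
import numpy as np

def _unit(v):
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def shade(albedo, normal_rgb, spec, light_dir, view_dir=(0, 0, 1)):
    """One pixel of the split-map idea: albedo color (0-255 RGB),
    a normal-map sample (0-255 RGB), and a specular-map value (0..1).
    The shininess exponent (32) is an arbitrary choice for this sketch."""
    # Decode the normal map from 0..255 into a unit vector in -1..1.
    n = _unit(np.asarray(normal_rgb, dtype=float) / 127.5 - 1.0)
    l = _unit(light_dir)
    # Diffuse: albedo scaled by how directly the surface faces the light.
    diffuse = np.asarray(albedo, dtype=float) / 255.0 * max(np.dot(n, l), 0.0)
    # Specular: a Blinn-Phong highlight gated by the specular map --
    # without this term, metal reads as concrete.
    h = _unit(l + _unit(view_dir))
    highlight = spec * max(np.dot(n, h), 0.0) ** 32
    out = np.clip(diffuse + highlight, 0.0, 1.0)
    return tuple(int(x) for x in np.rint(out * 255))

# A flat "up" normal (128, 128, 255) lit head-on keeps the albedo;
# cranking the specular value blows it out into a highlight.
print(shade((200, 64, 32), (128, 128, 255), 0.0, (0, 0, 1)))  # (200, 64, 32)
print(shade((200, 64, 32), (128, 128, 255), 1.0, (0, 0, 1)))  # (255, 255, 255)
```

Animating `light_dir` is all it takes to make the same two flat images respond to a moving light source.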
I didn't want to invest too much time installing and then learning a game engine like Unity or Godot just to test this one image out. So after searching high and low for a lightweight program that would combine these maps with some simple lighting tools, I realized... I could just fake a 2D game engine's lighting right in Blender.

Not too shabby for two flat images. Though the metal doughnut looks like a concrete doughnut, which highlights the importance of a specular map in this shader combination. Adding these maps makes a world of difference in a 2D game. With the addition of projected shadows, dark scenes take on a whole new life. I imagine shiny plate armor could look drastically different, or areas lit by flickering torchlight.

Obstacle: No Normal Maps...

The problem with implementing this in Faldon is that we don't have normal maps, and we'd need them for most everything in game for it to look consistent. There are thousands of frames of art when including all the props, animations, characters, and creatures. I remember we had this problem when we added the alpha channel to the newest game version. Luckily, we handled that with a combination of clipping black edges, re-rendering new versions, and adding an alpha channel in an image editor.

A possible solution is to generate the normals from the already existing 2D graphics. But they won't look nearly as good as those rendered from a 3D engine, and it would require some fine tuning. I found a free tool called Laigter, and tried out some sample Faldon art.

This might work. The specular map generation shown in the middle of the video, with the wall weapons & shield, is a nice touch. This is an example of what the metal doughnut was missing, and of how reflective metal surfaces can respond to the environment. These generated normals have limitations, though, and should probably be used to provide the texture, not the overall geometric shape. The second video below shows why.
It tests two separate normals for the same wall piece; the highly geometric one on the right doesn't generate the correct top, left & right distinction like it should. There might be a workaround, and other tools and software to address this, but it definitely shows the snags that might occur in mass normal map generation for an entire game.
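I don't know Laigter's internals, but a common approach to generating normals from flat art, sketched below, treats pixel brightness as a height field and derives normals from its gradients. The `strength` knob is my own hypothetical parameter, not Laigter's. The sketch also shows the limitation above: gradients only see local brightness changes, so they recover surface texture, never the large-scale geometry of something like an angled wall face.

```python
import numpy as np

def normals_from_luminance(image_rgb, strength=2.0):
    """Approximate height-from-luminance normal generation: treat
    brightness as a height field and build tangent-space normals
    from its per-pixel slopes. `strength` exaggerates the bumps."""
    img = np.asarray(image_rgb, dtype=float)
    height = img.mean(axis=2) / 255.0      # crude luminance as height
    dy, dx = np.gradient(height)           # slope per pixel
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx, ny, nz], axis=2) / length[..., None]
    # Encode -1..1 back into the familiar purple-blue 0..255 map.
    return np.rint((n + 1.0) * 127.5).astype(np.uint8)

# A uniformly gray tile has no brightness changes, so every pixel
# comes back as the flat normal (128, 128, 255) -- the method sees
# shading variation, not the shape the artist had in mind.
flat = np.full((4, 4, 3), 120, dtype=np.uint8)
nmap = normals_from_luminance(flat)
```

That blindness to intended geometry is exactly the snag with the geometric wall piece: two faces painted the same brightness produce identical normals, no matter which way they actually face.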
Blender (and other free open source software)

It's been about three months since I delved back into 3D graphics and game design. Three months is somewhat arbitrary, since this is a hobby I spend a few hours on at a time, when I feel like it. I've since familiarized myself with several tools and resources, and I'm particularly impressed with the quality of the free & open source programs and assets that are available. I'll likely make regular use of these favorite picks so far:
Below are some images of my progress in Blender. I can't gush enough about the power of this software suite. It certainly lacks in some areas, but the momentum of its development is hard to ignore.

It's been my intention to create some promotional/title graphics for Faldon. Sadly, progress has ground to a slow crawl. I find myself frequently needing to learn how to do something, then going off on tangents after an initial web tutorial search. Usually, I discover that I'd done something completely wrong, or I get distracted by another tutorial for something unrelated. While this is bad for finishing what I set out to complete, I've picked up so many ideas, techniques, and practices that I simply muddled through in my past.

Work In Progress - The top image is part of a promotional & title image rendered in Lightwave many years ago. Below it is the rough draft redone in Blender. Obviously there's not a huge distinction between the two, but this is my workshop for getting immersed in media creation. The 100 mistakes and learning moments don't shine through.

CC0 / Public Domain, and Navigating Licensing Pitfalls

Continuing on with my freebie theme... there is an absolute treasure trove of free assets (models & textures) available. This is made possible by a combination of generous individuals, asset marketplaces offering promotional freebies, and museums & organizations providing free public domain 3D models of artifacts and architecture. In an earlier post I mentioned that I wanted to accomplish more with less time (who wouldn't?), especially for background elements like trees, plants, or rocks. I didn't realize the richness of the materials available. The downside might be that these models are ubiquitous and overused. My approach would probably be to treat them as secondary elements, reference photos, textures, or just a starting mesh to build upon.
Another very important thing to consider is paying close attention to the license of "free" materials. The differences between CC0, CC with attribution, free-for-educational-use, and the dreaded viral GPL-type licenses are stark. If the intention is to own or sell the derived work (or at least have the option), it's important to pay close attention to, understand, and document the material sources, and to question whether they come from a reputable source. My personal preference is to stick to CC0 or public domain materials. I don't have reservations about using, for example, a stone relief carving that was made 1,000 years ago and published by a museum. But having to keep track of sources and attribute the authors is not ideal. They certainly deserve the credit, but the workflow is simpler if I can keep my asset library unquestionably my own.

Foundations for Media Consistency

This section is going to be painfully obvious to most adept graphics artists, or anyone seriously considering a creative project with thousands of moving parts. As I look through my old work, I'm noticing how inconsistent it is. It was made several years ago, and created over a long period of time. The folder structures are pretty sloppy; I rarely saved scenes and textures after I was happy with the renders. The 3D models themselves were made with a single purpose in mind, and scene lighting varied between individual renders that eventually ended up in the same game. The sizes of the 3D models vary incredibly: I have a 6 ft tall apple and a 2 inch tall tree. This was no problem at the time, because I'd just resize everything and estimate the proportions however I saw fit. My 3D models are also somewhat sloppy for my current taste. I left behind quite a few stray points/vertices and edges, and mixed tris, quads, and N-gons. And I didn't think I'd reuse them for anything else.
So, if I was going to render from just one angle, I didn't care much about the other side, or consider that I might want to use the model somewhere else for that matter. After thoroughly critiquing my past practices, I thought it would pay off later on if I stuck to some basic principles moving forward:
How I establish consistency actually dovetails into another task I created for myself: establishing a generic isometric scene in Blender. Not only a generic scene, but defining consistent proportions and other conditions like lighting. I really like isometric graphics. Whether or not I work on Faldon or something similar, I have a soft spot for the style. I started by sketching out some of the basics for the scene, and ideal proportions.

Initial sketch - I knew I wanted to create a basic starting scene with some reference props and measurements to keep everything relative. All of the original Faldon walls and floors were 2" squared (in 3D), rendered to a 64x32 base unit. As I further sketched things out, I decided I'd prefer 1 meter squared, 128x64 tiles. I can render them smaller and re-proportion things if they're too small to see for Faldon.

In Blender - I have made the generic wall and floor patterns. Next, I'd like to add in a few other reference props to keep sizing and animations in line. I'm sure more will come to mind, but I'll need a standard human size, and then templates for things like tables, chairs, and doorways, just to name a few.
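For 2:1 diamond tiles like these, there's a common convention for mapping grid cells to screen pixels, sketched below. The function name is mine, and Faldon's actual mapping may differ; this just shows why the 128x64 proportion keeps everything on a clean half-tile lattice.

```python
# Hypothetical helper: place a 1 m-squared floor tile at grid cell
# (i, j) on screen, for 128x64 diamond tiles (the 2:1 ratio above).
TILE_W, TILE_H = 128, 64

def tile_to_screen(i, j):
    """Screen position of the diamond's top corner for grid cell (i, j)."""
    x = (i - j) * (TILE_W // 2)   # east/west: half a tile width per cell
    y = (i + j) * (TILE_H // 2)   # depth: half a tile height per cell
    return x, y

# Stepping one cell along either grid axis moves exactly half a tile
# on screen, so neighboring diamonds interlock with no gaps.
print(tile_to_screen(0, 0))  # (0, 0)
print(tile_to_screen(1, 0))  # (64, 32)
print(tile_to_screen(0, 1))  # (-64, 32)
```

The same lattice sets the sizing for props: a standard human, table, or doorway modeled against the 1 m tile will land on screen in consistent proportion to the floor.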