Monday, 13 April 2015

Blender: Context Part III - Normals and Normal Transformation Orientation

In the Context Part II tutorial we looked at four of the five transformation orientations. Today we look at the final one: "Normal." To do that, we first need to talk about what the word "normal" means in the world of 3D modeling.

What is a Normal?

Unlike its meaning in everyday language, in 3D software the term "normal" comes from geometry, where it has a rather complicated, formal definition. For the purposes of 3D software, though, Blender's glossary provides a suitable and somewhat more comprehensible explanation: a normal is the normalized vector perpendicular to the plane that a face lies in. Each face has its own normal.
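For the mathematically curious, that definition can be sketched in a few lines of plain Python (no Blender needed). This is just an illustration of the idea, not Blender's internal code: for a triangle, the cross product of two edge vectors gives a vector perpendicular to the face, and scaling it to length 1 "normalizes" it.

```python
import math

def triangle_normal(a, b, c):
    # Edge vectors from vertex a to the other two vertices
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    # The cross product is perpendicular to both edges,
    # and therefore to the plane the face lies in
    n = [e1[1] * e2[2] - e1[2] * e2[1],
         e1[2] * e2[0] - e1[0] * e2[2],
         e1[0] * e2[1] - e1[1] * e2[0]]
    # "Normalized" simply means scaled to a length of 1
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

# A triangle lying flat in the XY plane has a normal pointing straight up Z
print(triangle_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```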

Granted, that still doesn't sound very easy, but in practice it is. As with many things, it's actually easier to show you rather than describe it to you, so let's take a look.

In Object mode, add a new UV Sphere to the scene. You can do this by clicking its button on the toolshelf's "Create" tab or via the menu with Add > Mesh > UV Sphere.

Immediately after adding it, use the Most Recent Actions section of the toolshelf to reduce the number of segments and rings to 16 and 8 respectively. This will give us a lower resolution mesh to look at which will make things a bit easier to see.

Now switch into Edit mode (Tab) and on the 3D View pane's properties panel look for the section called "Mesh Display." If the section is currently hidden, expand it by clicking the little triangle beside its name.

About halfway down this section you'll see three little clickable boxes with the label "Normals" above them and a size value beside them. Click the third of the three boxes, called "Display Face Normals as Line" (you might want to adjust the size value to something a little higher, which will make the lines longer), and suddenly your sphere looks like a pincushion.

Each face on your sphere now has a little blue line extending outward from the center of the face, perpendicular to the plane of that specific face's surface. Every single face of every single mesh, no matter what its shape is or how many edges it has, has a normal.

The simplest of all possible faces is a triangle (usually just called a "tri") defined by the position of three vertices and their connecting edges. One nice advantage of triangular faces is that they're always perfectly flat, so the normal is extremely easy (which means fast) for software to calculate. This is the reason that everything in the Opensim world must be made using meshes with only triangular faces (that includes all prims, your avatar, and sculpties too). It helps to speed up your graphics processing.

This also means anything we want to import into Opensim has to be converted into that form, so our Blender sphere, which is made mostly of "quads" (four-sided faces) can't be imported in-world without making this conversion. Happily, Blender can do that automatically for us (in several ways) so we don't have to worry about it when we're modeling -- we just need to remember to export it with all faces converted to tris first. In Blender it's also possible to have faces with even more edges (usually just given the generic name "n-gon") but we usually try to avoid that when possible by breaking up an n-gon into tris or quads.

With a quad or n-gon, it's very common for the face not to be perfectly flat, so to calculate the normal for that face Blender has to do some significantly more complex math to average out that curvature into a single perpendicular line. Our sphere has only 128 faces, and 32 of them are already tris, so it's child's play for Blender to calculate the normals for the other 96. In a complex scene (like displaying an entire region full of objects in-world) the number of faces can be enormous which, as I said above, is why Opensim forces us to use tris just so our graphics cards don't have to handle millions of extra complex calculations every second to display the scene to us.
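One common way software averages a non-flat quad or n-gon into a single perpendicular line is "Newell's method," where every edge of the polygon contributes a little to the final normal. This is a sketch of that general idea in plain Python, not necessarily the exact algorithm Blender uses internally:

```python
import math

def newell_normal(verts):
    # Newell's method: a robust normal for a (possibly non-planar) polygon.
    # Each edge contributes to the running totals; summing over all edges
    # averages out any curvature into one perpendicular direction.
    nx = ny = nz = 0.0
    for i in range(len(verts)):
        x1, y1, z1 = verts[i]
        x2, y2, z2 = verts[(i + 1) % len(verts)]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# A slightly bent quad: one corner lifted off the XY plane
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0.2), (0, 1, 0)]
print(newell_normal(quad))  # points mostly up Z, tilted a little by the bend
```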

Because we're working in 3-dimensional space, it's also important to realize that each face has two possible sides: inside and outside. By default, Blender assumes that when we created our sphere it was a solid, round object where the surface is facing outward. This is just like rezzing a sphere (or any prim) in world. The normal of a face is always shown for its outside even though in theory it is an infinitely long line that goes straight through the face.

I'm sure you've all, at one time or another, scaled a prim up to a large size (without making it hollow) and found yourself standing inside it and not seeing anything. That's because in-world a face can only have one side: an outside. The inside face is invisible. To be able to stand there and see a surface you have to hollow the prim. What that actually does is add a duplicate mesh to the prim that's a bit smaller and where the direction of the faces has been flipped to make them inwardly facing so you can see them.

By default, Blender allows you to see both sides of a face and simply shades them a little differently. This is typical of most advanced 3D software where there's plenty of time (and CPU horsepower) available to do that. Real-time environments like Opensim don't have that luxury so they only display the "outside" face (even if it's pointing inward) which cuts the work of the graphics card in half. Luckily Blender also gives us complete control over which direction a face is pointing so we have the ability to ensure that the "outside" is facing the direction we want it to be.

Vertices also have normals. You can see these for your sphere by clicking the first of the three little boxes in the Mesh Display section. They're displayed using a darker blue line but you might want to toggle the face normal lines back off by clicking the third box again.

The vertex normals are calculated by averaging the normals of all adjacent faces. With a nice, smooth, symmetrical object like our sphere they're all pointing nicely outwards in much the same pincushion effect that our face normals showed.

Blender doesn't have a button to show you edge normals but they exist too. An edge's normal is the average of the normals for the two vertices that define the edge.
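Both of those averaging rules are simple enough to sketch in plain Python. Again, this is an illustration of the idea described above rather than Blender's own code:

```python
import math

def normalize(v):
    # Scale a vector back to unit length
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def vertex_normal(adjacent_face_normals):
    # A vertex normal is the average of the normals of every face
    # touching that vertex, re-normalized to unit length
    summed = [sum(n[i] for n in adjacent_face_normals) for i in range(3)]
    return normalize(summed)

def edge_normal(vn_a, vn_b):
    # An edge normal is the average of its two vertex normals
    return normalize([(vn_a[i] + vn_b[i]) / 2 for i in range(3)])

# Two faces meeting at a 90-degree corner: one facing up, one facing out.
# The shared vertex's normal splits the difference at 45 degrees.
up = (0.0, 0.0, 1.0)
out = (1.0, 0.0, 0.0)
print(vertex_normal([up, out]))
```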

If you'd like to see what a more complex object's normals look like, try doing the above section again but add the mesh monkey "Suzanne" to your scene (Add > Mesh > Monkey) instead of the sphere.

What Does a Normal Do?

Now that we've seen what a normal is, it's logical to wonder what it does. Why do we care about normals at all?

For Opensim I've already (obliquely) given half of the answer: the outside normal of a face is the one that is visible in-world. It's the "surface" that your textures get applied to that allows you to see an object in your viewer. The actual world in Opensim software is really just made up of wireframe meshes where each face has a texture assigned to it. That data is sent to your viewer which, with the help of your graphics card, "renders" this into a screen image. If you're curious about what the "real" world looks like, fire up your viewer, log in, then press the viewer's hotkey combination to display wireframe (usually Ctrl + Shift + R toggles between normal and wireframe). Welcome to The Matrix!

Part of my Hedonism dance club shown as a blend from
wireframe view (left) to normal view (right) in my viewer

I say "half the answer" because there's a little bit more to it than that. Without getting into the specific details, there's a little extra information associated with normals to indicate whether a face should be displayed as "smooth" or "flat." By default Blender creates objects with "flat" normals on all faces, which is why our sphere doesn't look nearly as smooth at the moment as the sphere you're used to seeing in-world. We can change that though.

Switch back into Object mode (Tab) and on the Tools tab of the toolshelf, in the Edit section, you'll see a pair of buttons to set the shading (globally) for a selected object to either "Smooth" or "Flat". If you click "Smooth", all of your sphere's face normals will be told to try to appear smooth and your sphere will now look pretty close to the way an in-world sphere appears. It will still be a little bit less smooth because our sphere has fewer faces than a prim sphere.
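Under the hood, the difference between the two settings comes down to which normal the renderer uses when lighting each point of a face. A minimal sketch in plain Python (the function name and parameters are mine, purely for illustration): flat shading uses the single face normal everywhere, while smooth shading blends the vertex normals across the face, which is what makes the surface appear rounded.

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def shading_normal(face_normal, vertex_normals, weights, smooth):
    # Flat shading: every point on the face is lit with the face normal,
    # so the whole face gets one uniform angle to the light (faceted look).
    if not smooth:
        return face_normal
    # Smooth shading: blend the vertex normals by the point's position
    # within the face (weights are barycentric coordinates summing to 1),
    # so the lighting angle changes gradually across the surface.
    blended = [sum(w * n[i] for w, n in zip(weights, vertex_normals))
               for i in range(3)]
    return normalize(blended)
```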

Blender allows you to set an object's normals globally, as we've just done, which changes every face of the object to be either smooth or flat. In edit mode you can do this selectively to only portions of an object which is what I did to make the upper half of the 3rd sphere in the above picture appear smooth and the lower half appear flat.

This extra data about normals is preserved when you export it to Collada format for upload, so what you see in the screen in Blender will be what you see in your viewer when you rez the object in-world.

Once you've tried playing with this a bit for yourself, set your sphere back to "Flat" shading and return to Edit mode.

Normal as a Context for Transformation Orientation

I was careful to say "in world" in the above section about why we would be interested in normals. That's because when modeling, normals take on additional significance and usefulness. One of these extra uses is as a context for transformation orientation.

Switch to face selection mode (you have to be in Edit mode to be able to do this) and select a single face somewhere on your sphere. Then switch your transformation orientation to "Normal".

You'll see that your manipulator widget now changes to position itself at the center of your selected face with the z-axis pointing in the exact direction of the normal and the x- and y-axes pointing along the surface of that face.
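In other words, the widget builds a little local coordinate frame out of the normal. A plain-Python sketch of how such a frame can be derived (exactly how Blender picks which in-plane direction becomes x is an internal detail; this just shows that once you fix z to the normal, two perpendicular in-plane axes follow from cross products):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frame_from_normal(n):
    # Build a local frame whose z-axis is the face normal.
    # Pick any helper vector not parallel to n, then use cross
    # products to derive two axes lying in the face's plane.
    n = normalize(n)
    helper = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (1.0, 0.0, 0.0)
    x = normalize(cross(helper, n))
    y = cross(n, x)
    return x, y, n

# A face pointing straight along world X gets in-plane axes along Y and Z
x, y, z = frame_from_normal((1.0, 0.0, 0.0))
```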

Any transformations you now do will use this as the frame of reference. While this might not seem all that exciting when working with a sphere, I can assure you that being able to manipulate things rapidly and easily using this as the orientation is incredibly useful when working with almost any sort of mesh more complex than a basic prim shape.

When you select more than one face, the orientation used will be the average of the normals of all selected faces.

You can also switch to edge selection or vertex selection mode and manipulate them along their normals as well, either individually or with multiple edges or vertices selected.

You might want to try this with the Suzanne monkey mesh too, just to get a bit of a feel for it as you begin to experiment.

Normal Transformation Orientation in Object Mode

For the purposes of Opensim content creation, the Normal transformation orientation context is only applicable in edit mode. It becomes almost meaningless in Object mode when you have multiple objects selected, and when you want to work with a single object you're almost always going to switch into edit mode anyway.

Feel free to experiment with it, though, and if you discover a use for it in Object mode when working with mesh objects please let me know because I sure haven't.

Proportional Editing Context

When I originally planned this set of posts about context I had intended to include an introduction to proportional editing because it is also a tool that can be used to manipulate the way that transformations are handled.

After having made a few attempts to express it in "novice-level" terms, I realize that it's something that really can't be done without assuming a more advanced level of modeling knowledge; so at some point in the not-too-distant future I will do a separate "moderate level" tutorial for it instead. It's typically needed only for more complex modeling anyway, so a new user won't be hampered by waiting a while to learn about it. For the time being, simply leave it at its default "disabled" setting and add it to your list of "things to learn about" as your proficiency increases.


  1. THANK YOU. I had never really grasped the relationship between my lovely quads in Blender to the mess of tris in OpenSim. I'll be changing my nooby workflow immediately :) and converting before I export.

    1. If at all possible do all your work in Blender with quads since those are the only faces that loopcut cleanly and are also far easier to later decimate if you need to reduce your mesh resolution.

      As your final step prior to export you have 3 options:

      - select the object, switch to edit mode, then Mesh > Faces > Triangulate Faces

      - when exporting to Collada (.dae) format there's an area on the left with Collada export options, one of which is a "Triangulate" check box

      - at the bottom of your modifier stack add a triangulate modifier

      The first two methods triangulate the faces using the shortest diagonal, which isn't always the optimal method.
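      For anyone curious, the "shortest diagonal" rule is easy to sketch in plain Python (the names here are mine, purely for illustration, and this isn't Blender's actual code): a quad has two possible diagonals, and the split happens along whichever one is shorter.

```python
import math

def dist(a, b):
    return math.sqrt(sum((a[i] - b[i]) ** 2 for i in range(3)))

def triangulate_quad(v0, v1, v2, v3):
    # Split the quad into two tris along whichever diagonal
    # (v0-v2 or v1-v3) is shorter
    if dist(v0, v2) <= dist(v1, v3):
        return [(v0, v1, v2), (v0, v2, v3)]
    return [(v0, v1, v3), (v1, v2, v3)]
```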

      Using the modifier allows you to choose between several methods depending on what's best for your model and also allows you to specify a different method for quads vs n-gons.

      In theory you can check the export option to apply all modifiers prior to the creation of the .dae, but in practice it seems to me that they aren't always reliably applied (though maybe it's my imagination?), so I've gotten into the habit of first saving my .blend file, then manually applying the modifier stack for all objects I'm exporting, then exporting to Collada.