Arcane University:Mesh Export to NIF

The Beyond Skyrim Wiki — Hosted by UESP
{{Trail|3D Art}}[[Category:Arcane University-3D Art-Mesh Export to NIF| ]]

In order for your mesh to be used in Skyrim, it must be in [[AU:NIF Data Format|.nif]] format. There are several ways to do this, depending on what 3D software you use:

* Any 3D program (see requirements)
** CK-CMD
** Outfit Studio
* Blender
** Blender Niftools Addon
** PyNifly
* 3DS MAX
** 3DS Max Nif Plugin
A number of these methods do not directly support Special Edition nifs. However, this can be circumvented using [https://www.nexusmods.com/skyrimspecialedition/mods/4089/ SSE Nif Optimizer] or [https://www.nexusmods.com/skyrimspecialedition/mods/23316 Cathedral Assets Optimizer], both of which can convert LE meshes to SE format and the other way around.
  
== CK-CMD ==

[[AU:CK-CMD for Skyrim|CK-CMD]] is an independent command-line tool you can use to, among many other things, convert FBX files to NIF and the other way around. You can download it on its [https://github.com/aerisarn/ck-cmd GitHub page]. The advantage of this method is that it allows exporting from any 3D program that can export to FBX. Limitations: does not directly support SE nif files.

== Outfit Studio ==

The process for exporting using Outfit Studio is documented on [[AU:Outfit Studio Export|Outfit Studio Export]]. The advantage of this method is that it allows exporting from any 3D program that can export to OBJ. The disadvantage is that you have to work with intermediate files, which makes it harder to quickly test small edits in-game. Limitations: the OBJ format cannot store vertex weights or vertex colors, so any rigging will need to be done within Outfit Studio.

== General NIF structure ==

NIF (NetImmerse/Gamebryo) files in Skyrim hold the 3D model information necessary to display objects in the game. When you open a NIF file in NifSkope, every NiTriShape represents one 3D object instance. A single NIF file can contain multiple such instances, and they can reside under various parent nodes.

A NiTriShape can have the following child properties, among others:

* NiTriShapeData is the actual mesh data.
* BSDismemberSkinInstance holds the information necessary to handle limbs being dismembered from the base mesh. These nodes hold all skinning/rigging data and tell the mesh how to deform during animation.
* BSLightingShaderProperty holds lighting and texture information.
* NiAlphaProperty tells the game how to handle transparency in the mesh.

NifSkope NIF branch hierarchy example (object window):

[[File:branches1.PNG]]
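As a quick visual aid, the typical hierarchy described above can be sketched as a toy tree. The node names follow this article's example; this is an illustration of the structure, not a parser for real NIF files.

```python
# A toy representation of the typical NIF branch hierarchy described above.
# Node names follow the article's example; this does not read real NIF files.
nif_tree = ("BSFadeNode", [
    ("BSXFlags", []),                    # activates the collision
    ("bhkCollisionObject", []),          # holds the collision mesh
    ("NiTriShape", [                     # one "body" of the mesh
        ("NiTriShapeData", []),          # the actual mesh data
        ("BSLightingShaderProperty", [   # shader type, flags, controllers
            ("BSShaderTextureSet", []),  # the texture slots
        ]),
        ("NiAlphaProperty", []),         # transparency handling
    ]),
])

def render_tree(node, depth=0):
    """Return the hierarchy as indented lines, NifSkope-style."""
    name, children = node
    lines = ["  " * depth + name]
    for child in children:
        lines.extend(render_tree(child, depth + 1))
    return lines

for line in render_tree(nif_tree):
    print(line)
```

A real mesh may contain several NiTriShapes side by side under the root node; the sketch shows only one.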
== Blender Niftools Addon ==

You can export directly from Blender using the [[AU:Blender Niftools Addon|Blender Niftools Addon]]. Which version you should use (and how you export) depends on which version of Blender you are using. The addon exports directly from Blender to a full NIF, but you will need to merge the created NIF with separately created collision using [[AU:Niftools/Chunkmerge for Skyrim|ChunkMerge]]. The advantage of this method is that you do not need any intermediate files. Limitations: no collision handling, and no direct support for SE nif files. Both limitations can be worked around.

=== Blender 2.7x ===

The process for direct export from Blender 2.7x is documented on [[AU:Blender 2.7x Export|Blender 2.7x Export]].

=== Blender 2.8+ ===

The process for direct export from Blender 2.8 or later is documented on [[AU:Blender 2.8+ Export|Blender 2.8+ Export]].

== PyNifly ==

[[AU:PyNifly for Skyrim|PyNifly]], like the Blender Niftools Addon, allows direct import and export of nif files. Unlike the Blender Niftools Addon, it supports SE nif files and handles simple (non-MOPP) collisions.

== 3DS MAX Nif Plugin ==

The process for direct export from 3DS MAX using a purpose-made plugin is documented on [[AU:3DS MAX Export|3DS MAX Export]]. It allows direct export from 3DS MAX to a full NIF, so you do not need any intermediate files. The disadvantage is that the plugin does not work with every version of 3DS MAX, and therefore needs to be updated when you update 3DS MAX. Limitations: does not directly support SE nif files.

== Setting up the NIF-specific shader properties and texture sets ==

Each BSLightingShaderProperty references a texture set with the following texture slots:

{| class="wikitable"
! Slot !! Name !! Use !! Naming convention
|-
| 1 || diffuse map || diffuse (RGB) + transparency (alpha) || d.dds
|-
| 2 || normal map || normal (RGB) + greyscale specular (alpha) || n.dds
|-
| 3 || glow map || emissive (RGB) / subsurface color (RGB) || sk.dds
|-
| 4 || bump map || greyscale height / parallax (RGB) || g.dds
|-
| 5 || environment map / cube map || environment (mirror) / cube map (RGB) || e.dds
|-
| 6 || material map || environment / reflection mask (RGB) || m.dds
|-
| 7 || - || inner layer diffuse (RGB) + inner layer depth (alpha) || -.dds
|-
| 8 || subsurface tint map || subsurface tint, outer colour || s.dds
|-
| 9 || - || - || -
|}
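For quick reference, the slot/suffix conventions from the table above can be expressed as a small lookup. The slot numbers and suffixes are taken from the table; the helper name is ad hoc.

```python
# Texture slot conventions from the table above: slot number -> (name, suffix).
# Slots 7 and 9 have no suffix convention listed, so they are omitted here.
TEXTURE_SLOTS = {
    1: ("diffuse map", "d.dds"),
    2: ("normal map", "n.dds"),
    3: ("glow map", "sk.dds"),
    4: ("bump map", "g.dds"),
    5: ("environment map / cube map", "e.dds"),
    6: ("material map", "m.dds"),
    8: ("subsurface tint map", "s.dds"),
}

def slot_suffix(slot):
    """Return the file-name suffix convention for a texture slot, or None."""
    name, suffix = TEXTURE_SLOTS.get(slot, ("unused", None))
    return suffix

print(slot_suffix(2))  # suffix convention for the normal map slot
```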
== See Also ==

* To review the specifics of the NIF data format used for meshes, check [[Arcane_University:NIF Data Format|NIF Data Format]].
* To review the specifics of the DDS data format used for textures, check [[Arcane_University:DDS Data Format|DDS Data Format]].
  
=== Bethesda Shader Properties ===
[[Category:Arcane University-3D Art| ]] [[Category:Arcane University-Method| ]]
==== Environment/Reflection ====

Requirements: a cube map, also called an environment map.

The relevant BSLightingShaderProperty shader types in Skyrim are:

* '''Default Shader'''
* '''Environmental Shader'''

[[File:shading1.PNG]]
 
 
 
==== General Structure of a Mesh ====

Make sure you visualize the NIF as a tree diagram. Note that there is a hierarchical order, similar to shoulder -> arm -> hand -> finger. In theory your mesh could be sorted differently, but most meshes look like this. If you see slight variations, don't worry too much; the system behind them is the same.

The top level is most commonly a BSFadeNode (static objects), a NiNode (skinned/rigged meshes), or a BSLeafAnimNode/BSTreeNode (plants). The BSXFlags entry below it activates the collision, and the bhkCollisionObject holds the collision mesh.

The first sublevel contains the NiTriShapes, the "bodies" of the mesh. There can be as many of these as you want.

Each NiTriShape in turn has a second sublevel, which contains its NiTriShapeData and its BSLightingShaderProperty. This is where we set up the way the mesh is rendered: the shader type, the shader flags, and the controllers.

If you double-click the BSLightingShaderProperty, you find the third sublevel, the texture set, which contains your texture slots. For detailed information on DDS-format texture files, see [[Arcane_University:DDS Data Format|DDS Data Format]]; for the texture sets and texture slots of shaders, see the texture slot table above.
 
 
 
 
 
Meshes can be rendered in Skyrim according to their BSLightingShaderProperty (covered in this article) or their BSEffectShaderProperty. The most commonly used BSLightingShaderProperty shader types are the Default shader type ("0") and the Environment Map shader type ("1"). You set the shader type in the BSLightingShaderProperty. Note that each NiTriShape can have only one shader type, with a small range of optional shader flags and specific controllers (see below).

If needed, you can simply combine multiple NiTriShapes in the same NIF. For example, it can make sense to split your mesh into subsections by material (metal, leather) and export each as a separate NiTriShape (first sublevel) into the same mesh. Consider a leather armor: the organic leather/cloth parts, which usually absorb light to a high degree, would use the Default shader type, while the metal elements, which reflect light more strongly, profit from the cube maps used by the Environment Map shader type.
 
 
 
 
 
==== Available Shader Flags, Controller Settings, and Texture Slots ====

The rendering of a NiTriShape is controlled by:

* the BSLightingShaderProperty shader type
* the BSShaderFlags
* the controller settings
* the BSTextureSet

Each shader type has its own unique requirements: it needs specific texture slots to be filled, depending on its character.
 
 
 
 
 
 
 
==== Most Common Shader Types ====

'''Default (0) Shader Type'''

* Use: assets with color, height, and specularity.
* Common examples: rocks, clothes, furniture, stones, plants; sometimes metals, if the specularity is done in a way that can fake metallic reflections.
* Necessary slots: 1) base color/diffuse map; 2) normal map with specularity in the alpha channel (optional).
* Necessary flags: those marked green above under shader flags, for export; plus whatever else you need (vertex paint or specularity, for example, if you have learned how to apply these).

The textures: 1) the diffuse (base color) map goes in slot 1, and 2) the normal map, with optional specularity in the alpha channel, in slot 2 of the BSTextureSet.

'''Environment Map (1) Shader Type'''

In addition to the slots above, this shader type uses the environment/cube map (slot 5) and the environment/reflection mask (slot 6); see the texture slot table above.
 
 
 
=== Texture Types and the DXT format ===

*'''What is the DDS file format?'''

*DXT is a set of compression algorithms, or codecs, applied to raster/bitmap images. The set is also known as S3 Texture Compression (S3TC). They all convert 4×4 blocks of pixels to either 64 bits or 128 bits, depending on the codec. All are lossy algorithms. The original codecs were created by S3 Graphics, but many non-encumbered alternatives exist now. There are many different codecs: DXT1, DXT3, DXT5, etc. I'll talk more about them in a moment. DDS (DirectDraw Surface) is an image file format, rather like a container for storing image data compressed using one of the DXT codecs. It was developed by Microsoft and introduced with DirectX 7. Together, a DDS file containing data organized using a DXT codec creates an image file that can be used in most graphical applications, at least those that support DDS natively or via a plug-in. DDS files are very common in the game industry, where the advantages in loading speed and video memory savings outweigh the disadvantages.
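To make the block arithmetic concrete, here is a minimal sketch; 64-bit blocks correspond to DXT1, 128-bit blocks to DXT3/DXT5.

```python
# DXT compresses fixed 4x4 blocks of pixels into a fixed number of bits.
PIXELS_PER_BLOCK = 4 * 4

def bits_per_pixel(bits_per_block):
    """Effective bits per pixel for a given compressed block size."""
    return bits_per_block / PIXELS_PER_BLOCK

print(bits_per_pixel(64))   # 64-bit blocks (DXT1)
print(bits_per_pixel(128))  # 128-bit blocks (DXT3/DXT5)
```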
 
 
 
*'''How does it work?'''
 
*DXT sees images as collections of 4×4 blocks of pixels called “texels”. For every texel, DXT selects two colors from the texel, each determining one end of a color range of 4 colors. The middle two colors are interpolated. The sixteen pixels of the texel are then assigned a 2-bit index (0-3) that maps them to the color range. The two representative colors are stored as 16-bit RGB values (5:6:5). So each texel requires 2×16 bits for the colors, plus 16×2 bits for the indices, giving a total of 64 bits for each texel, which equates to 4 bits per pixel. So for any set of images having the same dimensions, compressed size will always be the same.
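The 64-bit texel budget and the 16-bit 5:6:5 color packing described above can be checked directly. This is a sketch of the bit layout only, not a full encoder (picking the two representative colors is the hard part of a real codec).

```python
# Per-texel bit budget for DXT color data, as described above.
color_bits = 2 * 16   # two representative colors, 16-bit RGB 5:6:5 each
index_bits = 16 * 2   # sixteen pixels, each with a 2-bit index (0-3)
total_bits = color_bits + index_bits

def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit 5:6:5 value (lossy)."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# 64 bits per 4x4 texel works out to 4 bits per pixel.
print(total_bits, total_bits / 16)
```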
 
 
 
*'''What if the image has an alpha channel?''' How DXT handles this depends on the codec used. In DXT5, the alpha channel is encoded using a second set of 64 bits for each texel. DXT5 stores alpha information in a way that is almost the same as color information: two alpha values are selected and used as the extremes for a range of transparency values. The alpha values are represented by 8 bits each, and the range indices by 3 bits each, allowing for gradients of up to 8 shades. DXT3 handles the alpha channel a little differently: each pixel gets 4 bits to represent its alpha, for a total of 16 unique values of transparency. This allows the alpha channel to be represented more accurately than DXT5, but with less subtle transitions. So, for an image with no alpha using DXT1, compression results in an image using 4 bpp (bits per pixel). For an image using DXT3 or DXT5 incorporating an alpha channel, the requirement is 8 bpp. Note that the actual file size will likely be larger, as it will frequently include mipmap data.
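Ignoring mipmaps and file headers, the compressed payload for a given resolution follows directly from those bits-per-pixel figures:

```python
def compressed_bytes(width, height, bpp):
    """Compressed payload size in bytes (no header, no mipmaps)."""
    return width * height * bpp // 8

# A 1024x1024 texture: DXT1 at 4 bpp vs DXT5 at 8 bpp.
dxt1 = compressed_bytes(1024, 1024, 4)  # no alpha
dxt5 = compressed_bytes(1024, 1024, 8)  # with alpha
print(dxt1, dxt5)
```

As the article notes, adding an alpha channel doubles the payload.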
 
 
 
*'''What are the advantages of DDS files and DXT compression?'''
 
*Fast load times. DDS files are ready to be used by the graphics system and can be read straight into graphics memory with little overhead. In situations where many files are being constantly swapped in/out of the graphics unit, this can be a substantial savings and can reduce “lag”, especially with big texture files. Mipmaps can be pre-generated and included in the DDS files. This is another savings in load times and gives the graphic designer control over mipmap construction. More on mipmaps in a moment.
 
*Data remains compressed in video memory. All image formats except DDS/DXT are loaded into graphics memory in a flat, uncompressed state (and uncompressing them takes time and resources). DDS/DXT files remain in their compressed state in video RAM, using special algorithms on the video card to retrieve data on demand. The compression ratio is 6:1 if no alpha channel is used, or 4:1 if an alpha channel is used. This can result in huge video memory savings.
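The quoted 6:1 and 4:1 ratios follow from comparing flat, uncompressed bit depths (24-bit RGB, 32-bit RGBA) against the DXT bits-per-pixel figures:

```python
# Uncompressed RGB is 24 bits per pixel; RGBA is 32 bits per pixel.
# DXT1 uses 4 bpp (no alpha); DXT5 uses 8 bpp (with alpha).
ratio_no_alpha = 24 / 4  # flat RGB vs DXT1
ratio_alpha = 32 / 8     # flat RGBA vs DXT5

print(f"{ratio_no_alpha:.0f}:1 without alpha, {ratio_alpha:.0f}:1 with alpha")
```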
 
 
 
*'''What are the disadvantages?'''
 
*DXT codecs are lossy. What is stored as compressed data is not the same as the original image, and on a fine level it may not even be close. Images with high contrast regions such as print or cartoon-like colors and borders will likely generate visible artifacts, particularly with smaller resolutions. For this reason DXT can be problematic when used with normal maps, though there are work-arounds. I’ll come back to this issue. Never use DDS files for editing and archiving. The compression degrades the original colors. It is not a good format where retention of the true color is critical, particularly in those situations where fine differences may have a large impact. DXT generates images using a 16 bit color depth. Where formats like JPG or PNG use 8 bits per channel per pixel (RGB 8:8:8) for a 24-bit color depth (32 bits with PNGs having an alpha channel), DXT reduces the spectrum down to 5:6:5 bits using an interpolation algorithm to arrive at the new color values. In short, you lose much of the original color range. But many graphic applications in games and sims don’t require tight control over the colors, and a good choice of the color palette can eliminate any visible results of this DXT effect. File size on disk can be large for DDS. For example, a 1024×1024 image with an alpha channel and mipmaps will result in a 1.37MB file. But file size is a relatively small consideration these days. Game/simulation performance matters far more.
 
 
 
*'''Are there any image dimension limitations?'''
 
*DDS/DXT images can be of any dimension expressed in powers of two up to the limits of your application or hardware, but no dimension can be less than 4. So 1024×512 is fine, but 2048×2 is not. Since DXT compression works with texels and each texel is a 4×4 block of pixels, it follows that DXT can’t work with an image with a dimension smaller than 4.
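A quick validity check for the rule above (powers of two, no dimension under 4) might look like this. Note this encodes the article's conservative rule; some tools are more permissive about non-power-of-two sizes.

```python
def valid_dxt_dimension(n):
    """True if n is a power of two and at least 4."""
    return n >= 4 and (n & (n - 1)) == 0

def valid_dxt_image(width, height):
    """Apply the dimension rule to both axes of an image."""
    return valid_dxt_dimension(width) and valid_dxt_dimension(height)

print(valid_dxt_image(1024, 512))  # fine
print(valid_dxt_image(2048, 2))    # rejected: one dimension is under 4
```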
 
 
 
*'''DDS and Mipmaps'''
 
*An object seen up close needs a relatively high-resolution texture so that it doesn't look pixelated. But an object seen in the distance needs much less resolution to look reasonable, and smaller resolutions require fewer resources. A mipmap is a set of pre-calculated versions of the same image progressively decreasing in size until the image reaches a dimension of 1×1. The idea is to have a handy set of smaller images that can be used to increase rendering speed and reduce aliasing effects while allowing the creator to optimize the appearance of the image at each stage. With mipmaps, the render engine can use the smaller, pre-processed version. This speeds things up significantly when you consider a scene of many objects at varying distances from the viewer. With most image file formats, the render engine must generate the mipmaps when the image is loaded. This takes time, and the render engine may not make the best choices for the appearance of the image. DDS files give you control over this process, allowing you to pre-generate mipmaps and store them ready to go alongside the source image in the same DDS file. There is of course a penalty. Mipmaps take up space, increasing data size by 33% beyond that required to store the source image. This increases both size on disk and size in video memory. Unless your primary concern is file size, the benefits of mipmapping more than outweigh the costs.
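The roughly 33% overhead can be verified by summing a full mip chain. This sketch counts DXT5 blocks (16 bytes per 4×4 block, with sub-4×4 levels padded up to one block) and omits the small DDS file header.

```python
def dxt_level_bytes(width, height, block_bytes=16):
    """Bytes for one mip level; dimensions round up to whole 4x4 blocks."""
    blocks_w = max(1, (width + 3) // 4)
    blocks_h = max(1, (height + 3) // 4)
    return blocks_w * blocks_h * block_bytes

def dxt_total_bytes(size):
    """Total bytes for a square DXT5 texture with a full mip chain to 1x1."""
    total, s = 0, size
    while True:
        total += dxt_level_bytes(s, s)
        if s == 1:
            break
        s //= 2
    return total

base = dxt_level_bytes(1024, 1024)  # the base image alone
full = dxt_total_bytes(1024)        # base image plus all mip levels
print(full, full / base)            # the chain adds roughly a third
```

The total for a 1024×1024 DXT5 texture with mipmaps lands close to the file size quoted earlier in this article.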
 
 
 
*'''DXT Flavors: Egad! There are tons of DXT codecs! Which should I use?'''
 
*DXT comes in a lot of flavors. Fortunately most are specialty applications. For common situations you’ll probably need to consider only three: DXT1, DXT3 and DXT5. Let’s briefly overview each one to get an idea when they might be used.
 
 
 
* DXT1 (BC1)
 
** RGB, 4 bits per pixel, no alpha or 1 bit (black or white) alpha
 
** DXT1 is a fixed 8:1 compression ratio
 
If your image does not require an alpha channel, use the no-alpha DXT1. It uses the same compression algorithm as DXT3/5 for color data, and will get you half the file size. I’ve never used this one with an alpha channel. Sometimes referred to as BC1 (Block Compression 1).
 
 
 
* DXT3 (BC2)
 
** ARGB, 8 bits per pixel, explicit alpha
 
** DXT3 is a fixed 4:1 compression ratio
 
DXT3's method for storing alpha is better for files that have clearly delineated alpha regions and values. It may result in banding artifacts if used on images with smooth, blended alpha regions; use DXT5 for those cases. I rarely use DXT3.
 
 
 
* DXT5 (BC3)
 
** ARGB, 8 bits per pixel, interpolated alpha
 
** DXT5 has the same 4:1 compression ratio as DXT3
 
DXT5 is the go-to codec for most images that include an alpha channel. The cost is double the file size of a DXT1 image. If you do not need an alpha channel (and I suggest that you not include an alpha channel unless you absolutely must), use DXT1.
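The selection advice above reduces to a tiny decision rule. The function and parameter names here are ad hoc, chosen just for illustration.

```python
def pick_dxt_codec(has_alpha, alpha_is_smooth=True):
    """Choose a DXT codec following the guidelines above."""
    if not has_alpha:
        return "DXT1"  # half the size of DXT3/DXT5
    # Smooth, blended alpha favors DXT5; hard-edged alpha favors DXT3.
    return "DXT5" if alpha_is_smooth else "DXT3"

print(pick_dxt_codec(False))        # opaque texture
print(pick_dxt_codec(True))         # smooth alpha gradients
print(pick_dxt_codec(True, False))  # hard-edged alpha regions
```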
 
 
 
'''DDS Files: editing and archiving'''
 
* DDS images are just regular images with or without an alpha channel. They can be created in GIMP or Photoshop or whatever you have. To export your image as a DDS file, you’ll need a plugin. NVIDIA offers a free tool for importing/exporting DDS files into Photoshop:
 
[https://developer.nvidia.com/nvidia-texture-tools-adobe-photoshop]
 
* A plugin for GIMP can be found here: [http://code.google.com/p/gimp-dds/]
 
Learning to use these plugins is beyond the scope of this article, but I have a few observations:

* I've sometimes worked with images that simply refuse to export to DXT1; it seems to depend on the nature of the image.
 
* Exporting to DXT5 tends to be much faster and less problematic.
 
* Don’t include an alpha channel if you don’t absolutely need one. An alpha channel will double the file size of your DDS file and significantly add to game or simulation video processing workload. I commonly find images created by others that include an unnecessary, fully-transparent alpha channel. Don’t do that.
 
* Since DDS files are lossy and rather badly so, you should not use them as an archive format. Don’t edit DDS files if there is any possibility of using a non-lossy source, especially if the DDS file is serving as a normal map. Always begin your edits by opening a file saved in a non-lossy format. If you must edit a DDS file as your only option, first save the file in a non-lossy format, thus preserving at least that version as a standard, and create all subsequent versions from that source. If you do not, subsequent saves and editing will quickly corrupt the image into something unusable.
 
 
 
'''DXT and Normal Maps'''
 
A normal map stores information that allows lighting calculations to be done at the per-pixel level rather than interpolating between vertices. This can make lighting much smoother and more graduated, or it can significantly change how lighting affects a given point. The common use of a normal map is to tweak surface data to give the illusion of extra detail. It's not real detail; it doesn't actually change the geometry, but it does change the lighting and can give very realistic results without adding polygonal complexity.

Normal map data is essentially a normal vector for each pixel. The normal gives an indication of the "facing" of the pixel. Normal maps use the RGB color space to store the data needed for the effect: the 24 bits for each pixel are used to encode the pixel's vector. Given that a color in this context is really a vector, changing a color can significantly alter the scale and direction of that vector, thus altering the effect at any given location. Since DXT compression does not exactly preserve the colors, DXT images can make a real mess of normal maps.
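The color-to-vector mapping can be sketched as follows, assuming the usual convention that each 8-bit channel maps linearly onto [-1, 1]:

```python
def decode_normal(r, g, b):
    """Map 8-bit RGB to a tangent-space normal with components in [-1, 1]."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

def encode_normal(x, y, z):
    """Map a normal vector back to 8-bit RGB (lossy: 256 steps per axis)."""
    return tuple(round((c + 1.0) / 2.0 * 255.0) for c in (x, y, z))

# The typical "flat" normal-map blue decodes to (almost exactly) straight up.
nx, ny, nz = decode_normal(128, 128, 255)
print(nx, ny, nz)
```

This is why undisturbed areas of a normal map look uniformly blue, and why a small shift in color after DXT compression translates directly into a shifted normal direction.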
 
 
 
You can still use DXT compression for normal maps. In fact it’s common to do so in games and sims, where resolutions are fairly low and resources come at a premium. Just be aware that the end result is likely to be rather mangled. For low-res situations it’s usually not a big problem. But consider saving the normal map in a non-lossy format. You’ll lose the load-time and video RAM advantages of DDS, but you can compensate by going to a quarter-size resolution. In general for normal maps, you’ll get better appearance with a non-lossy format one-quarter the resolution of the DXT image. For example, a 512×512 PNG normal map will likely look better than a 1024×1024 DDS/DXT normal map.
 
 
 
[[Category:Arcane_University]]
 

Latest revision as of 20:19, 11 July 2022
