Mapping between HLSL and GLSL

It's 2016 and we're still stuck with various shading languages - the current contenders being HLSL for Direct3D, and GLSL for OpenGL and as the "default" front-end language to generate SPIR-V for Vulkan. SPIR-V may eventually become the IL of choice for everything, but that will take a while, so right now, you need to convert HLSL to GLSL or vice versa if you want to target both APIs.

I won't dig into the various cross-compilers today - that's a huge topic - and will focus on the language similarities instead. Did you ever ask yourself what your SV_Position input is called in GLSL? Then this post is for you!

Note

This is by no means complete. It's meant as a starting point when you're looking to port shaders between GLSL and HLSL. For instance, I'm omitting functions which are the same in both languages.

System values & built-in inputs

Direct3D specifies a couple of system values, GLSL has the concept of built-in variables. The mapping is as follows:

HLSL | GLSL
SV_ClipDistance | gl_ClipDistance
SV_CullDistance | gl_CullDistance (if ARB_cull_distance is present)
SV_Coverage | gl_SampleMaskIn & gl_SampleMask
SV_Depth | gl_FragDepth
SV_DepthGreaterEqual | layout (depth_greater) out float gl_FragDepth;
SV_DepthLessEqual | layout (depth_less) out float gl_FragDepth;
SV_DispatchThreadID | gl_GlobalInvocationID
SV_DomainLocation | gl_TessCoord
SV_GroupID | gl_WorkGroupID
SV_GroupIndex | N/A
SV_GroupThreadID | gl_LocalInvocationID
SV_GSInstanceID | gl_InvocationID
SV_InsideTessFactor | gl_TessLevelInner
SV_InstanceID | gl_InstanceID & gl_InstanceIndex (the latter in Vulkan, with different semantics)
SV_IsFrontFace | gl_FrontFacing
SV_OutputControlPointID | gl_InvocationID
N/A | gl_PatchVerticesIn
SV_Position | gl_Position in a vertex shader, gl_FragCoord in a fragment shader
SV_PrimitiveID | gl_PrimitiveID
SV_RenderTargetArrayIndex | gl_Layer
SV_SampleIndex | gl_SampleID
N/A (the equivalent functionality is available through EvaluateAttributeAtSample) | gl_SamplePosition
SV_StencilRef | gl_FragStencilRef (if ARB_shader_stencil_export is present)
SV_Target | layout(location=N) out your_var_name;
SV_TessFactor | gl_TessLevelOuter
SV_VertexID | gl_VertexID & gl_VertexIndex (the latter in Vulkan, with different semantics)
SV_ViewportArrayIndex | gl_ViewportIndex

This table is sourced from the OpenGL wiki, the HLSL semantic documentation and the GL_KHR_vulkan_glsl extension specification.

Atomic operations

These map fairly easily: Interlocked becomes atomic, so InterlockedAdd becomes atomicAdd, and so on. The only exception is InterlockedCompareExchange, which turns into atomicCompSwap.

Shared/local memory

groupshared memory in HLSL is shared memory in GLSL. That's it.

Barriers

HLSL | GLSL
GroupMemoryBarrierWithGroupSync | groupMemoryBarrier and barrier
GroupMemoryBarrier | groupMemoryBarrier
DeviceMemoryBarrierWithGroupSync | memoryBarrier, memoryBarrierImage, memoryBarrierBuffer and barrier
DeviceMemoryBarrier | memoryBarrier, memoryBarrierImage, memoryBarrierBuffer
AllMemoryBarrierWithGroupSync | all of the barriers above and barrier
AllMemoryBarrier | all of the barriers above
N/A | memoryBarrierShared

Texture access

Before Vulkan, texture and sampler are bundled into a single object in GLSL, which makes HLSL's separate textures and samplers non-trivial to emulate. Fortunately, this changes with Vulkan, where the semantics are the same as in HLSL. The main difference is that in HLSL, the access methods are part of the "texture object", while in GLSL, they are free functions. In HLSL, you'll sample a texture called Texture with a sampler called Sampler like this:

Texture.Sample (Sampler, coordinate)

In GLSL, you need to specify the type of the texture and the sampler, but otherwise, it's similar:

texture (sampler2D(Texture, Sampler), coordinate)

HLSL | GLSL
CalculateLevelOfDetail & CalculateLevelOfDetailUnclamped | textureQueryLod
Load | texelFetch and texelFetchOffset
GetDimensions | textureSize, textureQueryLevels and textureSamples
Gather | textureGather, textureGatherOffset, textureGatherOffsets
Sample, SampleBias | texture, textureOffset
SampleCmp | samplerShadow
SampleGrad | textureGrad, textureGradOffset
SampleLevel | textureLod, textureLodOffset
N/A | textureProj

General math

GLSL and HLSL differ in their default matrix interpretation. GLSL assumes column-major matrices and multiplication on the right (that is, you apply \(M * v\)), while HLSL assumes multiplication from the left (\(v * M\)). While you can usually ignore that - you can override the order and multiply from whatever side you want in both - it does change the meaning of m[0] for a matrix m: in HLSL, this returns the first row, in GLSL, the first column. That also extends to the constructors, which initialize the members in the "natural" order.
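As a concrete example, take a 2×2 matrix

\[ M = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \]

In HLSL, m[0] yields the first row \((a, b)\); in GLSL, it yields the first column \((a, c)\). The reason you can still multiply from whichever side you prefer is the identity \((v M)^T = M^T v^T\): a row vector times a matrix gives the same components as the transposed matrix times the corresponding column vector.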

Various functions

HLSL | GLSL
atan2(y,x) | atan(y,x) (two-argument overload)
ddx | dFdx
ddx_coarse | dFdxCoarse
ddx_fine | dFdxFine
ddy | dFdy
ddy_coarse | dFdyCoarse
ddy_fine | dFdyFine
EvaluateAttributeAtCentroid | interpolateAtCentroid
EvaluateAttributeAtSample | interpolateAtSample
EvaluateAttributeSnapped | interpolateAtOffset
frac | fract
lerp | mix
mad | fma
saturate | clamp(x, 0.0, 1.0)

Anything else I'm missing? Please add a comment and I'll update this post!

Setting up your own mailserver

Today we're going to look at yet another use for your own home server - handling all your emails. There's a couple of reasons why you'd want to do this, so let's start with some motivation. I always used to download my emails from my email account using POP3. That works fine as long as you only use one machine to access the account, as the emails get deleted immediately after fetching. This wasn't an issue for me at first, but once you start accessing your mail account from your mobile phone and notebook as well, it becomes a problem.

The easy solution is to change to IMAP and just leave all emails on the server. While that works, I did accumulate a lot of emails over the years - my inbox contains several tens of thousands of emails and uses a couple of GiB of disk space. Even though space is not much of an issue these days, it's still slow to sync an inbox with thousands of emails, and there's not much reason for me to keep a lot of the old emails around.

With a home server in place, it was time to consolidate this. My goal was to achieve the following:

  • Have all emails on my server, so I don't need to worry about backups. That means the server needs to pull them from my web accounts.
  • Have all emails in an easy-to-process format - something where I can search through them manually, if ever needed.
  • Remove old emails from my web accounts after a grace period.

Turns out, all of this is possible with the help of a couple of tools and some scripting. Let's dive into how to do this!

The mail server

First we need a mail server. The mail server is the thing we'll connect to in the future - my Thunderbird at home doesn't connect to my web account any more, but to my home server instead. My server of choice is dovecot, which you can install on Ubuntu using apt install dovecot-imapd. Next we need to configure it. There's a couple of files we need to edit. In the following, I'll assume we're going to store our emails under /tank/mail in a per-user directory.

In /etc/dovecot/dovecot.conf, add protocols = imap as the last line to enable IMAP. Next, you'll need to edit /etc/dovecot/conf.d/10-mail.conf where you specify the mail_location. I'm using mail_location = maildir:/tank/mail/%u/.maildir - maildir ensures the emails are stored in the maildir format, which you can for instance read using the Python mailbox module.
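Since it's a plain maildir, inspecting it from Python is straightforward - here's a minimal sketch using the standard mailbox module (the path follows the mail_location above, with the per-user directory filled in):

import mailbox

# Open one user's maildir read-only; the layout follows the
# mail_location configured above.
md = mailbox.Maildir('/tank/mail/your-email_example_com/.maildir', create=False)

for key, message in md.items():
    # Each entry is a standard email.message.Message
    print(key, message.get('Date'), message.get('Subject'))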

We also need to set up a way to log into the server, so we edit /etc/dovecot/conf.d/10-auth.conf and enable the passwdfile authentication - just remove the # at the beginning of the !include auth-passwdfile.conf.ext line.

Before we continue, I'm going to set up a new user which will own all the emails - this user will be called vmail. This is straightforward:

adduser --disabled-password --shell=/bin/false vmail
chown -R vmail /tank/mail

We also need the user-id and the group-id of that user - check it using:

id vmail

With that, we can finally add users to access our email server. I have a couple of mailboxes and one user per mailbox. I'll be using a simple password file here. In /etc/dovecot/conf.d/10-auth.conf, check that auth_mechanisms = plain is set, as well as disable_plaintext_auth = no. By default, dovecot only allows secure connections, but here we're connecting within our local network only, so we don't have to bother setting up SSL. Now we're ready to add the users - put them into /etc/dovecot/users, with one line per user like this:

username:{plain}password:vmail-user-id:vmail-group-id::

vmail-user-id and vmail-group-id are the user and group id of the vmail user, respectively. Phew! You can actually try to connect now using the username and (plain-text) password you just specified.
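If you don't feel like copying the IDs by hand, a tiny Python snippet will print a ready-to-paste line (username and password are placeholders, of course):

import pwd

vmail = pwd.getpwnam('vmail')
# Print a line in the format dovecot expects in /etc/dovecot/users
print('username:{plain}password:%d:%d::' % (vmail.pw_uid, vmail.pw_gid))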

Note

If you're not comfortable with plain because you're not the only root on your server, you can also use encrypted passwords.

Getting email

The next part is to load the emails from your web account into dovecot. The easiest solution I've found is to stuff them directly into dovecot using getmail. getmail can write into a maildir directly, and dovecot - as we've set it up above - stores all emails in a maildir, so let's have getmail fetch them right in there!

Installing getmail is simple, just use apt install getmail, and then we need to set up one file per mail account we want to fetch. Those files go into /etc/getmail, and look like this (let's call it your-email_example_com.conf):

[retriever]
type = SimpleIMAPSSLRetriever
server = mymail.server.com
username = your-email@example.com
password = swordfish

[destination]
type = Maildir
path = /tank/mail/your-email_example_com/.maildir/
user = vmail

[options]
verbose = 2
delete = false
message_log_syslog = true
read_all = false
delivered_to = false
received = false

All that is left is to set up a cron job to fetch the emails. I'm running it every 5 minutes - this is what my /etc/cron.d/get-mail job looks like:

PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"

*/5 * * * * root getmail --rcfile your-email_example_com.conf --getmaildir /etc/getmail/

You can pass as many --rcfile options to getmail as you want. Notice that it's also possible to use IMAP IDLE to get emails instantly - it's a bit tricky to set up, and I didn't bother with it as I don't need it.

All right, emails are coming in, now we only need to clean up the mail folder somehow!

Cleaning up

Unfortunately, I didn't find an existing solution for mail cleanup, so I ended up writing my own script for it. It works very similarly to getmail: for every account, you set up a configuration file. For the account above, we'd put the following content into /etc/delmail/your-email_example_com.conf:

[mailbox]
server = mymail.server.com
username = your-email@example.com
password = swordfish

[backup]
type = Maildir
path = /tank/mail/your-email_example_com/.backup/

[options]
min_age = 28

min_age specifies the age emails must reach before they get deleted. I'm also erring on the side of safety, so all emails the script is about to delete are first backed up into the path specified in the [backup] section.
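For reference, the core of such a cleanup is not much code. Here's a rough sketch of the idea - not the actual delete-old-emails.py - using imaplib, assuming min_age is in days and that only the INBOX gets cleaned:

import configparser
import datetime
import imaplib
import mailbox

config = configparser.ConfigParser()
config.read('/etc/delmail/your-email_example_com.conf')

cutoff = datetime.datetime.now() - datetime.timedelta(
    days=int(config['options']['min_age']))

backup = mailbox.Maildir(config['backup']['path'], create=True)

imap = imaplib.IMAP4_SSL(config['mailbox']['server'])
imap.login(config['mailbox']['username'], config['mailbox']['password'])
imap.select('INBOX')

# Find everything older than the cutoff date
_, data = imap.search(None, 'BEFORE', cutoff.strftime('%d-%b-%Y'))
for num in data[0].split():
    _, msg_data = imap.fetch(num, '(RFC822)')
    backup.add(msg_data[0][1])              # back up first ...
    imap.store(num, '+FLAGS', '\\Deleted')  # ... then mark for deletion

imap.expunge()
imap.logout()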

All that is left now is a cron job for this, which doesn't have to run with the same frequency. Here's the contents of my /etc/cron.d/delete-mail:

PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"

2 */4 * * * vmail delete-old-emails.py -config /etc/delmail/your-email_example_com.conf

Notice that the server only handles receiving emails, not sending them. For sending, I still go directly through my web email provider, but I store a copy locally when sending from home.

And that's it! I've been using this setup for a couple of years now, and so far, I'm very pleased. While traveling, I can always look up my recent emails. At home, I have access to every email I have ever received, and I have everything backed up nicely. My web accounts run with 100 MiB of storage instead of several GiB as they used to, and if I fire up my mobile phone or notebook occasionally, there's only a couple of emails to synchronize because the web inbox is nearly empty all the time.

Moving from Wordpress to Nikola

If you're a regular reader, you will have noticed that this blog looks remarkably different as of today. What happened? Well, I moved the blog from Wordpress to a static page generator -- Nikola. That might come as a surprise, as I've been using Wordpress for 11 years now, so let's dig into the reasons.

Quo vadis, Wordpress?

The most important one for me is that I often add code snippets to my blog posts, and the experience of adding code to Wordpress is horrible. I ended up writing my own plugin for Wordpress to teach it code highlighting the way I wanted, but even then, moving between the "visual" and the "text" editor would mess up source formatting. I would also frequently run into HTML escaping problems, where my code would end up being full of &gt; entities.

The other issue I had is backing up my blog. Sure, there's the Export as XML function, which does work, but it doesn't back up your images and so on, so you need to grab a dump of the wp-content folder in addition to the XML export. I'm not sure how many of you do this regularly -- for me it was just a hassle.

Next up is theming. Wordpress used to be an easy-to-theme system when I started -- but over time, making a theme became more and more complicated. With any update, Wordpress would add minor tweaks which required upgrading the theme again. Eventually, I ended up using the sub-theming capabilities, but even then, theme development in Wordpress remained a hassle. My "solution" was a local docker installation of Wordpress and MySQL. That worked, but it was tedious to set up, and frankly, I lost all motivation to do theme-related changes.

The final nail in the coffin were encoding issues which left a lot of my posts full of mojibake like â€œ and other funny errors. Fixing this in Wordpress turned out to be a huge pain, especially as there is no easy way to bulk-process your posts, or for that matter, do anything with your posts.

Not everything is bad with Wordpress, though. The visual editor is pretty slick, and when you stick to the WYSIWYG part of Wordpress, it is actually an awesome tool. It's just falling short on my use cases, and judging from the last few years of development, it's moving further in a direction which has very little added value for me. For instance, they recently added another REST-based web editor, so you now have two visual editors for Wordpress. Unfortunately, source code is still not a "solved" issue, as is authoring posts in something other than HTML.

Going static

All of those problems became big enough over time to make me bite the bullet and move to a static page generator. I do like static page generation -- I even wrote my own static page generator a couple of years back. What I didn't want to do, though, is write a Wordpress importer, deal with auto-updating, theming, and so on, so I looked for a static page generator which would suit me. I'm partial to reStructuredText and Python, so I ended up with Nikola, which is a Python based static page generator with first-class support for reStructuredText.

It comes with an import_wordpress command, which seemed easy enough, but it turns out you need a bit more post-processing before you can call it a day. Let's start with the importing!

Import & cleanup

Ingesting everything through import_wordpress will give you the content as HTML. Even though the files are called .md, they just contain plain HTML (which is valid Markdown ...). To convert them to "proper" Markdown, I used pandoc:

find . -name '*.md' | xargs -I {} pandoc --from html --to markdown_strict {} -o {}

That cleans up most of it, but you'll still end up with weird source code. My source code was marked up with [source lang=""], so I had to go through all files containing source code and fix them up manually. Sounds like a lot of work, but it's usually quite straightforward, as you can just copy & paste from your existing page.

In retrospect, converting everything to reStructuredText might have been a better solution, but frankly, I don't care too much about the "old" content. For new content, I'm using reStructuredText; for old content -- I don't care.

Redirections

Next up is redirecting your whole blog so your old links continue to work. I like to have "pretty" URLs, that is, for a post named /my-awesome-post, I want a URL like /blog/my-awesome-post. This means there has to be a /blog/my-awesome-post/index.html page. By default, however, the imported posts will end up as /posts/my-awesome-post.html. In order to solve this, you need to do two things:

  • Turn on pretty URLs using: PRETTY_URLS = True
  • Fix up the redirection table, which is stored in REDIRECTIONS in conf.py

To fix the redirections table, I used a small Python script to make sure that old URLs like /my-awesome-post were redirected to /blog/my-awesome-post -- I also used the chance to move all blog posts to a /blog subdirectory. Nikola will then generate /my-awesome-post/index.html with a redirection to the new URL.
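The script boils down to a couple of lines. A sketch of the idea (my reconstruction, not the original script) - Nikola's REDIRECTIONS expects (output path, target URL) tuples:

# Build REDIRECTIONS entries for conf.py from the old slugs.
old_slugs = ['my-awesome-post', 'another-post']   # placeholder slugs

REDIRECTIONS = [
    # Generates /<slug>/index.html redirecting to the new /blog/ URL.
    ('{}/index.html'.format(slug), '/blog/{}/'.format(slug))
    for slug in old_slugs
]

print(REDIRECTIONS)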

Comments

Finally, the comments - I had a couple hundred in Wordpress, and Nikola, being a static page generator, has no notion of comments. The solution here is to import them into Disqus, which is straightforward. First, you create an account at Disqus, install the Disqus Wordpress plugin, and import your comments into Disqus. Be aware: this will take a while. Finally, you need to teach Disqus the new URLs. This is done using a URL remapping, which is a simple CSV file that contains the original URL and the new one. Again, same exercise as above -- you'll probably want to reuse REDIRECTIONS for this and dump it out into a CSV.
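Dumping that CSV from the redirection data is again just a few lines - a sketch, with the domain being a placeholder for your own (Disqus wants absolute URLs on both sides):

import csv

# (old output path, new URL) pairs, as built for REDIRECTIONS above.
redirections = [('my-awesome-post/index.html', '/blog/my-awesome-post/')]

with open('disqus-url-map.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for old_path, new_url in redirections:
        old_url = 'https://anteru.net/' + old_path.replace('index.html', '')
        writer.writerow([old_url, 'https://anteru.net' + new_url])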

Closing remarks

Voilà, there you go -- you've ported your blog from Wordpress to Nikola. The remaining steps you'll want to do:

  • Set up some revision control for your blog. I just imported it wholesale into Mercurial with the largefiles extension to store all attachments. Backups: Check!
  • Set up rsync to upload the blog. By its nature, Nikola generates all files, and you need to synchronize them -- some scripting will be handy for this.
  • Fix up all URLs to use / as the prefix. I just did a search-and-replace for everything starting with https://anteru.net/ that didn't continue with wp-content, redirected that to /blog/, and then fixed up the /wp-content ones.

That's it -- let's see if Nikola will serve me for the next 11 years, just like Wordpress did :)

Designing C APIs in 2016

It's 2016, and C APIs are as popular as ever. Many libraries are written in C or provide C APIs, and there are tons of bindings for any language, making C the de-facto standard for portable APIs. Yet a lot of C APIs fail basic design guidelines, and there doesn't seem to have been much progress in recent years in the way we design those APIs. I've been working on a modern C API recently, and then there's also Vulkan with a few fresh design ideas. High time we take a look at what options we have when designing a C based API in 2016!

Design matters

API design matters a lot. I've written about it before, and I still get to use a lot of APIs where I'd like to get onto a plane and have a serious chat with the author. Today we're not going to talk about the basic issues, which are ABI compatibility, sane versioning, error handling and the like - instead we'll look at ways you can expose the API to the client.

My assumption here is that you're designing an API which will live in a shared object/DLL. The basic approach to do this in C is to expose your API entry points directly. You just mark them as visible, decide on some naming scheme, and off you go. The client either links directly against your library, or loads the entry points manually, and calls them. This is practically how all C APIs have been designed so far. Look at sqlite, libpng, Win32, the Linux kernel - this is exactly the pattern.

Current problems

So what are the problems with this approach? Well, there's a couple:

  • API versioning
  • API loading
  • Extensibility

Let's tackle those one-by-one.

API versioning

For any API, you'll inevitably run into the situation where you need to change a function signature. If you care about API and ABI compatibility, that means you need to add a new entry point to your API - the classic reason we see so many myFunctionEx or myFunctionV2 functions. There's no way around this if you expose the entry points directly.

It also means you can't remove an entry point. Client applications can solve that issue if you provide a way to query the API version, but then we're going to run into the next problem - API loading.

In general, a direct C API has no really good way to solve this problem, as every version bump means either new entry points or more complicated loading. Adding a couple new entry points doesn't sound like a big issue, but over time, you'll accumulate lots of new versions and it'll become unclear for developers which one to use.

API loading

API loading covers the question of how a user gets started with your API. Often enough, you just link directly against an import library, and then you expect a shared object or DLL exporting the same symbols. This makes it hard to use the library dynamically (i.e., to load it only when needed). Sure, you can do lazy loading tricks using the linker, but what if you don't have the import library to start with? In this case, you'll end up with some kind of dispatch library which loads all entry points of your API. This is for instance what the OpenCL loader does, or GLEW. This way, your client is 100% isolated from the library, but it's quite some boilerplate to write.

The solutions for this aim at reducing that boilerplate. GLEW generates all load functions from an XML description; OpenCL just mandates that each client driver exposes a single entry point which fills out a dispatch table. Which brings us to the last topic, extensibility.

Extensibility

How do you extend your API? That is, how can someone add something like a validation layer on top of it? For most C APIs, extensions mean just more entry point loading, but layering is completely ignored.

Vulkan explicitly attacks the layering problem. The solution they came up with allows layers to be chained, that is, layers call into the underlying layers. To make this efficient, the chaining can skip several layers, so you don't pay per layer loaded, just per layer that actually handles an API call. Extensions are still handled the normal way, by querying more API entry points.

Vulkan also has a declarative API description stored in the vk.xml file, which contains all extensions, so the required function pointer definitions can be generated from it. This reduces the boilerplate a lot, but still requires users to query entry points - though it would be possible to autogenerate a full loader like GLEW does.

Dispatch & generation focused APIs

Thinking about the issues above, I figured that ideally, what we want is:

  • As few entry points as possible, ideally one. This solves the dynamic loading issue, and makes it easy to have one entry point per version.
  • A way to group all functions for one version together. Switching a version would then result in compile-time errors.
  • A way to layer a new set of functions on top of the original API - i.e. the possibility to replace individual entry points.

If you think C++ classes and COM, you're not far off. Let's take a look at the following approach to design an API:

  • You expose a single entry point, which returns the dispatch table for your API directly.
  • The dispatch table contains all entry points for your API.
  • You require clients to pass in the dispatch table or some object pointing to the dispatch table to all entry points.

So what would such an API look like? Here's an example:

struct ImgApi
{
    int (*LoadPng) (ImgApi* api, const char* filename,
        Image* handle);
    int (*ReadPixels) (ImgApi* api, Image* handle,
        void* target);
    // or, alternatively, if the Image object can reach its dispatch table:
    int (*ReadPixels) (Image* handle, void* target);

    // Various other entry points
};

// public entry points for V1_0
int CreateMyImgIOApiV1_0 (ImgApi** api);
int DestroyMyImgIOApiV1_0 (ImgApi* api);

Does this thing solve our issues? Let's check:

  • Few entry points - two. Yes, that works, for dynamic and static loading.
  • All functions grouped - check! We can add a ImgApiV2 without breaking older clients, and all changes become compile-time errors.
  • Layering - what do you know, also possible! We just instantiate a new ImgApi, and link it to the original one. In this case, the only difficulty arises from chaining through objects like Image, for which we'll need a way to query the dispatch table pointer from them.

Looks like we got a clear winner here - and indeed, I recently implemented a library using such an API design and the actual implementation is really simple. In particular if you use C++ lambdas, you can fill out a lot of the redirection functions in-line, which is very neat. So what are the downsides? Basically, the fact that you need to call through the dispatch table is the only one I see. This will yield one more indirection, and it's a bit more typing.

Right now my thinking is that if you really need the utmost performance per-call, your API is probably too low-level to start with. Even then, you could still force clients to directly load that one entry point, or provide it from the dispatch table. The more typing issue is generally a non-issue: First of all, any kind of autocompletion will immediately identify what you're doing, and if you really need to, you can auto-generate a C++ class very easily which inlines all forwarding and is simply derived from the dispatch table.

This is also where the generation part comes into play: I think for any API going forward, a declarative description, be it XML, JSON or something else, is a necessity. There's so much you want to auto-generate for the sake of usability that you should be thinking about this from day one.
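To give a flavour of what such a description buys you, here's a toy sketch (not from any real project) which generates the dispatch table struct for the ImgApi example above from a small JSON description:

import json

# Toy declarative description of the ImgApi example.
API_DESCRIPTION = json.loads('''
{
    "name": "ImgApi",
    "functions": [
        {"name": "LoadPng",
         "args": ["ImgApi* api", "const char* filename", "Image* handle"]},
        {"name": "ReadPixels",
         "args": ["ImgApi* api", "Image* handle", "void* target"]}
    ]
}
''')

def generate_dispatch_table(desc):
    # Emit the C struct holding the function pointers for one API version.
    lines = ['struct {}'.format(desc['name']), '{']
    for function in desc['functions']:
        lines.append('    int (*{}) ({});'.format(
            function['name'], ', '.join(function['args'])))
    lines.append('};')
    return '\n'.join(lines)

print(generate_dispatch_table(API_DESCRIPTION))

From the same description you can generate the loader boilerplate, language bindings, or the forwarding functions of a validation layer.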

Right now, this design, combined with a way to generate the dispatch tables, looks to me like the way to go in 2016 and beyond. You get an easy to use API for clients, a lot of freedom to build things on top, while keeping all of the advantages of plain C APIs like portability.

Storing vertex data: To interleave or not to interleave?

Recently, I've been refactoring the geometry storage in my home framework. Among other things, I also looked into vertex attribute storage, which we're going to dive into today.

When it comes to storing vertex data, there are basically two schools of thought. One says interleave the attributes, that is, store "fat" vertices which contain position, normal, UV coordinates and so on together. I'll refer to this as interleaved storage, as it interleaves all vertex attributes in memory. The other school says all attributes should remain separate, so a vertex consists of multiple streams, each stream storing one attribute only, tightly packed.
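To make the two layouts concrete, here's how they look side by side - sketched with numpy purely for illustration (the actual framework code isn't Python), with three vertices carrying a position and a normal:

import numpy as np

# Interleaved: one "fat" vertex per element, attributes stored together.
interleaved = np.zeros(3, dtype=[('position', np.float32, 3),
                                 ('normal',   np.float32, 3)])

# De-interleaved: one tightly packed stream per attribute.
positions = np.zeros((3, 3), dtype=np.float32)
normals   = np.zeros((3, 3), dtype=np.float32)

# Fetching only positions touches every byte of the packed stream,
# but only half of the bytes in the interleaved buffer.
print(interleaved.dtype.itemsize, positions.dtype.itemsize * 3)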

Why care?

Let's look at where vertex attribute storage matters:

  • On disk, as compression and read performance may be affected.
  • In memory, as some algorithms prefer one order or the other.
  • At render time, as it affects the required bandwidth and impacts performance on GPUs.

Rendering

We'll start with the last point - GPU rendering - as it's the easiest to explain. On the GPU, all APIs allow sourcing vertex attributes from multiple streams or from a single stream. This makes experiments very simple - and also highlights a few key differences.

The first thing that is affected is access flexibility. I have a geometry viewer, which may or may not have all attributes present for one mesh. With interleaved data, it's hard to turn off an attribute, as the vertex layout needs to be adjusted. With de-interleaved data, it's as easy as binding a null buffer or using a shader permutation which just skips the channel. One point for de-interleaved data.

The next use case is position-only rendering, which is very common for shadow maps. Again, de-interleaved data wins here, due to cache efficiency. It's quite easy to see - if you only need positions, you get the best cache and bandwidth utilization if you separate it from the other attributes. With interleaved data, every cache line fetches some other attributes which you throw away immediately. Another point for de-interleaved data.

(Figure: vertices packed with gaps in a cache line - unpacked reads. Three attributes are fetched, but most of the cache line is not used; only 4 vertices can be fetched before the next cache line must be touched.)

The last point is actually quite important for GPUs. On a GPU compute unit, you have very wide vector units which want to fetch the same data for all lanes in a given cycle, for instance, the position. If you have the data de-interleaved, they can fetch it into registers and evict the cache line immediately. You can see that in the packed-reads figure: in the first iteration, the red x coordinate is read, then y, and finally z. It thus takes three reads to consume a whole cache line, and it can be evicted right away. For interleaved data, the data has to remain in cache until everything has been read from it, polluting the already small caches - so de-interleaved data will render slightly faster due to better cache utilization.

(Figure: vertices tightly packed in a cache line - packed reads. 10 vertices can be read before moving to the next cache line.)

Is there actually a good reason to use interleaved data for rendering? I can't think of one, and as it turns out, I switched my geometry viewers to de-interleaved data a few years ago and never looked back :)

In the offline rendering world, attributes have also long been specified separately, as a ray tracer mostly cares about positions. For this use case, cache efficiency is most important, so you want to keep them separate as well, even on the CPU.

Processing

Here's the more interesting part. During the recent refactoring, I changed the mesh view abstraction to take advantage of de-interleaved data when fetching a single attribute. So all algorithms I had in place needed to be refactored to work with both interleaved and de-interleaved data, giving me a good idea of the advantages and disadvantages of each.

Turns out, there's only one algorithm in my toolbox which actually needs interleaved data so much for performance that it will re-interleave things if it encounters a de-interleaved mesh. This algorithm is the re-indexer, which searches for unique vertices by storing a hash of each vertex together with a pointer to it, so it can do exact comparisons.
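The re-indexing itself is conceptually simple - here's a sketch of the idea in plain Python, using the full vertex as the key instead of the hash-plus-pointer scheme:

def reindex(vertices):
    # Collapse duplicate vertices into a unique list plus an index buffer.
    # vertices: list of hashable tuples holding the interleaved attributes.
    unique = []
    indices = []
    seen = {}   # vertex -> index of its first occurrence
    for vertex in vertices:
        index = seen.get(vertex)
        if index is None:
            index = len(unique)
            seen[vertex] = index
            unique.append(vertex)
        indices.append(index)
    return unique, indices

# A quad as two triangles, sharing two vertices.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0),
        (0, 0, 0), (1, 1, 0), (0, 1, 0)]
print(reindex(quad))

The point is that this wants the whole vertex in one place to build the key - with de-interleaved streams, you first have to gather the attributes back together, which is exactly the re-interleaving mentioned above.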

Except for that algorithm, all others were working on one attribute only to start with, mostly position, and will be now slightly more cache efficient for de-interleaved data. I briefly measured performance, but it turns out, for "slim" vertices with position, normal and maybe one or two more attributes, the cache efficiency differences on CPUs are rather minimal - I'd expect more gains with heavy multi-threading and in bandwidth-restricted cases. The good news is that nothing got slower.

I'd call it a tie, due to the re-indexer. As I expose a pointer and stride to all algorithms now, it's basically trivial to swap between the representations. For the re-indexer, I'm thinking that there must be a better way to represent a vertex than a pointer and the hash, which would also resolve that issue (maybe a stronger hash which does not collide will be enough ...)

Storage

So here comes the interesting part. My geometry storage is LZ4 compressed, and with compression, you'd expect interleaved data to lose big time against non-interleaved. After all, all positions will have similar exponents, all normals will have the same exponent, and so on, and if they are stored consecutively, a compressor should find more correlation in the data.

Turns out, with the default LZ4 compression, this is not quite true, and interleaved data actually compresses quite a bit better. For testing, I used the XYZRGB Asian dragon, and converted it to my binary format which stores position as 3 floats, and normals as 3 floats as well.

Storage | No Idx/Compressed | Idx/Compressed | Idx/Compressed (HC)
Interleaved | 169 MiB | 138 MiB | 135 MiB
Deinterleaved | 189 MiB | 138 MiB | 132 MiB

It seems that LZ4 is actually able to get better compression for interleaved data, which duplicates whole vertices and not just a single attribute. With indexed data, it's a wash, and only with the high compression setting does the de-interleaved data pull ahead.

This is actually really surprising to me, and it looks like more analysis is warranted here. One thing that did obviously improve is loading time, as I no longer need to de-interleave for rendering, but the difference is just a couple of percent. This is mostly due to the fact that I bulk load everything into memory, which dominates the I/O time.
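If you want to poke at the compression behaviour yourself, a quick experiment is easy to set up with the lz4 Python bindings - note that the random arrays below are only placeholders to make the snippet runnable; to reproduce anything like the numbers above you'd feed in the vertex data of a real mesh:

import numpy as np
import lz4.frame

# Stand-in for mesh data: positions and normals as float32 triples.
rng = np.random.default_rng(0)
positions = rng.random((100000, 3), dtype=np.float32)
normals = rng.random((100000, 3), dtype=np.float32)

# Interleaved: position and normal per vertex; de-interleaved: two streams.
interleaved = np.hstack([positions, normals]).tobytes()
deinterleaved = positions.tobytes() + normals.tobytes()

print('interleaved  :', len(lz4.frame.compress(interleaved)))
print('deinterleaved:', len(lz4.frame.compress(deinterleaved)))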

So on the storage side, it's one point for de-interleaved data in terms of performance, but one point for interleaved data for basic compression. I guess we can call it a tie!

Verdict

Overall, the advantages of having a full de-interleaved pipeline outweigh the disadvantages I found on the storage and algorithmic front. As mentioned, except for one algorithm, everything got slightly faster, and storage space is cheap enough for me that I don't care about the few percent bloat there in the general case. For archival storage, I get some benefit with de-interleaved data, so de-interleaved it is :)