Setting up your own mailserver

Today we're going to look at yet another use for your own home server: handling all your email. There are a couple of reasons why you might want to do this, so let's start with some motivation. I always used to download my emails from my email account using POP3. That works fine as long as you only use one machine to access the account, as the emails get deleted immediately after fetching. This wasn't a problem for me at first, but once you start accessing your mail account from your mobile phone and notebook as well, it becomes an issue.

The easy solution is to switch to IMAP and just leave all emails on the server. While that works, I did accumulate a lot of emails over the years - my inbox contains several tens of thousands of emails and uses a couple of GiB of disk space. Even though space is not much of an issue these days, syncing an inbox with thousands of emails is still slow, and there's not much reason for me to keep a lot of the old emails around.

With a home server in place, it was time to consolidate this. My goal was to achieve the following:

  • Have all emails on my server, so I don't need to worry about backups. That means the server needs to pull them from my web accounts.
  • Have all emails in an easy to process format - something where I can search through them manually, if ever needed.
  • Remove old emails from my web accounts after a grace period.

Turns out, all of this is possible with the help of a couple of tools and some scripting. Let's dive into how to do this!

The mail server

First we need a mail server. The mail server is what we'll connect to from now on - my Thunderbird at home no longer connects to my web account, but to my home server instead. My server of choice is dovecot, which you can install on Ubuntu using apt install dovecot-imapd. Next we need to configure it; there are a couple of files we need to edit. In the following, I'll assume we're going to store our emails under /tank/mail in a per-user directory.

In /etc/dovecot/dovecot.conf, add protocols = imap as the last line to enable IMAP. Next, edit /etc/dovecot/conf.d/10-mail.conf, where you specify the mail_location. I'm using mail_location = maildir:/tank/mail/%u/.maildir - maildir ensures the emails are stored in the maildir format, which you can, for instance, read using the Python mailbox module.
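Just to illustrate how convenient the maildir format is, here's a minimal sketch that walks one account's mailbox with Python's standard-library mailbox module (the path is the example location from above, with %u expanded to a made-up user name):

import mailbox

# Open one account's maildir (%u from the dovecot config expands to the user name).
box = mailbox.Maildir('/tank/mail/your-email_example_com/.maildir')

for message in box:
    # Every message behaves like a standard email.message.Message.
    print(message.get('Date'), message.get('From'), message.get('Subject'))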

We also need to set up a way to connect to the server, so we edit /etc/dovecot/conf.d/10-auth.conf and enable the passwd-file authentication - just remove the # at the beginning of the !include auth-passwdfile.conf.ext line.

Before we continue, I'm going to set up a new user which will own all the emails - this user will be called vmail. This is straightforward:

adduser --disabled-password --shell=/bin/false vmail
chown -R vmail /tank/mail

We also need the user id and the group id of that user - check them using:

id vmail

With that, we can finally add users that may access our email server. I have a couple of mailboxes and one user per mailbox, and I'll be using a simple password file here. In /etc/dovecot/conf.d/10-auth.conf, check that auth_mechanisms = plain is set, as well as disable_plaintext_auth = no. By default, dovecot only allows secure connections, but here we're connecting within the local network only, so we don't have to bother setting up SSL. Now we're ready to add the users - put them into /etc/dovecot/users, one line per user, like this:

username:{plain}password:vmail-user-id:vmail-group-id::

vmail-user-id and vmail-group-id are the user id and group id of the vmail user, respectively. Phew! You can actually try to connect now using the username and (plain-text) password you just specified.

Note

If you're not comfortable with plain-text passwords because you're not the only root on your server, you can also use encrypted passwords.

Getting email

The next part is loading the emails from your web account into dovecot. The easiest solution I've found is to stuff them directly into dovecot's mail store using getmail. getmail can write into a maildir directly, and dovecot - as we've set it up above - stores all emails in a maildir, so let's have getmail fetch right into it!

Installing getmail is simple, just use apt install getmail, and then we need to set up one file per mail account we want to fetch. Those files go into /etc/getmail, and look like this (let's call it your-email_example_com.conf):

[retriever]
type = SimpleIMAPSSLRetriever
server = mymail.server.com
username = your-email@example.com
password = swordfish

[destination]
type = Maildir
path = /tank/mail/your-email_example_com/.maildir/
user = vmail

[options]
verbose = 2
delete = false
message_log_syslog = true
read_all = false
delivered_to = false
received = false

All that is left is to set up a cron job to fetch the emails. I'm running it every 5 minutes - this is what my /etc/cron.d/get-mail job looks like:

PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"

*/5 * * * * root getmail --rcfile your-email_example_com.conf --getmaildir /etc/getmail/

You can pass as many --rcfile options to getmail as you want. Note that it's also possible to use IMAP IDLE to get emails instantly - it's a bit tricky to set up and I didn't bother with it, as I don't need it.

All right, emails are coming in, now we only need to clean up the mail folder somehow!

Cleaning up

Unfortunately, I didn't find an existing solution for mail cleanup, so I ended up writing my own script for it. It works very similarly to getmail: for every account, you set up a configuration file. For the account above, we'd put the following content into /etc/delmail/your-email_example_com.conf:

[mailbox]
server = mymail.server.com
username = your-email@example.com
password = swordfish

[backup]
type = Maildir
path = /tank/mail/your-email_example_com/.backup/

[options]
min_age = 28

min_age specifies how old emails have to be before they get deleted. I'm also erring on the side of safety, so all emails the script is about to delete are backed up into the path specified in the [backup] section.
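My actual script is a bit more involved, but a minimal sketch of the idea could look like this - it assumes min_age is in days, uses imaplib for the web account and the mailbox module for the backup maildir, and skips all error handling:

import configparser
import datetime
import email
import imaplib
import mailbox

config = configparser.ConfigParser()
config.read('/etc/delmail/your-email_example_com.conf')

backup = mailbox.Maildir(config['backup']['path'])
cutoff = datetime.date.today() - datetime.timedelta(days=int(config['options']['min_age']))

server = imaplib.IMAP4_SSL(config['mailbox']['server'])
server.login(config['mailbox']['username'], config['mailbox']['password'])
server.select('INBOX')

# Find everything older than the cutoff date.
_, data = server.search(None, 'BEFORE', cutoff.strftime('%d-%b-%Y'))
for num in data[0].split():
    _, parts = server.fetch(num, '(RFC822)')
    backup.add(email.message_from_bytes(parts[0][1]))  # back it up first ...
    server.store(num, '+FLAGS', '\\Deleted')            # ... then mark it for deletion

server.expunge()
server.logout()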

All that is left now is a cron job for this, which doesn't have to run with the same frequency. Here's the content of my /etc/cron.d/delete-mail:

PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin"

2 */4 * * * vmail delete-old-emails.py -config /etc/delmail/your-email_example_com.conf

And that's it! I've been using this setup for a couple of years now, and so far I'm very pleased. While traveling, I can always look up my recent emails. At home, I have access to every email I have ever received, and I have everything backed up nicely. My web accounts run with 100 MiB of storage instead of several GiB as they used to, and if I fire up my mobile phone or notebook occasionally, there are only a couple of emails to synchronize because the web inbox is nearly empty all the time.

Moving from Wordpress to Nikola

If you're a regular reader, you will have noticed that this blog looks remarkably different as of today. What happened? Well, I moved the blog from Wordpress to a static page generator -- Nikola. That might come as a surprise, as I've been using Wordpress for 11 years now, so let's dig into the reasons.

Quo vadis, Wordpress?

The most important one for me is that I often add code snippets to my blog posts, and the experience of adding code to Wordpress is horrible. I ended up writing my own plugin for Wordpress to teach it code highlighting the way I wanted, but even then, moving between the "visual" and the "text" editor would mess up source formatting. I would also frequently run into HTML escaping problems, where my code would end up being full of &gt; entities.

The other issue I had was backing up my blog. Sure, there's the export-as-XML function, which does work, but it doesn't back up your images and so on, so you need to grab a dump of the wp-content folder in addition to the XML export. I'm not sure how many of you do this regularly -- for me it was just a hassle.

Next up is theming. Wordpress used to be an easy system to theme when I started -- but over time, making a theme became more and more complicated. With every update, Wordpress would add minor tweaks which required upgrading the theme again. Eventually I ended up using the sub-theming capabilities, but even then, theme development in Wordpress remained painful. My "solution" was a local Docker installation of Wordpress and MySQL. That worked, but it was a big hassle to set up, and frankly, I lost all motivation to do theme-related changes.

The final nail in the coffin were the encoding issues which left a lot of my posts littered with “ and other garbled characters. Fixing this in Wordpress turned out to be a huge pain, especially as there is no easy way to bulk-process your posts, or for that matter, do anything with your posts.

Not everything about Wordpress is bad, though. The visual editor is pretty slick, and if you stick to the WYSIWYG part of Wordpress, it is actually an awesome tool. It's just falling short for my use cases, and judging from the last few years of development, it's moving further in a direction which adds very little value for me. For instance, they recently added another REST-based web editor, so there are now two visual editors for Wordpress. Unfortunately, source code is still not a "solved" issue, nor is authoring posts in anything other than HTML.

Going static

All of those problems became big enough over time to make me bite the bullet and move to a static page generator. I do like static page generation -- I even wrote my own static page generator a couple of years back. What I didn't want to do, though, was write an importer for Wordpress and deal with auto-updating, theming, and so on, so I looked for a static page generator which would suit me. I'm partial to reStructuredText and Python, so I ended up with Nikola, a Python-based static page generator with first-class support for reStructuredText.

It comes with an import_wordpress command, which seemed easy enough, but it turns out you need a bit more post-processing before you can call it a day. Let's start with the importing!

Import & cleanup

Ingesting everything through import_wordpress will give you the content as HTML. Even though the files are called .md, they just contain plain HTML (which is valid Markdown ...). To convert them to "proper" Markdown, I used pandoc:

find . -name '*.md' | xargs -I {} pandoc --from html --to markdown_strict {} -o {}

That cleans up most of it, but you'll end up with weird source code. My source code was marked up with [source lang=""], so I had to go through all files with source code and fix them up manually. Sounds like a lot of work, but it's usually quite straightforward, as you can just copy & paste from your existing page.
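I did this by hand, but if you have a lot of posts, a small script along these lines could handle the bulk of it. This is only a sketch - it assumes the shortcodes look like [source lang="cpp"] ... [/source], which may not match your markup exactly:

import pathlib
import re

for path in pathlib.Path('.').rglob('*.md'):
    text = path.read_text(encoding='utf-8')
    # Turn the opening shortcode into a fenced code block with the same language ...
    text = re.sub(r'\[source lang="(\w*)"\]', r'```\1', text)
    # ... and the (assumed) closing shortcode into the closing fence.
    text = text.replace('[/source]', '```')
    path.write_text(text, encoding='utf-8')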

In retrospect, converting everything to reStructuredText might have been a better solution, but frankly, I don't care too much about the "old" content. For new content I'm using reStructuredText; for old content -- I don't care.

Redirections

Next up is redirecting your whole blog so your old links continue to work. I like to have "pretty" URLs, that is, for a post named /my-awesome-post, I want a URL like /blog/my-awesome-post, which means there has to be a /blog/my-awesome-post/index.html page. By default, however, the imported posts will end up at /posts/my-awesome-post.html. To solve this, you need to do two things:

  • Turn on pretty URLs using: PRETTY_URLS = True
  • Fix up the redirection table, which is stored in REDIRECTIONS in conf.py

To fix the redirections table, I used a small Python script to make sure that old URLs like /my-awesome-post were redirected to /blog/my-awesome-post -- I also used the chance to move all blog posts to a /blog subdirectory. Nikola will then generate /my-awesome-post/index.html with a redirection to the new URL.
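My script isn't reproduced here, but the gist is simple enough to sketch. Assuming the old slugs are plain /my-awesome-post style URLs and everything moves below /blog/, the REDIRECTIONS entries in conf.py could be generated roughly like this (the slug list is made up for illustration):

old_slugs = ['my-awesome-post', 'another-post']  # e.g. collected from the imported posts

REDIRECTIONS = [
    # (page Nikola should generate, URL it should redirect to)
    ('{}/index.html'.format(slug), '/blog/{}/'.format(slug))
    for slug in old_slugs
]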

Comments

Finally, the comments - I had a couple hundred in Wordpress, and Nikola, being a static page generator, has no concept of comments. The solution here is to import them into Disqus, which is straightforward: first, you create an account at Disqus, install the Disqus Wordpress plugin, and import your comments into Disqus. Be aware: this will take a while. Finally, you need to teach Disqus the new URLs. This is done using a URL map, which is a simple CSV file containing the original URL and the new one. Again, same exercise as above -- you'll probably want to reuse REDIRECTIONS for this and dump it out into a CSV, as sketched below.
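A minimal sketch of that dump could look like this - it assumes the REDIRECTIONS list from the previous step and a hypothetical domain, since Disqus wants absolute URLs on both sides:

import csv

SITE = 'https://example.net'  # hypothetical domain, adjust to your own

with open('disqus-url-map.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    for source, target in REDIRECTIONS:
        # Old absolute URL first, new absolute URL second.
        writer.writerow([SITE + '/' + source.replace('/index.html', '/'),
                         SITE + target])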

Closing remarks

Voilà, there you go -- you've ported your blog from Wordpress to Nikola. The remaining steps you'll want to do:

  • Set up some revision control for your blog. I just imported it wholesale into Mercurial with the largefiles extension to store all attachments. Backups: Check!
  • Set up an rsync job to upload the blog. By its nature, Nikola generates all output files locally, and you need to synchronize them to your web host -- some scripting will be handy for this.
  • Fix up all URLs to use / as the prefix. I just did a search-and-replace: everything starting with https://anteru.net/ which didn't continue with wp-content was pointed at /blog/, and the /wp-content ones were fixed up afterwards.

That's it -- let's see if Nikola will serve me for the next 11 years, just like Wordpress did :)

Designing C APIs in 2016

It's 2016, and C APIs are as popular as ever. Many libraries are written in C or provide C APIs, and there are tons of bindings for virtually any language, making C the de-facto standard for portable APIs. Yet a lot of C APIs fail basic design guidelines, and there doesn't seem to have been much progress in recent years in the way we design those APIs. I've been working on a modern C API recently, and then there's also Vulkan with a few fresh design ideas. High time we took a look at what options we have when designing a C-based API in 2016!

Design matters

API design matters a lot. I've written about it before, and I still get to use a lot of APIs where I'd like to get onto a plane and have a serious chat with the author. Today we're not going to talk about the basic issues - ABI compatibility, sane versioning, error handling and the like - instead, we'll look at ways you can expose the API to the client.

My assumption here is that you're designing an API which will live in a shared object/DLL. The basic approach in C is to expose your API entry points directly: you mark them as visible, decide on some naming scheme, and off you go. The client either links directly against your library or loads the entry points manually, and calls them. This is how practically all C APIs have been designed so far - look at sqlite, libpng, Win32, the Linux kernel; this is exactly the pattern.

Current problems

So what are the problems with this approach? Well, there are a couple:

  • API versioning
  • API loading
  • Extensibility

Let's tackle those one-by-one.

API versioning

For any API, you'll inevitably run into the situation where you want to update a function signature. If you care about API and ABI compatibility, that means you need to add a new entry point to your API - the classic reason we see so many myFunctionEx or myFunctionV2 names. There's no way around this if you expose the entry points directly.

It also means you can't remove an entry point. Client applications can solve that issue if you provide a way to query the API version, but then we're going to run into the next problem - API loading.

In general, a direct C API has no really good way to solve this problem, as every version bump means either new entry points or more complicated loading. Adding a couple of new entry points doesn't sound like a big issue, but over time you'll accumulate lots of versions, and it'll become unclear to developers which one to use.

API loading

API loading covers the question of how a user gets started with your API. Often enough, you just link directly against an import library and expect a shared object or DLL exporting the same symbols. This makes it hard to use the library dynamically (i.e. only load it if it's actually needed). Sure, you can do lazy loading tricks using the linker, but what if you don't have the import library to start with? In that case, you'll end up writing some kind of dispatch library which loads all the entry points of your API. This is what the OpenCL loader does, for instance, or GLEW. This way, your client is 100% isolated from the library, but it's quite a bit of boilerplate to write.

The solutions for this aim at reducing that boilerplate. GLEW generates all load functions from an XML description; OpenCL just mandates that clients expose a single entry point which fills out a dispatch table. Which brings us to the last topic - extensibility.

Extensibility

How do you extend your API? That is, how can someone add something like a validation layer on top of it? For most C APIs, extensions just mean more entry point loading, and layering is completely ignored.

Vulkan explicitly attacks the layering problem. The solution they came up with allows layers to be chained, that is, layers call into the layers below them. To make this efficient, the chaining can skip layers, so you don't pay per layer loaded, just per layer that actually handles a given API call. Extensions are still handled the normal way, by querying additional API entry points.

Vulkan also has a declarative description of the API, stored in the vk.xml file, which covers all extensions, so the required function pointer definitions can be generated. This reduces the boilerplate a lot, but still requires users to query entry points - though it would be possible to auto-generate a full loader like GLEW does.

Dispatch & generation focused APIs

Thinking about the issues above, I figured that ideally, what we want is:

  • As few entry points as possible, ideally one. This solves the dynamic loading issue, and makes it easy to have one entry point per version.
  • A way to group all functions for one version together. Switching a version would then result in compile-time errors.
  • A way to layer a new set of functions on top of the original API - i.e. the possibility to replace individual entry points.

If you're thinking C++ classes and COM, you're not far off. Let's take a look at the following approach to designing an API:

  • You expose a single entry point, which returns the dispatch table for your API directly.
  • The dispatch table contains all entry points for your API.
  • You require clients to pass in the dispatch table or some object pointing to the dispatch table to all entry points.

So what would such an API look like? Here's an example:

typedef struct Image Image;     // opaque image handle, assumed to be defined elsewhere
typedef struct ImgApi ImgApi;

struct ImgApi
{
    int (*LoadPng) (ImgApi* api, const char* filename,
        Image* handle);
    int (*ReadPixels) (ImgApi* api, Image* handle,
        void* target);
    // or, alternatively, without the api pointer:
    // int (*ReadPixels) (Image* handle, void* target);

    // Various other entry points
};

// public entry points for V1_0
int CreateMyImgIOApiV1_0 (ImgApi** api);
int DestroyMyImgIOApiV1_0 (ImgApi* api);

Does this thing solve our issues? Let's check:

  • Few entry points - two. Yes, that works, for dynamic and static loading.
  • All functions grouped - check! We can add an ImgApiV2 without breaking older clients, and all changes become compile-time errors.
  • Layering - what do you know, also possible! We just instantiate a new ImgApi, and link it to the original one. In this case, the only difficulty arises from chaining through objects like Image, for which we'll need a way to query the dispatch table pointer from them.

Looks like we've got a clear winner here - and indeed, I recently implemented a library using such an API design, and the actual implementation is really simple. In particular, if you use C++ lambdas, you can fill out a lot of the redirection functions in-line, which is very neat. So what are the downsides? Basically, the call through the dispatch table is the only one I see: it adds one more indirection, and it's a bit more typing.

Right now my thinking is that if you really need the utmost per-call performance, your API is probably too low-level to start with. Even then, you could still let clients load that one entry point directly, or provide it from the dispatch table. The extra typing is generally a non-issue: any kind of autocompletion will immediately identify what you're doing, and if you really need to, you can easily auto-generate a C++ class which inlines all forwarding and is simply derived from the dispatch table.
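A single creation entry point also keeps foreign-language bindings small. As a rough sketch - not code from my library, and with the library name, the simplified LoadPng signature and the Image handle (treated as an opaque pointer) all made up for illustration - consuming such an API from Python via ctypes could look like this:

import ctypes

class ImgApi(ctypes.Structure):
    pass

# Function pointer type mirroring the first entry of the dispatch table above.
LoadPngType = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.POINTER(ImgApi),
                               ctypes.c_char_p, ctypes.c_void_p)

ImgApi._fields_ = [
    ('LoadPng', LoadPngType),
    # ... remaining entry points, in the same order as in the C struct ...
]

lib = ctypes.CDLL('libimgapi.so')             # hypothetical library name
api = ctypes.POINTER(ImgApi)()
lib.CreateMyImgIOApiV1_0(ctypes.byref(api))   # the single creation entry point
api.contents.LoadPng(api, b'test.png', None)  # call through the dispatch table
lib.DestroyMyImgIOApiV1_0(api)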

This is also where the generation part comes into play: I think for any API going forward, a declarative description - be it XML, JSON or something else - is a necessity. There's so much you'll want to auto-generate for the sake of usability that you should be thinking about this from day one.

Right now, this design, combined with a way to generate the dispatch tables, looks to me like the way to go in 2016 and beyond. You get an easy-to-use API for clients and a lot of freedom to build things on top, while keeping all the advantages of plain C APIs, like portability.

Storing vertex data: To interleave or not to interleave?

Recently, I've been refactoring the geometry storage in my home framework. Among other things, I also looked into vertex attribute storage, which we're going to dive into today.

When it comes to storing vertex data, there are basically two schools of thought. One says interleave the attributes, that is, store "fat" vertices which contain position, normal, UV coordinates and so on together - I'll refer to this as interleaved storage, as it interleaves all vertex attributes in memory. The other school says all attributes should remain separate, so a vertex consists of multiple streams, with each stream storing one attribute only, tightly packed.
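To make the two layouts concrete, here's a small sketch using NumPy, purely for illustration (my framework is not Python): the interleaved version is one array of structures, the de-interleaved version is one tightly packed array per attribute.

import numpy as np

vertex_count = 1000

# Interleaved ("fat" vertices): position, normal and UV live next to each other in memory.
interleaved = np.zeros(vertex_count, dtype=[('position', np.float32, 3),
                                            ('normal',   np.float32, 3),
                                            ('uv',       np.float32, 2)])

# De-interleaved: one tightly packed stream per attribute.
positions = np.zeros((vertex_count, 3), dtype=np.float32)
normals   = np.zeros((vertex_count, 3), dtype=np.float32)
uvs       = np.zeros((vertex_count, 2), dtype=np.float32)

# Position-only access touches every byte it reads in the de-interleaved case,
# while the interleaved layout drags normals and UVs through the cache as well.
print(interleaved['position'].shape, positions.shape)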

Why care?

Let's look at where vertex attribute storage matters:

  • On disk, as compression and read performance may be affected.
  • In memory, as some algorithms prefer one order or the other.
  • At render time, as it affects the required bandwidth and impacts performance on GPUs.

Rendering

We'll look at the last use case first - GPU rendering - as it's the easiest to explain. On the GPU, all APIs allow sourcing vertex attributes from multiple streams or from a single stream. This makes experiments very simple - and also highlights a few key differences.

The first thing that is affected is access flexibility. I have a geometry viewer, which may or may not have all attributes present for one mesh. With interleaved data, it's hard to turn off an attribute, as the vertex layout needs to be adjusted. With de-interleaved data, it's as easy as binding a null buffer or using a shader permutation which just skips the channel. One point for de-interleaved data.

The next use case is position-only rendering, which is very common for shadow maps. Again, de-interleaved data wins here due to cache efficiency. It's quite easy to see: if you only need positions, you get the best cache and bandwidth utilization if you separate them from the other attributes. With interleaved data, every cache line also fetches other attributes which you throw away immediately. Another point for de-interleaved data.

(Figure: vertices packed with gaps into a cache line. Unpacked reads: three attributes are fetched, but most of the cache line is not used, and only 4 vertices can be fetched before the next cache line must be touched.)

The last point is actually quite important for GPUs. On a GPU compute unit, you have very wide vector units which want to fetch the same data for all lanes in a given cycle - for instance, the position. If the data is de-interleaved, they can fetch it into registers and evict the cache line immediately. You can see that in the figure above: in the first iteration, the red x coordinate is read, then y, and finally z. It thus takes three reads to consume a whole cache line, and it can be evicted right away. With interleaved data, a cache line has to remain in cache until everything has been read from it, polluting the already small caches - so de-interleaved data will render slightly faster due to better cache utilization.

(Figure: vertices tightly packed in a cache line. Packed reads: 10 vertices can be read before moving to the next cache line.)

Is there actually a good reason to use interleaved data for rendering? I can't think of one, and as it turns out, I already switched my geometry viewer to de-interleaved data a few years ago and never looked back :)

In the offline rendering world, attributes have also long been specified separately, as a ray tracer mostly cares about positions. For this use case, cache efficiency matters most, so you want to keep the attributes separate as well, even on the CPU.

Processing

Here's the more interesting part. During the recent refactoring, I changed the mesh view abstraction to take advantage of de-interleaved data when fetching a single attribute. All the algorithms I had in place needed to be refactored to work with both interleaved and de-interleaved data, giving me a good idea of the advantages and disadvantages of each.

Turns out, there's only one algorithm in my toolbox which needs interleaved data so badly for performance that it will re-interleave things if it encounters a de-interleaved mesh. This algorithm is the re-indexer, which searches for unique vertices by storing a hash of each vertex together with a pointer to it, so it can do exact comparisons.
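For reference, the idea behind such a re-indexer is simple. Here's a stripped-down sketch (NumPy again, and it simply keys a dictionary on the raw vertex bytes rather than keeping hash-plus-pointer pairs like my implementation does):

import numpy as np

def reindex(vertices):
    # vertices: structured array of interleaved vertices (one record per vertex).
    # Returns the unique vertices plus an index buffer referencing them.
    remap = {}                 # vertex bytes -> new index
    unique_order = []          # original position of each unique vertex
    indices = np.empty(len(vertices), dtype=np.uint32)

    for i, vertex in enumerate(vertices):
        key = vertex.tobytes()  # exact comparison on the raw bytes
        new_index = remap.setdefault(key, len(unique_order))
        if new_index == len(unique_order):
            unique_order.append(i)
        indices[i] = new_index

    return vertices[unique_order], indices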

Except for that algorithm, all the others were working on a single attribute to begin with - mostly position - and are now slightly more cache efficient with de-interleaved data. I briefly measured performance, but it turns out that for "slim" vertices with position, normal and maybe one or two more attributes, the cache efficiency differences on CPUs are rather minimal - I'd expect bigger gains with heavy multi-threading and in bandwidth-restricted cases. The good news is that nothing got slower.

I'd call it a tie, due to the re-indexer. As I now expose a pointer and a stride to all algorithms, it's basically trivial to swap between the representations. For the re-indexer, I suspect there must be a better way to represent a vertex than a pointer plus a hash, which would also resolve that issue (maybe a stronger hash which doesn't collide would be enough ...).

Storage

So here comes the interesting part. My geometry storage is LZ4 compressed, and with compression, you'd expect interleaved data to lose big time against non-interleaved. After all, all positions will have a similar exponent, all normals will have the same exponent, and so on, and if they are stored consecutively, a compressor should find more correlation in the data.

Turns out, with the default LZ4 compression this is not quite true, and interleaved data actually compresses quite a bit better. For testing, I used the XYZRGB Asian dragon and converted it to my binary format, which stores positions as 3 floats and normals as 3 floats as well.

Storage        No index, compressed   Indexed, compressed   Indexed, compressed (HC)
Interleaved    169 MiB                138 MiB               135 MiB
Deinterleaved  189 MiB                138 MiB               132 MiB

It seems that LZ4 is actually able to get better compression out of interleaved data, which duplicates whole vertices and not just a single attribute. With indexed data it's a wash, and only with the high compression setting does the de-interleaved data pull ahead.

This is actually really surprising to me, and it looks like more analysis is warranted here. One thing that obviously did improve is loading time, since the data no longer has to be de-interleaved for rendering, but the difference is just a couple of percent. This is mostly because I bulk-load everything into memory, which dominates the I/O time.
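If you want to experiment with this yourself, a quick sketch along these lines works - it uses the Python lz4 bindings and random data, so the absolute numbers won't be representative of real meshes, but the methodology is the same:

import numpy as np
import lz4.frame

vertex_count = 100000
rng = np.random.default_rng(42)

positions = rng.random((vertex_count, 3), dtype=np.float32)
normals   = rng.random((vertex_count, 3), dtype=np.float32)

# Interleaved: position and normal side by side; de-interleaved: one block per attribute.
interleaved   = np.hstack([positions, normals]).tobytes()
deinterleaved = positions.tobytes() + normals.tobytes()

for name, data in [('interleaved', interleaved), ('de-interleaved', deinterleaved)]:
    print(name, len(lz4.frame.compress(data)))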

So on the storage side, it's one point for de-interleaved data in terms of performance, but one point for interleaved data for basic compression. I guess we can call it a tie!

Verdict

Overall, the advantages of having a fully de-interleaved pipeline outweigh the disadvantages I found on the storage and algorithmic front. As mentioned, except for one algorithm everything got slightly faster, and storage space is cheap enough for me that I don't care about the few percent of bloat in the general case. For archival storage, I even get some benefit with de-interleaved data, so de-interleaved it is :)

Debugging D3D12 fences & queues

Welcome to a hands-on session with DirectX 12. I was recently made aware by Christian of a synchronization problem in my D3D12 sample, which took multiple tries to fix (thanks again for reporting this!). The more interesting part, however, is how to find such a problem without doing a very close code review like Christian did, but by using some tools instead.

The setup

If you want to follow along, make sure to check out the repository at revision 131a28cf0af5. I don't want to give away too much in one go, so for now we'll just assume there is some synchronization issue and debug it step by step. Let's start by taking a look with the Visual Studio Graphics Diagnostics. For this, you need to install the Graphics Tools in Windows 10 -- Visual Studio should prompt you to get them when you start graphics debugging.

Without further ado, let's start the GPU usage analysis. You can find it under "Debug", "Start diagnostic tools without debugging", "GPU Usage". After the application ends, you should see something like this:

(Figure: d3d12-debug-gpuusage)

Let's select a second or so and use the "view details" button on this. The view you'll get should be roughly similar to the output below.

(Figure: d3d12-debug-gpuusage-detail)

That's a lot of things going on. To find our application, just click on one of the entries in the table below, and you should find which blocks belong to our application. In my case, I get something like this:

(Figure: d3d12-debug-gpuusage-detail-enhance)

OK, so what do we see here? The CPU starts after the GPU finishes, with some delay. Also, the GPU 3D queue is very empty, which is not surprising, as my GPU is not exactly taxed by rendering a single triangle :) Since we're running VSync'ed, we'd expect to be waiting for the last queued frame to finish before the CPU can queue another frame.

Let's try to look at the very first frame:

(Figure: d3d12-debug-gpuusage-detail-enhance-first-frame)

It looks like the CPU side is only tracked after the first submission, but what is suspicious is that the GPU frame time looks as if a single frame was rendered before the CPU was invoked again. We'd expect the CPU side to queue up three frames, though, so the first frame time should actually be three times as long. Can we get a better understanding of what's happening?

GPUView

Yes, we can, but we'll need another tool for this - GPUView. GPUView is a front-end for ETW, the built-in Windows event tracing, and it hasn't gotten much love. To get it, you need to install the "Windows Performance Toolkit". Also, if you use a non-US locale, you need to prepare a user account with en_US as the locale, or it won't work. Assuming you have everything ready, here's the one-minute guide to using it:

  1. Fire up an administrator command prompt
  2. Go to C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\gpuview
  3. Run your application
  4. Type log m, then Alt+Tab to your application
  5. Let it run a second or two, Alt+Tab back, and type log
  6. Run GPUView on the Merged.etl file.

Just like in the Visual Studio graphics analysis tool, you'll need to select a few milliseconds worth of time before you can make any use of the output. I zoomed in on three frames here.

(Figure: queue-gpuview-bad)

Notice the color coding for each application is random, so here my sample got dark purple. We can see it executing on the 3D queue, and at the bottom, we see the CPU submission queue.

You'll notice that suspiciously, just while the GPU is busy, the CPU queue is completely empty. That doesn't seem right - we should have several frames queued up, and the moment the GPU starts working (this is right after the VSync, after all!), we should be queuing up another frame.

Let's take a look at the present function. Conceptually, it does:

  1. Call present
  2. Advance to the next buffer
  3. Signal a fence for the current buffer

At the start of the next frame, we wait for the fence associated with the current queue slot - which happens to be the slot we just used! This means we're waiting for the last frame to finish before we issue a new one, draining the CPU queue, and that's exactly what we see in the GPUView output. Problem found! Fortunately, it's a simple one: we just need to wait on the right fence. Conceptually, the fixed per-frame bookkeeping looks like the sketch below. Let's fix this (and the initial fence values, while we're at it) and check again with GPUView.
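Here's that bookkeeping as a language-agnostic sketch (plain Python pseudocode, not the sample's actual code - the names and the callback parameters are made up for illustration): the fence for a slot is signalled by the frame that used it, and we only wait on that value when we are about to reuse the same slot, QUEUE_DEPTH frames later.

QUEUE_DEPTH = 3
fence_values = [0] * QUEUE_DEPTH  # fence value signalled by the frame that last used each slot
next_fence_value = 1
current_slot = 0

def render_frame(wait_for_fence, submit_and_present, signal_fence):
    global next_fence_value, current_slot
    # Block only until the frame that used this slot QUEUE_DEPTH frames ago has finished.
    wait_for_fence(fence_values[current_slot])
    submit_and_present(current_slot)
    # Signal for the slot we just used *before* advancing - waiting on the slot
    # we advance to would stall the CPU on the frame we just submitted.
    fence_values[current_slot] = next_fence_value
    signal_fence(next_fence_value)
    next_fence_value += 1
    current_slot = (current_slot + 1) % QUEUE_DEPTH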

(Figure: queue-gpuview-good)

That looks better - we see a present packet queued, and some data after it. Let's zoom in really close on what happens during the rendering.

(Figure: queue-gpuview-good-enhance)

What do we have here? Two present packets queued up while the GPU is processing the frame. Here we can also see how long it takes to queue up and submit the data to the GPU. Notice that the total time span we're looking at is on the order of 0.5 ms!

So finally, we fixed the problem and verified that the GPU is no longer going idle; instead, the CPU queue is always nicely filled. While in this example we're limited by VSync, in general you always want to keep the GPU 100% busy, which requires you to have one more frame's worth of work queued up. Otherwise, the GPU will wait for the CPU and vice versa, and even a wait of 1 ms on a modern GPU is on the order of 10 billion FLOPs wasted (in my example, on an AMD Fury X, we're talking about 8601600000 FLOPs per ms!). That's a lot of compute power you really want to throw at your frame :)