
What is OpenCL?

September 19, 2012

This is a short, basic introduction to OpenCL, aimed at customers who are curious about how their software works and at developers who are not yet familiar with massively parallel programming.

As a consumer, you might wonder why your new mobile phone comes with a quad-core processor and which applications can actually take advantage of it. Similarly, if you have a notebook, it probably has multiple cores already, yet some applications, like a word processor, don't run any faster, while others, like image processing tools, benefit a lot. How come? As a developer, you might have reached the point where you try to rewrite parts of your application to benefit from multi-threading, and you wonder why this is so complicated with the operating system's threading interfaces.

Due to NVIDIA’s excellent marketing, you’ve probably already heard about CUDA. On notebooks, there’s often a greenish “CUDA enabled” sticker. But what does it actually mean? And how does CUDA fit into the big picture?

The core problem in the hardware space right now is power usage: battery life in mobile devices is very important, and so is efficiency in desktop and notebook PCs. Chips can no longer simply be clocked higher, so a single program does not get faster on its own; multi-core CPUs, however, can run multiple programs at the same time. Each of them might run just as fast as it did a few years ago, but by running more of them, the overall throughput increases. That's the reason we're seeing more and more cores even in mobile phones. Graphics cards are also processors, with lots and lots of processing cores.

The big question with all these cores is how to make efficient use of them. What CUDA brought to the table was a programming model, inspired by the graphics APIs, which we now consider the best approach for highly parallel programs. This programming model comes with strong constraints: communication between work items is limited and memory accesses are more complicated, but in exchange certain problems can be solved very efficiently. For instance, many image processing tasks like blurring or adjusting colours map very well onto this model, as each pixel can be processed independently. However, an application written for CUDA is limited to NVIDIA's GPUs. This may be fine, but sometimes you don't have a GPU, sometimes the memory on the GPU is not enough, and sometimes AMD's GPUs might simply be faster at a given problem.
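To give a feel for this model, here is a minimal sketch of an OpenCL C kernel for one of those image operations, a simple brightness adjustment. The kernel and parameter names are made up for illustration; the point is that every pixel is handled by its own work item, with no communication between them.

```c
/* Illustrative OpenCL C kernel: one work item per pixel adjusts the
   brightness of an RGBA image. All names here are hypothetical. */
__kernel void adjust_brightness(__global const uchar4* input,
                                __global uchar4* output,
                                const float factor,
                                const int width,
                                const int height)
{
    const int x = (int) get_global_id(0);
    const int y = (int) get_global_id(1);

    /* Work items outside the image do nothing. */
    if (x >= width || y >= height) {
        return;
    }

    const int index = y * width + x;
    const uchar4 pixel = input[index];

    /* Scale the colour channels and saturate to the valid byte range;
       the alpha channel is passed through unchanged. */
    output[index] = (uchar4)(
        convert_uchar_sat(pixel.x * factor),
        convert_uchar_sat(pixel.y * factor),
        convert_uchar_sat(pixel.z * factor),
        pixel.w);
}
```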

Enter OpenCL: OpenCL is a standardized formulation of this parallel programming model, with constraints similar to CUDA's, but with much wider hardware support. From mobile phones to graphics cards to CPUs, OpenCL provides a unified interface for software developers. For you as a customer, this means you have to care less about the particular device at hand. Your image processing suite will work just fine on your smartphone and your notebook, and if you move it to your desktop PC you will get better performance; in every case, the software will use the hardware efficiently. With CUDA, a tool running on a notebook without an NVIDIA card might use only one CPU core and burn a lot of power. With OpenCL, chances are it will use all CPU cores and the integrated graphics chip as well, resulting in better performance and lower energy use.
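On the host side, the same API is used to discover whatever OpenCL hardware is present. The snippet below is a minimal sketch, assuming the OpenCL headers and a runtime are installed; it just lists every device the implementation exposes, whether it is a CPU, a discrete GPU or an integrated graphics chip. Error handling is omitted for brevity.

```c
/* Minimal host-side sketch: enumerate all OpenCL platforms and devices.
   Error checking is omitted to keep the example short. */
#include <CL/cl.h>
#include <stdio.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint platformCount = 0;
    clGetPlatformIDs(8, platforms, &platformCount);
    if (platformCount > 8) {
        platformCount = 8;
    }

    for (cl_uint i = 0; i < platformCount; ++i) {
        cl_device_id devices[16];
        cl_uint deviceCount = 0;
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_ALL, 16,
                       devices, &deviceCount);
        if (deviceCount > 16) {
            deviceCount = 16;
        }

        for (cl_uint j = 0; j < deviceCount; ++j) {
            char name[256];
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("Device: %s\n", name);
        }
    }

    return 0;
}
```

A CPU-only machine, a notebook with integrated graphics and a workstation with a discrete card all run this code unchanged; only the list of devices it prints differs.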

For you as a customer, OpenCL is yet another technique that makes your software run faster and improves battery life and power efficiency. It also makes it easier to compare hardware and choose what works best for your problem, as you get more choice. Finally, OpenCL is also heading to the web: in the future, we can expect image processing tools that run completely in the browser, and such tools are very likely to take advantage of OpenCL.

For you as a developer, OpenCL provides an API to target many massively parallel hardware platforms with the same code. This means less duplication, easier development and easier deployment. If you haven't given it a try yet, you should at least take a look now. Parallel programming is here to stay, and OpenCL provides the most gentle introduction to it.
