Package-Portable CMake Code
Sunday December 9, 2018 09:32:55

CMake doesn't have a package manager built into it, but does include support for manually installing packages and consuming them with the find_package command.

The idiom is that you write something like “find_package(Boost)” and from that point on have access to modern CMake targets (such as libraries and other entities) sitting in a documented namespace, such as “Boost”.

In a more traditional language, that would look like this:

Boost = find_package(Boost)  # Not real CMake code
...
# Add a dependency on the Boost header libraries to the library I'm building
target_link_libraries(my_library PUBLIC Boost::boost)

Unfortunately, the CMake language allows functions to create variables in the scope of the caller, and that's exactly what find_package does. So instead it looks like this:

find_package(Boost REQUIRED)
# Read the docs and know that a variable named `Boost` was just
# magically brought into scope by the find_package call above
target_link_libraries(my_library PUBLIC Boost::boost)

Ewww.

The second problem is that the exact variables created by find_package can be different for every single package. For example, to figure out what imported targets find_package(Boost) brings into scope you can look at the CMake docs. But what about for other libraries? Everyone is free to create package configuration files that do whatever they want. In fact, many find_package files don't even bring modern imported, namespaced targets into the parent scope, but instead dump global variables into the root namespace.

In short, there's no standard way of knowing how find_package will behave.

This problem is compounded by a third issue: many popular libraries don't have official CMake find_package support. Instead, multiple third parties may have written unofficial CMake packages, and there's no clear way to choose which one you will use.

Let's take the SDL2 libraries. If you want to use SDL2 with CMake, you could use the Bincrafters packages for Conan, or tcbrindle's CMake scripts, or the cget-compatible CMake scripts I wrote, or the SDL2 cget recipe Paul Fultz wrote, etc., etc.

All of these may 1. use different names with find_package (I use find_package(sdl2), but some people prefer screaming case, Ă  la find_package(SDL2)) and 2. bring in completely different variables.

So while find_package may be a standard idiom for “clean” CMake code, the moment we use one of these SDL2 packages we make our CMake script incompatible with any CMake code that uses a different SDL2 package. We also make it hard to change the SDL2 package we use in the future.

Writing Package-Portable CMake Code

What we want to do is break the dependency between our CMake scripts and the exact package we're using.

In my case, I want to be able to use the SDL2 libraries from two sources.

One is the Download SDL2 GitHub project, which I wrote for consumption with cget. You can install this with CMake using:

git clone https://github.com/TimSimpson/download-sdl2.git
cd download-sdl2
mkdir build && cd build
cmake -H../ -B./
cmake --build . --target install

This project installs the SDL2 libraries in a standard way to /usr/local/include and /usr/local/lib.

If you don't want to pollute /usr/local you can use the excellent cget to create a separate cenv, or prefix path, for your project and install SDL2 there, like so:

cget install TimSimpson/download-sdl2

Either way, your CMake code can from then on consume the SDL2 libraries by calling find_package(sdl2), which will bring the following imported targets into scope:

  • sdl2::sdl2 - The main SDL2 library.
  • sdl2::image - The SDL2 image library.
  • sdl2::ttf - The SDL2 TTF library.
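
Consuming these targets is ordinary modern CMake. Here's a minimal sketch; the my_game target and main.cpp are hypothetical names for illustration:

cmake_minimum_required(VERSION 3.8)
project(my_game CXX)

# Resolved from the prefix path the SDL2 package was installed to
find_package(sdl2 REQUIRED)

add_executable(my_game main.cpp)
# Depend on the imported targets listed above
target_link_libraries(my_game PRIVATE sdl2::sdl2 sdl2::image sdl2::ttf)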

I love using standard CMake package installs with Cget. I've been using it for about a year and found it to be sturdy, dependable, and intuitive.

But lately, I've been curious about using the Conan C++ package manager, mostly to speed up CI builds by downloading cached binaries. Currently my Travis builds need to build and install the SDL2 libraries before they build whichever project uses them, and this takes a long time. While I think it's good practice to build all the dependencies for any applications you plan on distributing from scratch, for simple CI builds I sometimes envy Conan's ability to function as a binary package store.

There are also a ton of useful packages available for Conan, mostly written by the Bincrafters team.

What I'd like to do is make my CMake projects which use the Download SDL2 GitHub project work with the Bincrafters SDL2 package as well.

Conan is a strange beast. It advertises itself as build-system agnostic, and in a certain sense that's true. But the flip side is that it's extremely difficult to make your build system Conan agnostic. It's very difficult to use Conan without tying your build process- and the build process of anyone consuming your work- to Conan.

Conan uses different “generators” to create build-system-specific glue code. There are actually four different generators for CMake, which lead to different results with find_package.

I've tried all four CMake generators but will only go over the two which are still in my memory:

The CMake paths generator creates a toolchain file. The appeal here is that you invoke CMake and give it this toolchain file, and it makes all the Conan stuff work in your CMake code, which can then be free of Conan-specific functions or features. In theory this means you could write a CMake file that's compatible with standard CMake but then use Conan to fetch your packages.

However, since Conan generates the CMake glue, the variables follow a somewhat strange automated format: find_package(SDL2) brings in SDL2::SDL2, while find_package(SDL2_Image) brings in SDL2_Image::SDL2_Image, etc.
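
For instance, a consumer written against that generated glue might look like the following sketch (my_game is a hypothetical target; the names simply follow the automated pattern described above):

find_package(SDL2 REQUIRED)
find_package(SDL2_Image REQUIRED)

add_executable(my_game main.cpp)
target_link_libraries(my_game PRIVATE SDL2::SDL2 SDL2_Image::SDL2_Image)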

The other approach is the plainly named CMake generator. This creates a .cmake file in your build directory that you have to include from your normal CMake file, tying your build to Conan. There are actually two variations on how this works as well:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup()

This approach pulls in all of the stuff brought in by Conan (libraries, include directories, etc.) and stores it in global CMake variables. In other words, it becomes unnecessary to use find_package at all, but the downside is that the global path variables used for includes and libraries are populated for every target in your CMake file, whether or not you wanted everything to depend on everything you brought in with Conan.
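
A sketch of what consumption looks like in this mode follows; my_game is a hypothetical target, and CONAN_LIBS is the catch-all variable the generated conanbuildinfo.cmake defines:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup()

add_executable(my_game main.cpp)
# The include directories are already global; link everything Conan brought in.
target_link_libraries(my_game ${CONAN_LIBS})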

The second approach is this:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS)

This instead pulls in more modern looking CMake targets. However, these targets are different than the ones created with the CMake paths generator I discussed above.

First, the targets for all packages are put into scope right away, without the need to call find_package. Second, everything is put into the CONAN_PKG namespace.

In this case, the SDL2 library is found in CONAN_PKG::sdl2, the SDL2 Image library is found in CONAN_PKG::sdl2_image, etc.
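
A consumer in this mode looks like the sketch below (my_game is again hypothetical):

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS)

add_executable(my_game main.cpp)
# No find_package needed; the CONAN_PKG targets already exist.
target_link_libraries(my_game PRIVATE CONAN_PKG::sdl2 CONAN_PKG::sdl2_image)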

So Conan can't even keep the CMake targets it creates consistent between its various generators.

What we need to do is find a way to make a typical “clean” CMake script- the kind that would work without Conan- use the CMake targets Conan's glue script brings in, without changing the “clean” script in a way that forces us to always use Conan.

The Integrated Build Problem

There's a similar problem at play with integrated builds.

An integrated build is when you have several CMake projects that define packages and can be installed, but rather than constantly installing them to a cenv you'd like to generate a single build script where they simply depend on each other.

Put another way: if you have project A, which builds library A, and project B, which uses library A by calling find_package(A), you sometimes would like to create a parent CMake script which calls add_subdirectory for projects A and B and then makes B simply depend on the library created by project A.

The problem is, when project B calls find_package it will look for an installed library A instead of the one being built in the same super project.

You could just run the install target for project A, but this takes extra time and is easy to forget about.

A simpler way is to just ensure that the targets brought into scope by find_package(A) are the same ones brought into scope by calling add_subdirectory on A's directory.

We can do this by creating a macro named find_package that overrides CMake's built-in find_package, making it ignore certain packages we don't want it to mess with- in this case our subprojects. It looks like this:

# Assume this CMakeLists.txt file will never be consumed via `add_subdirectory`
# and lives at the root of a directory which has projects A, B, and C
# living in it via git submodules or something similar.

set(subprojects A B C)

# Make a macro for `find_package`. CMake will automatically save the old
# find_package as `_find_package`.
macro(find_package)
    if(NOT "${ARGV0}" IN_LIST subprojects)  # Ignore packages A, B, and C
        _find_package(${ARGV})              # Run find_package for everything else
    endif()
endmacro()

# Assume that the CMake projects for A, B, and C are in the current directory
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/A)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/B)
add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/C)

Creating a macro for find_package is a dirty trick, but that's OK if the top-level CMakeLists.txt file isn't expected to be portable. The CMake code for projects A, B, and C is still portable and can be consumed in other use cases.
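
One detail worth noting: this trick only works if project A's in-tree CMakeLists.txt defines the same namespaced target name that its installed package config exports. A common way to arrange that- sketched here with assumed names, as it isn't shown in this post- is an ALIAS target:

# Inside project A's CMakeLists.txt (hypothetical sketch)
add_library(A src/a.cpp)
# add_subdirectory consumers now see the same A::A target that
# find_package(A) consumers get from the installed package config.
add_library(A::A ALIAS A)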

Achieving Package Portability

Going back to the issue of consuming the SDL2 libraries from either my own cget-compatible package or the Bincrafters Conan packages, we can use the same find_package trick to make the Bincrafters packages resemble the ones I created.

This will again require a new top-level CMake project. Since Conan users don't care about reusing CMake files and only want to consume the binary artifacts, we can just make a top-level CMake file that only works for Conan. In my case I'll just make a directory named conan and put it there.

Because this is a top-level CMakeLists.txt file that will include “clean” CMakeLists.txt files, we can feel free to resort to some hacks. In particular, I'm going to use Conan's plain cmake generator I mentioned above, which requires calling a Conan-specific function to introduce global targets. The result looks like this:

include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
conan_basic_setup(TARGETS)

While this does bring a lot of targets into scope, they're all in the CONAN_PKG namespace, so they shouldn't interfere with the clean CMake code that will be included later.

Next we have a problem with header-only libraries. In typical CMake usage, these libraries are installed to a system or user directory (such as /usr/local/include) where they become available globally. But with Conan, the headers are only available if you depend on the modern targets.

A good example is the GSL. If you install it using CMake, the headers get plopped right into /usr/local/include or wherever the include directory is on your current prefix path. There's no need to specify anything about them from within CMake, since any source file can just include them. But if you use Conan, any library that depends on them must declare a dependency on CONAN_PKG::gsl_microsoft, since Conan stores the include files for every package in a unique directory.

In some ways that's cleaner, but it's also not the way the built-in CMake support for the GSL operates. To make the Conan GSL package act like normal, we'll just take its include directories and add them to the global include directories list:

function(add_header_library SRC)
    # Copy the imported target's include directories onto the global include
    # path so any source file in this build can use the headers directly.
    get_property(var TARGET ${SRC} PROPERTY INTERFACE_INCLUDE_DIRECTORIES)
    include_directories(${var})
endfunction()

# look at the generated file `conanbuildinfo.cmake` to find the names of the
# targets created for each Conan package.
add_header_library(CONAN_PKG::gsl_microsoft)

Next up we have actual binary libraries, like the SDL2.

Conan already gives us imported targets (“CONAN_PKG::sdl2”, “CONAN_PKG::sdl2_image”, etc.). We just want these to have a slightly different name (“sdl2::sdl2”, “sdl2::image”, etc.).

Ideally there'd be a way in CMake to create an “empty” library target which just bundles dependencies. For example, we could create an alias library called “sdl2::sdl2” and then add dependencies on the CONAN_PKG targets.

Unfortunately that doesn't work. The cleanest approach I've found is to create INTERFACE IMPORTED libraries and then extract all of the relevant properties from the CONAN_PKG targets and copy them onto the new targets manually. Here's a function that seems to clone all of the necessary properties:

function(clone_library DST SRC)
    add_library(${DST} INTERFACE IMPORTED)

    get_property(var TARGET ${SRC} PROPERTY INTERFACE_LINK_LIBRARIES)
    set_property(TARGET ${DST} PROPERTY INTERFACE_LINK_LIBRARIES ${var})

    get_property(var TARGET ${SRC} PROPERTY INTERFACE_INCLUDE_DIRECTORIES)
    set_property(TARGET ${DST} PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${var})

    get_property(var TARGET ${SRC} PROPERTY INTERFACE_COMPILE_DEFINITIONS)
    set_property(TARGET ${DST} PROPERTY INTERFACE_COMPILE_DEFINITIONS ${var})

    get_property(var TARGET ${SRC} PROPERTY INTERFACE_COMPILE_OPTIONS)
    set_property(TARGET ${DST} PROPERTY INTERFACE_COMPILE_OPTIONS ${var})
endfunction()

We then create the necessary libraries in our find_package macro when the caller asks for the relevant package:

macro(find_package)
    if("${ARGV0}" STREQUAL "sdl2")
        clone_library(sdl2::sdl2 CONAN_PKG::sdl2)
        clone_library(sdl2::image CONAN_PKG::sdl2_image)
        clone_library(sdl2::ttf CONAN_PKG::sdl2_ttf)
    elseif(NOT "${ARGV0}" IN_LIST subprojects)
        # Ignore subprojects; run the real find_package for everything else
        _find_package(${ARGV})
    endif()
endmacro()

Finally, we include the “clean” CMake file using add_subdirectory. If this is a typical CMake project where the clean CMakeLists.txt file lives in the root directory, then conan/CMakeLists.txt will need to go up a directory to reference it, like so:

add_subdirectory(${CMAKE_CURRENT_SOURCE_DIR}/..
                 ${CMAKE_CURRENT_BINARY_DIR}/output)

Voilà! The CMake project now works with both package sources.

Here is the final result.



A Simpler Way to Build Boost
Thursday December 21, 2017 09:32:55

Yesterday I wrote a post about using a lame Python script to rename the Boost binaries built by Bjam. Today's post is about how such a script is completely unnecessary; you can just tell Boost to name the files differently.

I've been using the option --build-type=complete to build the Boost libraries for years. It basically tells Boost to build all of the different variations possible for a library and a given compiler, which helped me in my previous life when I used nothing but Boost Build (so the tree was organized as “library/variants”). However, in the modern cenv era, where I use prefix paths for each build variation which then contain the libraries (so the tree is “variant/libraries”), there's no need for this.

So instead, when calling b2, pass in --layout=system, which gets Boost to avoid all of its squirrelly naming conventions. This works just fine, since the resulting binary will be the only file for a given Boost library sitting in a cenv.

So: the final install process looks like this:

cd boost-directories
bootstrap.bat
cenv set win64-debug
b2.exe --clean-all
b2.exe --stagedir="%CGET_PREFIX%" --toolset=msvc-14.1 address-model=64 debug link=shared --layout=system stage -j8 --with-system

First off, win64-debug is a cenv I created with cenv init win64-debug -DCMAKE_GENERATOR_PLATFORM:none=x64 -DCMAKE_BUILD_TYPE:none=Debug. The toolchain arguments for cenv init go to CMake and tell it to build for 64-bit (ideally they would also force debug builds, but that doesn't work on MSVC++ for reasons I won't get into).

When I run b2.exe, this toolchain info is translated into Bjam-ese, with CMAKE_GENERATOR_PLATFORM=x64 turning into address-model=64. So for any given cenv you'll probably need to Google the translation from CMake to Boost Build to make sure your builds agree.

Finally, --with-system tells Boost to build the “system” library. To see the possible libraries, use b2 --show-libraries. It's also possible to specify multiple --with args on the command line (e.g. --with-system --with-filesystem).

So: voilà! Assuming you mucked with CMake to make it work with the most recent version of Boost, you should be good to go. This is about as simple as I've ever gotten the process of using the Boost libraries to be.

Note that the environment variable BOOST_ROOT still needs to be set; otherwise, we'll have to copy the Boost headers into our cenv. That's as simple as copying the boost directory into the include directory of the cenv, but since it takes up 114 MB- and I seem to be constantly running out of the paltry 256 GB of disk space on the solid-state drives I'm using- I prefer not to.
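
For what it's worth, consuming the staged libraries from CMake then looks something like this sketch (the app target and main.cpp are hypothetical, and this assumes the FindBoost fix described in the previous post):

find_package(Boost 1.66 REQUIRED COMPONENTS system)

add_executable(app main.cpp)
# Boost::system is the imported target FindBoost creates for the system library
target_link_libraries(app PRIVATE Boost::system)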

Note: cget also has some interesting built-in support for building and installing Boost, so you may want to look into using that. However, it requires copying an entire distribution of all the Boost libraries to a temporary directory before it even does anything- which involves an extra 524 MB!!- which may be a deal breaker if you're as perpetually low on disk space as I am.



Fixing CMake 3.10.1 to work with Boost 1.66
Wednesday December 20, 2017 09:32:55

I went to use the latest version of Boost yesterday only to find I couldn't make it work. “Hmm,” I thought, "didn't I just sacrifice hours of my life to do this very thing and write a blog post about it so I could remind myself later?"

After digging through CMake's included FindBoost.cmake file, it turns out that Boost Build changed behavior this release to put the architecture and address model into the library names it spits out. So the copious amount of code in FindBoost.cmake looks for a file named boost_coroutine-vc141-mt-gd-1_66 but doesn't find it, because in Boost 1.66 that file is named boost_coroutine-vc141-mt-gd-x64-1_66 (the x64 is new; the docs confirm this).

Second problem: CMake's Find Boost module (FindBoost.cmake) is oddly insistent on *not* defining imported library targets for the Boost libraries if it doesn't know what version of Boost you're using. This is kind of a big deal, as it means you can never use a new version of Boost correctly until CMake's authors figure out what is needed for that version of Boost and update FindBoost.cmake themselves.

As a hideous hack, this problem can be worked around as follows:

  1. (UPDATE: there's a way easier way to deal with this, see here for info.) Using a python script, copy all of the binary files (dlls and stuff) to a cenv, renaming them so they no longer contain x64. Here's the script, which must be run from the directory containing the Boost binaries and requires setting the environment variable CGET_PREFIX (which is done automatically for me by my tool Cenv):
    import os
    import shutil
    
    prefix = os.environ['CGET_PREFIX']
    
    for filename in os.listdir('.'):
        if '-x64' in filename:
            new_filename = filename.replace('-x64', '')
            new_path = os.path.join(prefix, 'lib', new_filename)
            print('{} -> {}'.format(filename, new_path))
            shutil.copyfile(filename, new_path)
    

  2. Edit the FindBoost.cmake file (on my machine, it's located at C:\Program Files\CMake\share\cmake-3.10\Modules\FindBoost.cmake) and change the following bit of code:
    if(NOT Boost_VERSION VERSION_LESS 106600)
      message(WARNING "New Boost version may have incorrect or missing dependencies and imported targets")
      set(_Boost_IMPORTED_TARGETS FALSE)
    endif()
    

    to
    if(NOT Boost_VERSION VERSION_LESS 106600)
      message(WARNING "New Boost version may have incorrect or missing dependencies and imported targets")
      # set(_Boost_IMPORTED_TARGETS FALSE)
    endif()
    

I'll admit I'm having a hard time understanding why the authors of CMake were so persnickety about only creating the import targets when they themselves had blessed a new Boost version; it would seem to encourage people using bleeding-edge versions of Boost to avoid the import targets in favor of the other variables the Find Boost module spits out, which doesn't seem to fit the spirit of modern CMake.

I'm sure they had their reasons, but I'd argue that for users it makes sense to simply alter our own versions of CMake so it will behave as expected and avoid littering our own CMake scripts with workarounds that won't be necessary in a future release of CMake anyway.



Keeping Clean with Cenvs
Saturday December 9, 2017 09:32:55

When I work with other languages on large software projects, the workflow is typically:

  • Grab the source code, extract it, cd into that directory.
  • Run some standard build tool. Usually this tool is well known and completely accepted by the programming language's community (Maven, Tox, Cargo, etc.)- at least compared to C++, where every few years I hit a wall and have an existential crisis where I contemplate how I'm building software and spend ages learning another tool.
  • See it create a pristine directory to host all build artifacts.

I've been amazed how in C++ this last step is so different. Most tools, instead of creating a single directory with the output of the build process, pollute whatever directory they're currently in with zillions of object files and associated build artifacts. There's a historical reason for this: Make does things this way, Ninja was inspired by Make, and CMake wants to work with all these tools so it has to follow suit. But it's really gross, and coming from other programming language cultures it feels extremely unintuitive.

In the same vein, “standard” package installs are pretty gross: instead of polluting the current directory with artifacts, they pollute your entire machine by affecting any software you build afterwards.

Typically, a package install works by invoking an install target, such as running “sudo make install”. This copies libraries, header files, and other stuff to system directories such as /usr/lib, /usr/include, /usr/bin, etc.

So if you're working on a project and want to pull in a dependency, like SDL2, you'd download SDL2, run sudo make install, and then be able to use SDL2 from your project without including SDL2's source inside your own project or “vendoring” the dependency (a euphemism for shoving all the build artifacts into source control).

I've typically avoided installation processes like this because:

  • They require sudo.
  • Installing globally clearly doesn't scale if you plan to work on two projects which require different versions or build variants of the same dependency.
  • Because you install the package globally, afterwards it's easy to forget you had to cross this hurdle. If you're not constantly documenting things (which is very possible when you're spinning up a new project) you may not even remember that you had to do anything a year later when you're solving the mystery of “why does this not build correctly on the new guy's machine” (don't say CI will fix this; it's just as easy to bake this kind of dependency into a CI box or base image).
  • On Windows this procedure probably doesn't work at all, or uses some different standard the author of the library or build tool invented that installs things to unpredictable locations. Or worse, it uses the actual Windows standards.
  • Uninstalling the package is not possible because the installation process isn't very well tracked (unlike using a Windows MSI or a Debian package) so you have to guess what needs to be removed.

Thankfully, there's a way to avoid globally installing packages: prefix paths.

These are root directory paths that override the default “system” paths. So instead of files being copied to /usr/lib they go into ${prefix_path}/lib, /usr/include becomes ${prefix_path}/include, etc.

Since the idiom of C and C++ package installs is only an idiom and not enforced by a contract between build systems, the way you specify prefix paths differs between tools.

In CMake the standard is to set the variables CMAKE_INSTALL_PREFIX to tell it where to put packages, and CMAKE_PREFIX_PATH to tell it where to find them.
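
As a rough sketch, a toolchain file that turns a directory into a prefix path might contain something like this (the directory here is hypothetical):

# Minimal sketch of a "cenv" toolchain file
set(CMAKE_INSTALL_PREFIX "$ENV{HOME}/cenvs/gcc-debug" CACHE PATH "Where installs go")
set(CMAKE_PREFIX_PATH "$ENV{HOME}/cenvs/gcc-debug" CACHE PATH "Where packages are found")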

It's helpful for me to imagine each directory that can be used as a prefix path as its own semi-isolated environment for C and C++ dependencies. I call this a C environment, or just cenv for short.

A cenv is isolated in that it can't be affected except by packages installed globally. Since cenvs don't affect each other, you can protect them from external influences by keeping your system clean and never installing packages globally.

Once you realize that a mechanism for cleanly installing C/C++ libraries exists, it's easy to imagine how to achieve a nice workflow similar to other languages:

  • Create a new cenv.
  • Download, build and install whatever packages you want your project to depend on, such as the Boost headers or SDL2, to the cenv.
  • Build the project you're working on, using the cenv to pick up the packages installed earlier.

Unfortunately, installing packages is still a somewhat difficult process that entails checking out source code, generating build files in CMake, and installing it to your cenv.

Thankfully we can use a tool called cget to download and install CMake based projects.

In recent years a series of package managers has been introduced for C++. What makes cget different is how simple it is; most of these tools have introduced their own ideas about what it means to install a package, while cget went along with the CMake standards, which were themselves based on common idioms already in use in Makefile-based projects.

The one area where cget breaks from the norm is that it doesn't install packages globally by default. Instead, by default cget creates a brand new cenv for you in the current directory, in a directory named ./cget. It also creates a CMake toolchain file in this directory, which sets CMAKE_INSTALL_PREFIX and CMAKE_PREFIX_PATH to use the cenv.

(Note: cget calls this directory a “new prefix path”, but I think the name “cenv” represents it better.)

Using cget looks like this:

cd your-project-directory  # This contains a CMakeLists.txt which uses GLM
cget init  # creates a new cenv at `./cget` if none exists.
# Downloads the 0.9.8.5 release of glm from Github, creates a build directory
# somewhere inside of `./cget`, builds glm and installs it to locations in
# the new cenv such as `./cget/include`.
cget install g-truc/glm@0.9.8.5
mkdir build && cd build
# -DCMAKE_TOOLCHAIN_FILE tells it to use cget's cenv
cmake -DCMAKE_TOOLCHAIN_FILE=../cget/cget/cget.cmake -H../ -B./
cmake --build ./

The code above creates a cenv, installs the GLM library to it, then builds the CMake project in the directory by passing the toolchain file cget created for the cenv to CMake.

(If you're curious about how CMake itself consumes GLM, somewhere in the CMakeLists.txt file will be “find_package(glm)” which will look in the cenv for package info on GLM. This blog post is already pretty long so I'll be explaining how this works in another one, but essentially if CMake knows where to look it can find libraries and header files that are installed in the typical way.)
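
For a picture of what that consuming CMakeLists.txt might contain, here's a minimal sketch (the demo target is hypothetical, and the exact name of the target GLM exports can vary between versions):

cmake_minimum_required(VERSION 3.8)
project(demo CXX)

# Looks for GLM's package config in the cenv on CMAKE_PREFIX_PATH
find_package(glm REQUIRED)

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE glm)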

The string you pass to cget install is called a “package source”. This can be the name of a project in GitHub (for example, above we fetched branch 0.9.8.5 of GLM), a file path on your local machine, a URL to a tar.gz file, or other more exotic types beyond the scope of this blog post.

cget can also accept as a package source a text file containing a list of package sources. By convention this file is called requirements.txt.

This means it's now possible to make a typical C++ CMake project and distribute it with a file called requirements.txt in the root. Users can then install all the necessary packages with cget before building or installing the source code of our package.

Since cget is based on existing CMake and Make idioms and standards, this also means that if they're weirdos who don't want to use cget, they can still get information on what packages our project needs.

If we created a requirements.txt file for our project above, we could fill it with:

g-truc/glm@0.9.8.5

With a requirements.txt file in the root of our project our new process becomes:

cd your-project-directory
cget init
cget install -f requirements.txt
mkdir build && cd build
# -DCMAKE_TOOLCHAIN_FILE tells it to use cget's cenv
cmake -DCMAKE_TOOLCHAIN_FILE=../cget/cget/cget.cmake -H../ -B./
cmake --build ./

If someone else wants to install our project using cget, cget will find the requirements.txt file and install the dependencies we require first.

If you want to build your project with different compilers or otherwise use multiple configurations, you'll need more than one cenv. It's possible to make cget create cenvs in different locations by passing --prefix to cget init. The environment variable CGET_PREFIX can also tell cget to use a cenv other than the directory cget in the current directory.

Additionally, we can pass arbitrary toolchains as well as certain CMake settings to cget init to cause it to include those toolchains from the cenv toolchain file it creates.

This means building for multiple configurations looks like this:

# Build with GCC 6 in debug mode
cget init --prefix gcc-debug -DCMAKE_C_COMPILER:none=gcc-6 -DCMAKE_CXX_COMPILER:none=g++-6 -DCMAKE_BUILD_TYPE:none=Debug
export CGET_PREFIX=$(pwd)/gcc-debug
mkdir build-gcc && cd build-gcc
cmake -DCMAKE_TOOLCHAIN_FILE=../gcc-debug/cget/cget.cmake -H../ -B./
cmake --build ./
# Now build with Clang in release mode
cd ..
cget init --prefix clang-release -DCMAKE_C_COMPILER:none=clang-3.8 -DCMAKE_CXX_COMPILER:none=clang++-3.8 -DCMAKE_BUILD_TYPE:none=Release
export CGET_PREFIX=$(pwd)/clang-release
mkdir build-clang && cd build-clang
cmake -DCMAKE_TOOLCHAIN_FILE=../clang-release/cget/cget.cmake -H../ -B./
cmake --build ./

Creating a cenv in the root of each project you're working on is probably fine for some people, but I discovered I quickly grew sheepish about creating brand new cenvs for all my little projects; I seem to always be on the verge of filling the solid state drives where I do all my work. Additionally certain packages- such as Boost- can take up a ton of space.

This is a very similar problem to the one faced by Python developers who use virtualenvs, which are like cenvs but for Python projects. For the purposes of testing and CI, a virtualenv for every project makes sense, but depending on how prolific you are this can get expensive. Tools like pyenv and virtualenvwrapper help by maintaining a list of virtualenvs that are available globally from a shell session and can be easily switched between.

I liked this workflow, so I did the same thing for cenvs by building a tool called, confusingly enough, Cenv (installing it is made to be simple even for those unfamiliar with Python, and it also installs cget).

Cenv manages a group of cenvs stored at ~/.cenv (C:\Users\your-name\.cenv on Windows). You create and list them like this:

$ cenv init gcc-debug -DCMAKE_C_COMPILER:none=gcc-6 -DCMAKE_CXX_COMPILER:none=g++-6 -DCMAKE_BUILD_TYPE:none=Debug
$ cenv init clang-release -DCMAKE_C_COMPILER:none=clang-3.8 -DCMAKE_CXX_COMPILER:none=clang++-3.8 -DCMAKE_BUILD_TYPE:none=Release
$ cenv list
  gcc-debug
  clang-release

You can activate one of these cenvs by calling cenv set:

$ cenv set gcc-debug
* * using gcc-debug
$ cenv list
* gcc-debug
  clang-release

“Activating” a cenv does two things:

  • It sets the CGET_PREFIX environment variable, so cget uses that cenv.
  • It adds the lib directory of the cenv to the PATH and LD_LIBRARY_PATH environment variables. This is necessary to run executables that have been linked to shared libraries or DLLs that were installed to the cenv (the alternative would be installing them globally or copying all of the needed shared libraries and DLLs to the same directory as the executable you're building, which is wasteful).

Cenv also wraps the cmake command so that it always passes in -DCMAKE_TOOLCHAIN_FILE=${CGET_PREFIX}/cget/cget.cmake, meaning you get to stop thinking about that.

Running cenv set (to switch cenvs) or cenv deactivate undoes these changes (it also smartly removes the entries that were added to PATH and LD_LIBRARY_PATH).

With Cenv installed, building a project for two different configurations looks like this:

# Build with GCC 6 in debug mode
cenv set gcc-debug
mkdir build-gcc && cd build-gcc
cmake -H../ -B./
cmake --build ./
cd ..
cenv set clang-release
mkdir build-clang && cd build-clang
cmake -H../ -B./
cmake --build ./

It took me a while to appreciate cget and the standard CMake practices it was advocating for, mostly due to the fact that CMake itself, while being a useful, high-quality tool, is loaded with so many options and settings that using it the right way isn't immediately clear, and often involves overly verbose arguments that made it feel like I was on the wrong path even when I wasn't.

However, at its core, package installation with CMake is simple, and I'd argue its inner workings are easier to understand than those of the current competing packaging tools for C++. Though cget is extremely useful, its source code is tiny, as it focuses on solving a few small problems very well. It makes existing CMake practices easier to use instead of inventing its own standards and procedures for installing C++ packages and furthering the babel of sorts the community is headed towards. As someone who has looked at most of the other C++ package managers, I think cget's approach is ultimately the simplest and most maintainable.

I believe cget and Cenv collectively rub most of the rough edges off of CMake, leaving a workflow that is scalable and pleasant. You can install both today by following the instructions on Cenv's README.

In a future blog post, I hope to discuss the basics of writing CMake files which correctly install and consume packages from cenvs.




