Article: Building Docker images for Unreal Engine 4
How to build both Windows and Linux Docker images for Epic Games’ Unreal Engine 4.
Tags: C++, DevOps, Docker, Unreal Engine
Posted: 28 April 2018
Supplemental information is available that expands upon this content.
The existing text of this article is preserved here for historical reasons and reflects the state of the Unreal Engine at the time that it was last updated in June 2018. Some of this information is still correct today, whereas some elements are either outdated or were incorrect in the first place (such as the inability to use `sudo` inside an Ubuntu container.) Supplemental information can be found here that expands upon this foundation and accurately reflects the state of Docker containers and the Unreal Engine today.
- Issues and their solutions
- Next steps
In the concluding remarks of the article Cross-platform library integration in Unreal Engine 4, I stated that my proposed integration workflow was only the first step in building the foundation of a Continuous Integration (CI) pipeline for UE4 that supports the seamless use of third-party libraries. Another key ingredient in a modern CI pipeline is the ability to use containers as part of the build process. Container-based builds provide a wide array of benefits over bare metal or virtual machine-based builds, including reduced resource requirements and a clean filesystem for each build. Additionally, the use of containers sidesteps the limitation that UnrealBuildTool is only designed to allow a single instance to run at any given time on a system.
Although there have been a small handful of efforts to build Docker-based containers for UE4, all of these have focussed solely on Linux containers. I am aware of no existing projects that aim to provide Dockerfiles for Windows container builds of UE4. Additionally, most of the existing Linux Dockerfiles appear to be geared toward the ability to run the UE4 Editor in a container on a standalone host, rather than to create an environment suitable for use in a CI pipeline.
After considerable experimentation, I have developed a set of Dockerfiles and an accompanying Python build script that provide the necessary functionality to build both Windows and Linux Docker images for Unreal Engine 4 which are suitable for use in a CI pipeline. The code is available from the adamrehn/ue4-docker GitHub repository. It is important to remember that you cannot upload the built images to a public Docker Registry such as Docker Hub, since the Unreal Engine EULA prohibits Engine Licensees from publicly distributing the Engine Tools in any form. You can share the built images with other Engine Licensees via a private Docker Registry so long as you ensure that you do so in a manner that complies with the “private sharing” terms in the EULA.
The Dockerfiles and Python code in the ue4-docker repository are well-commented and should be largely self-documenting. However, if you are interested in the issues that I encountered in developing these files and the solutions that I employed to solve them, the section that follows provides an in-depth discussion.
Issues and their solutions
Issues common to both Windows and Linux containers
- The container build process requires valid credentials to clone the UE4 GitHub repository. The Unreal Engine GitHub Repository (login required) is only accessible to Engine Licensees who have linked their GitHub account to their Epic Games account. Consequently, it is necessary to address the same issue that any Docker image encounters when attempting to clone a private repository - the need to supply valid authentication credentials. I solved this problem by implementing a Git credential helper and directing Git to use it via the `GIT_ASKPASS` environment variable. This is a common solution utilised by Continuous Integration systems when providing credentials to their build processes. In the case of my build script, an HTTP endpoint is started as a child process and the credential helper in the container uses the `curl` command to forward credential requests from the container to the host. To provide a rudimentary level of security, a secret token is generated and passed to the container to ensure that the endpoint only responds to the intended requests.
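The host side of this arrangement can be sketched roughly as follows. This is a minimal illustration of the idea rather than the actual ue4-docker implementation; the port number, token value, and response format are all assumptions made for the example:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical values: the real build script generates the token at startup
SECRET_TOKEN = "change-me"
CREDENTIALS = "my-github-username:my-personal-access-token"

class CredentialEndpoint(BaseHTTPRequestHandler):
    """Responds to credential requests that carry the expected secret token."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        token = self.rfile.read(length).decode("utf-8")
        if token == SECRET_TOKEN:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(CREDENTIALS.encode("utf-8"))
        else:
            self.send_response(403)
            self.end_headers()

    def log_message(self, format, *args):
        pass  # suppress per-request logging noise

def start_endpoint(port=9876):
    """Start the endpoint in a background thread and return the server."""
    server = HTTPServer(("0.0.0.0", port), CredentialEndpoint)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Inside the container, the `GIT_ASKPASS` helper would then be a small script that performs the equivalent of `curl -s -X POST -d "$TOKEN" http://host:9876/` and prints the response for Git to consume.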
- Non-constant environment variables invalidate the Docker build cache. An unfortunate side-effect of passing a unique security token to the container build process is that the `ENV` command in the Dockerfile sees a different value for every run and therefore cannot be cached. Since this is one of the first steps in the Dockerfile, this effectively prevents the Docker build cache from being used for the entire build process. This means that if the build process is interrupted for any reason, it must start all over again instead of resuming from the last successful step. To isolate the effects of the environment variables, I split the build process into two images, named `ue4-source` and `ue4-build`. The `ue4-source` Dockerfile contains only the steps necessary for installing the Git credential helper and cloning the UE4 GitHub repository, and will always run from the start if interrupted. The `ue4-build` Dockerfile contains the rest of the build steps, and is able to make full use of the Docker build cache.
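The two-stage build can be driven along these lines. This is a simplified sketch; the image tags match the names above, but the context directories and the `HOST_TOKEN` build argument are assumptions for illustration, not the exact ue4-docker interface:

```python
import subprocess

def build_command(image, context_dir, build_args):
    """Construct a `docker build` invocation for one stage of the build."""
    cmd = ["docker", "build", "-t", image]
    for key, value in build_args.items():
        cmd += ["--build-arg", "{}={}".format(key, value)]
    return cmd + [context_dir]

def build_all(token):
    """Build both images; only ue4-source sees the unique token, so the
    ue4-build stage remains fully cacheable."""
    stages = [
        ("ue4-source", "./ue4-source", {"HOST_TOKEN": token}),
        ("ue4-build",  "./ue4-build",  {}),
    ]
    for image, context, args in stages:
        subprocess.run(build_command(image, context, args), check=True)
```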
- Container memory and disk space limits must be configured for Windows containers or Linux containers running in a VM. When running Linux containers natively under a Linux host, modern versions of Docker do not impose any resource limits by default. However, if Linux containers are being run in a Virtual Machine (as is the case for both Docker For Windows and Docker For Mac), containers are limited by the resources allocated to the VM. By default, these allocations fall below the required minimum of 8GB of memory and (approximately) 100GB of disk space, and the UE4 build process will fail due to insufficient resources. Windows containers are subject to even stricter default resource limits (1GB of memory and 20GB of disk space.) In both scenarios, the user must first configure the Docker daemon to increase the resource limits. For Linux containers running in a VM, both the memory and disk space limits must be manually configured. For Windows containers, memory is controlled by the Python build script via the `-m` Docker flag, but the disk space limit must still be manually configured. The Python build script will detect the configured disk space limit for Windows containers and display an error if it is below 120GB. Detecting the resource limits of the Moby VM is outside the scope of the build script, and users running Linux containers in a VM are trusted to read the instructions and manually configure the resource limits prior to running a build.
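The disk space check can be sketched as follows. The `storage-opts` key is the standard Docker daemon configuration option for Windows container sandbox size, but the parsing logic here is a hypothetical stand-in for the build script's actual check:

```python
import json
import re

REQUIRED_GB = 120

def configured_size_gb(daemon_json_text):
    """Extract the container disk size limit (in GB) from a Docker
    daemon.json document, e.g. {"storage-opts": ["size=120GB"]}."""
    config = json.loads(daemon_json_text)
    for opt in config.get("storage-opts", []):
        match = re.fullmatch(r"size=(\d+)GB", opt)
        if match is not None:
            return int(match.group(1))
    return 20  # Windows containers default to a 20GB sandbox size

def check_disk_limit(daemon_json_text):
    """Raise an error if the configured limit is below the required minimum."""
    size = configured_size_gb(daemon_json_text)
    if size < REQUIRED_GB:
        raise RuntimeError(
            "configured disk limit is {}GB, at least {}GB is required".format(size, REQUIRED_GB)
        )
```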
Issues specific to Windows containers
- The default installation of Git for Windows includes Git Credential Manager. By default, the Git for Windows installer will include Git Credential Manager to store user-supplied authentication credentials. If the credential manager is installed, Git will ignore the presence of the `GIT_ASKPASS` environment variable and will not invoke the custom credential helper to forward requests to the host. This results in the `git clone` operation hanging indefinitely as it waits for credentials to be provided via standard input, which is not available in a container running in non-interactive mode. I solved this problem by simply adding the relevant flag to disable Git Credential Manager to the installation command for Git in the Dockerfile.
- The UE4 prerequisites installer breaks when running in a container. The executable `Engine\Extras\Redist\en-us\UE4PrereqSetup_x64.exe` is invoked by `Setup.bat` to install the runtime dependencies needed by the Engine under Windows. However, when running inside a Windows container, the prerequisites installer fails to recognise the version of the Visual C++ Redistributable Package that has already been installed by the Visual Studio 2017 Build Tools installer. It then attempts to install an older version and fails when the installation refuses to proceed due to the presence of a newer version. This failure is propagated to the setup script and halts the entire Dockerfile build process. Since the required runtime libraries are already present, I solved this problem by invoking a Python script that simply patches out the call to the prerequisites installer in `Setup.bat`.
- UnrealVersionSelector hangs when running in a container. The executable `Engine\Binaries\Win64\UnrealVersionSelector-Win64-Shipping.exe` is also invoked by `Setup.bat` to install the Windows Explorer shell integration for the Engine. When running in a container, UnrealVersionSelector appears to hang indefinitely. Since the shell integration is not required in the context of a CI pipeline, the Python script that patches out the call to the prerequisites installer also patches out the call to UnrealVersionSelector.
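The patching step described in this item and the previous one can be approximated like so. This is a sketch of the idea; the real ue4-docker patch script may match and rewrite the lines differently:

```python
def patch_setup_script(script_text):
    """Comment out any lines in Setup.bat that invoke the prerequisites
    installer or UnrealVersionSelector, leaving the rest untouched."""
    unwanted = ("UE4PrereqSetup_x64.exe", "UnrealVersionSelector")
    patched = []
    for line in script_text.splitlines():
        if any(marker in line for marker in unwanted):
            patched.append("rem PATCHED: " + line)  # batch file comment syntax
        else:
            patched.append(line)
    return "\n".join(patched)
```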
- Windows Server Core does not include the DirectX or OpenGL runtime files. Although the Windows 10 SDK that is installed by the Visual Studio 2017 Build Tools installer includes the DirectX SDK, it does not include all of the runtime DLL files that are required to run applications which have been built against the SDK. Both these DLLs and a set of OpenGL runtime files would ordinarily be provided by Windows, but this is not the case for Windows Server Core. In my testing with Unreal Engine 4.19, the only missing DLL that appears to be required for the Engine build process to complete is an XInput runtime. However, processing content (e.g. generating the Derived Data Cache for all Engine content or cooking the content for a UE4 project in order to package it for distribution) requires a variety of additional DLL files related to DirectX, OpenGL, and Vulkan. The full list of required DLLs and the locations from which the Dockerfile retrieves them are as follows:
  - `MSVCR100.dll` (installed via the Microsoft Visual C++ Runtime Chocolatey package)
  - `xinput1_3.dll` (extracted from the June 2010 DirectX End-User Runtimes package)
  - `D3DCompiler_43.dll` (extracted from the June 2010 DirectX End-User Runtimes package)
  - `X3DAudio1_7.dll` (extracted from the June 2010 DirectX End-User Runtimes package)
  - `XAPOFX1_5.dll` (extracted from the June 2010 DirectX End-User Runtimes package)
  - `XAudio2_7.dll` (extracted from the June 2010 DirectX End-User Runtimes package)
  - `dsound.dll` (copied from the host OS since this file ships with Windows and has no installer)
  - `opengl32.dll` (copied from the host OS since this file ships with Windows and has no installer)
  - `glu32.dll` (copied from the host OS since this file ships with Windows and has no installer)
  - `vulkan-1.dll` (extracted from the Vulkan SDK Installer)
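Gathering the host-provided DLLs into the build context can be sketched as follows. The directory arguments and the failure behaviour here are illustrative assumptions, not the exact ue4-docker logic:

```python
import os
import shutil

# The DLLs that ship with Windows itself and have no standalone installer
HOST_DLLS = ["dsound.dll", "opengl32.dll", "glu32.dll"]

def copy_host_dlls(system32_dir, context_dir):
    """Copy the host-provided DLLs from the host OS into the Docker build
    context so the Dockerfile can COPY them into the image."""
    copied = []
    for dll in HOST_DLLS:
        source = os.path.join(system32_dir, dll)
        if not os.path.isfile(source):
            raise FileNotFoundError("missing required DLL: " + source)
        shutil.copy2(source, os.path.join(context_dir, dll))
        copied.append(dll)
    return copied
```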
- The Debugging Tools for Windows are not installed by the Visual Studio 2017 Build Tools command-line installer. Creating an Installed Build of the Engine under Windows requires the PDBCopy tool for stripping symbols from debug symbol files. In Visual Studio 2013 and Visual Studio 2015, `pdbcopy.exe` was included in a standard package named `AppxPackage`. In Visual Studio 2017, PDBCopy was moved to the Debugging Tools for Windows (also known as WinDbg), which are an optional component of the Windows 10 SDK. However, the command-line installer for the Visual Studio 2017 Build Tools does not install WinDbg by default, nor does there appear to be an option available to change this behaviour. Fortunately, the windbg Chocolatey package provides an alternative method of installing WinDbg and obtaining `pdbcopy.exe`.
- UnrealBuildTool breaks if the container memory limit is not a multiple of 4GB. When running in a Windows container with a memory limit that is not a multiple of 4GB, UnrealBuildTool will sometimes crash with an error stating “The process cannot access the file because it is being used by another process.” Presumably this is a concurrency bug in UBT itself and not the result of some bizarre interaction with the container environment. I solved this problem by simply hardcoding the memory limit for Windows containers to 8GB in the Python build script.
- Windows containers sometimes crash due to a timeout error in `hcsshim`. Recent versions of Docker For Windows may sometimes encounter the error `hcsshim: timeout waiting for notification extra info` when building or running Windows containers. At the time of writing, Microsoft have stated that they are aware of the problem, but an official fix is yet to be released. Since this issue can block all build progress in some instances, I have implemented support for a workaround in the Python build script. During the course of my testing under Windows 10, I have observed that altering the memory limit for containers between subsequent invocations of the `docker` command appears to reduce the frequency with which the timeout error occurs. Since Windows 10 always utilises Hyper-V isolation for Windows containers, I postulate that changing the memory limit forces Docker to provision a new Hyper-V VM, preventing it from re-using an existing VM that may have fallen victim to whatever condition triggers the timeout. My workaround is the inclusion of support for a `--random-memory` flag in the Python build script. When this flag is present, the container memory limit will be set to a random value between 8GB and 10GB when the build script starts. If the build process fails due to the timeout error, simply re-run the build script and there is a good chance that the build will be able to continue (albeit potentially only for a short while until the error manifests itself again.) Using this workaround, I have been able to successfully complete builds of the Windows container images on systems where the process would have otherwise been all but impossible. It is worth noting, however, that this strategy can trigger the UnrealBuildTool concurrency error described in the previous list item. An unsuccessful run with the `--random-memory` flag should be followed by a run without the flag in order to effectively address both issues. (In one instance I actually encountered the concurrency error even when using the default memory limit, presumably due to some issue preventing Windows from allocating the full 8GB to the container. Restarting Windows was the only way I found to resolve the issue and restore the normal behaviour.)
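The `--random-memory` behaviour amounts to something like the following. This is a sketch under stated assumptions; the exact rounding and flag plumbing in the actual build script may differ:

```python
import random

def memory_limit_gb(random_memory=False):
    """Return the memory limit for Windows containers: a fixed 8GB by
    default, or a random value between 8GB and 10GB to encourage Docker
    to provision a fresh Hyper-V VM."""
    if random_memory:
        return round(random.uniform(8.0, 10.0), 2)
    return 8.0

def docker_memory_flag(random_memory=False):
    """Build the -m argument pair passed to the docker command."""
    return ["-m", "{}GB".format(memory_limit_gb(random_memory))]
```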
Issues specific to Linux containers
- The UE4 build process will not run as root. UnrealBuildTool refuses to build anything if it detects that it is running as the root user under macOS or Linux. As such, it is necessary to create a non-root user and invoke the build commands as that user. It is also necessary to ensure that a home directory is created for this user, since UBT resolves several paths in the current user’s home directory. In the Linux Dockerfile, I have named this non-root user `ue4`.
- The post-clone setup script is hardcoded to use `sudo`. The script `Engine/Build/BatchFiles/Linux/Setup.sh` expects to run as a non-root user and is hardcoded to use `sudo` when installing missing packages. Even after adding the `ue4` user to the sudoers group, I was unable to find a way to make `sudo` function correctly in an Ubuntu 17.10 container. Although it is certainly possible to prevent the relevant commands from ever being run by simply installing the required packages beforehand, such a solution is brittle and likely to break if future Engine versions alter the list of dependency packages. Instead, I have solved this problem by invoking a Python script that patches out all instances of `sudo` in the setup script. The setup script is then run as root to ensure that all `apt-get` commands function correctly, and finally the `chown` command is invoked to ensure any generated files (such as downloaded dependencies) are owned by the `ue4` user instead of the root user. The build can then proceed as normal, running as the `ue4` user.
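The `sudo` patching step can be sketched in the same manner as the Windows patches. This is illustrative only; the actual patch script may match the commands more precisely:

```python
import re

def strip_sudo(script_text):
    """Remove `sudo ` prefixes from commands in Setup.sh so that the
    script can be run directly as root inside the container."""
    return re.sub(r"\bsudo\s+", "", script_text)
```

After patching, the Dockerfile runs the setup script as root and then performs the equivalent of `chown -R ue4 .` so that the downloaded dependencies belong to the non-root user.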
- Installed Engine Builds do not include the binary for UnrealPak. As of Unreal Engine 4.19.2, the `InstalledEngineBuild.xml` BuildGraph script (login required) does not include UnrealPak in the list of build tools which are included in an Installed Build of the Engine. Since UnrealPak is required for packaging projects when `.pak` files are being generated, the Linux Dockerfile manually copies the UnrealPak binary from the existing build tree into the relevant directory in the Installed Build.
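The UnrealPak fix is a straightforward copy, along these lines. The relative path used here is an illustrative guess at a 4.19 Linux build tree layout, not a guaranteed exact location:

```python
import os
import shutil

def copy_unrealpak(source_tree, installed_build):
    """Copy the UnrealPak binary from the source build tree into the
    corresponding directory of the Installed Build."""
    relative = os.path.join("Engine", "Binaries", "Linux", "UnrealPak")
    source = os.path.join(source_tree, relative)
    destination = os.path.join(installed_build, relative)
    os.makedirs(os.path.dirname(destination), exist_ok=True)
    shutil.copy2(source, destination)
    return destination
```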
Next steps
With Docker images built for both Windows and Linux, the logical next step is to utilise them as part of a Continuous Integration (CI) pipeline. I will post an article in the near future with details on using the Docker images described above within the popular Jenkins CI system.
With regards to the distribution of built images, Epic Games themselves are the only entity with the legal right to alter the current situation. I have no doubt that many Engine Licensees would be delighted if Epic Games were to provide automated builds of UE4 Docker images via an Epic-hosted Docker Registry. Such a registry could be integrated with the Epic Games account system to provide authentication and would significantly simplify the developer effort required for Engine Licensees to implement self-hosted CI pipelines. In the meantime, the code in the ue4-docker repository provides developers with the ability to build and host UE4 Docker images themselves.