Building Packages from Source¶
Reproducible Builds¶
venvstacks is designed around ensuring that the layer archives it
produces are reproducible: if the same stack definition is built again
later with the same build environment, then the resulting layer archives
will be byte-for-byte identical to those produced by the original stack
build.
One of the ways this is achieved is by requiring that all Python packages
included in a stack build be provided as pre-built
binary wheels.
This allows the layer lock files to record the exact binary hashes of their
expected inputs, while the deterministic installation process for binary wheels
avoids introducing variation into the layer archive output. (Avoiding the
potentially build-location-dependent aspects of wheel installation is also the
reason some package features, such as direct execution scripts, are not
available in the layer environments created by venvstacks.)
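As a minimal illustration of the kind of input pinning involved (the helper below is hypothetical, not part of the venvstacks API), the hash a lock file records for a wheel is simply a digest of the archive's bytes:

```python
import hashlib
from pathlib import Path

def wheel_sha256(wheel_path):
    """Return the sha256 hex digest a lock file would record for a wheel."""
    return hashlib.sha256(Path(wheel_path).read_bytes()).hexdigest()

# Any byte-for-byte identical archive produces the same digest, which is
# what lets rebuilt layers be checked against the locked input hashes.
```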
While venvstacks does not natively support building components from source
references, some users may not wish to use publicly available binary artifacts,
or may depend on projects that don’t provide such artifacts. This section
provides some suggestions and recommendations for handling these situations.
Building artifacts from source¶
The first task is determining which artifacts need to be built and then actually building them.
Systematic builds of entire dependency trees¶
The Fromager project is designed specifically for this task. Quoting the project’s goals, Fromager is designed to guarantee that:
- Every binary package you install was built from source in a reproducible environment compatible with your own.
- All dependencies are also built from source, no prebuilt binaries.
- The build tools themselves are built from source, ensuring a fully transparent toolchain.
- Builds can be customized for your needs: applying patches, adjusting compiler options, or producing build variants.
That last point includes ensuring that built components with external dynamic library dependencies are linked against the desired versions of those libraries.
Note that building complex stacks with Fromager may require passing --skip-constraints,
as venvstacks intentionally allows layers that don’t depend on each other
to specify conflicting package version constraints.
Selective builds of required packages¶
If appropriately configured, Fromager can technically support this approach as well. However, providing details of such a configuration is use case dependent and hence beyond the scope of these build suggestions.
More commonly, the components that require building are found by repeatedly running venvstacks and recording the projects that the reported locking failures indicate have no binary wheels available for installation.
Once that list of projects is available, a use case dependent mechanism can then be used to coordinate downloading the source artifacts and producing appropriately built wheels for the platforms of interest (essentially an ad hoc approach to the problem that Fromager tackles systematically).
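For example, a wheel for a project identified this way might be built from its sdist with pip (the project name and output directory below are placeholders for the actual projects and paths involved):

```shell
# Force a from-source build of the named project, ignoring any
# published wheels, and collect the result in a local directory
pip wheel --no-binary :all: --wheel-dir ./wheels some-project
```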
Including private artifacts in layer builds¶
Once the binary wheels are available, the second task is to include those wheels in the layer locking and building process.
Private index servers¶
The primary recommended approach is to run the stack builds against a private index server such as a self-hosted devpi instance, or a cloud-hosted repository service such as JFrog’s Artifactory or Astral’s pyx.
These can be set up to serve as both a caching proxy for publicly available packages
and a host for privately built packages, allowing the stack builds to be
appropriately configured with a single tool.uv.index entry in the stack
definition file:
[[tool.uv.index]]
url = "https://internal.example.com/pyindex/"
default = true
More complex arrangements using the package_indexes and priority_indexes layer specification settings are also possible. The published examples demonstrate such configurations using the public PyTorch repositories, as those are the kinds of parallel build scenarios where the simple caching proxy override approach may be insufficient.
Local wheel directories¶
Prior to the addition of index server configuration support, the only provided
mechanism for including additional wheels in the layer locking and building
process was to pass the --local-wheels option to the venvstacks CLI.
This mechanism is still supported, with no plans to remove it, but there may
be some situations where uv will be unable to lock a stack defined this way,
while being able to successfully lock a stack that uses an appropriate index
server configuration instead.
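Assuming a local ./wheels directory and a stack definition named venvstacks.toml (both names are placeholders, and the exact subcommand layout may vary between versions), the option is passed on each relevant invocation:

```shell
# Make locally built wheels visible when locking and building the stack
venvstacks lock --local-wheels ./wheels venvstacks.toml
venvstacks build --local-wheels ./wheels venvstacks.toml
```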
As layer locking is cross-platform, wheels for all target platforms must be
available when generating the layer lock files. If wheels for some platforms
are absent when locking, the missing pylock.toml entries mean that the
affected layers cannot be built on those platforms, even if the wheels for
those platforms are available at build time.
The paths to local wheels are recorded in the layer lock files as relative paths, so their position relative to the lock files must be maintained between the locking environment and the layer building environments (for example, if a git repository is used to maintain the history of the layer lock files, the local wheels can be checked into the same repository with Git-LFS).
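The practical effect can be sketched with pathlib (the directory layout and wheel name below are hypothetical): a relative wheel path recorded next to the lock files only resolves correctly if the build environment preserves the same layout.

```python
from pathlib import Path

# Hypothetical layout: lock files in /repo/stacks, wheels in /repo/wheels
lock_dir = Path("/repo/stacks")
recorded = Path("../wheels/example_pkg-1.0-py3-none-any.whl")

# Resolves only because the wheels directory sits in the same position
# relative to the lock files as it did when the stack was locked
resolved = (lock_dir / recorded).resolve()
print(resolved)  # /repo/wheels/example_pkg-1.0-py3-none-any.whl
```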