Example Stacks

All of the examples shown on this page can be found in the examples directory on GitHub, together with their launch modules and their layer lock files.

scikit-learn

This example is the one used in the Project Overview. It illustrates sharing two different framework layers between a pair of application layers.

The included scikit-learn demonstrations can be executed by running python -m sklearn_classification in the app-classification-demo environment or python -m sklearn_clustering in the app-clustering-demo environment.
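
As an illustration of the kind of code such a launch module contains, the following is a minimal hypothetical sketch (not the actual sklearn_classification.py shipped with the example), assuming only the scikit-learn, matplotlib, and PySide6 packages that the stack provides:

# Hypothetical launch module sketch, not the example's actual sklearn_classification.py
import matplotlib
matplotlib.use("QtAgg")  # Use the Qt backend supplied by PySide6 in the "gui" layer
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def main() -> None:
    # Train a small classifier and show the predictions in a GUI window
    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier().fit(X, y)
    print(f"Training accuracy: {clf.score(X, y):.3f}")
    plt.scatter(X[:, 0], X[:, 1], c=clf.predict(X))
    plt.title("Iris classification sketch")
    plt.show()

if __name__ == "__main__":
    main()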

[[runtimes]]
name = "cpython-3.11"
python_implementation = "cpython@3.11.10"
requirements = [
    "numpy",
    "matplotlib",
]

[[frameworks]]
name = "sklearn"
runtime = "cpython-3.11"
requirements = [
    "scikit-learn",
]
platforms = [
    "linux_aarch64",
    "linux_x86_64",
    "macosx_arm64",
    "macosx_x86_64",
    "win_amd64",
    # "win_arm64",  # No wheel available on PyPI
]

[[frameworks]]
# Use a non-default GUI toolkit due to problems with Tcl/Tk in python-build-standalone:
# https://github.com/astral-sh/uv/issues/6893
name = "gui"
runtime = "cpython-3.11"
linux_target = "glibc@2.34"
requirements = [
    "pyside6",
]
platforms = [
    # "linux_aarch64",  # No wheel available on PyPI
    "linux_x86_64",
    "macosx_arm64",
    "macosx_x86_64",
    "win_amd64",
    "win_arm64",
]

[[applications]]
name = "classification-demo"
launch_module = "launch_modules/sklearn_classification.py"
frameworks = ["sklearn", "gui"]
requirements = [
    "scikit-learn",
    "matplotlib",
    "pyside6",
]

[[applications]]
name = "clustering-demo"
launch_module = "launch_modules/sklearn_clustering.py"
frameworks = ["sklearn", "gui"]
requirements = [
    "scikit-learn",
    "matplotlib",
    "pyside6",
]

[tool.uv]
exclude-newer = "2025-10-11T00:00:00Z"

The generated layer lock files, lock metadata files, and layer package summaries for this stack can be found in the scikit-learn example stack’s requirements folder.

JupyterLab

This example illustrates the simplest possible usable stack definition: a runtime layer with a single application layer.

Running python -m run_jupyterlab in the app-jupyterlab environment will launch JupyterLab.
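
As a rough illustration, a launch module for this kind of application can be little more than a wrapper around the same entry point that the jupyter-lab console script uses. The sketch below is hypothetical (the real run_jupyterlab.py may differ):

# Hypothetical launch module sketch, not the example's actual run_jupyterlab.py
import sys
from jupyterlab.labapp import main  # Same entry point as the jupyter-lab console script

if __name__ == "__main__":
    sys.exit(main())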

[[runtimes]]
name = "cpython3.11"
python_implementation = "cpython@3.11.13"
requirements = []

[[applications]]
# No framework layers in the initial version of the example
name = "jupyterlab"
launch_module = "run_jupyterlab.py"
runtime = "cpython3.11"
requirements = ["jupyterlab"]

platforms = [
    "linux_aarch64",
    "linux_x86_64",
    "macosx_arm64",
    "macosx_x86_64",
    "win_amd64",
    # "win_amd64",  # No public pyyaml wheel available
]

[tool.uv]
exclude-newer = "2025-10-11T00:00:00Z"

The generated layer lock files, lock metadata files, and layer package summaries for this stack can be found in the JupyterLab example stack’s requirements folder.

Apple MLX

This example illustrates using the platforms field and uv configuration to lock and build only for a subset of potential platforms, as well as using the macosx_target and linux_target fields to indicate that wheels targeting newer OS versions than the defaults should be used.

This stack also demonstrates how venvstacks omits packages from a layer's lock file when their environment markers can never be true on the platforms targeted by that layer, and strips environment markers that are always true for all of the targeted platforms.
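
As a rough illustration of the underlying idea (using the packaging library directly, not the venvstacks implementation), a marker such as sys_platform == 'linux' can never evaluate to true when a layer only targets macOS, so the requirement it guards is irrelevant for that layer; conversely, on a Linux-only layer the marker is always true and carries no information:

# Illustrative only: evaluating environment markers with the packaging library
from packaging.markers import Marker

linux_only = Marker("sys_platform == 'linux'")
# Never true for a macOS-only layer, so the guarded requirement can be dropped
print(linux_only.evaluate({"sys_platform": "darwin"}))  # False
# Always true for a Linux-only layer, so the marker itself can be dropped
print(linux_only.evaluate({"sys_platform": "linux"}))   # True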

Running python -m report_mlx_version in the app-mlx-example, app-mlx-cuda-linux (Linux-only), or app-mlx-cuda-macos (macOS-only) environments will report the MLX version available in that environment. (Despite its name, app-mlx-cuda-macos does not actually use CUDA to run MLX; the stack deliberately declares an mlx[cuda] dependency on macOS in order to demonstrate irrelevant dependencies being filtered out of the layer lock file.)
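
A version-reporting launch module of this kind can be as simple as querying the installed distribution metadata. The sketch below is hypothetical (the real report_mlx_version.py may differ):

# Hypothetical launch module sketch, not the example's actual report_mlx_version.py
from importlib.metadata import PackageNotFoundError, version

def main() -> None:
    try:
        print(f"Installed mlx version: {version('mlx')}")
    except PackageNotFoundError:
        print("mlx is not installed in this environment")

if __name__ == "__main__":
    main()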

# Demonstrate restricting layers to a subset of platforms and filtering out irrelevant dependencies
[[runtimes]]
name = "cpython3.11"
python_implementation = "cpython@3.11.13"
requirements = []
platforms = [
    "macosx_arm64",
    "linux_x86_64",
]

[[frameworks]]
name = "mlx"
runtime = "cpython3.11"
linux_target = "glibc@2.35"
macosx_target = "14"
requirements = [
    "mlx==0.29.3",
    "mlx[cpu]; sys_platform == 'linux'",
]
# Platform restrictions are inherited from lower layers

[[frameworks]]
name = "mlx-cuda"
runtime = "cpython3.11"
linux_target = "glibc@2.35"
macosx_target = "14"
requirements = [
    "mlx[cuda]==0.29.3",
]
# A separate CUDA-based layer stack is really only needed on Linux,
# but is defined for macOS to demonstrate filtering of unused dependencies

[[applications]]
name = "mlx-example"
launch_module = "report_mlx_version.py"
frameworks = ["mlx"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "mlx",
]
# Platform restrictions are inherited from lower layers

[[applications]]
name = "mlx-cuda-linux"
launch_module = "report_mlx_version.py"
frameworks = ["mlx-cuda"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "mlx[cuda]",
]
# macOS-only dependencies should be omitted from the layer summary
platforms = [
    "linux_x86_64",
]

[[applications]]
name = "mlx-cuda-macos"
launch_module = "report_mlx_version.py"
frameworks = ["mlx-cuda"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "mlx[cuda]",
]
# Linux-only dependencies should be omitted from the layer summary
platforms = [
    "macosx_arm64",
]

[tool.uv]
exclude-newer = "2025-10-19T00:00:00Z"
# Only resolve for the relevant target platforms
environments = [
    "sys_platform == 'darwin' and platform_machine == 'arm64'",
    "sys_platform == 'linux' and platform_machine == 'x86_64'",
]
# required-environments is intentionally NOT set,
# as not all layers are built for all platforms

The generated layer lock files, lock metadata files, and layer package summaries for this stack can be found in the Apple MLX example stack’s requirements folder.

PyTorch

This example illustrates using the package_indexes field to install a specific package (torch) from a non-default named package index defined in the uv configuration, with different layers specifying different indexes to produce parallel application stacks running on the CPU and on CUDA 12.8.

This stack also demonstrates the use of index_overrides to allow a layer to declare a dependency on two nominally conflicting framework layers such that it will run with either of the layers installed.

Running python -m report_torch_cuda_version in the app-cpu, app-cu128, or app-cu128-or-cpu environments will report the CUDA version being used by PyTorch in that environment (None indicates the use of the CPU).
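
For reference, reporting that information only requires a couple of attribute lookups on the torch module. The sketch below is hypothetical (the real report_torch_cuda_version.py may differ):

# Hypothetical launch module sketch, not the example's actual report_torch_cuda_version.py
import torch

def main() -> None:
    # torch.version.cuda is None in CPU-only builds of PyTorch
    print(f"CUDA version used to build torch: {torch.version.cuda}")
    print(f"CUDA available at runtime: {torch.cuda.is_available()}")

if __name__ == "__main__":
    main()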

# Demonstrate using priority indexes to define cross-platform parallel torch stacks
[[runtimes]]
name = "cpython3.11"
python_implementation = "cpython@3.11.13"
requirements = [
    # Share a common numpy across the different torch variants
    "numpy",
]

[[frameworks]]
name = "torch-cpu"
runtime = "cpython3.11"
package_indexes = { torch = "pytorch-cpu" }
# priority_indexes = ["pytorch-cpu"]
requirements = [
    "torch==2.8.0",
    # Skip listing numpy, so numpy updates don't automatically invalidate the layer lock
]
dynlib_exclude = [
    "triton/**"
]
platforms = [
    "linux_aarch64",
    "linux_x86_64",
    "macosx_arm64",
    # "macosx_x86_64",  # PyTorch does not publish wheels for macOS on Intel
    "win_amd64",
    "win_arm64",
]

[[frameworks]]
name = "torch-cu128"
runtime = "cpython3.11"
package_indexes = { torch = "pytorch-cu128" }
# priority_indexes = ["pytorch-cu128"]
requirements = [
    "torch==2.8.0",
    # Skip listing numpy, so numpy updates don't automatically invalidate the layer lock
]
dynlib_exclude = [
    "triton/**"
]
platforms = [
    # "linux_aarch64",  # No wheel available in the PyTorch 2.8.0 CUDA repo
    "linux_x86_64",
    # "macosx_arm64",  # CUDA is not used on macOS
    # "macosx_x86_64",  # PyTorch does not publish wheels for macOS on Intel
    "win_amd64",
    # "win_arm64",  # No wheel available in the PyTorch 2.8.0 CUDA repo
]

[[applications]]
name = "cpu"
launch_module = "report_torch_cuda_version.py"
frameworks = ["torch-cpu"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "torch",
]

[[applications]]
name = "cu128"
launch_module = "report_torch_cuda_version.py"
frameworks = ["torch-cu128"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "torch",
]

[[applications]]
name = "cu128-or-cpu"
launch_module = "report_torch_cuda_version.py"
# Both the CUDA and non-CUDA frameworks are added to the import path,
# so this app will work as long as *either* of those layers is installed
# If both are available, it uses the CUDA layer (as it is listed first)
# However, the layer locking needs to be told that it is expected that
# the two layers specify different source indexes, and given a conflict,
# the pytorch-cu128 index should be used in preference to pytorch-cpu
index_overrides = { pytorch-cpu = "pytorch-cu128" }
frameworks = ["torch-cu128", "torch-cpu"]
requirements = [
    # Exact version pin is inherited from the framework layer
    "torch",
]

[tool.uv]
# exclude-newer = "2025-10-11T00:00:00Z"
# The custom torch registries do not support exclude-newer,
# so the feature currently has to be avoided entirely:
# https://github.com/astral-sh/uv/issues/12449

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu/"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128/"
explicit = true

The generated layer lock files, lock metadata files, and layer package summaries for this stack can be found in the PyTorch example stack’s requirements folder.