This manual provides a comprehensive reference for Chimera Linux packaging, i.e. for the packaging format. In general, anything not described in this manual is not a part of the API; you should not rely on it or expect it to be stable.
This repository contains both the `cbuild` program (which is used to build packages) as well as all the packaging templates. The templates are basically recipes describing how a package is built.

The `cbuild` program is written in Python. Likewise, the packaging templates are also written in Python, being special scripts containing metadata as well as functions that define the build steps.

For usage of `cbuild`, see the `README.md` file in this repository. The manual does not aim to provide usage instructions for `cbuild`.
The `cbuild` program provides infrastructure that allows the packaging templates to be simplified; they often contain only a few fields, without having to contain any actual functions. For example:
pkgname = "foo"
pkgver = "0.99.0"
pkgrel = 0
build_style = "makefile"
pkgdesc = "Simple package"
maintainer = "q66 <q66@chimera-linux.org>"
license = "BSD-3-Clause"
url = "https://foo.software"
source = f"https://foo.software/{pkgname}-{pkgver}.tar.gz"
sha256 = "ad031c86b23ed776697f77f1a3348cd7129835965d4ee9966bc50e65c97703e8"
Of course, a template will often be a lot more complicated than this, as packages have dependencies, build systems are not always standard, and so on.
The template is stored as `template.py` in one of the packaging categories, in a directory named the same as `pkgname`. That means for this example it may be `main/foo/template.py`.
The `cbuild` program can read templates and build packages according to the stored metadata and functions. This happens in a special container environment which is controlled and highly restricted.

You can invoke `cbuild` to build the software like this:

```
$ ./cbuild pkg main/foo
```

The result will be a local repository containing the binary packages.
The Chimera packaging collection provides four categories in which templates can go. These currently are:

- `main`
- `contrib`
- `non-free`
- `experimental`

Each category has its own repository that is named the same as the category.
The `main` category contains software curated and supported by the distro. In general, a system composed purely of `main` packages should be bootable, but may not contain all functionality required by users. Templates are evaluated for `main` based on various factors such as usefulness, quality of the software, licensing, and others. Templates in `main` must not depend on templates in other categories.
The `contrib` category is a user repository. The requirements for `contrib` are looser than for `main` and the software is not officially supported by the distribution, but the distro still provides hosting for binary packages, and templates undergo review and acceptance by the distro maintainers. In addition to other `contrib` templates, software here may depend on `main` templates.
The `non-free` category in general contains proprietary software and other things we cannot redistribute. Software here may depend on anything from `main` or `contrib`. Unlike `contrib` packages, no binary packages are shipped, and users need to build them themselves.
Finally, the `experimental` category is mostly unrestricted and has the least stringent quality requirements. Anything that is in any way controversial goes here; once determined to be acceptable, a maintainer may move the template to `contrib` (or sometimes `non-free`). Software in this category does not have binary packages shipped, and users are on their own when testing it.
Chimera target architecture support is tiered. The tiering affects whether software can get included in `main` and `contrib`.
Tier 1 targets must be supported by all software receiving binary packages, i.e. everything in the `main` and `contrib` sections; software not supporting a tier 1 target means staying in `experimental`. This does not apply when the software only reasonably makes sense on a subset of the architectures (an example would be a UEFI bootloader). All `main` software must have its test suite passing on tier 1 targets unless there is a good reason otherwise (e.g. the tests themselves being broken).
Tier 2 targets will receive packaging when possible. They must have a fully working `main`, but `contrib` packages may be missing in some cases. They are not required to fully pass tests in either category; tests are run, but they may be disabled on a per-template basis.
Tier 3 is like tier 2, but it is not required to be complete in either `main` or `contrib`, and is not required to pass tests. Tests are still run for informational purposes, but their results are ignored (i.e. a pass is assumed regardless of the actual outcome). Breakage on tier 3 targets does not block updating packages, and support is entirely on a community basis.
Tier 4 targets receive only `main` packages.
There may also be untiered targets. Those have profiles but do not have any packages at the moment. It typically means this target is not ready to be supported, either by us or by software we rely on. Some untiered targets may be promoted at a later point.
Tier 1 targets:

- `ppc64le`
- `aarch64`
- `x86_64`

Tier 2 targets:

Tier 3 targets:

- `riscv64`

Tier 4 targets:

Untiered targets:

- `ppc64`
- `ppc`
In order to be included in `experimental`, there are a few requirements. The software has to provide something useful to users and must not be malicious. At the time of introduction, it must satisfy the general style requirements and must be buildable.
For inclusion into `contrib`, the software must additionally be provided under a redistributable license and must be open source; when possible, it must be packaged from source code (there may be exceptions, but they are rare, such as bootstrap toolchains for languages that cannot be bootstrapped purely from source code).
Software in `main` must not be vetoed by any core reviewer. In general, unless there is a good reason for inclusion into `main`, things shall remain in `contrib`.
Templates seeking introduction into `contrib` or better should in general be packaged from stable versions. That means using proper release tarballs rather than arbitrary `git` or similar revisions. Exceptions to this may be made for `contrib` (such as when the software is high profile and the latest stable release is very old and provides a worse user experience), but not for `main`.
Most importantly, keep it simple. The `cbuild` system is designed to make correct things easy and terse, and bad things ugly and complicated. If there is any doubt (i.e. something you consider good is inconvenient to write in `cbuild` templates), feel free to report it in the issue tracker.
Keep conditional stuff to a minimum. This includes:

1) Cross-compiling handling should be generalized to be the same as native in most cases. The system provides facilities to simplify doing that; for example, handling of `sysroot` in profiles should be entirely transparent.
2) Cross-compiled packages should be functionally equal to native ones and have comparable contents. If this is not the case, the template is not eligible for cross-compilation.
3) There is no such thing as a native architecture and a cross architecture. Any architecture can be both (i.e. cross-compiling from ARM to x86_64 is actually a perfectly valid case and should be handled identically to doing it the other way around).
4) Templates should not perform any content patching by themselves (e.g. via `sed`), and especially not conditionally. A generic patch should be written instead.
You should never make any assumptions about the build environment. Things like substituting the specific default `CFLAGS` for something else are always wrong. Instead, assume that the original value can be anything, and if you need a specific value, override it by passing it after the default.
Build styles should be used when appropriate. When not using build styles, standard template variables should still be used, and expanded where necessary.
Build phases should be considered atomic, and builds should be considered resumable. Do not store any in-memory state between build phases, as you cannot be sure that the build will not be resumed from after the phase has run. Use the `init_` template functions to deal with such state, as they are guaranteed to run every time.
Care should be taken to avoid build-time dependency cycles. Cases where building a package requires another package to be already built are always wrong. Every package should be buildable with just a `bldroot` and an entirely empty repository (i.e. `cbuild` should be able to build the entire dependency tree at will). Sometimes this requires disabling tests in the template (via `!check`). It is a good idea that even test suites that cannot be run, or are somehow broken and disabled by default, are still set up. That ensures someone can either find a solution later, fix it, or at least see which parts of the suite run successfully by forcing the test run (as `cbuild` has an option to bypass `!check`).
The build environment takes care to minimize differences between the possible hosts the builds may be run on. However, there may always be edge cases, and tests should not rely on them; they must be reproducible across all environments `cbuild` may be run in.
When writing new templates, care should be taken to use proper hardening tags. While most hardening options one should use are implicitly set by default and there is no need to worry about them, there are hardening options that cannot be on by default but should be set if possible anyway.
Hardening tags are specified using the `hardening` list metadata. Just like the `options` list metadata, they can be enabled (e.g. `foo`) or disabled (e.g. `!foo`).
Clang CFI is a particularly notable one. It cannot be enabled by default as it breaks a lot of packages, but those it does not break can benefit from it. Packages that are broken with it can also be patched (and the patches upstreamed) in the ideal case.

CFI actually consists of multiple components, which can normally be used individually when passing options to Clang, but `cbuild` groups them together.

CFI requires everything to be compiled with hidden visibility as well as with LTO. Many libraries cannot be compiled with hidden visibility, as they rely on default visibility of symbols. Programs can usually be compiled with hidden visibility, as by default they do not export any symbols. This is not always the case, however, and it must be checked on a case-by-case basis.

If you cannot enable hidden visibility or LTO, then you cannot enable CFI. Otherwise, toggle `vis` as well as `cfi` and test your template. If this does not result in a regression (i.e. the package works, its tests pass, and so on), then it can be enabled in the tree.
The component of CFI that breaks most often is the indirect function call checker. Clang CFI is type-based, and therefore strict about types matching. That means the following will break, for example:

```
typedef void (*cb_t)(void *arg);

void foo(void *ptr, cb_t arg) {
    arg(ptr);
}

void cb(int *arg) {
    ...
}

void bar(void *x) {
    foo(x, (cb_t)&cb);
}
```
The reason this breaks is that we are calling `cb` through a different function signature than `cb` is declared with. Correct, CFI-compliant code in this case would be:

```
typedef void (*cb_t)(void *arg);

void foo(void *ptr, cb_t arg) {
    arg(ptr);
}

void cb(void *argp) {
    int *varg = argp;
    ...
}

void bar(void *x) {
    foo(x, &cb);
}
```
Other types of CFI usually do not break as much, as they are either specific to C++ (which is more strictly typed, especially in those contexts) or overall less prone to such shortcuts.
In case of indirect function call breakage, there are two ways to fix this:

1) Patching the code. This is usually better.
2) Adding `cfi-genptr` to `hardening`. This enables a special CFI mode that relaxes pointer type checks. The first example would work with that, but note that qualifiers (e.g. `const`) still need to match.

It is also possible to disable just the indirect function call checks and leave the rest enabled by disabling `cfi-icall`.
Note that there are two other caveats to Clang CFI in our case:

1) It is not cross-DSO; checks are performed only within the executable or library and not for any external API. Correct cross-DSO CFI requires support in the C standard library. The `cfi-genptr` method also would not work with cross-DSO CFI.
2) It is currently only available on the `x86_64` and `aarch64` targets. On other targets it is silently ignored (so you do not need to set it conditionally).
This one is notable as it has the potential to break existing C/C++ code while also being enabled by default. The hardening string is `int`. All the cases it traps are undefined behavior in C/C++, but codebases still commonly violate them.
It enables the following:

- `signed-integer-overflow` Traps signed integer overflows.
- `integer-divide-by-zero` Traps integer division by zero.

Unsigned overflows are allowed as they are not undefined behavior.
An example of signed overflow:

```
int x = INT_MAX;
x += 1000;
```
The typical visible outcome of this is wrap-around, given the way two's complement works. The compiler is allowed to do whatever it wants though, and it is allowed to optimize assuming that this will never happen, given it is undefined behavior.
Unsigned integers also wrap around, starting from 0 again.
Regardless of compiler optimization, integer overflows frequently result in security vulnerabilities, which is why we harden this. In cases where there are too many instances of the bug and it is not possible to patch around it, it may be disabled with `!int` and a comment explaining why this is done.
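A minimal sketch of that (the justification in the comment is hypothetical):

```
# upstream relies on wrap-around in too many places to patch around
hardening = ["!int"]
```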
UBSan is available on all targets Chimera currently supports.
Building a package consists of several phases. All phases after `setup`, up to and including `install`, can have template-specified behavior. The build system itself runs outside of the sandboxed container, while most actions (such as building) run inside.
Except for the `setup` and `fetch` phases, the build system is configured to unshare all namespaces when performing actions within the sandbox. That means sandboxed actions have no access to the network, by design.
Except for the `setup` phase, the sandbox is mounted read-only with the exception of the `builddir` (up to and including `install`), `destdir` (after `build`) and `tmp` directories. That means once `setup` is done, nothing is allowed to modify the container.
All steps are meant to be repeatable and atomic. That means if the step fails in the middle, it should be considered unfinished and should not influence repeated runs. The build system keeps track of the steps and upon successful completion, the step is not run again (e.g. when the build fails elsewhere and needs to be restarted).
All build phases are run in either `self.wrksrc` (all phases), or in `build_wrksrc` inside that directory (`configure` and later). The value of `self.wrksrc` is `{self.pkgname}-{self.pkgver}`. It exists within the `builddir` and is created automatically.
- `setup`: The build system prepares the environment. This means creating the necessary files and directories for the sandbox and installing the build dependencies. When cross-compiling, the cross target environment is prepared and target dependencies are installed in it.
- `fetch`: During `fetch`, required files are downloaded, as defined by the `source` template variable by default (or by the `do_fetch` function of the template in rare cases). The builtin download behavior runs outside of the sandbox, as pure Python code. When overridden with `do_fetch`, it also overlaps with the `extract` stage, as the function is supposed to prepare the `builddir` like `extract` would.
- `extract`: All defined sources are extracted. The builtin behavior runs inside the sandbox, except when bootstrapping. It populates `self.wrksrc`.
- `prepare`: The source tree is prepared for use. For most templates this does nothing by default. Its primary use is e.g. with the `cargo` build system for Rust, in order to vendor dependencies so they are ready for use by the time patches are applied (and thus they can be patched with everything else).
- `patch`: This phase applies patches provided in `templatedir/patches` to the extracted sources by default. A user-defined override can perform arbitrary actions.
- `configure`: In general this means running the `configure` script for the software or something equivalent, i.e. preparing the software for building without actually building it.
- `build`: The software is built, but not installed. Things run inside the sandbox are not expected to touch `destdir` yet.
- `check`: The software's test suite is run, if defined. By default tests are run (except when impossible, like in cross builds). It is possible to turn off tests with a flag to `cbuild`, and templates may disable running tests.
- `install`: Install the files into `destdir`. If the template defines subpackages, they can define which files they are supposed to contain; this is done by "taking" files from the initially populated `destdir` after the template-defined `do_install` finishes. At the time the subpackages are populated, `builddir` is read-only in the sandbox. Ideally it would also be read-only during `install`, but that is not actually possible to ensure (since build systems like to touch their metadata and so on).
- `pkg`: Create binary packages and register them into your local repository. At this point, `destdir` is also read-only for the sandbox.
- `clean`: Clean up the `builddir` and `destdir`.
When building packages with `cbuild`, you can invoke only a specific phase (from `fetch` to `pkg`). All phases leading up to the specified phase are run first, unless they have already run.
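For example, to run everything up to and including `configure` for the template above (assuming the phase name doubles as the `cbuild` subcommand, as with `pkg`):

```
$ ./cbuild configure main/foo
```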
All package names should only use lowercase ASCII characters, never mixed case, regardless of what the software is called.
In general, the primary package of the template (i.e. not a subpackage) should follow the upstream name (other than case) regardless of the contents of the package. That is, when a library is called `foo`, the package should be called `foo`, not `libfoo`.
However, if a library is a subpackage of a bigger software project, there are two things you can do. If the subpackage provides a single library that is usable as a standalone runtime dependency for other things, you should use the `lib` prefix. If it provides multiple libraries that should be shipped together, the `-libs` suffix should be used. Whether to separate the individual libraries into individual subpackages, bundle them together, or not separate them at all should be decided on a per-package basis.
Development packages should use the `-devel` suffix, like `foo-devel` for the `foo` template. The convention with library subpackages and devel packages is that if you have `foo` and `libfoo`, the development files go in `foo-devel`. However, if the library part has its own development files that make sense separately from the main devel package, it is perfectly acceptable to have `libfoo-devel` alongside `foo-devel`. If the template calls for having multiple `-devel` packages related to different individual libraries, you can also split them up accordingly.
Static libraries should go in `-static` packages in nearly all cases. In specific cases, they will go in `-devel`. Static libraries are automatically split from `-devel` (unless overridden with `!autosplit` or `!splitstatic`) and are by default forbidden from packages other than `-devel` or `-static` ones, so you should not have to declare them manually.
In general, things packaging libraries should always have a devel package of some sort, except in specific rare cases where this does not make sense (e.g. development toolchains, where the primary package is already a development package by itself; it may still be a good thing to separate the runtime libraries in those cases).
Development packages should contain `.so` symlinks (where not required at runtime) as well as include files, `pkg-config` files, and any other files required for development but not required at runtime.
Debug packages have the `-dbg` suffix and are created automatically in most cases.
Various other packages are also created automatically. See the section about automatic subpackages for more details.
If a primary package (typically a library or some kind of module) has auxiliary programs that are separated into a subpackage, the subpackage should be called `foo-progs`.
Subpackages for language bindings should put the language name in the suffix, e.g. `foo-python`. However, language modules that are the primary package should put that in the prefix, e.g. `python-foo`.
As far as general guidelines on subpackages go, things should be separated as little as possible while still ensuring that people do not get useless bloat installed. That means separating runtime libraries where they can work on their own, always separating development packages, always separating language bindings (where they bring a dependency that would otherwise not be necessary) and so on.
Programs meant to be executed directly by the user always go in `/usr/bin`. The `/usr/sbin`, `/bin` and `/sbin` paths are just symbolic links to the primary `/usr/bin` path and should never be present in packages.
Libraries go in `/usr/lib`. Do not use `/usr/lib64` or `/usr/lib32`; these should never be present in packages. The same goes for the toplevel `/lib`, `/lib64` or `/lib32` paths. In general, compatibility symlinks are present in the system and they all point to just `/usr/lib`.
Executable programs that are internal and not meant to be run by the user go in `/usr/libexec` (unless the software does not allow this).

Include files go in `/usr/include`. Data files go in `/usr/share`; the directory must not contain any ELF executables.
In general, the `/usr` directory should be considered immutable when it comes to user intervention, i.e. editable configuration files should not be installed in there. However, non-editable configuration files should always go there and not in `/etc`.

Editable configuration files go in `/etc`.
Cross-compiling sysroots are in `/usr/<triplet>` where the triplet is for example `powerpc64-linux-musl` (i.e. a short triplet). These contain a simplified filesystem layout (the `usr` directory with the usual files and symlinks, and the `bin`, `lib` etc. symlinks at the top level).
A template consists of variables and functions. A simple template may only consist of variables, while those that need to define some custom behavior may also contain functions.
The template follows standard Python syntax. Variables are assigned like `foo = value`. Functions are defined like `def function(): ...`. In general, changes made to toplevel variables from inside functions are not respected, as variables are read and stored before the functions are executed. Any later access to variables must be done through the template handle passed to functions as the first argument (typically called `self`).
These variables are mandatory:

- `license` (str) The license of the project in SPDX license expression format (e.g. `BSD-3-Clause OR GPL-2.0-or-later`). The license should be a single expression. You can disable validation of the license by using the `!spdx` option (e.g. for custom licenses not covered by SPDX). The syntax supports custom license IDs via `custom:somename`. While this is not a part of the SPDX license expression specification, it can be used to cover e.g. dual-licensed software with a custom and a standard license via something like `custom:foo OR BSD-3-Clause`. Metapackages should always use the license `custom:meta`. Public domain packages should always use `custom:none`. Packages that have some custom license should use `custom:packagename`, and properly install the license. The license is inherited into all subpackages, and subpackages are allowed to set it themselves. License exceptions can be from the standard list or they can be custom as well, e.g. `GPL-2.0-or-later WITH custom:foo-exception`.
- `pkgname` (str) The primary package name; must match the template name.
- `pkgver` (str) The package version; applies to all subpackages. Must follow the correct format for the `apk` package manager.
- `pkgrel` (int) The release number for the package. When changes are made to the template that require rebuilding of the package, this is incremented by one. The initial value should be zero. When bumping to a new version, it should be reset back to zero.
- `pkgdesc` (str) A short, one-line description of the package. Should be kept at 72 characters or shorter. In general, this should not begin with an article, and should not end with a period. It should use American English and not contain any mistakes. The description is inherited into all subpackages, though certain subpackages gain some suffixes. See the section about subpackages for more details.
- `url` (str) The homepage URL of the project being packaged. To pass lint, the URL must have either the `http` or `https` scheme, must parse correctly, and must not have a trailing slash in the path.

There is also a variety of variables that are builtin but not mandatory. Keep in mind that default values may be overridden by build styles.
- `archs` (list) A list of architecture patterns to determine if the template can be built for the current architecture. See "Architecture Patterns" below.
- `broken` (str) If specified, the package will refuse to build. The value is a string that contains the reason why the package does not build.
- `build_style` (str) The build style used for the template. See the section about build styles for more details.
- `build_wrksrc` (str) A subpath within `self.wrksrc` that is assumed to be the current working directory during `configure` and later.
- `checkdepends` (list) This is like `hostmakedepends`, but only installed if the `check` option is enabled for the template and not cross-building. Note that these are installed even if the user explicitly chooses not to run tests, in order to ensure a reproducible build environment. It mostly exists to visually separate dependencies only needed for tests from the others.
- `configure_args` (list) This list is generally specific to the build system the template uses. Generally speaking, it provides the arguments passed to some kind of `configure` script.
- `configure_env` (dict) Environment variables to be exported when running the configure script. The way passing them is implemented depends on the build system, but in general any user-provided environment at the call site overrides this, while this overrides the global environment (`env`).
- `configure_script` (str) The name of the script relative to the current working directory used for configuration. Only used by build styles that use such scripts. The default value is `configure`.
- `debug_level` (int) The level to use when generating debug information in the compiler (i.e. `-gN` for C). By default, this is 2, to match the default level of the compiler with `-g`.
- `depends` (list) Runtime dependencies of the package. They are not installed in the build container, but are checked for availability (and built if missing). While these may be just names, you can also specify constraints (e.g. `foo<=1.0-r1`) and conflicts (`!foo`). You can also specify dependencies on `pkgconf` files (`pc:foo`), executable commands (`cmd:foo`) and shared libraries (`so:libfoo.so.1`, though this is not recommended), as well as virtual packages (`virtual:foo`). Any virtual dependencies must explicitly specify a non-virtual provider, which is not included in the final package metadata, but is used at build time to check the availability of at least one provider; you can specify that with `!` after the dependency, e.g. `cmd:sed!bsdsed`. In a lot of cases dependencies are automatic, and you should not specify any dependencies that would already be covered by the scanner. When using version constraints, any apk-style version pattern is allowed, such as `N<V`, `N<=V`, `N=V`, `N>V`, `N>=V`, as well as fuzzy patterns `N~V` (e.g. `foo~3.0` will match `3.0.1`).
- `env` (dict) Environment variables to be exported when running commands within the sandbox. This is considered last, so it overrides any possible values that may be exported by other means. Use sparingly.
- `exec_wrappers` (list) A list of 2-tuples specifying extra wrappers to set up for the build. The first element of the tuple is the full path to the program to wrap, while the second element is the wrapper name. You can use this to e.g. use `gsed` as `sed` by wrapping `/usr/bin/gsed`, in case it is too much trouble to patch the build system.
- `file_modes` (dict) A dictionary of strings to 3-tuples or 4-tuples, where the string keys are file paths (relative to the package, e.g. `usr/foo`) and the tuples contain user name, group name, permissions, and optionally the recursive flag (`True` or `False`). The third field is a regular permissions integer, e.g. `0o755`. This can be used when the package creates a new group or user and needs to have files that are owned by them. Keep in mind that the `suid` checks and so on still happen, so if you make the permissions `suid`, you also need to declare the file in `suid_files`. The permissions are applied in the order the fields are added in the dictionary.
- `hardening` (list) Hardening options to be enabled or disabled for the template. Refer to the hardening section for more information. This is a simple list of strings that works similarly to `options`, with `!` disabling the hardening options. Any enabled hardening option that is not supported by the target will be ignored.
- `hostmakedepends` (list) A list of strings specifying package names to be installed in the build container before building. These are always installed in the build container itself rather than the target sysroot, even if cross-compiling. Typically contains runnable tools. This is not installed during stage 0 bootstrap, since they come from the host.
- `install_if` (list) A list of package names or version constraints that must be satisfied in order for this package to auto-install (i.e. if all packages in this list are installed, this one will also be installed). This is basically the reverse of a "recommends" feature. You should always include at least one versioned constraint.
- `maintainer` (str) This one is not mandatory but is highly recommended. A template with no `maintainer` field is orphaned. No package in the `main` section of the `cports` collection must be orphaned.
- `make_cmd` (str) The name of the program used for building. May not apply to all templates or build styles. By default this is `bmake` (the default Make implementation in Chimera).
- `make_env` (dict) Environment variables to be exported when running some build stage. For `make`, the call site `env` is most significant, followed by the phase-specific `make` environment, followed by this, followed by the global environment (`env`).
- `make_build_args` (list) A list of custom arguments passed to `make_cmd` during the build phase.
- `make_build_env` (dict) Environment variables to be exported when running the `build` phase. For `make`, the call site `env` is most significant, followed by this, followed by the rest.
- `make_build_target` (str) The `make_cmd` target to be used to build. Different build systems may use this differently. Empty by default.
- `make_build_wrapper` (list) A list of arguments to prepend before the make command during `build`. It is the middle wrapper, i.e. passed after the explicit one, but before `make_wrapper`.
- `make_check_args` (list) A list of custom arguments passed to `make_cmd` when running tests.
- `make_check_env` (dict) Environment variables to be exported when running the `check` phase. For `make`, the call site `env` is most significant, followed by this, followed by the rest.
- `make_check_target` (str) The `make_cmd` target to be used to run tests. Different build systems may use this differently (`check` by default unless overridden by the `build_style`).
- `make_check_wrapper` (list) A list of arguments to prepend before the make command during `check`. It is the middle wrapper, i.e. passed after the explicit one, but before `make_wrapper`.
- `make_dir` (str) The subdirectory of `cwd` that `make_cmd` is invoked in by default. This has the default value of `.`, so it normally does not impose any directory changes. However, the default may be altered by build styles. This is utilized by build systems such as `meson` and `cmake` to build outside the regular tree. It is also utilized by their `configure` steps as the working directory.
- `make_install_args` (list) A list of custom arguments passed to `make_cmd` when installing.
- `make_install_env` (dict) Environment variables to be exported when running the `install` phase. For `make`, the call site `env` is most significant, followed by this, followed by the rest.
- `make_install_target` (str) The `make_cmd` target to be used to install. Different build systems may use this differently (`install` by default).
- `make_install_wrapper` (list) A list of arguments to prepend before the make command during `install`. It is the middle wrapper, i.e. passed after the explicit one, but before `make_wrapper`.
- `make_wrapper` (list) A list of arguments to prepend before the make command. It is the least important wrapper, i.e. passed last out of all wrappers.
- `makedepends` (list) A list of strings specifying package names to be installed in the build container. When cross-compiling, these are installed into the target architecture sysroot. When not cross-compiling, this is simply concatenated with `hostmakedepends`.
- `nopie_files` (list) A list of glob patterns (strings). By default, the system will reject non-PIE executables when PIE is enabled, but if a file's path matches any of the patterns in this list, it will be ignored instead.
- `nostrip_files` (list) A list of glob patterns (strings). When scanning files to be stripped of debug symbols, each pattern in this list is considered. If anything is matched, the file will not be stripped. This is useful if you want the default strip behavior for most things but there are some files that absolutely cannot be stripped.
- `options` (list) Various boolean toggles for the template. It is a list of strings; a string `foo` toggles the option on, while `!foo` does the opposite. Every permissible option has a default.
- `patch_args` (list) Options passed to `patch` when applying patches, in addition to the builtin ones (`-sNp1 -V none`). You can use this to override the strip count or pass additional options.
- `provides` (list) A list of packages provided virtually, specified in the format `foo=1.0-r0`. The package manager will consider these alternative names for the package, and automatically have them conflict with other packages of this name. If the version part is not provided, several packages of that name may be installed, but none of them will be considered by default; instead, an error message will be given and the user will need to choose. Additionally, it can be used to provide `pc` files (like `pc:foo=1.0`; you can use `0` as a version fallback) and commands (like `cmd:foo`). This is notably useful when combined with the `!scanpkgconf` option and so on. It can also be used to provide extra shared libraries. This needs to be versioned with the full version of the shared library (you can infer that from the filename, e.g. `so:libfoo.so.1=1.4.2` when you have the `libfoo.so.1` `SONAME` and the full name `libfoo.so.1.4.2`). You can likewise use `0` as a fallback there. Typically, you will not use this, as the shared library scanning is automatic; but sometimes libraries provide either a non-conforming `SONAME` which the scanner does not pick up, or the scanner is disabled explicitly.
- `priority` (int) When used with `replaces`, this specifies which of the packages gets to keep the files (i.e. the higher-priority package will keep them).
- `replaces` (list) A list of packages we are replacing, in the same constraint format as `provides`. This allows the current package to replace files of the listed packages, without complaining about file conflicts. The files from the current package will simply take over the conflicting files. This is primarily useful for moving files from one package to another, or, together with `priority`, for "policy packages".
- `scriptlets` (dict) A dictionary of strings that are the scriptlets for this package. These take precedence over file scriptlets.
- `sha256` (list or str) A list of SHA256 checksums (or just one checksum as a string) specified as digest strings corresponding to each field in `source`. Used for verification.
- `source` (list or str or tuple) A list of URLs to download and extract (by default). The items can be either strings (in which case the filename is inferred from the URL itself), 2-tuples or 3-tuples. In case of a single source, the variable itself can be a string or tuple, as if it was the item. When a source is a tuple, it can have the filename explicitly specified as the second field, with the first field being the URL. The third field (or second field, in which case the filename is inferred from the URL) can be a boolean. If this is `False`, the source file will not be extracted (using `True` will result in the default behavior). Otherwise, the files will be extracted into `self.wrksrc` in such a way that if extraction yields just a single regular directory, the contents of that will go into `self.wrksrc`, otherwise the extracted files/directories are moved into the directory.
- `suid_files` (list) A list of glob patterns (strings). The system will reject any `setuid` and `setgid` files that do not match at least one pattern in this list.
- `tools` (dict) This can be used to override default tools. Refer to the section about tools for more information.
- `tool_flags` (dict) This can be used to override things such as `CFLAGS` or `LDFLAGS`. Refer to the section about tools and tool flags for more information.
- `triggers` (list) A list of directory paths the package should trigger on. That is, if any package changes these monitored directories, the trigger script for this package should run. This can include wildcards (`foo/*` will fire on any directory inside `foo`).

These variables generate scriptlets:
- `system_users` (list) A list of users to create. A user can take two forms. It can either be a string (in the format `username` or `username:uid`) for the simple case, or a `dict` containing at least the fields `name` and `uid` (an integer) and optionally `desc`, `shell`, `groups`, `pgroup` and `home`.
- `system_groups` (list) A list of groups to create. It contains strings, which can be in the format `gname` or `gname:gid`.
- `sgml_entries` (list) A list of 3-tuples representing arguments to `xmlcatmgr -sc /etc/sgml/auto/catalog add <args>`, or `remove` (the third element is unused then).
- `sgml_catalogs` (list) Like `("CATALOG", v, "--")` in `sgml_entries`.
- `xml_entries` (list) A list of 3-tuples representing arguments to `xmlcatmgr -c /etc/sgml/auto/catalog add <args>`, or `remove` (the third element is unused then).
- `xml_catalogs` (list) Like `("nextCatalog", v, "--")` in `xml_entries`.

Additionally, there is a variety of variables that are not generic but rather are used by specific build styles. They are listed and described in each build style's section.
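As a hedged illustration of the `dict` form of `system_users` (all values are hypothetical):

```
system_users = [
    {
        "name": "_foodaemon",  # hypothetical service user
        "uid": 400,
        "desc": "Foo daemon user",
        "shell": "/usr/bin/nologin",
    }
]
```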
The other thing template files can specify is functions. Functions define template logic; they are here for everything that cannot be done in a purely declarative manner. Functions and variables interact; variables provide data for the functions to read.
In general, the functions defined by templates are phase functions; they are associated with a specific build phase. There are some functions that do not fit this mold, however.
Every user-defined function in a template takes one argument, typically called `self`. It refers to the template object; you can read the current state of template variables as well as other special variables through it, and perform various actions using the API it defines.
The first template-defined function that is called is `init`. This function is called very early during the initialization of the template object; most of its fields are not populated at this point. The following is guaranteed during the time `init(self)` is called:

1) Template variables are populated; every template variable is accessible through `self`.
2) Template options are initialized.
3) The `build_style`, if used, is initialized.
4) Version and architecture are validated.

The following is guaranteed not to be initialized:

1) Build-style specific template variables are not populated.
2) Build-style specific template variable defaults are not set.
3) Template functions are not filled in.
4) Path variables are not filled in.
5) It is yet unknown whether the build will proceed, since `broken` and other things have not yet been checked.
6) Subpackages are not populated.
7) Tools are not handled yet.
8) Mostly everything else.
Basically, you can consider this function as the continuation of global scope; you can finish any initialization that you haven't done globally here, before other things are checked.
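A minimal sketch of an `init` function (the adjustment itself is hypothetical):

```
def init(self):
    # all template variables are readable through self here,
    # but build-style defaults are not applied yet
    self.configure_args += [f"--with-version={self.pkgver}"]
```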
Assuming the build proceeds, phase functions are called. Every phase may use up to 4 functions: `init_PHASE`, `pre_PHASE`, `do_PHASE` and `post_PHASE`. They are called in that order. The `pre_` and `post_` functions exist so that the template can specify additional logic for when the `do_` function is already defined by a `build_style`.
The `init_`-prefixed functions are, unlike the other three, not subject to stamp checking. That means they are called every time, even during repeated builds. This is useful because the template handle is persistent: once data is written to it, it will last all the way to the end, so you can use the `init_` hooks to initialize data that later phases depend on, even if the phase itself is not invoked during this run (e.g. when re-running a build after a failure).
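For instance, a template using a build style can hook extra logic around the predefined steps (a minimal sketch, assuming the `install_license` helper and a `LICENSE` file in the sources):

```
def post_install(self):
    # runs after the build style's do_install
    self.install_license("LICENSE")
```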
The phases to which all of this applies are `fetch`, `extract`, `prepare`, `patch`, `configure`, `build`, `check` and `install`. They are invoked in this order.
Every other function defined in template scope is not used by `cbuild`. However, all regular names are reserved for future expansion. If you want to define custom functions (e.g. helpers) in template scope, prefix their names with an underscore.
Also keep in mind that the order of execution also interacts with hooks. See the section on hooks for more information.
A template can specify which architectures it can build for. The `archs` meta field is used for that and has roughly this format:

```
archs = ["pat1", "pat2", ...]
```
A concrete example would be something like this:

```
archs = ["x86_64", "ppc*", "riscv*", "!arm*"]
```

This would specify that the template can build on the `x86_64` architecture as well as any architecture matching `ppc*` or `riscv*`, but never on any architecture matching `arm*`.
The syntax follows usual shell-style "glob" rules. That means the `*`, `?`, `[seq]` and `[!seq]` patterns are supported (the matching is implemented using the case-sensitive `fnmatch` pattern matcher in Python). In addition to that, `!` in front of a pattern negates it.
When not specified, it is the same as specifying `*` as the sole pattern.
The system checks the list for all matching patterns. The most strictly matching pattern trumps everything, with "most strictly" meaning matching the largest number of exact characters; all pattern styles are considered equally "loose", so `foo*z` is equally strict as `foo[xy]z`. It is an error if you have two matching, equally strict patterns, as well as if you have two identical patterns where only one is negating.
If the finally picked pattern is negating or if no matching pattern was found in the list, the template is considered not buildable.
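To illustrate the strictness rule on a hypothetical list:

```
# on ppc64le, "!ppc64*" matches 5 exact characters while "ppc*" matches
# only 3, so the negation wins and the template is not buildable there;
# plain ppc is matched only by "ppc*" and remains buildable
archs = ["ppc*", "!ppc64*"]
```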
Build styles are a way to simplify the template by inserting predefined logic with a single line.

```
build_style = "meson"
```

Simply with this, you declare that this template uses the Meson build system. What actually happens is that the build style will create some of the necessary functions (`do_build` etc.) implicitly.
A build style is a Python file in `cbuild/build_style` and looks like this:

```
def do_configure(self):
    pass

def do_build(self):
    pass

def do_install(self):
    pass

def use(tmpl):
    tmpl.do_configure = do_configure
    tmpl.do_build = do_build
    tmpl.do_install = do_install

    tmpl.build_style_defaults = [
        ("make_cmd", "mything")
    ]
```
The template can further override pieces of the build style as necessary, while the build style can set any functions it wants. It can also define new template variables, as well as override default values for any template variable.
In general, build styles are simply small wrappers over the `cbuild.util` namespace APIs. That allows you to use the APIs when you need logic that cannot be declared with just a simple variable, and keep templates simple where that is sufficient.
There are currently a few build styles available.
A metapackage `build_style`. It merely defines an empty `do_fetch` as well as `do_install`. All empty packages must use this build style, including subpackages; meta-subpackages of normal packages must mark themselves with this. This is the only time a subpackage sets `build_style`.
You can generally use this for CMake-using projects.

Variables:

- `cmake_dir` A directory relative to the `cwd` of the template that contains the root `CMakeLists.txt`. By default it is `None`, which means that it is directly in `cwd`.

Default values:

- `make_cmd` = `ninja`
- `make_build_target` = `all`
- `make_check_target` = `test`
- `make_dir` = `build`

Sets `do_configure`, `do_build`, `do_check`, `do_install`.

The `cmake` tool is run inside `self.make_dir`.

Additionally creates `self.make`, which is an instance of `cbuild.util.make.Make` for the template.

Implemented around `cbuild.util.cmake`.
A simple style that simply runs `self.configure_script` within `self.chroot_cwd` with `self.configure_args` for `do_configure`, and uses a simple `Make` from `cbuild.util` to build.

Sets `do_configure`, `do_build`, `do_check`, `do_install`.

You are expected to supply all other logic yourself. This build style works best when you need a simple, unassuming wrapper for projects using custom configure scripts. For `autotools` and `autotools`-compatible systems, use `gnu_configure` instead.

Additionally creates `self.make`, which is an instance of `cbuild.util.make.Make` for the template, with no other changes.
A more comprehensive `build_style`, written around `cbuild.util.gnu_configure`.

Default values:

- `make_dir` = `build`

Sets `do_configure`, `do_build`, `do_check`, `do_install`.

During `do_configure`, `gnu_configure.replace_guess` is called first, followed by `gnu_configure.configure`. The `configure` script is run inside `self.make_dir`.

Additionally creates `self.make`, which is an instance of `cbuild.util.make.Make` for the template, with `build` `wrksrc`, and `env` retrieved using the `gnu_configure.get_make_env` API.

All of this means that `gnu_configure` can implicitly deal with cross-compiling and other things, while `configure` can't.
A simple wrapper around `cbuild.util.make`.

Variables:

- `make_use_env` A boolean (defaults to `False`) specifying whether some of the core variables will be provided solely via the environment. If unset, they are provided on the command line. These variables are `OBJCOPY`, `RANLIB`, `CXX`, `CPP`, `CC`, `LD`, `AR`, `AS`, `CFLAGS`, `FFLAGS`, `LDFLAGS`, `CXXFLAGS` and `OBJDUMP` (the last one only when not bootstrapping) during `do_build`. All of these inherently exist in the environment, so if this is `True`, they will simply not be passed on the command line.

Sets `do_configure`, `do_build`, `do_check`, `do_install`.

The `install` target is always called with `STRIP=true` and `PREFIX=/usr`.

Additionally creates `self.make`, which is an instance of `cbuild.util.make.Make` for the template, with no other changes.
You can use this for Meson-using projects.

Variables:

- `meson_dir` A directory relative to the `cwd` of the template that contains the root `meson.build`. By default it is `None`, which means that it is directly in `cwd`.

Default values:

- `make_cmd` = `ninja`
- `make_build_target` = `all`
- `make_check_target` = `test`
- `make_dir` = `build`

Sets `do_configure`, `do_build`, `do_check`, `do_install`.

The `meson` tool is run inside `self.make_dir`.

Additionally creates `self.make`, which is an instance of `cbuild.util.make.Make` for the template, with `build` `wrksrc`.

Implemented around `cbuild.util.meson`.
A build style for Python modules (using `setup.py`).

Default values:

- `make_check_target` = `test`

Sets `do_build`, `do_check`, `do_install`.

The `do_build` executes `setup.py` with `python`, with the `build` target plus any `self.make_build_args`.

The `do_install` executes `setup.py` with `python`, with the `install` target and the arguments `--prefix=/usr` and `--root={self.chroot_destdir}`, plus any `self.make_install_args`.
A build style for Python modules (PEP 517). Requires having `python-pip` in `hostmakedepends`.

Default values:

- `make_build_target` = `.`
- `make_install_target` = `{self.pkgname.removeprefix('python-')}-{self.pkgver}-*-*-*.whl`

Sets `do_build`, `do_check`, `do_install`.

The `do_build` builds a wheel with `pip`. The `do_install` will install the contents of the wheel. The `do_check` will run `pytest` or fail.

The `make_install_target` is used as a glob pattern to match built wheels.
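A hedged sketch of a template fragment for this style (the style name here is an assumption; only the `python-pip` requirement is given above):

```
build_style = "python_pep517"  # assumed name for the PEP 517 style
hostmakedepends = ["python-pip"]
```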
The `cbuild` system has support for subpackages. Subpackages are just regular packages repository-wise, except they are built as a part of some main package's build process, and are created from its files.
Subpackages are used for a variety of things, such as separating development files from the main package, or for plugins.
You should create a symbolic link named like the subpackage in the respective repo category and have it point to the directory with the main package template.
In the template file, you use a decorator. The decorator is available globally during the time a package is initialized. The syntax works like this:

```
@subpackage("mysubpackage")
def _subpkg(self):
    ...
```
The function name is up to you; it does not matter. In order to cover more cases, the subpackage definition can also be conditional:

```
@subpackage("mysubpackage", foo == bar)
def ...
```

The subpackage will only be defined if the condition argument is `True`. Note that this is the only way in which subpackages should ever be conditional.

Generally, if the subpackage symlink exists in `cports`, there should always be a decorated subpackage function. The reason for this is that `cbuild` should be aware of any subpackage the template may generate, regardless of whether it will actually be generated. This is useful as it allows for better introspection/analysis by tooling.
The subpackage body function can look like this:

```
@subpackage("foo-devel")
def _devel(self):
    self.depends = [...]
    self.options = ["textrels"]
    return ["usr/include", "usr/lib/*.so", "usr/lib/*.a"]
```
How this works should be fairly self-explanatory, but to make it clearer: the function assigns template variables that apply to the subpackage, and returns an array of files or directories to "steal" from the main package. This is why subpackage ordering can be important; sometimes files may overlap, and you may need to ensure some subpackages "steal" their files first.
The `self` argument here is the subpackage handle.
If better control over the files is needed, you can also return a function instead of a list. The function takes no arguments (you are supposed to nest it and refer to the subpackage via its parent function's argument) and can use `self.take(path)` and the like.
The following variables apply to subpackages. Most do not inherit their value from the parent and are assigned the defaults; some are inherited, and those are explicitly marked.

- `pkgdesc` (inherits)
- `options`
- `depends`
- `provides`
- `nostrip_files`
- `hardening`
- `nopie_files`
- `shlib_provides`
- `shlib_requires`
- `suid_files`
- `triggers`
The `hardening` option does not actually do anything for subpackages (since subpackages do not affect the build); its sole purpose is to be able to turn off the PIE check for subpackages (as projects may build a mixture of PIE and non-PIE files).
The `pkgdesc` may gain a suffix if the subpackage name has a certain suffix:

- `-devel`: it will be `(development files)`
- `-static`: it will be `(static libraries)`
- `-libs`: it will be `(libraries)`
- `-progs`: it will be `(programs)`
There are also automatic subpackages, which can be declared explicitly if needed, and those have their own descriptions as well. See the later section of this document for those.
Any old suffix is removed before an automatic suffix is appended. You should never use `(suffixes)` as a regular part of a package description; they are reserved for subpackages to describe the subpackage kind.
In general, subpackage descriptions should have suffixes like that. You can choose the best suffix for packages not matching the standardized names. Sometimes it may also be the case that a `-devel` subpackage corresponds to another subpackage rather than the main package, and the default description will thus be wrong. In those cases, you should override it while following the conventions.
Additionally, `depends` is special for subpackages. If the subpackage is a `-doc` or `-dbg` subpackage, it will by default gain a dependency on its parent (i.e. unprefixed) package automatically. If you want to add more dependencies, you can append to it. If you do not want the parent package dependency, e.g. when the package is special and does not have a parent, you can just overwrite it. For `foo-static`, the base dependency is `foo-devel`.
If any broken symlink in a package or subpackage resolves to another subpackage or the main package, a dependency is automatically emitted - see the section about automatic dependencies below.
There are subpackages that are generated automatically. These are (with their package description suffixes):

- `dbg` - `(debug files)`
- `doc` - `(documentation)`
- `man` - `(manual pages)`
- `dinit` - `(service files)`
- `dinit-links` - `(service links)`
- `initramfs-tools` - `(initramfs scripts)`
- `udev` - `(udev rules)`
- `bashcomp` - `(bash completions)`
- `zshcomp` - `(zsh completions)`
- `locale` - `(locale data)`
- `static` - `(static libraries)`
- `pycache` - `(Python bytecode)`
These suffixes should be considered reserved, i.e. you should not make a package with the reserved suffix unless it's replacing the otherwise automatic subpackage, and they themselves should not split off any further subpackages.
They are split off based on existence of certain files inside the package, except debug packages, which are split off if any debug information could be stripped off ELF files within the package.
Automatic subpackages are automatically installed under certain circumstances, except for debug and static packages. For automatic installation to happen, the package they were split off from needs to be installed, plus the following:

- `base-doc` for `-doc` subpackages
- `base-man` for `-man` subpackages
- `base-udev` for `-udev` subpackages
- `base-locale` for `-locale` subpackages
- `base-devel-static` for `-static` subpackages
- `dinit-chimera` for `-dinit` subpackages
- the `-dinit` subpackage for `-dinit-links` subpackages
- `initramfs-tools` for `-initramfs-tools` subpackages
- `bash-completion` for `-bashcomp` packages
- `zsh` for `-zshcomp` packages
- `python-pycache` for `-pycache` packages (except `python-pycache` itself)

Development packages may be automatically installed if `base-devel` is installed and specific other circumstances enable this. Please refer to the section about automatic dependencies below.
You can turn off automatic splitting with the `!autosplit` option. Some templates also have builtin whitelists for split subpackage data; e.g. `eudev` will not split off a `-udev` subpackage.
You can turn splitting of only the static libraries on or off with `splitstatic`.
The build system includes an automatic dependency scanner. This allows you to deal with a lot of what you would ordinarily need to specify by hand.
Packages are scanned for the following:

1) What they provide
2) What they depend on

Packages can automatically provide:

1) Shared libraries (`.so` files)
2) `pkg-config` definitions (`.pc` files)
3) Commands (executables)

Packages can automatically depend on:

1) Shared libraries
2) `pkg-config` definitions
3) Symbolic link providers
First, packages are scanned for their shared library dependencies. This is done by recursively scanning the package tree for ELF files and inspecting their `NEEDED` entries. This will result in a `SONAME`. This `SONAME` is then matched against providers among the installed packages. That means providers must be installed as `makedepends`.
If a provider is not found, the system will error. Of course, things that are provided within the package are skipped. Likewise, if a dependency is found in a subpackage of the current build, it is used directly and not scanned within repositories.
Shared libraries without a `SONAME` can still participate in the resolution if they exist directly in `usr/lib` and do not have a suffix beyond `.so`.
During stage 0 bootstrap, the repository is considered in addition to already installed packages. This is because we do not have a full build root at this point, and lots of things are instead provided from the host system at that point.
Once shared libraries are dealt with, the package is scanned for `.pc` files. Each `.pc` file is inspected for its `Requires` (public as well as private), and dependencies are automatically added as `pc:` dependencies in the resulting `apk`. These can be version constrained; the version constraint is preserved. The `.pc` files may exist in `usr/lib/pkgconfig` and `usr/share/pkgconfig` and nowhere else.
Of course, if the `.pc` file exists within the same package, no dependency is added. All `pc:` dependencies that are added are reverse-scanned for their providers in the repository (an exception to this is if the `pc:` dependency exists in a subpackage). If no provider can be located, the system will error.
Lastly, symlink dependencies are scanned. If a broken symlink is encountered
somewhere in the package, the system will try to resolve it to files in
other subpackages of the same set. If found, a dependency will be added;
this dependency is versioned (since all subpackages share a version).
This is mostly useful so that -devel
packages can automatically depend
on whatever they correspond to (since -devel
packages contain .so
symlinks, which resolve to real files in the runtime package).
Broken symlinks that do not resolve to anything are normally an error. You
can override it by putting brokenlinks
in options
.
Once dependencies are scanned, the package is scanned for provides, so that other packages can depend on it.
ELF files with a suffix starting with .so
are considered for so:
provides. Files with just .so
suffix participate in this if they exist
directly in usr/lib
(as otherwise they may be e.g. plugins and we do
not want to handle those). Versioned files (e.g. .so.1
) can be located
anywhere. If the version contains anything that is not a number, it is
skipped.
Eligible files are scanned for SONAME
information. If they do not provide
one, the library is skipped. If they provide an unversioned SONAME
(i.e.
one that ends with .so
) they are skipped when not directly in /usr/lib
.
The filename is scanned for version. For example, libfoo.so.1.2.3
with
SONAME
libfoo.so.1
will provide a so:libfoo.so.1=1.2.3
. If no version
is provided in the filename, 0
is used. If a version is found, it must
validate as an apk
version number.
The package is then scanned for .pc
files to be provided. Only two paths
are considered, usr/lib/pkgconfig
and usr/share/pkgconfig
. It is an error
for the same .pc
file to exist in both paths. The .pc
files are scanned
for version (this version is sanitized, any -(alpha|beta|rc|pre)
has its
dash replaced with an underscore to be compliant, and the result is verified
with apk
). If no version information is present, 0
is used by default.
For foo.pc
, the provide will become pc:foo=VER
.
Lastly, the package is scanned for command provides. Every file in usr/bin
is a command, and will make a cmd:foo
for usr/bin/foo
.
There are some options
you can use to control this. With !scanrundeps
,
no dependencies will be scanned. As for provides, that can be controlled
with scanshlibs
, scanpkgconf
and scancmd
.
There is a mechanism in place that allows development subpackages (those that
end with -devel
) to be automatically installed. In order for that to
happen, the base-devel
package needs to be installed in the system,
in addition to a specific set of packages.
The behavior of this may be overridden by the packager by disabling the
scandevelif
subpackage option. Defining a custom non-empty install_if
list will likewise automatically disable this behavior entirely.
The dependencies of the subpackage are scanned, and if any full local
dependencies are present (i.e. to another subpackage or the main package,
and fully versioned), this dependency is added to the install_if
. That
allows the package to be autoinstalled if enabled by policy and if
the non-development packages are already installed.
For static libraries, the mechanism is a little different, as they are
usually split off automatically and a hook cannot be used. They get their
install_if against their base development package, in addition to the
base-devel-static
policy package. If this does not work for something,
for example if the relationship is reversed or the base package does not
exist, it is possible to set install_if
to an empty array in the
subpackage definition.
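As a sketch of such an override (using the subpackage decorator described later in this manual; the name foo-devel-static is illustrative):

@subpackage("foo-devel-static")
def _(self):
    # reversed relationship: suppress the automatic install_if
    self.install_if = []
    return self.default_static()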
There are various options you can specify as a part of the options
variable.
Some of them can only be specified at the top level, while some also apply
to subpackages.
The following options are toplevel-only, i.e. they apply globally within the template including for subpackages:
bootstrap (false) This option specifies that the template is built during bootstrapping. Other templates will fail to build unless a build container is available.
parallel (true) By disabling this, you can enforce single-threaded builds for the template. By default the number of build jobs passed by cbuild is respected. Note that this does not influence LTO linker threads.
debug (true) By default, debug packages (-dbg) are generated if there are any strippable debug symbols. By setting this to false, you can disable passing of debug options to the compiler, as well as prevent generation of debug packages.
check (true) By disabling this you can ensure the check phase is never run, even if enabled and enforced in the build system. A reason should always be provided as a comment above the options field.
checkroot (false) You can use this to run the check stage as root. This is useful for some test suites that will not function otherwise. Of course, this still uses namespaces, so it does not actually run as your host system root (as it can't).
installroot (true) By default, the install phase is run as root. This is done with fakeroot, which may interfere with rpath if such a binary is invoked during installation. You may disable this in those cases. For stage 0 builds, it is always disabled.
cross (true) If disabled, the template will error early when attempting cross compilation.
lint (true) If enabled, the template contents will be checked for additional errors before building. This includes correct ordering of fields, validation of URL and description strings and other checks. It does not check formatting of the template, as that can be handled better with external tools.
lto (false) If enabled, LTO will be used. This will result in the necessary compiler flags being applied. Build styles can alter their behavior to accommodate the flags. The default LTO type is thin LTO, which can be overridden with ltofull.
ltofull (false) If you set this together with lto, full LTO will be used. It does not activate LTO by itself.
linkparallel (true) Similarly to parallel, this can be used to disable linker and LTO threads.
The following options apply to a single package and need to be specified for subpackages separately if needed:
textrels (false) By default, if cbuild finds textrels within any ELF files in the packages, it will error. It is possible to override this by enabling the option.
execstack (false) By default, if cbuild finds ELF files with executable stack, it will error. It is possible to override this by enabling the option. An ELF file is considered to have executable stack if it either does not have PT_GNU_STACK or has the 1 << 0 bit set in its flags.
foreignelf (false) By default, if cbuild finds ELF files that have a foreign machine architecture (checked by matching against the libc of the target), it will error. It is possible to override this by enabling this option. Usually this is a wrong thing to do, but for example in case of cross toolchains you might want to enable this.
keepempty (false) By default, cbuild will prune all empty directories from every package. This can be used to override that. It should almost never be used. However, there are some cases, notably base-files, where keeping empty directories is intended. In most cases, when an empty directory is desired, a placeholder file called .empty should be created in it, which ensures that users cannot accidentally rmdir the directory.
brokenlinks (false) By default, broken symlinks that cannot be resolved within any subpackage will result in an error. You can override this behavior but usually shouldn't.
hardlinks (false) Normally, multiple hardlinks are detected and errored on. By enabling this, you allow packages with hardlinks to build.
lintstatic (true) Normally, static libraries are not allowed to be in the main package. In specific rare cases, this may be overridden.
scanrundeps (true) This specifies whether automatic runtime dependencies are scanned for the package. By default, ELF files are scanned for their dependencies, which is usually desirable, but not always.
scanshlibs (true) If disabled, the package will not be scanned for shared libraries to be provided by the package.
scanpkgconf (true) If disabled, the package will not be scanned for .pc files.
scandevelif (true) If disabled, install_if will not be generated for development packages.
scancmd (true) If disabled, the package will not be scanned for executable commands.
spdx (true) If enabled, the license name(s) will be validated as SPDX compliant. License for subpackages is validated separately, if overridden (if not overridden, validation is skipped).
strip (true) If disabled, ELF files in this package will not be stripped, which means debug symbols will remain where they are and a debug package will not be generated.
ltostrip (false) By default, lto being enabled disables stripping of static archives, as LTO archives consist of bitcode and not object files. You can enforce the pass to run with this, which is mainly useful for when there are mixed LTO and non-LTO archives or when something is built with GCC and -ffat-lto-objects. Keep in mind that you will have to use nostrip_files to filter out bitcode archives with this option.
autosplit (true) If disabled, the build system will not autosplit subpackages (other than -dbg, which is controlled with other vars).
splitstatic (false, true) This is like autosplit, but only for static libraries. It is on by default for devel packages and off otherwise. You can change the default by toggling this.
splitudev (true) This is like autosplit, but only for udev rules.
splitdinit (true) This is like autosplit, but only for dinit service files and links.
splitdoc (true) This is like autosplit, but only for docs.
The cbuild
system implements an automatic way to deal with toggling
different hardening options. Several hardening options are implicit
as a part of our toolchain and do not have toggleable options; those
include FORTIFY and RELRO.
Currently the following options are always enabled by default:
pie Position-independent executables.
ssp Enables -fstack-protector-strong.
scp Enables -fstack-clash-protection (ppc64le, ppc64, ppc, x86_64).
int Traps signed integer overflows and integer division by zero.
pac Enables AArch64 pointer authentication (aarch64).
Several others are available that are not on by default:
vis Build with -fvisibility=hidden in default flags.
cfi Enables Clang Control Flow Integrity (needs vis; x86_64 and aarch64).
sst Enables Clang SafeStack (x86_64, aarch64).
CFI has additional options that affect it:
cfi-genptr Relaxed pointer checks (disabled by default).
cfi-icall Indirect function call checking (enabled by default).
Hardening options that are not supported on a platform are silently disabled, but their dependency relationships are always checked.
CFI should be enabled where possible. Our current CFI is not cross-DSO, which means calls across shared library boundaries will not be checked, and the whole template needs building with hidden visibility. A lot of projects do not like being built with hidden visibility, and since Clang CFI is type-based, it is rather easy to encounter CFI violations, so it is not something that can just be enabled and expected to work. Careful testing should be done for each template that enables CFI.
The int
hardening option is enabled by default, but can likewise result in
crashes in various programs/libraries. However, such crashes are always bugs
in those programs/libraries. The best solution is to fix the issues and submit
patches upstream, but in case of complicated bugs, it is okay to disable it in
the template and put in a comment for later (with information on how to reproduce
the crash).
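For example, a template hitting such a crash might disable the option with a reproducer note; a sketch, assuming the hardening list accepts the same !-prefixed negation as options (the comment's scenario is illustrative):

# int: crashes on startup due to signed overflow in upstream parser code
hardening = ["!int"]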
The build system also provides separate management of tools for convenience. Similarly, it allows you to declare custom tool flags. Tools and tool flags in this case refer primarily to the toolchain and flags passed to it.
By default, the following tools are defined:
CC The C compiler, clang by default.
CXX The C++ compiler, clang++ by default.
CPP The C preprocessor, clang-cpp by default.
LD The linker, ld.lld by default.
PKG_CONFIG The pkg-config implementation, pkg-config by default.
NM The nm tool, llvm-nm when not bootstrapping, nm otherwise.
AR The ar archiver, llvm-ar when not bootstrapping, ar otherwise.
AS The assembler, clang by default.
RANLIB The ranlib tool, llvm-ranlib when not bootstrapping and ranlib otherwise.
STRIP The strip tool, llvm-strip when not bootstrapping and strip otherwise.
OBJDUMP The objdump tool, llvm-objdump, and not provided when bootstrapping (ELF Toolchain does not provide it).
OBJCOPY The objcopy tool, llvm-objcopy when not bootstrapping and objcopy otherwise.
READELF The readelf tool, llvm-readelf when not bootstrapping and readelf otherwise.
The following tool flags are defined:
CFLAGS (C)
CXXFLAGS (C++)
FFLAGS (Fortran)
LDFLAGS (linker, usually passed together with one of the above)
RUSTFLAGS (Rust)
When invoking commands within the sandbox, the build system will export the values as environment variables, but before user provided environment variables are exported (therefore, actual explicit env vars take priority).
The CC
, CXX
, CPP
, LD
and PKG_CONFIG
tools are treated specially
for cross-compiling targets; when a cross-compiling target is detected,
the short triplet is prepended. This also happens when the user overrides
the tool via the tools
variable in the template. Therefore, if you set
CC
to foo
and you cross-compile to aarch64
, you may get something
like aarch64-linux-musl-foo
.
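For example, a template overriding the C compiler might do the following (a sketch; with the cross behavior above, building for aarch64 would then invoke aarch64-linux-musl-foo):

tools = {"CC": "foo"}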
Additionally, these tools are also exported into the environment with
their host values, as BUILD_CC
, BUILD_LD
and so on. This is to ensure
that project build systems can utilize both host and target toolchains
where appropriate.
Tool flags have somewhat more elaborate handling. Similarly to tools, they
are also exported into the environment by their names, including for
the host profile with the BUILD_
prefix. However, the actual values
are composed of multiple parts, which are generally the following:
1) Any hardening flags for the tool as defined by current hardening
of the
template, possibly extended or overridden by the hardening
argument.
2) The flags as defined in either the current build profile or target
.
3) Bootstrapping or cross-compiling flags.
4) The flags as defined in your template, if any.
5) -fdebug-prefix-map=/builddir/{wrksrc}=.
to improve ccache behavior
for CFLAGS
and CXXFLAGS
.
6) Any extra flags from extra_flags
.
7) Debug flags as corresponding to the tool according to the current debug
level (default or template-specified), if building with debug.
Not all of the above may apply to all tool types, but it tends to apply to compilers. Any differences will be noted here, if needed.
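As an illustration of part (4), template-defined flags are typically given per tool; a sketch, assuming the tool_flags template field (the values are illustrative):

tool_flags = {
    "CFLAGS": ["-D_GNU_SOURCE"],
    "LDFLAGS": ["-Wl,--as-needed"],
}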
There are many more variables that are implicitly exported into the environment, but those are documented elsewhere.
The packaging system lets you provide custom hooks as well as triggers.
Hooks are scriptlets (simple shell scripts) that will run at specified times during the package installation or removal. Triggers are scriptlets that run if something modifies a monitored directory.
The system supports install
, upgrade
and deinstall
hooks, each
having pre
and post
variants differentiating whether the hook is
run before or after the step.
The install
hooks are executed if a package is installed, but not
downgraded or upgraded or reinstalled. Conversely, the upgrade
hooks are run on downgrade or upgrade as well as reinstallation,
but not clean installation. The deinstall
hooks are run when you
uninstall a package, but removal before upgrade or reinstall is not
counted.
Overall, this makes for 6 hooks: pre-install, post-install, pre-upgrade, post-upgrade, pre-deinstall and post-deinstall.
Triggers are a different kind of scriptlet. Each package is allowed to carry one trigger, and this trigger must have a list of directory patterns set up for it. These directory patterns are then monitored for changes, potentially by other packages. That means other packages can result in invocation of triggers even if the package providing the trigger is not modified in any way.
Triggers are fired when the affected directory is modified in any way; this includes uninstallation.
The scriptlet is provided as a file in the template's directory,
named pkgname.scriptname
, e.g. foo.trigger
or foo.post-install
.
You can use symlinks if you want one scriptlet to be used for multiple
hooks.
If a trigger script is provided, the triggers
variable must be set
appropriately.
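For example, a package whose foo.trigger should fire whenever font directories change might declare (the paths are illustrative):

triggers = ["/usr/share/fonts"]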
All scriptlets are run as if set -e
. All scriptlets are run with the
default shell interpreter (#!/bin/sh
) regardless of their shebang.
You should still provide a #!/bin/sh
shebang, but this is just for
style.
Alternatively, scriptlets may be provided as a part of the template
using the scriptlets
field. If both file and in-template scriptlet
are provided, the in-template one takes precedence.
Hooks get passed the new or current package version as the first argument, as well as the old version as a second argument where this is relevant.
Triggers are passed the directory paths that resulted in the trigger being invoked.
There are certain things that result in a hook being generated automatically, without providing an explicit scriptlet for it. If that happens, the potential user script is run after the automatic one.
There are automatic hooks for user and group registration. These are
controlled by the system_users
and system_groups
variables that
you can specify. See the documentation for those.
These hooks will automatically take care of creating necessary users
and groups as well as deactivating them when needed. The creation is
done in pre-install
and pre-upgrade
, while the deactivation is
done in post-deinstall
.
Triggered by the sgml_entries
and xml_entries
variables. If these
are specified, the package should also depend on xmlcatmgr
or the
scriptlets will fail.
The cbuild
system allows for flexible definition of profiles for
different target architectures. These profiles are used for both
native and cross builds.
The definition exists in etc/build_profiles/ARCH.ini
where ARCH
is the apk
architecture name (in general matching uname -m
).
It may look like this:
[profile]
endian = little
wordsize = 64
triplet = riscv64-unknown-linux-musl
machine = riscv64
goarch = riscv64
repos = main contrib
[flags]
CFLAGS = -march=rv64gc -mabi=lp64d
CXXFLAGS = ${CFLAGS}
FFLAGS = ${CFLAGS}
LDFLAGS =
RUSTFLAGS =
These are also the fields it has to define. The triplet
must always
be the full triplet (cbuild
will take care of building the short
triplet from it if needed). The compiler flags are optional.
The repos
field specifies which categories are provided by remote
repositories. As different architecture tiers may provide different
package sets and some architectures don't have remote repositories
at all, this is specified in the profile as we have no way to check
it (and assuming all repos exist would just lead to needless failures
when updating the package indexes).
There is also the special bootstrap
profile used when bootstrapping.
It differs from normal profiles in that the profile
section is not
actually specified, as the endianness and word size are already known
from the host and the rest of the info is architecture specific. What
it can specify is the flags
section, and possibly also additional
per-architecture flags (e.g. flags.riscv64
). User specified flags
from global config are ignored when bootstrapping.
The cbuild
system provides special API to manipulate profiles, and
you can utilize any arbitrary profiles within one build if needed.
More about that in the respective API sections, but the API allows
one to retrieve compiler flags in a proper architecture-specific way,
check if we are cross-compiling and otherwise inspect the target.
API-side, the profile (retrieved with self.profile()
for example)
is represented as a Profile
object. It looks like this:
class Profile:
arch = ...
triplet = ...
short_triplet = ...
machine = ...
sysroot = ...
wordsize = ...
endian = ...
cross = ...
repos = ...
goarch = ...
The properties have the following meanings:
arch The apk architecture name of the profile.
triplet The "long" target triplet (e.g. aarch64-unknown-linux-musl).
short_triplet The "short" target triplet (e.g. aarch64-linux-musl).
machine The uname machine of the profile. Matches arch if not explicit.
sysroot A pathlib path representing the sysroot.
wordsize The integer word size of the target (typically 64 or 32).
endian The endianness of the target (little or big).
cross A boolean that is True for cross compiling targets and False otherwise.
repos The repository categories available for the profile, as described in the profile definition above.
goarch The architecture name for the Go programming language. Optional and only present when supported by the toolchain.
For the bootstrap
profile, triplet
and short_triplet
are None
.
The sysroot
refers to /
for native targets and /usr/<short_triplet>
for
cross-compiling targets.
In general, you will not want to use the profile's methods, and the member variables are strictly read only.
This section of the documentation defines what the build environment looks like when building a package.
Except when bootstrapping from scratch, most of the actual build process
runs sandboxed. The sandboxing is provided by the means of a minimal
Chimera container (as defined by the main/base-chroot
package) and
the bwrap
tool (bubblewrap
), which utilizes Linux Namespaces to
provide a safe and unprivileged environment.
During initial setup, all required dependencies are installed. The root is mounted read-write during this stage, and network access is still available. This stage is considered trusted; no shell code is executed.
When cross-compiling, the toolchain pieces required for the target
architecture are installed (e.g. base-cross-aarch64
for aarch64
).
The target dependencies are installed not in the container directly,
but rather in the target sysroot, which is /usr/aarch64-linux-musl
in the container (as an example for aarch64
).
In order to trick apk
into managing the sysroot properly, the system
automatically creates an internal dummy metapackage. This is needed so
that installing packages into the sysroot does not overwrite files
provided by the container's cross toolchain packages; this includes
things like musl
as well as libcxx
, libunwind
and other bits
that are a part of the cross-toolchain and should not be installed
as regular packages (which they otherwise would, as dependencies).
Once the environment is set up and template code runs, the root is always mounted as read only. That prevents unintended modifications to the container, ensuring that it always remains consistent.
When bootstrapping the build container from binary packages,
/etc/machine-id
is generated as a random string. This is mainly
to allow things that need it to pass tests and so on.
The following environment variables are exported into the sandbox:
PATH The executable path, includes /usr/bin plus possible additions for ccache and so on.
SHELL Set to /bin/sh.
HOME Set to /tmp.
LC_COLLATE Set to C.
LANG Set to en_US.UTF-8.
UNAME_m Set to the preferred host architecture. Read by uname(1).
PYTHONUNBUFFERED Set to 1. This disables output buffering on Python subprocesses, which allows output to be printed right away, since cbuild captures it for logging purposes.
SOURCE_DATE_EPOCH The timestamp for reproducible builds.
CBUILD_STATEDIR Points to where current package build metadata is stored, such as stamps for finished phases.
CFLAGS Target C compiler flags.
FFLAGS Target Fortran compiler flags.
CXXFLAGS Target C++ compiler flags.
LDFLAGS Target linker flags.
RUSTFLAGS Target Rust compiler flags.
CC Target C compiler.
CXX Target C++ compiler.
CPP Target C preprocessor.
LD Target linker.
PKG_CONFIG Target pkg-config.
STRIPBIN Set to a special wrapper that avoids stripping the file. This is in order to bypass the -s argument of install(1).
CBUILD_TARGET_MACHINE Target apk machine architecture.
CBUILD_TARGET_TRIPLET Full target triplet (as described in profile). This is not exported during stage0 bootstrap.
CBUILD_TARGET_SYSROOT Target sysroot path. Host sysroot is always /.
BUILD_CFLAGS Host C compiler flags.
BUILD_FFLAGS Host Fortran compiler flags.
BUILD_CXXFLAGS Host C++ compiler flags.
BUILD_LDFLAGS Host linker flags.
BUILD_RUSTFLAGS Host Rust compiler flags.
BUILD_CC Host C compiler.
BUILD_CXX Host C++ compiler.
BUILD_CPP Host C preprocessor.
BUILD_LD Host linker.
BUILD_PKG_CONFIG Host pkg-config.
CBUILD_HOST_MACHINE Host apk machine architecture.
CBUILD_HOST_TRIPLET Full host triplet (as described in profile). This is not exported during stage0 bootstrap.
Additionally, when using ccache, the following are also exported:
CCACHEPATH The path to ccache toolchain symlinks.
CCACHE_DIR The path to ccache data.
CCACHE_COMPILERCHECK Set to content.
CCACHE_COMPRESS Set to 1.
CCACHE_BASEDIR Set to the cbuild-set current working directory.
When set in host environment, the variables NO_PROXY
,
HTTP_PROXY
, HTTPS_PROXY
, SOCKS_PROXY
, FTP_RETRIES
, HTTP_PROXY_AUTH
are carried over into the environment.
The values of the tools
meta variable are also exported. Additionally,
values of the env
meta variable are exported, taking priority over any
other values. Finally, when invoking code in the sandbox, the user of the
API may specify additional custom environment variables, which further
override the rest.
The container is entered with a specific current working directory. At first
this is self.wrksrc
, then from configure
onwards it may be build_wrksrc
if set (which is inside self.wrksrc
). This applies to all parts of each
phase, including init
, pre
and post
.
The current working directory may be overridden locally via API, either for the template or for the specific container invocation.
The following bind mounts are provided:
/ The root, read-only.
/ccache The ccache data path (CCACHE_DIR), read-write.
/builddir The directory in which self.wrksrc exists.
/destdir The destination directory for installing; packages will install into /destdir/pkgname-pkgver, or when cross compiling, into /destdir/triplet/pkgname-pkgver. Read-only before install, and read-write for the install phase.
/sources Read-only, points to where all sources are stored.
/dev, /proc and /tmp are fresh (not bound).
phase is done, all possible namespaces are unshared.
This includes the network namespace, so there is no more network
access within the sandbox at this point.
The cbuild
system is largely driven by hooks. A hook is a Python source
file present in cbuild/hooks/<section>
. Hooks take care of things such
as sources handling, environment setup, linting, cleanups, and even
package generation and repo registration.
The section consists of the init_
, pre_
, do_
or post_
prefix plus
the phase name (fetch
, extract
, prepare
, patch
, configure
, build
,
check
, install
and pkg
).
Hooks are stamp-checked, except the init_
hooks which are run always.
They are called together with the corresponding phase functions (if such
phase function exists) defined in the template. Every hook defined in the
section directory is invoked, in sorted order. They use a numerical prefix
to ensure proper sorting.
A hook looks like this:
def invoke(pkg):
pass
It takes a package (sometimes this may be a subpackage) and does not return a value, though it may error.
This is the entire call chain of a template build. The init:
and pre:
invocations mean init_
or pre_
hooks plus template function if available.
For post:
, the order is reversed, with the function called first and the
hooks called afterwards. For do_fetch
and do_extract
, either the hooks
or the function are called but not both; the function overrides the hooks.
This allows templates to define custom behavior if needed, but fall back
to the defaults that are useful for most.
When step:
is written, it means init_
hooks and function called always,
followed by pre_
hooks and function, followed by do_
function and hooks,
followed by post_
function and hooks. All steps have their do_
function
optional (i.e. template does not have to define it) except install
, which
always has to have it defined in the template.
1) init
2) init: fetch
3) pre: fetch
4) do_fetch
OR do_fetch
hooks
5) post: fetch
6) init: extract
7) do_extract
OR do_extract
hooks
8) post: extract
9) step: prepare
10) step: patch
11) step: configure
12) step: build
13) step: check
14) step: install
The install
step is also special in that it does not call post_install
hooks yet (post_install
function is called though).
After this, subpackage installation is performed. For each subpackage, the following is run:
1) subpackage is checked for pkg_install
2) if defined, pre_install
hooks are called, followed by pkg_install
3) post_install
hooks are called always
Finally, post_install
hooks are called for the main package.
For both subpackages and main package, the system scans for shared libraries
in the package, before post_install
hooks are called.
The whole install
step is treated atomically, i.e. if anything in it fails
and the build is restarted, it runs again from install
.
Once done, init_pkg
hooks are called for the main package. Then, for each
subpackage and finally for the main package, pre_pkg
hooks are called.
The pre_pkg
hooks should not alter anything in the resulting destdir
.
From this point onwards, it should be considered read only.
Finally, do_pkg
and post_pkg
hooks are called first for each subpackage
and then for the main package. After this, the build system rebuilds repo
indexes, removes automatic dependencies, and performs cleanup.
The build system implements staging. This means packages do not get registered into the actual final repo outright, but instead they first get staged and only when ready, they get moved into the repository proper.
Every built package gets staged first. There is a specific staging overlay repo for every repository, but the unstaging algorithm considers them all a single global stage.
When you invoke a build (./cbuild pkg category/foo
), it must first finish.
This includes building potential missing dependencies. Once the entire
potential batch is built, the unstaging algorithm kicks in and does the
following:
1) If the user has explicitly requested that the package remains staged,
nothing is done. This can be done via a command line option to cbuild
or using the configuration file.
2) The system collects all staging overlays currently present.
3) Every staging overlay is searched for packages. These packages are
collected and each package is checked for its virtual providers. These
include shared libraries (so:libfoo.so=ver
) and others. The system
checks both the staged version and a previous version that was already
built and is not in stage. The providers of both are collected.
4) Staged version providers are accumulated in the added
global set.
The previous version providers are in the dropped
global set. This
happens only if the providers between the versions differ. If they
do, the package is considered replaced
.
5) Common entries between added
and dropped
are eliminated. These
are entries that have the same name as well as version.
6) Now all dropped
providers are searched for in both the main repos
and the stages. Their reverse dependencies (i.e. things depending on
them) are collected, and each reverse dependency is stored in a global
set.
7) Each reverse dependency is searched for and its dependencies are collected.
Only the "best" version is considered, which is the potentially staged
one. Every dependency is checked if it matches something in the dropped
set. Version constraints are respected here. If one is not found in the
dropped
set, the dependency is discarded. Otherwise, it is added into
a set of dependencies for further checking.
8) Each revdep dependency that satisfied a dropped
provider is further
checked for providers. If a provider that was not replaced
is found,
then the dependency is discarded. This ensures that if there is another
provider that can satisfy the dependency, we don't have to worry about it.
9) If the resulting set is empty, the repository gets unstaged as there
is nothing else to consider. If it is not empty, the repositories are
kept staged, and a list of packages depending on each problematic
provider is printed.
This algorithm is not perfect and will not catch certain edge cases, such as
when moving a provider from main
to contrib
but there still being packages
that depend on it in main
. This is an intended tradeoff to keep things
reasonably simple. You are expected to be careful with such cases and deal
with them properly.
The main point of the staging system is to handle soname
updates in a way
that does not disrupt user workflow. That is, when a soname
is increased
for a library, the rebuild will get staged until everything depending on
it has been rebuilt against the new version too. While the package system
deals with this gracefully and would not let users update affected packages,
it is better to make this invisible and keep the old versions until things
are ready.
Additionally, it is there for convenience, to be notified of potential rebuilds to be done, as well as so one does not forget.
The public API of cbuild
that is accessible from templates consists of
exactly 2 parts: the API available as a part of the template handle, and
the API in the cbuild.util
module namespace.
The template handle provides the important APIs that cannot be reimplemented using other APIs. The utility namespace, on the other hand, provides things that are useful to have implemented in a unified manner, but are implemented in terms of the existing interfaces.
There are also several builtin global variables that are accessible from the template scope at the time the template itself is executed. These are only available during that time, and never after that, so do not attempt to access them from inside functions.
This is a subpackage decorator, see Subpackages.
Using self
, you can access the Template
handle from the global scope.
Keep in mind that at this point, it is uninitialized - not even things run
during the init()
call are set up.
Also, do not rely on it inside functions. Its existence is limited to the
time when the primary template body is being executed. Of course, functions
in general take the handle as the first argument, which is by convention
also called self
. You can obviously rely on that, just do not rely on it
being implicitly defined.
The handle API consists of 3 classes. The Package
class provides base API
that is available from both the main template and subpackage handles. The
Template
class represents the template handle available as self
in
global functions, while the Subpackage
class represents the object in
subpackages.
Both Template
and Subpackage
inherit from Package
.
Shared API for both templates and subpackages.
All APIs may raise errors. The user is not supposed to handle the errors,
they will be handled appropriately by cbuild
.
Filesystem APIs take strings or pathlib
paths.
A string representing the name of the package.
The version number of the package. While provided as a template variable, this is inherited into subpackages as well, so it's considered a part of the base API.
The release number of the package. While provided as a template variable, this is inherited into subpackages as well, so it's considered a part of the base API.
Represents an instance of a class with this API:
class Logger:
def out_plain(self, msg, end = "\n")
def out(self, msg, end = "\n")
def warn(self, msg, end = "\n")
def out_red(self, msg, end = "\n")
The out_plain()
method writes out the given string plus the end
.
The out()
method does the same, but in a colored format and prefixed
with the =>
string.
The warn()
method prints out => WARNING: <msg><end>
in a warning
color. The out_red
is like out
, except in red, providing a base for
printing out errors.
Whether the color-using methods use colors or not depends on the current
configuration of cbuild
(arguments, environment, whether we are in an
interactive terminal are all things that may disable colors).
A dictionary representing the enabled/disabled options for the template
or subpackage. It is one of the few member variables that actually override
the template variables; within the template, you specify options
as a
list, but that is not useful for checking, so the system internally maps
it to a dictionary (and fills in the defaults as well, so you can check for
options the template did not explicitly set).
Usage:
if not self.options["strip"]:
... do something that only happens when stripping is disabled ...
The absolute path to the destination root of the template or subpackage.
This directory will be populated during the install
phase and represents
the target root.
Same as destdir
, but when viewed from inside the sandbox.
The absolute path to the directory (stored within builddir
) which
contains all the state files (i.e. tracking which phases are done and
so on in a persistent manner to allow resuming, plus any wrappers).
Using self.logger.out()
, print out a specially prefixed message. The
message has the format <prefix>: <msg><end>
, where prefix
can be
one of the following:
{self.pkgname}-{self.pkgver}-r{self.pkgrel}
{self.pkgname}
cbuild
This depends on the stage of the build.
Like log
, but using out_red
.
Like log
, but using warn
.
In addition to logging a message like log_red
, also raises an error,
which will abort the build.
To be used as a context manager. Temporarily changes the cwd
as well
as chroot_cwd
of the template to point to dirn
(which is treated
as a relative path to current cwd
).
This is pretty much an equivalent of the Unix pushd
/popd
commands.
Usage:
with self.pushd("src"):
pass
Copies srcp to destp. Both paths are considered potentially relative to cwd. If srcp is a file, it is copied into destp if destp is a directory, and otherwise becomes destp. If symlinks is True, symlinks are followed, i.e. if srcp was a symlink, the result will be a copy of the file it resolves to.
If srcp is a directory, recursive must be True or the function will error. This includes the case when srcp is a symbolic link to a directory. In the latter case, srcp is copied as-is to destp as if it were a file, and symlinks is ignored. The meaning of symlinks is the opposite for directories with recursive: if it is True, all symlinks are preserved, otherwise they are resolved.
This mimics the behavior of the Unix cp
tool.
Moves srcp
to destp
. If destp
is an existing directory, srcp
is
moved into that directory, otherwise srcp
is renamed to destp
.
Both paths are considered potentially relative to cwd
.
This mimics the behavior of the Unix mv
tool.
Creates the directory path
. If parents
is False
and the parent of
path
does not exist, this will error. If the directory already exists,
it will likewise error. If parents
is True
, it will create all parent
directories, and it will never error when path
already exists and is
a directory.
Mimics the behavior of the Unix mkdir
tool, possibly with -p
.
Removes the path path
. Can be either a file or a directory. If it is
a directory (symlinks are treated as files) and recursive
is not True
,
an error is raised. If force
is True
, the function will never error
when path
is non-existent.
Mimics the behavior of the Unix rm
tool, recursive
is like -r
and
force
is like -f
.
Creates a symlink at destp
pointing to srcp
. The destp
is considered
potentially relative to cwd
. If destp
resolves to a directory, the
symlink is created inside that directory (including if it is a symlink
to a directory). In that case, the symlink's name will be the name
portion of srcp
.
When relative
is True
, srcp
is resolved to be relative to destp
using os.path.relpath
; otherwise it is not modified in any way and
used as the target as-is. It can be a pathlib
path or a string, just
like destp
.
This mimics the behavior of the Unix ln
tool with the -s
switch and
optionally with -r
.
Changes the mode of path
to mode
. Usually you will want to use the
octal notation (e.g. 0o644
for owner-writable, all-readable). The
path
is considered potentially relative to cwd
.
This mimics the behavior of the Unix chmod
tool.
Copies a file pointed to by src
(relative to cwd
) to dest
(which must
be a relative path in destdir
). If dest
is a directory, the file will
be copied into it, otherwise it will be created there.
The src
may be an absolute path. If root
is specified, it will be used
instead of destdir
.
Returns a generator object that represents a recursive search for pattern
within path
(which is considered potentially relative to cwd
). Each
result is a pathlib.Path
object that is a found entry. If files
is
set to True
, only files are considered.
Usage:
for p in self.find("foo", "*.py"):
...
APIs not available on subpackages.
The number of configured jobs to use for building. This is not affected
by whether parallel builds are disabled via options, always referring
to the number provided by cbuild
.
The number of linker threads (and LTO jobs, if enabled) to use. This is
not affected by whether parallel builds are disabled via options, always
referring to the number provided by cbuild
.
The number of jobs to use for building. Unlike conf_jobs
, this will always
be 1 if parallel
option is disabled.
The number of linker threads (and LTO jobs, if enabled) to use. Unlike
conf_link_threads
, this will always be 1 if linkparallel
option is disabled.
Whether the build was forced (boolean).
The current bootstrap stage. When 0
, it means we're running the first-stage
bootstrap that does not have a sandbox and runs on top of the host system.
During normal builds, it's 3
. During other stages of source bootstrap,
it can be 1
(when compiling using the sandbox generated by stage 0) or
2
(when compiling using the sandbox generated by stage 1).
Whether running the check
phase is enabled by cbuild
. This is False
for
cross builds even if testing is otherwise enabled. Keep in mind that setting
!check
in options
will not make this False
, as it's set before options
are read.
You should never base your makedepends
or hostmakedepends
on whether you
are running tests or not. Packages should always be built with an identical
environment regardless of settings.
Whether building dbg
packages is enabled by cbuild
.
Whether using ccache
is enabled by cbuild.
A string representing the name of the directory inside builddir
that
is used as the default working source. It is usually the basis for self.cwd
,
along with the potential user-set build_wrksrc
meta variable.
The current working directory of the template. This does not mirror the
actual current working directory of the OS; it is the directory that is
used strictly by the Python APIs of cbuild
.
Like cwd
, but when viewed from inside of the sandbox. In general you
will use this when building paths for commands to be executed within,
as using cwd
directly would refer to a non-existent or incorrect
path.
The absolute path to the directory with template.py
.
The absolute path to the files
directory of the template. This directory
contains auxiliary files needed for the build, shipped in cports
.
The absolute path to the patches
directory of the template. This directory
contains patches that are applied in the patch
phase.
The absolute path to where the source files for the template are stored.
The absolute path to the builddir
. This directory is where sources are
extracted, and which is used as the mutable base for builds.
Like builddir
, but when viewed from inside the sandbox.
A directory within statedir
(an absolute path to it) that is used for
wrappers. This is added to PATH
when executing commands within the sandbox,
in order to override or wrap certain tools where we don't want the default
behavior.
The base directory (absolute path) where all destination directories for packages will be stored, i.e. for the main package as well as subpackages.
Like destdir_base
, but when viewed from inside the sandbox.
Execute a command in the build container, sandboxed. Does not spawn a shell,
instead directly runs cmd
, passing it *args
. You can use env
to provide
extra environment variables in addition to the implied ones (see the build
environment section). The provided env vars override whatever builtin ones
the system sets up.
The wrksrc
is relative to current cwd
of the template. If not given, the
working directory will be the current cwd
of the template itself.
The level of sandboxing used depends on the current build phase. In all cases,
the root filesystem will be mounted read only, the builddir
will be mutable
unless we're after post_install
, the destdir
will be immutable unless we
are at install
phase, and all namespaces will be unshared (including network
namespace) unless we're at fetch
.
The allow_network
argument can be used to conditionally allow network access
but only during the fetch
, extract
, prepare
and patch
phases.
If run during the install
phase (or during the check
phase when checkroot
is enabled in options
), the command will be run masquerading as the root
user. This affects all things that use this API, e.g. make
invocations.
This behavior is to better accommodate various build systems.
By default, failed runs will result in an exception being raised. You can
bypass that by setting check
to False
. Also, by default all output is
printed out without capturing it; using capture_output
you can override
that if needed.
The stdout
and stderr
arguments work the same as for Python subprocess.run
.
The return value is the same as from Python subprocess.run
. There you can
access the return code as well as possibly captured stdout
.
Usage:
self.do("foo", ["--arg1", "--arg2"], wrksrc = "bar")
This is a utility API meant to be used as a context manager. It deals with
a stamp file (identified by name
) in the current template cwd
. You can
use it to have some code run just once, and once it succeeds, not have it
run again even if the same phase is run. You use it like this:
with self.stamp("test") as s:
s.check() # this is important
... do whatever you want here ...
The check()
method ensures that the code following it is not run if the
stamp file already exists. The script will proceed after the context.
If target
is not given, simply returns the current profile, otherwise
to be used as a context manager. Temporarily overrides the current build
profile to the given target
, which can be a specific profile name (for
example aarch64
) or the special aliases host
and target
, which refer
to the build machine and the target machine respectively (the target machine
is the same as build machine when not cross compiling).
Usage:
with self.profile("aarch64") as pf:
... do something that we need for aarch64 at the time ...
if self.profile().endian == "big":
...
Get specific tool flags (e.g. CFLAGS
) for the current profile or for target
.
The target
argument is the same as for profile()
.
See the section on tools and tool flags for more information.
The return value will be a list of strings, unless shell
is True
, in
which case the result is a shell-escaped string that can be passed safely.
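Usage (a sketch following the description above; argument names are as described):

flist = self.get_tool_flags("CFLAGS")
fstr = self.get_tool_flags("CFLAGS", shell = True)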
A shortcut for get_tool_flags
with CFLAGS
.
A shortcut for get_tool_flags
with CXXFLAGS
.
A shortcut for get_tool_flags
with FFLAGS
.
A shortcut for get_tool_flags
with LDFLAGS
.
Get the specific tool (e.g. CC
) for the current profile or for target
.
The target
argument is the same as for profile()
.
This properly deals with cross-compiling, taking care of adding the right
prefix where needed and so on. It should always be used instead of querying
the tools
member variable directly.
Check if the current configuration (i.e. taking into account the template
as well as the current profile or the target
) has the given hardening
flag enabled. For a hardening flag to be enabled, it must not be disabled
by the template or defaults, and it must be supported for the target.
The target
argument is the same as for profile()
.
Check if the current configuration (i.e. taking into account the template
as well as the current profile or the target
) is going to LTO the
build. This will be True
if the template does not disable it, and
if the stage is at least 2 and the profile supports it.
Installs path
(which may be a file or a directory and is relative
to cwd
of the template) to dest
(which must refer to a directory,
and must not be absolute - it is treated as relative to destdir
).
If symlinks
is True
(which is the default), symlinks in path
will also be symlinks in dest
.
Usage:
self.install_files("data/foo", "usr/share")
Creates a directory dest
in destdir
.
Usage:
self.install_dir("usr/include")
The empty
argument, if set to True
, will result in the .empty
file being created inside. This serves as a placeholder to prevent
the directory's accidental removal.
Installs src
into dest
, where src
refers to a file (absolute or
relative to cwd
) and dest
refers to a directory (must exist and be
relative).
The destination file must not already exist. The permissions are adjusted
to mode
, unless set to None
. The destination file name will be name
,
unless it is None
, in which case the source file name is kept.
The dest
is created if non-existent.
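Usage (a sketch; the paths are illustrative, and it assumes the member for the template's files directory is named files_path):

self.install_file(self.files_path / "foo.conf", "etc", 0o644)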
Equivalent to self.install_file(src, "usr/bin", 0o755, name)
.
Equivalent to self.install_file(src, "usr/lib", 0o755, name)
.
Install a manpage src
. That means installing into usr/share/man
into
the right category (e.g. man1
), this is determined from the filename by
default, but you can specify it as cat
(e.g. the integer 1
). The manpage
will retain its name, except when name
is specified. This name should not
include the category (it is automatically appended, either as previously
determined from the filename, or as specified by cat
).
The permissions will be 644
. All paths are created as necessary.
Equivalent to self.install_file(src, "usr/share/licenses/" + pkgname, 0o644, name)
.
When pkgname
is not given, self.pkgname
is used.
If src
is a file path that does not have the .user
extension, it installs
the file in etc/dinit.d
with mode 0o644
. Otherwise, it installs the file
in etc/dinit.d/user
with its extension removed. If name
is provided, it
is used as it is without changes.
If enable
is True
, the service will be implicitly enabled as system service.
Equivalent to self.install_file(src, "etc/dinit.d/scripts", 0o755, name)
.
Creates a symbolic link at dest
, pointing to src
.
Usage:
self.install_link("libfoo.so.1", "usr/lib/libfoo.so")
For each argument representing an absolute path to a shell, register it with the system.
Usage:
self.install_shell("/usr/bin/bash")
These methods are only available on subpackage objects. You cannot create a subpackage object directly, but it can be passed to hooks as well as certain user defined functions.
Subpackage contents are taken explicitly from the main package. The only
contents that are taken implicitly are the potential licenses, i.e. the
usr/share/licenses/<subpkgname>
path.
The subpackage will "steal" path p
. The argument can be a string or
a pathlib
path, representing a relative path to destdir
of the main
package.
If missing_ok
is True
, the function will not error if the path does
not exist. In general you should not set this.
You will want to use this if you return a function from the subpackage function. The following are equivalent:
def _subpkg(self):
...
return ["usr/include", "usr/lib/*.a", "usr/lib/*.so"]
def _subpkg(self):
...
def install():
self.take("usr/include")
self.take("usr/lib/*.a")
self.take("usr/lib/*.so")
return install
This function will take
everything that should usually belong in a
development package. See the implementation in cbuild/core/template.py
for the current coverage.
If man
is a non-empty string, it represents the manpage categories to take.
This function will take
everything that should usually belong in a
-static
package. This is all static libraries in usr/lib
.
This function will take
everything that should usually belong in a
documentation package. See the implementation in cbuild/core/template.py
for the current coverage.
This function will take
everything that should usually belong in a
-libs
package. This is all shared libraries in usr/lib
that start
with lib
and follow a regular soname style. It also includes GObject
typelibs since those in general need to be available with the runtime
library for access from GI bindings.
This function will take
everything that should usually belong in a
-progs
package, i.e. all binaries in usr/bin
.
If man
is a non-empty string, it represents the manpage categories to take.
A simple lazy wrapper around take_devel
returning a function that you
should return from a subpackage (e.g. return self.default_devel()
).
The man
argument is passed as is to take_devel
. The extra
argument
can specify additional things to take. If extra
is a list
, each item
in the list is passed to take()
(without any other arguments). Otherwise
it is considered a callable and called as is without arguments.
A simple lazy wrapper around take_static
returning a function that you
should return from a subpackage (e.g. return self.default_static()
).
The extra
argument can specify additional things to take. If extra
is a list
, each item in the list is passed to take()
(without any
other arguments). Otherwise it is considered a callable and called as
is without arguments.
A simple lazy wrapper around take_doc
returning a function that you
should return from a subpackage (e.g. return self.default_doc()
).
The extra
argument can specify additional things to take. If extra
is a list
, each item in the list is passed to take()
(without any
other arguments). Otherwise it is considered a callable and called as
is without arguments.
A simple lazy wrapper around take_libs
returning a function that you
should return from a subpackage (e.g. return self.default_libs()
).
The extra
argument can specify additional things to take. If extra
is a list
, each item in the list is passed to take()
(without any
other arguments). Otherwise it is considered a callable and called as
is without arguments.
A simple lazy wrapper around take_progs
returning a function that you
should return from a subpackage (e.g. return self.default_progs()
).
The man
argument is passed as is to take_progs
. The extra
argument
can specify additional things to take. If extra
is a list
, each item
in the list is passed to take()
(without any other arguments). Otherwise
it is considered a callable and called as is without arguments.
Utility APIs exist in the cbuild.util
namespace. They provide building
blocks for templates, built using the other available public API. You do
not have to actually use any of these building blocks from a technical
standpoint, but you are highly encouraged to use them in practice, as
they simplify the template logic greatly.
Utilities for managing Cargo-based Rust projects.
Clears the file checksums in .cargo-checksum.json
of a vendored crate.
You will need to do this for every crate you patch, as Cargo verifies the checksums of every file specified in there. Clearing effectively allows easy distro patching.
A wrapper for management of CMake projects.
Executes cmake
. The directory for build files is build_dir
, which
is relative to chroot_cwd
(when None
, it is pkg.make_dir
). The
root CMakeLists.txt
exists within cmake_dir
, which is relative to
chroot_cwd
(when None
, it is assumed to be .
).
The pkg
is an instance of Template
.
The build_dir
is created if non-existent.
The arguments passed to cmake
are in this order:
-DCMAKE_TOOLCHAIN_FILE=...
-DCMAKE_INSTALL_PREFIX=/usr
-DCMAKE_BUILD_TYPE=None
-DCMAKE_INSTALL_LIBDIR=lib
-DCMAKE_INSTALL_SBINDIR=bin
pkg.configure_args
extra_args
cmake_dir
environment variable is set to Ninja
if pkg.make_cmd
is ninja
, otherwise to Unix Makefiles
.
An appropriate toolchain file is created when bootstrapping and when cross
compiling. You can prevent the creation of a toolchain file by explicitly
setting cross_build
to False
. That will ensure a native-like build even
when the profile is set to a cross-compiling one.
The environment from env
is used, being the most important, followed by
pkg.configure_env
and then the rest.
A simple wrapper to directly invoke a compiler.
A base class for a GNU-like compiler driver (such as Clang or GCC).
The constructor. Sets the fields template
, cexec
, flags
and ldflags
.
The cexec
argument is the compiler executable name (or path). The
flags arguments must be provided in the array form (not a string).
The flags
are always passed for invocation, and ldflags
only for linking.
Invoke the compiler. Arguments will be passed in the following order:
self.flags
inputs (each entry is converted to str)
self.ldflags if obj_file is False
flags
-c if obj_file is True, ldflags otherwise
-o output (made absolute against chroot_cwd)
is True
, the command will not be printed. Otherwise, the command
with all its arguments will be printed out via the logger before execution.
A C compiler. Like GnuLike, but more automatic. Calls GnuLike.__init__. If cexec is None, it defaults to tmpl.get_tool("CC"). The flags are tmpl.get_cflags(), while ldflags are tmpl.get_ldflags().

A C++ compiler. Like GnuLike, but more automatic. Calls GnuLike.__init__. If cexec is None, it defaults to tmpl.get_tool("CXX"). The flags are tmpl.get_cxxflags(), while ldflags are tmpl.get_ldflags().
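For instance, a do_build step might compile a single program directly. A minimal sketch, with hypothetical file names:

from cbuild.util import compiler

def do_build(self):
    cc = compiler.C(self)
    # compile and link hello.c into the hello executable
    cc.invoke(["hello.c"], "hello")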
A wrapper for handling of GNU Autotools and compatible projects.

First, build_dir is created if non-existent (relative to cwd). If not set, it is assumed to be pkg.make_dir. Then, the configure_script is called (it lives in configure_dir, by default ., which in turn lives in chroot_cwd, and its name is by default pkg.configure_script).

The pkg is an instance of Template.
These arguments are passed first:
--prefix=/usr
--sysconfdir=/etc
--sbindir=/usr/bin
--bindir=/usr/bin
--mandir=/usr/share/man
--infodir=/usr/share/info
--localstatedir=/var
If cross-compiling, these are followed by --build=TRIPLET and --target=TRIPLET, which are automatically guessed from the profiles. Additionally, these are also passed in cross mode:

--with-sysroot={sysroot}
--with-libtool-sysroot={sysroot}

When cross compiling, autoconf caches are exported into the environment, as described by the files in cbuild/misc/autoconf_cache. The common_linux file is parsed first, then musl-linux, endian-(big|little), and architecture-specific files.
Architecture-specific cache files are:

for arm: arm-common and arm-linux
for aarch64: aarch64-linux
for ppc64 and ppc64le: powerpc-common, powerpc-linux, powerpc64-linux
for x86_64: x86_64-linux

When not cross-compiling, the musl-linux cache file is still read and exported.
The result of get_make_env() is also exported into the environment, before anything else.

The configure_args (pkg.configure_args if None) are passed after the implicit args, finally followed by extra_args. Additionally, env is exported into the environment after the cache files (so the environment dictionary can override any caches). This also uses pkg.configure_env (env takes precedence over it).

The environment variable MAKE is implicitly set for this run, with the value of what cbuild.util.make.Make(pkg).get_command() would return.
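Tying it together, a do_configure step might look like this minimal sketch; the --disable-static flag is just an example argument:

from cbuild.util import gnu_configure

def do_configure(self):
    # runs the configure script with the implicit arguments plus our extras
    gnu_configure.configure(self, extra_args=["--disable-static"])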
The Make environment to use when building Autotools-based projects. It currently contains lt_cv_sys_lib_dlsearch_path_spec, which is set to /usr/lib64 /usr/lib32 /usr/lib /lib /usr/local/lib.
Given a Template, finds files named *config*.guess and *config*.sub recursively and replaces them with fresh copies from cbuild/misc. This provides an automated fixup for projects that ship outdated config.guess and config.sub files, which frequently lack musl support or new targets such as riscv64.
A wrapper around Make and Make-style tools.

Initializes the Make wrapper. The arguments can provide default values for various settings, which can be further overridden in sub-invocations. The command is the default make command (which is not necessarily the actual command used). The wrksrc is relative to cwd.

Returns the actual command to use. If command was provided via the constructor, that is considered the base, otherwise self.template.make_cmd is.
If not bootstrapping, that is then returned as is. When bootstrapping, additional logic is applied to accommodate standard Linux host environments:

If the base command is gmake and the gmake command is not available, we fall back to make.
If the base command is make and the bmake command is available, we use bmake instead.

The reason this is done is that we use make by default for most projects, but make on Chimera is NetBSD bmake, while on most Linux systems it is GNU make. Meanwhile, if a template specifies gmake as the command, we want GNU make to be used (which is called gmake on Chimera), but gmake may not exist on regular Linux distributions (where it is called just make). This makes it compatible with both Chimera and regular Linux systems, as the bmake alias exists on both, and gmake is still used when requested and available.
Invoke the tool, whose name is retrieved with get_command(). The arguments are passed like this:

-jJOBS, where JOBS is jobs, self.jobs or self.template.make_jobs (the first that is set)
targets, which can be a list of strings or a single string; if a list, all items are passed, if a string, just the string is passed
args

The environment for the invocation works as follows:

env
self.template.make_env

The combined environment is passed to self.template.do().

The wrksrc is either the wrksrc argument, self.wrksrc, or self.template.wrksrc, in that order (the first that is set is used).

You can use this method as a completely generic, unspecialized invocation. The wrapper is expanded before the command; you can use this to wrap make invocations with different commands, e.g. when running tests.
Calls invoke. The targets is self.template.make_build_target, the args are self.template.make_build_args plus any extra args. The other arguments are passed as is.

The environment for the invocation works as follows:

env
self.template.make_build_env
self.template.make_env
Calls invoke. The targets is self.template.make_install_target, and jobs and wrksrc are passed as is.

If default_args is True, DESTDIR is passed implicitly (set to the value of self.chroot_destdir). The method of passing it depends on the value of args_use_env. If that is True, it is passed in the environment, otherwise it is passed in the arguments (as the first argument).

The environment for the invocation works as follows:

env
self.template.make_install_env
self.template.make_env
DESTDIR

Other arguments are passed as self.template.make_install_args plus any extra args.

The env is passed as is, except when DESTDIR is passed via the environment, in which case it is passed together with that (user-passed environment always takes preference).
Calls invoke. The targets is self.template.make_check_target, the args are self.template.make_check_args plus any extra args. The other arguments are passed as is.

The environment for the invocation works as follows:

env
self.template.make_check_env
self.template.make_env
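For example, a template with custom build logic might use the wrapper directly. A minimal sketch under those assumptions; the EXTRA_OPT argument is hypothetical:

from cbuild.util import make

def do_build(self):
    mk = make.Make(self)
    # builds the default target with one extra make argument
    mk.build(["EXTRA_OPT=1"])

def do_install(self):
    # installs with DESTDIR passed implicitly
    make.Make(self).install()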
A wrapper for management of Meson projects.
Executes meson. The meson_dir is where the root meson.build is located, implicitly assumed to be ., relative to chroot_cwd. The build_dir is the directory for build files, also relative to chroot_cwd; its default value when None is pkg.make_dir.

The pkg is an instance of Template.

The build_dir is created if non-existent.
The arguments passed to meson are in this order:
--prefix=/usr
--libdir=/usr/lib
--libexecdir=/usr/libexec
--bindir=/usr/bin
--sbindir=/usr/bin
--includedir=/usr/include
--datadir=/usr/share
--mandir=/usr/share/man
--infodir=/usr/share/info
--sysconfdir=/etc
--localstatedir=/var
--sharedstatedir=/var/lib
--buildtype=plain
--auto-features=auto
--wrap-mode=nodownload
-Ddefault_library=both
-Db_ndebug=true
-Db_staticpic=true
--cross-file=... (if cross-compiling)
extra_args
meson_dir
build_dir
When cross compiling, an appropriate cross file is automatically generated.

The environment from env takes the highest priority, followed by pkg.configure_env and then the rest.
The system offers a way to check templates for updates. In a lot of cases, especially for those using common hosting solutions, this is automatic and there is no need to do anything.
You can invoke it like this:
$ ./cbuild update-check main/mypkg
For example, the output may look like this:
$ ./cbuild update-check main/llvm
llvm-12.0.0 -> llvm-12.0.1
llvm-12.0.0 -> llvm-13.0.0
If you pass an extra argument with any value, it will be verbose, printing extra messages along the way.
The update checking can be tweaked by creating the file update.py in the same directory as the template. This file is a Python source file just like the template itself, and likewise it can contain variables and hooks. It can also reference the update check object via self at the global scope. This can be used to retrieve data to process.
The allowed variables are:

pkgname (str) The package name the default pattern checks for. By default, it is taken from the template. You can override this if the template name does not match the remote project name.

pkgver (str) The version the fetched versions are compared against. You can use this when the version format of the package does not match the remote one and would result in wrong comparisons.

url (str) The URL where the version numbers are mentioned. If unset, the url of the template (taken as is) plus the source URL(s) (with the filename component stripped) are used. An exception to this is when the source URLs contain ftp.gnome.org, in which case the url of the template is not used and only the source URLs are.

pattern (str) A Python regular expression (it is considered a verbose regular expression, so you can use multiple lines and comments) that matches the version number in the fetched page. You should match the version as accurately as possible, and use a capture group for the version number itself, without the pkgname and so on. The re.findall API is used to search for it. A number of defaults are applied for various known sites.

group (int) The subgroup of the pattern match to use. You only need this if your pattern contains more than one capture group; if it contains just one, you should never use this.

ignore (list, bool) A list of shell-style glob patterns matching version numbers that the checker should ignore. You can use this to ignore, for example, beta versions. You can also set this to True to skip the update check altogether. Packages with the meta build_style are ignored automatically.

single_directory (bool) You can set this to True if you wish to disable the default URL expansion logic. By default, for every collected URL, this looks for a versioned component in the path, and if one is found, the parent URL is fetched to figure out adjacent versioned URLs to consider for newer versions. This applies to projects that use source URLs such as https://my.project/foo/foo-3.14/foo-3.14.tar.gz. When this is unset, we can check the foo directory for versions. Various hosting sites are explicitly excluded from the parent directory checks, since their specific URL layout is known (e.g. GitHub).

vdprefix (str) A Python regular expression matching the part that precedes the numeric part of the version directory in the URL. Used when single_directory is disabled. The default is |v|<pkgname>.

vdsuffix (str) A Python regular expression matching the part that follows the numeric part of the version directory in the URL. Used when single_directory is disabled. The default is |\.x.

You can define some functions:
collect_sources A function taking the update check object, which is supposed to collect the initial list of source URLs to be considered. The default simply returns self.collect_sources(), which uses either self.url or self.template.url, plus self.template.source.

expand_source A function taking the update check object plus a URL (one for each URL returned from collect_sources). It is a filter function that returns a list (containing the input URL if it does not wish to expand or filter anything, and an empty list if it wishes to skip the URL). The default behavior is to simply return self.expand_source(input), which returns the input when single_directory is set to True and does the parent directory expansion otherwise.

fetch_versions A function taking a single URL and returning a list of version numbers. By default self.fetch_versions(url).

These functions take the update check object. It has the following properties:
verbose Whether verbose logging is on.
template The package template handle.
url, pkgname, single_directory, pattern, group, ignore The variables.

It also has methods with the same names as the functions you can define. You can call them from your custom functions.
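For instance, an update.py that matches release tarballs and skips pre-releases might look like this minimal sketch; the regular expression is for a hypothetical mypkg project:

# capture the version in links such as mypkg-1.2.3.tar.gz
pattern = r"mypkg-(\d+\.\d+\.\d+)\.tar\.gz"
# skip beta and release candidate versions
ignore = ["*beta*", "*rc*"]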
If you want to contribute, you need to take the following steps:

1) Fork the cports repository
2) Read CONTRIBUTING.md
3) Work on your contribution, ensuring quality requirements are met (if you are unsure, do not hesitate to ask for help)
4) Create a pull request with the changes
5) Wait for a review or merge; if the changes are clean, they may get merged right away, otherwise you will be asked to make changes
If you still need help, you should be able to get answers in our IRC channel (#chimera-linux on irc.oftc.net) or our Matrix channel (#chimera-linux:matrix.org). The two are linked, so use whichever you prefer.