This is very interesting. Recently, the preferred version of flex has been set to 2.6.3, which is used for the mesa build. Meanwhile, scotch does not build with 2.6.3, but is OK with 2.6.4. So, when compiling an app that depends on both mesa and scotch, one gets a concretization error. Your idea would make it work without having to force a specific version of flex for both independent applications, scotch and mesa.
---
I'm not sure if this should be part of this discussion or its own: we ran into gtkplus now being a `MesonPackage`, which requires Python 3, but we need it in (effectively) Python 2 environments. We "solved" this by just retaining a pre-`MesonPackage` version of gtkplus/package.py. I think this kind of problem would also be solved by your solution here, but it might be worth thinking about relaxing the "a package has a single build type" assumption behind our base package class.
---
Currently Spack permits at most a single node stemming from any given package in a concretized DAG, independently of the dependency types connecting the nodes. This ensures consistency in the DAG, but may be overly restrictive. For instance, we can't currently concretize a DAG where one node requires Python 2.X to build and another requires Python 3.X to run.
Below I'll outline a proposal for a generalization of the current concretization algorithm. The goal is to permit the separate concretization of build dependencies for any root spec.
### Iterative concretization algorithm
In the following pseudo-code, `solve` is the current solution algorithm, and the `types=` argument gives the dependency types accounted for in that solve. The overall idea is to first construct the spec that will be deployed, and then iterate over each node to complement it with the build dependencies needed to build it from sources. The iterative construction should be done in a way that maximizes the reuse of specs that have already been computed.
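The two-phase idea can be sketched with a toy model. Here `solve`, the registry layout, and all package names are illustrative stand-ins, not Spack's real concretizer API:

```python
# Toy model of the two-phase solve described above ("solve" here is a
# stand-in for Spack's concretizer, not its real API).
def solve(pkg, types, registry):
    """Return all transitive dependencies of `pkg` reachable through
    edges whose type intersects `types`."""
    found = []

    def visit(name):
        for dep, dep_types in registry.get(name, []):
            if set(dep_types) & set(types) and dep not in found:
                found.append(dep)
                visit(dep)

    visit(pkg)
    return found


def concretize_iteratively(root, registry):
    # Phase 1: solve only for the DAG that will be deployed (link/run).
    deployed = [root] + solve(root, ("link", "run"), registry)
    # Phase 2: a separate solve per node for its build dependencies, so
    # two nodes may require e.g. different Python versions to build.
    build = {n: solve(n, ("build",), registry) for n in deployed}
    return deployed, build


registry = {
    "app": [("libfoo", ("link",)), ("cmake", ("build",))],
    "libfoo": [("python", ("build",))],
}
deployed, build = concretize_iteratively("app", registry)
# deployed == ["app", "libfoo"]
# build == {"app": ["cmake"], "libfoo": ["python"]}
```

Note how `cmake` and `python` never enter the deployed DAG: each build-side solve is independent, which is what permits split build dependencies.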
### Discussion on the iterative construction of a DAG
One consideration in favor of this approach is that it simplifies modelling cross-compilation, by dividing a single solve that crosses architectural boundaries into multiple solves that are each homogeneous in their "target" architecture. Each of these solves is done in a way that minimizes changes to the current logic program and prevents the search space from exploding (as it would in a coupled solution allowing multiple nodes from the same package).
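The cross-compilation split can be illustrated with another toy sketch (again, `solve` and all names are made up for illustration, not Spack code): link/run nodes are solved for the machine we deploy to, while each node's build tools are solved for the machine we build on.

```python
# Toy illustration: cross-compilation as multiple solves, each
# homogeneous in its architecture.
def solve(pkg, types, arch, registry):
    """Toy stand-in for the concretizer: pick direct deps of `pkg` whose
    edge type intersects `types`, stamped with `arch`."""
    return [(dep, arch) for dep, dep_types in registry.get(pkg, [])
            if set(dep_types) & set(types)]


def cross_concretize(root, registry, host, target):
    # The deployed (link/run) DAG is solved for the target machine ...
    deployed = [(root, target)] + solve(root, ("link", "run"), target, registry)
    # ... while each node's build tools are solved for the host machine.
    build = {pkg: solve(pkg, ("build",), host, registry)
             for pkg, _ in deployed}
    return deployed, build


registry = {"app": [("libfoo", ("link",)), ("cmake", ("build",))]}
deployed, build = cross_concretize("app", registry,
                                   host="x86_64", target="aarch64")
# deployed == [("app", "aarch64"), ("libfoo", "aarch64")]
# build["app"] == [("cmake", "x86_64")]
```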
Note that "build" dependencies can no longer influence the presence or absence of "link" / "run" dependencies, unless we enclose the iteration above in a self-consistent loop. While in principle this may be a limitation, in practice it's difficult to find a package where this feedback from "build" dependencies to "link" / "run" dependencies is used. For instance, there's no package, as far as I know, that deploys differently depending on whether it is built with `cmake@X.Y:` or with `cmake@A.B`. Also, in most cases a condition like the one above can be reformulated as a decision over an intermediate condition that introduces an additional requirement on a build dependency.
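A hypothetical `package.py` fragment sketching that reformulation (the package, variant, and version numbers here are made up; `variant` and `depends_on` are Spack's actual directives):

```python
class Example(Package):
    # Intermediate condition: a variant, instead of a direct test on the
    # build tool's version.
    variant("new_layout", default=False,
            description="Install with the new layout")

    # The variant decides the link/run side of the DAG ...
    depends_on("libfoo@2:", when="+new_layout", type=("link", "run"))
    # ... and separately imposes the requirement on the build dependency.
    depends_on("cmake@3.20:", when="+new_layout", type="build")
```

With this shape, the link/run solve only needs to decide the variant; the build-side requirement follows in the per-node build solve.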
### Changes required to the Spec API
The "single node per package" constraint is widely depended upon in the current spec API due to the subscript syntax of specs. Each spec can in fact be used like a dictionary to retrieve other nodes or sub-DAGs, using package names as keys.
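The dict-like semantics can be modeled with a minimal toy class (illustrative only, not Spack's actual `Spec` implementation):

```python
# Toy model of Spec's dict-like subscript semantics.
class Spec:
    def __init__(self, name, deps=()):
        self.name = name
        self.deps = list(deps)

    def traverse(self):
        # Depth-first walk over the sub-DAG rooted at this node.
        yield self
        for dep in self.deps:
            yield from dep.traverse()

    def __getitem__(self, name):
        # spec["python"] returns the node for package "python"; this
        # relies on at most one such node existing in the DAG.
        for node in self.traverse():
            if node.name == name:
                return node
        raise KeyError(name)


python = Spec("python")
numpy = Spec("py-numpy", deps=[python])
assert numpy["python"] is python
```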
Relaxing this constraint to permit multiple nodes from the same package requires extending the current semantics. We can maintain backward compatibility by ordering the multiple nodes by dependency type, so that `link` < `run` < `build` < `test`, and sorting the nodes accordingly. This ensures that `link` and `run` specs are the ones returned by the subscript semantics in case there's a split `build` dependency.

Other API calls can then be added to `Spec` to return selected nodes or edges in the DAG, for instance to get all the build dependencies of a given node. #21683 provides an implementation of these API calls that solves #11983, based on a new data structure to store the edges connecting a node to its dependents or dependencies.
### Default dependency types
Currently the default dependency types are "build" and "link" (see `lib/spack/spack/dependency.py`, lines 16 to 17 at commit 3c874e2).
Given the semantics of dependency types, it would probably be good to change the default to "link" only (i.e. the dependency is needed at compile, link and/or load time). The rationale is that not many dependencies act both as a build tool (a "build" dependency) and as a library (a "link" dependency) at the same time. The current default is possibly vestigial from the initial interpretation of a "build" dependency as anything needed at build time, together with the fact that C or C++ header files fall within that definition.
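For illustration, a hypothetical recipe under the current versus the proposed default (the package names are made up; `depends_on` is Spack's actual directive):

```python
class Example(Package):
    # With the current default, this is implicitly type=("build", "link"):
    depends_on("zlib")

    # Under the proposed default of "link" only, library dependencies
    # would keep working implicitly, while pure build tools would need to
    # state their type explicitly:
    depends_on("cmake@3:", type="build")
```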
### Related PRs and Issues
- `Spec._dependents` mapping a package name to a single dependency spec: see "Incomplete computation of installed dependents" (#11983)

PRs addressing this feature