👉 Would you rather watch a short video tutorial? Check it out here: [installing additional
packages](https://app.tella.tv/story/cknr8owf4000308kzalsk11a5).
To install new packages, you should make use of {ref}`environments <environments>`. Simply build a
new environment that contains your package and select it inside the pipeline editor. Installing
packages is done using well-known commands such as `pip install` and `sudo apt-get install`.
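For example, an environment set-up script could look like the following minimal sketch. The package names below are placeholders, not requirements of Orchest itself; substitute whatever your pipeline actually needs:

```shell
#!/bin/bash
# Example environment set-up script.
# The packages below are placeholders -- replace them with the
# dependencies your pipeline steps actually import.

# System-level dependencies.
sudo apt-get update
sudo apt-get install -y libpq-dev

# Python dependencies.
pip install pandas scikit-learn
```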
💡 When you update an existing environment, the updated version is automatically used inside
the visual editor (and for your {term}`interactive pipeline runs <interactive (pipeline) run>`).
However, if a JupyterLab kernel was already running, it needs to be restarted to pick up the changes.
Do not install new packages by running bash commands inside notebooks: the state of the kernel environment is ephemeral, so the packages would have to be reinstalled on every pipeline run.
(how-to-import-a-project)=
💡 This approach also works to share code between pipelines.
There are multiple answers to this question. One is to turn that code into a package
which you can then install in your environment, just like other packages such as `numpy`. Of
course, this slows down the development cycle considerably, so an alternative is to add the files
directly to the project directory and import them in your scripts.
For example, you could create a `utils.py` file in your project directory and use its functions
from within your scripts:

```python
import utils

utils.transform(...)
```
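To make this concrete, `utils.py` could contain something like the following. The `transform` function here is purely hypothetical; your own helpers can of course look entirely different:

```python
# utils.py -- shared helper functions, importable from any pipeline step
# in the same project. The transform below is a made-up example.

def transform(values, scale=2):
    """Scale every numeric value in the given list."""
    return [v * scale for v in values]
```

Any pipeline step in the same project can then simply `import utils` and call `utils.transform(...)`.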
To keep Orchest's disk footprint to a minimum, you can use the following best practices:
- Are you persisting data to disk? Then write it to the `/data` directory instead of the project directory. {ref}`Jobs <jobs>` create a snapshot of your project directory (for reproducibility reasons) and would copy the data in your project directory for every pipeline run, consuming large amounts of storage. The smaller the size of your project directory, the smaller the size of your jobs.
- Do you have many pipeline runs as part of jobs? You can configure your job to retain only a number of pipeline runs and automatically delete the older ones. Steps: (1) edit an existing job or create a new one, (2) go to pipeline runs, and (3) select auto clean-up.
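As an illustration of the first point, you could route all persisted artifacts through a small helper that defaults to `/data`. The helper name and the JSON format are just an example, not part of Orchest's API:

```python
import json
import os


def save_artifact(obj, name, base_dir="/data"):
    """Persist obj as JSON under base_dir (Orchest's shared data directory).

    Writing to /data keeps the artifact out of the project directory,
    so job snapshots stay small. The function name and format are a
    hypothetical convention, not an Orchest API.
    """
    path = os.path.join(base_dir, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(obj, f)
    return path
```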
GPU support is not yet available. Coming soon!
(skip-notebook-cells)=
Notebooks facilitate an experimental workflow, meaning that there will be cells that should not be
run when executing the notebook (from top to bottom). Since {term}`pipeline runs <pipeline run>`
require your notebooks to be executable, Orchest provides a (pre-installed) JupyterLab extension
to skip those cells.
To skip a cell during pipeline runs:
- Open JupyterLab.
- Go to the Property Inspector, i.e. the icon with the two gears all the way at the right.
- Select the cell you want to skip and give it a tag of: `skip`.
Cells with the `skip` tag are still runnable through JupyterLab, but when the notebook is executed as part of a pipeline in Orchest they will not be run.
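The tag itself lives in standard Jupyter cell metadata (`cell.metadata.tags`) inside the `.ipynb` file, so you can also add it programmatically. Below is a sketch that tags every code cell starting with a marker comment; the `# skip-in-pipeline` marker is a made-up convention for this example, not something Orchest defines:

```python
import json


def tag_skip_cells(nb_path, marker="# skip-in-pipeline"):
    """Add the 'skip' cell tag to code cells whose first line is `marker`.

    The marker comment is a hypothetical convention for this sketch; the
    'skip' entry in cell.metadata.tags is what actually gets picked up.
    """
    with open(nb_path) as f:
        nb = json.load(f)
    for cell in nb.get("cells", []):
        source = cell.get("source", [])
        # nbformat stores source as either a string or a list of lines.
        if isinstance(source, str):
            first_line = source.splitlines()[0] if source else ""
        else:
            first_line = source[0] if source else ""
        if cell.get("cell_type") == "code" and first_line.strip() == marker:
            tags = cell.setdefault("metadata", {}).setdefault("tags", [])
            if "skip" not in tags:
                tags.append("skip")
    with open(nb_path, "w") as f:
        json.dump(nb, f)
```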
Once we have moved to a Kubernetes-backed Orchest version (and deprecated the Docker-based version), we will update this section of the documentation with steps on how to migrate your current deployment to a Kubernetes-based one.
Just know that we are super excited to make the Kubernetes version available as part of the open core, and we are invested in providing a smooth migration experience 🔥