diff --git a/docs/source/deployment/airflow_astronomer.md b/docs/source/deployment/airflow_astronomer.md index 307e9ab903..e084b73530 100644 --- a/docs/source/deployment/airflow_astronomer.md +++ b/docs/source/deployment/airflow_astronomer.md @@ -84,7 +84,7 @@ To follow this tutorial, ensure you have the following: pip install kedro-airflow~=0.4 ``` -5. Run `pip install -r src/requirements.txt` to install all dependencies. +5. Run `pip install -r requirements.txt` to install all dependencies. ### Deployment process diff --git a/docs/source/deployment/aws_step_functions.md b/docs/source/deployment/aws_step_functions.md index 380f303067..9841fc5af2 100644 --- a/docs/source/deployment/aws_step_functions.md +++ b/docs/source/deployment/aws_step_functions.md @@ -55,7 +55,7 @@ The rest of the tutorial will explain each step in the deployment process above * Create a `conf/aws` directory in your Kedro project * Put a `catalog.yml` file in this directory with the following content -* Ensure that you have `s3fs>=0.3.0,<0.5` defined in your `src/requirements.txt` so the data can be read from S3. +* Ensure that you have `s3fs>=0.3.0,<0.5` defined in your `requirements.txt` so the data can be read from S3.
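The `s3fs` pin above matters because Kedro's datasets resolve `s3://` paths through `fsspec`, which delegates to `s3fs`. A minimal sketch (not part of the Kedro docs; the bucket and key are hypothetical) of that resolution:

```python
# Why s3fs must be installed: fsspec resolves the "s3://" protocol through it,
# which is what datasets declared in conf/aws/catalog.yml rely on.
import fsspec

# hypothetical bucket and key, for illustration only
with fsspec.open("s3://my-example-bucket/companies.csv", mode="rt") as f:
    print(f.readline())  # raises an "Install s3fs" ImportError if the package is missing
```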
Click to expand diff --git a/docs/source/deployment/databricks/databricks_ide_development_workflow.md b/docs/source/deployment/databricks/databricks_ide_development_workflow.md index dc723189c9..a5f1616c59 100644 --- a/docs/source/deployment/databricks/databricks_ide_development_workflow.md +++ b/docs/source/deployment/databricks/databricks_ide_development_workflow.md @@ -184,7 +184,7 @@ Open your newly-created notebook and create **four new cells** inside it. You wi 1. Before you import and run your Python code, you'll need to install your project's dependencies on the cluster attached to your notebook. Your project has a `requirements.txt` file for this purpose. Add the following code to the first new cell to install the dependencies: ```ipython -%pip install -r "/Workspace/Repos//iris-databricks/src/requirements.txt" +%pip install -r "/Workspace/Repos//iris-databricks/requirements.txt" ``` 2. To run your project in your notebook, you must load the Kedro IPython extension. Add the following code to the second new cell to load the IPython extension: diff --git a/docs/source/deployment/databricks/databricks_notebooks_development_workflow.md b/docs/source/deployment/databricks/databricks_notebooks_development_workflow.md index 5867163ab9..9f9b513935 100644 --- a/docs/source/deployment/databricks/databricks_notebooks_development_workflow.md +++ b/docs/source/deployment/databricks/databricks_notebooks_development_workflow.md @@ -213,7 +213,7 @@ Create **four new cells** inside your notebook. You will fill these cells with c 1. Before you import and run your Python code, you'll need to install your project's dependencies on the cluster attached to your notebook. Your project has a `requirements.txt` file for this purpose. Add the following code to the first new cell to install the dependencies: ```ipython -%pip install -r "/Workspace/Repos//iris-databricks/src/requirements.txt" +%pip install -r "/Workspace/Repos//iris-databricks/requirements.txt" ``` 2. To run your project in your notebook, you must load the Kedro IPython extension. Add the following code to the second new cell to load the IPython extension: diff --git a/docs/source/deployment/distributed.md b/docs/source/deployment/distributed.md index 2b005afe42..af49ae5fec 100644 --- a/docs/source/deployment/distributed.md +++ b/docs/source/deployment/distributed.md @@ -14,7 +14,7 @@ For better dependency management, we encourage you to containerise the entire pi Firstly make sure your [project requirements are up-to-date](../kedro_project_setup/dependencies.md) by running: ```bash -pip-compile --output-file=/src/requirements.txt --input-file=/src/requirements.txt +pip-compile --output-file=/requirements.txt --input-file=/requirements.txt ``` We then recommend the [`Kedro-Docker`](https://github.com/kedro-org/kedro-plugins/tree/main/kedro-docker) plugin to streamline the process of building the image. [Instructions for using this are in the plugin's README.md](https://github.com/kedro-org/kedro-plugins/blob/main/README.md). 
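Once the image contains the project and its pinned requirements, each node only needs a small entry point that runs the packaged pipeline. A minimal sketch, assuming the Kedro 0.18 session API (the project path is illustrative):

```python
# Entry point executed inside the container on each node: bootstrap the
# project metadata, open a session and run the default pipeline.
from pathlib import Path

from kedro.framework.session import KedroSession
from kedro.framework.startup import bootstrap_project

project_path = Path("/home/kedro/project")  # wherever the image copies the project
bootstrap_project(project_path)

with KedroSession.create(project_path=project_path) as session:
    session.run(pipeline_name="__default__")
```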
diff --git a/docs/source/deployment/single_machine.md b/docs/source/deployment/single_machine.md index 0964a6a968..b70ddfcf60 100644 --- a/docs/source/deployment/single_machine.md +++ b/docs/source/deployment/single_machine.md @@ -114,7 +114,7 @@ conda install -c conda-forge kedro Install the project’s dependencies, by running the following in the project's root directory: ```console -pip install -r src/requirements.txt +pip install -r requirements.txt ``` After having installed your project on the remote server you can run the Kedro project as follows from the root of the project: diff --git a/docs/source/development/commands_reference.md b/docs/source/development/commands_reference.md index ae2933e256..097984702e 100644 --- a/docs/source/development/commands_reference.md +++ b/docs/source/development/commands_reference.md @@ -292,18 +292,18 @@ _This command will be deprecated from Kedro version 0.19.0._ kedro build-reqs ``` -This command runs [`pip-compile`](https://github.com/jazzband/pip-tools#example-usage-for-pip-compile) on the project's `src/requirements.txt` file and will create `src/requirements.lock` with the compiled requirements. +This command runs [`pip-compile`](https://github.com/jazzband/pip-tools#example-usage-for-pip-compile) on the project's `requirements.txt` file and will create `requirements.lock` with the compiled requirements. `kedro build-reqs` has two optional arguments to specify which file to compile the requirements from and where to save the compiled requirements to. These arguments are `--input-file` and `--output-file` respectively. -`kedro build-reqs` also accepts and passes through CLI options accepted by `pip-compile`. For example, `kedro build-reqs --generate-hashes` will call `pip-compile --output-file=src/requirements.lock --generate-hashes src/requirements.txt`. +`kedro build-reqs` also accepts and passes through CLI options accepted by `pip-compile`. For example, `kedro build-reqs --generate-hashes` will call `pip-compile --output-file=requirements.lock --generate-hashes requirements.txt`. #### Install all package dependencies -The following runs [`pip`](https://github.com/pypa/pip) to install all package dependencies specified in `src/requirements.txt`: +The following runs [`pip`](https://github.com/pypa/pip) to install all package dependencies specified in `requirements.txt`: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` For further information, see the [documentation on installing project-specific dependencies](../kedro_project_setup/dependencies.md#install-project-specific-dependencies). diff --git a/docs/source/development/linting.md b/docs/source/development/linting.md index e2c0f31037..263d81f9d7 100644 --- a/docs/source/development/linting.md +++ b/docs/source/development/linting.md @@ -23,7 +23,7 @@ and check the [cyclomatic complexity](https://www.ibm.com/docs/en/raa/6.1?topic= type. [You can read more in the `isort` documentation](https://pycqa.github.io/isort/). 
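For context on how these three tools are typically combined in a Kedro project, here is a sketch of what the (deprecated) `kedro lint` command amounts to; it shells out to all three, as the `kedro/framework/cli/project.py` changes further down this diff show. Paths are illustrative.

```python
# Run black, flake8 and isort over the package and tests folders, mirroring
# what `kedro lint` does under the hood.
import subprocess
import sys

targets = ["src/my_package", "tests"]
for module, extra_args in (("black", ["--check"]), ("flake8", []), ("isort", ["--check"])):
    subprocess.run([sys.executable, "-m", module, *extra_args, *targets], check=False)
```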
### Install the tools -Install `black`, `flake8`, and `isort` by adding the following lines to your project's `src/requirements.txt` +Install `black`, `flake8`, and `isort` by adding the following lines to your project's `requirements.txt` file: ```text black # Used for formatting code @@ -33,7 +33,7 @@ isort # Used for formatting code (sorting module imports) To install all the project-specific dependencies, including the linting tools, navigate to the root directory of the project and run: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` Alternatively, you can individually install the linting tools using the following shell commands: ```bash @@ -72,7 +72,7 @@ These hooks are run before committing your code to your repositories to automati making code reviews easier and less time-consuming. ### Install `pre-commit` -You can install `pre-commit` along with other dependencies by including it in the `src/requirements.txt` file of your +You can install `pre-commit` along with other dependencies by including it in the `requirements.txt` file of your Kedro project by adding the following line: ```text pre-commit diff --git a/docs/source/experiment_tracking/index.md b/docs/source/experiment_tracking/index.md index 35a5dc053d..bc4db7dcd4 100644 --- a/docs/source/experiment_tracking/index.md +++ b/docs/source/experiment_tracking/index.md @@ -72,7 +72,7 @@ cd spaceflights Install the project's dependencies: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` ## Set up the session store @@ -268,17 +268,20 @@ In this section, we illustrate how to compare Matplotlib plots across experiment ### Update the dependencies -Update the `src/requirements.txt` file in your Kedro project by adding the following dataset to enable Matplotlib for your project: +Update the `pyproject.toml` file in your Kedro project by adding the following dataset to enable Matplotlib for your project: ```text -kedro-datasets[matplotlib.MatplotlibWriter]~=1.1 -seaborn~=0.12.1 +dependencies = [ + # ... + "kedro-datasets[matplotlib.MatplotlibWriter]~=1.1", + "seaborn~=0.12.1", +] ``` And install the requirements with: ```bash -pip install -r src/requirements.txt +pip install --editable . 
``` ### Add a plotting node diff --git a/docs/source/get_started/new_project.md b/docs/source/get_started/new_project.md index 1048c49e17..6bdc3cbd86 100644 --- a/docs/source/get_started/new_project.md +++ b/docs/source/get_started/new_project.md @@ -11,7 +11,7 @@ There are a few ways to create a new project once you have [set up Kedro](instal Once you've created a project: -* You need to **navigate to its project folder** and **install its dependencies**: `pip install -r src/requirements.txt` +* You need to **navigate to its project folder** and **install its dependencies**: `pip install -r requirements.txt` * **To run the project**: `kedro run` * **To visualise the project**: `kedro viz` @@ -79,7 +79,7 @@ kedro new --starter=pandas-iris However you create a Kedro project, once `kedro new` has completed, the next step is to navigate to the project folder (`cd `) and install dependencies with `pip` as follows: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` Now run the project: diff --git a/docs/source/kedro_project_setup/dependencies.md b/docs/source/kedro_project_setup/dependencies.md index 862ee4e49d..11efd44068 100644 --- a/docs/source/kedro_project_setup/dependencies.md +++ b/docs/source/kedro_project_setup/dependencies.md @@ -5,30 +5,33 @@ Both `pip install kedro` and `conda install -c conda-forge kedro` install the co When you create a project, you then introduce additional dependencies for the tasks it performs. ## Project-specific dependencies -You can specify a project's exact dependencies in the `src/requirements.txt` file to make it easier for you and others to run your project in the future, -and to avoid version conflicts downstream. This can be achieved with the help of [`pip-tools`](https://pypi.org/project/pip-tools/). +You can specify a project's exact dependencies in the `pyproject.toml` file, +as well as any development dependencies in `requirements.txt`, +to make it easier for you and others to run your project in the future and to avoid version conflicts downstream. +This can be achieved with the help of [`pip-tools`](https://pypi.org/project/pip-tools/). + To install `pip-tools` in your virtual environment, run the following command: ```bash pip install pip-tools ``` -To add or remove dependencies to a project, edit the `src/requirements.txt` file, then run the following: +To add or remove dependencies to a project, edit the `requirements.txt` file, then run the following: ```bash -pip-compile --output-file=/src/requirements.txt --input-file=/src/requirements.txt +pip-compile --output-file=/requirements.txt --input-file=/requirements.txt ``` This will [pip compile](https://github.com/jazzband/pip-tools#example-usage-for-pip-compile) the requirements listed in -the `src/requirements.txt` file into a `src/requirements.lock` that specifies a list of pinned project dependencies +the `requirements.txt` file into a `requirements.lock` that specifies a list of pinned project dependencies (those with a strict version). You can also use this command with additional CLI arguments such as `--generate-hashes` to use `pip`'s Hash Checking Mode or `--upgrade-package` to update specific packages to the latest or specific versions. [Check out the `pip-tools` documentation](https://pypi.org/project/pip-tools/) for more information. ```{note} -The `src/requirements.txt` file contains "source" requirements, while `src/requirements.lock` contains the compiled version of those and requires no manual updates. 
+The `requirements.txt` and `pyproject.toml` files contain "source" requirements, while `requirements.lock` contains the compiled version of those and requires no manual updates. ``` -To further update the project requirements, modify the `requirements.txt` file (not `requirements.lock`) and re-run the `pip-compile` command above. ## Install project-specific dependencies @@ -36,7 +39,7 @@ To further update the project requirements, modify the `src/requirements.txt` fi To install the project-specific dependencies, navigate to the root directory of the project and run: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` ## Workflow dependencies diff --git a/docs/source/kedro_project_setup/starters.md b/docs/source/kedro_project_setup/starters.md index aac0dc331f..7dbc649af1 100644 --- a/docs/source/kedro_project_setup/starters.md +++ b/docs/source/kedro_project_setup/starters.md @@ -155,7 +155,10 @@ Here is the layout of the project as a Cookiecutter template: ├── docs # Project documentation ├── notebooks # Project related Jupyter notebooks (can be used for experimental code before moving the code to src) ├── README.md # Project README -├── setup.cfg # Configuration options for tools e.g. `pytest` or `flake8` +├── .flake8 # Configuration options for `flake8` +├── requirements.txt # Development dependencies +├── pyproject.toml # Package metadata and configuration options for tools +├── tests # Tests └── src # Project source code └── {{ cookiecutter.python_package }} ├── __init__.py ├── pipeline_registry.py ├── __main__.py └── settings.py - ├── requirements.txt - ├── setup.py - └── tests ``` ```{note} diff --git a/docs/source/tutorial/spaceflights_tutorial_faqs.md b/docs/source/tutorial/spaceflights_tutorial_faqs.md index 92d873dcb9..d23b337f9e 100644 --- a/docs/source/tutorial/spaceflights_tutorial_faqs.md +++ b/docs/source/tutorial/spaceflights_tutorial_faqs.md @@ -46,7 +46,7 @@ documentation on how to install relevant dependencies for kedro_datasets.pandas. https://kedro.readthedocs.io/en/stable/kedro_project_setup/dependencies.html ``` -The Kedro Data Catalog is missing [dependencies needed to parse the data](../kedro_project_setup/dependencies.md#install-dependencies-related-to-the-data-catalog). Check that you have [all the project dependencies to `requirements.txt`](./tutorial_template.md#install-project-dependencies) and then call `pip install -r src/requirements.txt` to install them. +The Kedro Data Catalog is missing [dependencies needed to parse the data](../kedro_project_setup/dependencies.md#install-dependencies-related-to-the-data-catalog). Check that you have [added all the project dependencies to `requirements.txt`](./tutorial_template.md#install-project-dependencies) and then call `pip install -r requirements.txt` to install them.
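As a quick sanity check after installing (a sketch, not from the tutorial; dataset names follow `kedro-datasets` ~1.x), the pandas dataset classes should import cleanly once the extras are present:

```python
# Importing the dataset classes raises ImportError when the
# kedro-datasets[pandas.*] extras are missing. Note that some engines
# (e.g. openpyxl for Excel) are only exercised when data is actually loaded.
from kedro_datasets.pandas import CSVDataSet, ExcelDataSet

print(CSVDataSet.__name__, ExcelDataSet.__name__)
```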
### Pipeline run diff --git a/docs/source/tutorial/tutorial_template.md b/docs/source/tutorial/tutorial_template.md index 99b75cb031..9e203ec38d 100644 --- a/docs/source/tutorial/tutorial_template.md +++ b/docs/source/tutorial/tutorial_template.md @@ -1,6 +1,6 @@ # Set up the spaceflights project -This section shows how to create a new project (with `kedro new` using the [Kedro spaceflights starter](https://github.com/kedro-org/kedro-starters/tree/main/spaceflights)) and install project dependencies (with `pip install -r src/requirements.txt`). +This section shows how to create a new project (with `kedro new` using the [Kedro spaceflights starter](https://github.com/kedro-org/kedro-starters/tree/main/spaceflights)) and install project dependencies (with `pip install -r requirements.txt`). ## Create a new project @@ -28,7 +28,7 @@ cd spaceflights Kedro projects have a `requirements.txt` file to specify their dependencies and enable sharable projects by ensuring consistency across Python packages and versions. -The spaceflights project dependencies are stored in `src/requirements.txt`(you may find that the versions differ slightly depending on the version of Kedro): +The spaceflights project dependencies are stored in `requirements.txt` (you may find that the versions differ slightly depending on the version of Kedro): ```text # code quality packages @@ -63,7 +63,7 @@ scikit-learn~=1.0 To install all the project-specific dependencies, run the following from the project root directory: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` ## Optional: logging and configuration diff --git a/docs/source/visualisation/kedro-viz_visualisation.md b/docs/source/visualisation/kedro-viz_visualisation.md index 179710c7d0..f6d36fc611 100644 --- a/docs/source/visualisation/kedro-viz_visualisation.md +++ b/docs/source/visualisation/kedro-viz_visualisation.md @@ -18,7 +18,7 @@ When prompted for a project name, you can enter anything, but we will assume `Sp When your project is ready, navigate to the root directory of the project and install the dependencies for the project, which include Kedro-Viz: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` The next step is optional, but useful to check that all is working.
Run the full set of pipelines for the tutorial project: diff --git a/docs/source/visualisation/visualise_charts_with_plotly.md b/docs/source/visualisation/visualise_charts_with_plotly.md index 5b14d2c635..59c783c561 100644 --- a/docs/source/visualisation/visualise_charts_with_plotly.md +++ b/docs/source/visualisation/visualise_charts_with_plotly.md @@ -27,7 +27,7 @@ There are two types of Plotly datasets supported by Kedro: * `plotly.PlotlyDataSet` which only supports [Plotly Express](https://plotly.com/python/plotly-express) * `plotly.JSONDataSet` which supports Plotly Express and [Plotly Graph Objects](https://plotly.com/python/graph-objects/) -To use the Plotly datasets, you must update the `requirements.txt` file in the `src` folder of the Kedro project to add the following dependencies: +To use the Plotly datasets, you must update the `requirements.txt` file in the root folder of the Kedro project to add the following dependencies: ```text @@ -38,7 +38,7 @@ kedro-datasets[plotly.PlotlyDataSet, plotly.JSONDataSet]~=1.1 Navigate to the root directory of the project in your terminal and install the dependencies for the tutorial project: ```bash -pip install -r src/requirements.txt +pip install -r requirements.txt ``` ### Configure the Data Catalog @@ -177,11 +177,13 @@ You can view Matplotlib charts in Kedro-Viz when you use the [Kedro MatplotLibWr ### Update the dependencies -You must update the `src/requirements.txt` file in the Kedro project by adding the following dataset to enable Matplotlib for the project: +You must update the `pyproject.toml` file in the Kedro project by adding the following dataset to enable Matplotlib for the project: ```bash -kedro-datasets[matplotlib.MatplotlibWriter]~=1.1 -seaborn~=0.12.1 +dependencies = [ + "kedro-datasets[matplotlib.MatplotlibWriter]~=1.1", + "seaborn~=0.12.1", +] ``` ### Configure the Data Catalog diff --git a/features/environment.py b/features/environment.py index c98246dc85..2b930d03b8 100644 --- a/features/environment.py +++ b/features/environment.py @@ -116,13 +116,18 @@ def _setup_minimal_env(context): def _install_project_requirements(context): install_reqs = ( - Path( - "kedro/templates/project/{{ cookiecutter.repo_name }}/src/requirements.txt" - ) + Path("kedro/templates/project/{{ cookiecutter.repo_name }}/requirements.txt") .read_text(encoding="utf-8") .splitlines() ) - install_reqs = [req for req in install_reqs if "{" not in req] - install_reqs.append(".[pandas.CSVDataSet]") + install_reqs = [ + req + for req in install_reqs + if (req.strip()) + and ("{" not in req) + and (not req.startswith("-e")) + and (not req.startswith("#")) + ] + install_reqs.append("kedro-datasets[pandas.CSVDataSet]") call([context.pip, "install", *install_reqs], env=context.env) return context diff --git a/features/steps/cli_steps.py b/features/steps/cli_steps.py index 0008841de4..c0a9453409 100644 --- a/features/steps/cli_steps.py +++ b/features/steps/cli_steps.py @@ -162,7 +162,7 @@ def create_config_file(context): @given("I have installed the project dependencies") def pip_install_dependencies(context): """Install project dependencies using pip.""" - reqs_path = "src/requirements.txt" + reqs_path = "requirements.txt" res = run( [context.pip, "install", "-r", reqs_path], env=context.env, @@ -410,7 +410,7 @@ def update_kedro_req(context: behave.runner.Context): """Replace kedro as a standalone requirement with a line that includes all of kedro's dependencies (-r kedro/requirements.txt) """ - reqs_path = context.root_project_dir / "src" / 
"requirements.txt" + reqs_path = context.root_project_dir / "requirements.txt" kedro_reqs = f"-r {context.requirements_path.as_posix()}" if reqs_path.is_file(): @@ -428,7 +428,7 @@ def update_kedro_req(context: behave.runner.Context): @when("I add {dependency} to the requirements") def add_req(context: behave.runner.Context, dependency: str): - reqs_path = context.root_project_dir / "src" / "requirements.txt" + reqs_path = context.root_project_dir / "requirements.txt" if reqs_path.is_file(): reqs_path.write_text(reqs_path.read_text() + "\n" + str(dependency) + "\n") @@ -610,14 +610,14 @@ def check_docs_generated(context: behave.runner.Context): @then("requirements should be generated") def check_reqs_generated(context: behave.runner.Context): """Check that new project requirements are generated.""" - reqs_path = context.root_project_dir / "src" / "requirements.lock" + reqs_path = context.root_project_dir / "requirements.lock" assert reqs_path.is_file() assert "This file is autogenerated by pip-compile" in reqs_path.read_text() @then("{dependency} should be in the requirements") def check_dependency_in_reqs(context: behave.runner.Context, dependency: str): - reqs_path = context.root_project_dir / "src" / "requirements.txt" + reqs_path = context.root_project_dir / "requirements.txt" assert dependency in reqs_path.read_text() diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/README.md b/features/steps/test_starter/{{ cookiecutter.repo_name }}/README.md index 8041d41dd9..b9cb686a00 100644 --- a/features/steps/test_starter/{{ cookiecutter.repo_name }}/README.md +++ b/features/steps/test_starter/{{ cookiecutter.repo_name }}/README.md @@ -17,12 +17,12 @@ In order to get the best out of the template: ## How to install dependencies -Declare any dependencies in `src/requirements.txt` for `pip` installation and `src/environment.yml` for `conda` installation. +Declare any dependencies in `requirements.txt` for `pip` installation. To install them, run: ``` -pip install -r src/requirements.txt +pip install -r requirements.txt ``` ## How to run Kedro @@ -52,9 +52,9 @@ To generate or update the dependency requirements for your project: kedro build-reqs ``` -This will `pip-compile` the contents of `src/requirements.txt` into a new file `src/requirements.lock`. You can see the output of the resolution by opening `src/requirements.lock`. +This will `pip-compile` the contents of `requirements.txt` into a new file `requirements.lock`. You can see the output of the resolution by opening `requirements.lock`. -After this, if you'd like to update your project requirements, please update `src/requirements.txt` and re-run `kedro build-reqs`. +After this, if you'd like to update your project requirements, please update `requirements.txt` and re-run `kedro build-reqs`. 
[Further information about project dependencies](https://kedro.readthedocs.io/en/stable/kedro_project_setup/dependencies.html#project-specific-dependencies) diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/pyproject.toml b/features/steps/test_starter/{{ cookiecutter.repo_name }}/pyproject.toml index ca5524efc1..c0cadfe234 100644 --- a/features/steps/test_starter/{{ cookiecutter.repo_name }}/pyproject.toml +++ b/features/steps/test_starter/{{ cookiecutter.repo_name }}/pyproject.toml @@ -1,7 +1,43 @@ +[build-system] +requires = ["setuptools"] +build-backend = "setuptools.build_meta" + +[project] +name = "{{ cookiecutter.python_package }}" +dependencies = [ + "kedro~={{ cookiecutter.kedro_version }}", + "kedro-datasets[pandas.CSVDataSet]", +] +dynamic = ["version"] + +[project.scripts] +{{ cookiecutter.repo_name }} = "{{ cookiecutter.python_package }}.__main__:main" + +[project.optional-dependencies] +docs = [ + "docutils<0.18.0", + "sphinx~=3.4.3", + "sphinx_rtd_theme==0.5.1", + "nbsphinx==0.8.1", + "nbstripout~=0.4", + "sphinx-autodoc-typehints==1.11.1", + "sphinx_copybutton==0.3.1", + "ipykernel>=5.3, <7.0", + "Jinja2<3.1.0", + "myst-parser~=0.17.2", +] + +[tool.setuptools.dynamic] +version = {attr = "{{ cookiecutter.python_package }}.__version__"} + +[tool.setuptools.packages.find] +where = ["src"] +namespaces = false + [tool.kedro] -project_name = "{{ cookiecutter.project_name }}" -project_version = "{{ cookiecutter.kedro_version }}" package_name = "{{ cookiecutter.python_package }}" +project_name = "{{ cookiecutter.project_name }}" +kedro_init_version = "{{ cookiecutter.kedro_version }}" [tool.isort] profile = "black" diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/requirements.txt b/features/steps/test_starter/{{ cookiecutter.repo_name }}/requirements.txt similarity index 81% rename from kedro/templates/project/{{ cookiecutter.repo_name }}/src/requirements.txt rename to features/steps/test_starter/{{ cookiecutter.repo_name }}/requirements.txt index aa7ee32014..50cf28b712 100644 --- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/requirements.txt +++ b/features/steps/test_starter/{{ cookiecutter.repo_name }}/requirements.txt @@ -1,3 +1,7 @@ +# Install library code +-e file:. 
+ +# Development dependencies black~=22.0 flake8>=3.7.9, <5.0 ipython>=7.31.1, <8.0; python_version < '3.8' @@ -6,8 +10,6 @@ isort~=5.0 jupyter~=1.0 jupyterlab_server>=2.11.1, <2.16.0 jupyterlab~=3.0, <3.6.0 -kedro~={{ cookiecutter.kedro_version }} -kedro-telemetry~=0.2.0 nbstripout~=0.4 pytest-cov~=3.0 pytest-mock>=1.7.1, <2.0 diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/setup.py b/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/setup.py deleted file mode 100644 index af5b101519..0000000000 --- a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/setup.py +++ /dev/null @@ -1,39 +0,0 @@ -from setuptools import find_packages, setup - -entry_point = ( - "{{ cookiecutter.repo_name }} = {{ cookiecutter.python_package }}.__main__:main" -) - - -# get the dependencies and installs -with open("requirements.txt", encoding="utf-8") as f: - # Make sure we strip all comments and options (e.g "--extra-index-url") - # that arise from a modified pip.conf file that configure global options - # when running kedro build-reqs - requires = [] - for line in f: - req = line.split("#", 1)[0].strip() - if req and not req.startswith("-r"): - requires.append(req) - -setup( - name="{{ cookiecutter.python_package }}", - version="0.1", - packages=find_packages(exclude=["tests"]), - entry_points={"console_scripts": [entry_point]}, - install_requires=requires, - extras_require={ - "docs": [ - "docutils<0.18.0", - "sphinx~=3.4.3", - "sphinx_rtd_theme==0.5.1", - "nbsphinx==0.8.1", - "nbstripout~=0.4", - "sphinx-autodoc-typehints==1.11.1", - "sphinx_copybutton==0.3.1", - "ipykernel>=5.3, <7.0", - "Jinja2<3.1.0", - "myst-parser~=0.17.2", - ] - }, -) diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/__init__.py b/features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/__init__.py similarity index 100% rename from features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/__init__.py rename to features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/__init__.py diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/pipelines/__init__.py b/features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/pipelines/__init__.py similarity index 100% rename from features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/pipelines/__init__.py rename to features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/pipelines/__init__.py diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/test_run.py b/features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/test_run.py similarity index 100% rename from features/steps/test_starter/{{ cookiecutter.repo_name }}/src/tests/test_run.py rename to features/steps/test_starter/{{ cookiecutter.repo_name }}/tests/test_run.py diff --git a/kedro/framework/cli/micropkg.py b/kedro/framework/cli/micropkg.py index a2de8c9a16..27ca160f63 100644 --- a/kedro/framework/cli/micropkg.py +++ b/kedro/framework/cli/micropkg.py @@ -240,7 +240,7 @@ def _pull_package( package_reqs = _get_all_library_reqs(library_meta) if package_reqs: - requirements_txt = metadata.source_dir / "requirements.txt" + requirements_txt = metadata.project_path / "requirements.txt" _append_package_reqs(requirements_txt, package_reqs, package_name) _clean_pycache(temp_dir_path) @@ -664,19 +664,6 @@ def _drop_comment(line): return line.partition(" #")[0] -def _make_install_requires(requirements_txt: Path) -> list[str]: - """Parses each line of requirements.txt into a 
version specifier valid to put in - install_requires. - Matches pkg_resources.parse_requirements""" - if not requirements_txt.exists(): - return [] - return [ - str(_EquivalentRequirement(_drop_comment(requirement_line))) - for requirement_line in requirements_txt.read_text().splitlines() - if requirement_line and not requirement_line.startswith("#") - ] - - def _create_nested_package(project: Project, package_path: Path) -> Path: # fails if parts of the path exists already packages = package_path.parts @@ -840,9 +827,8 @@ def _generate_sdist_file( # Build a pyproject.toml on the fly try: - install_requires = _make_install_requires( - package_source / "requirements.txt" # type: ignore - ) + library_meta = project_wheel_metadata(metadata.project_path) + install_requires = _get_all_library_reqs(library_meta) except Exception as exc: click.secho("FAILED", fg="red") cls = exc.__class__ @@ -947,7 +933,7 @@ def _append_package_reqs( file.write(sep.join(sorted_reqs)) click.secho( - "Use 'kedro build-reqs' to compile and 'pip install -r src/requirements.lock' to install " + "Use 'kedro build-reqs' to compile and 'pip install -r requirements.lock' to install " "the updated list of requirements." ) diff --git a/kedro/framework/cli/project.py b/kedro/framework/cli/project.py index 034b460023..7914a4c9ee 100644 --- a/kedro/framework/cli/project.py +++ b/kedro/framework/cli/project.py @@ -32,7 +32,7 @@ from kedro.utils import load_obj NO_DEPENDENCY_MESSAGE = """{module} is not installed. Please make sure {module} is in -{src}/requirements.txt and run 'pip install -r src/requirements.txt'.""" +{src}/requirements.txt and run 'pip install -r requirements.txt'.""" LINT_CHECK_ONLY_HELP = """Check the files for style guide violations, unsorted / unformatted imports, and unblackened Python code without modifying the files.""" OPEN_ARG_HELP = """Open the documentation in your default browser after building.""" @@ -88,9 +88,9 @@ def test(metadata: ProjectMetadata, args, **kwargs): # pylint: disable=unused-a try: _check_module_importable("pytest") except KedroCliError as exc: - source_path = metadata.source_dir + project_path = metadata.project_path raise KedroCliError( - NO_DEPENDENCY_MESSAGE.format(module="pytest", src=str(source_path)) + NO_DEPENDENCY_MESSAGE.format(module="pytest", src=str(project_path)) ) from exc python_call("pytest", args) @@ -110,12 +110,14 @@ def lint( click.secho(deprecation_message, fg="red") source_path = metadata.source_dir + project_path = metadata.project_path package_name = metadata.package_name - files = files or (str(source_path / "tests"), str(source_path / package_name)) + files = files or (str(project_path / "tests"), str(source_path / package_name)) if "PYTHONPATH" not in os.environ: # isort needs the source path to be in the 'PYTHONPATH' environment # variable to treat it as a first-party import location + # NOTE: Actually, `pip install [-e] .` achieves the same os.environ["PYTHONPATH"] = str(source_path) # pragma: no cover for module_name in ("flake8", "isort", "black"): @@ -123,7 +125,7 @@ def lint( _check_module_importable(module_name) except KedroCliError as exc: raise KedroCliError( - NO_DEPENDENCY_MESSAGE.format(module=module_name, src=str(source_path)) + NO_DEPENDENCY_MESSAGE.format(module=module_name, src=str(project_path)) ) from exc python_call("black", ("--check",) + files if check_only else files) @@ -149,7 +151,7 @@ def ipython( @click.pass_obj # this will pass the metadata as first argument def package(metadata: ProjectMetadata): """Package the project as a 
Python wheel.""" - source_path = metadata.source_dir + project_path = metadata.project_path call( [ sys.executable, @@ -157,9 +159,9 @@ def package(metadata: ProjectMetadata): "build", "--wheel", "--outdir", - "../dist", + "dist", ], - cwd=str(source_path), + cwd=str(project_path), ) directory = ( @@ -199,10 +201,11 @@ def build_docs(metadata: ProjectMetadata, open_docs): click.secho(deprecation_message, fg="red") source_path = metadata.source_dir + project_path = metadata.project_path package_name = metadata.package_name - python_call("pip", ["install", str(source_path / "[docs]")]) - python_call("pip", ["install", "-r", str(source_path / "requirements.txt")]) + python_call("pip", ["install", str(project_path / "[docs]")]) + python_call("pip", ["install", "-r", str(project_path / "requirements.txt")]) python_call("ipykernel", ["install", "--user", f"--name={package_name}"]) shutil.rmtree("docs/build", ignore_errors=True) call( @@ -239,8 +242,8 @@ def build_docs(metadata: ProjectMetadata, open_docs): def build_reqs( metadata: ProjectMetadata, input_file, output_file, args, **kwargs ): # pylint: disable=unused-argument - """Run `pip-compile` on src/requirements.txt or the user defined input file and save - the compiled requirements to src/requirements.lock or the user defined output file. + """Run `pip-compile` on requirements.txt or the user defined input file and save + the compiled requirements to requirements.lock or the user defined output file. (DEPRECATED) """ deprecation_message = ( @@ -249,9 +252,9 @@ def build_reqs( ) click.secho(deprecation_message, fg="red") - source_path = metadata.source_dir - input_file = Path(input_file or source_path / "requirements.txt") - output_file = Path(output_file or source_path / "requirements.lock") + project_path = metadata.project_path + input_file = Path(input_file or project_path / "requirements.txt") + output_file = Path(output_file or project_path / "requirements.lock") if input_file.is_file(): python_call( @@ -291,7 +294,7 @@ def activate_nbstripout( ) click.secho(deprecation_message, fg="red") - source_path = metadata.source_dir + project_path = metadata.source_dir click.secho( ( "Notebook output cells will be automatically cleared before committing" @@ -304,7 +307,7 @@ def activate_nbstripout( _check_module_importable("nbstripout") except KedroCliError as exc: raise KedroCliError( - NO_DEPENDENCY_MESSAGE.format(module="nbstripout", src=str(source_path)) + NO_DEPENDENCY_MESSAGE.format(module="nbstripout", src=str(project_path)) ) from exc try: diff --git a/kedro/framework/cli/starters.py b/kedro/framework/cli/starters.py index 77491d391f..db3a24a321 100644 --- a/kedro/framework/cli/starters.py +++ b/kedro/framework/cli/starters.py @@ -371,7 +371,7 @@ def _create_project(template_path: str, cookiecutter_args: dict[str, Any]): ) click.secho( "\nA best-practice setup includes initialising git and creating " - "a virtual environment before running 'pip install -r src/requirements.txt' to install " + "a virtual environment before running 'pip install -r requirements.txt' to install " "project-specific dependencies. Refer to the Kedro documentation: " "https://kedro.readthedocs.io/" ) diff --git a/kedro/framework/cli/utils.py b/kedro/framework/cli/utils.py index bd1c59a2ec..2eb47665d4 100644 --- a/kedro/framework/cli/utils.py +++ b/kedro/framework/cli/utils.py @@ -354,7 +354,7 @@ def _check_module_importable(module_name: str) -> None: except ImportError as exc: raise KedroCliError( f"Module '{module_name}' not found. 
Make sure to install required project " - f"dependencies by running the 'pip install -r src/requirements.txt' command first." + f"dependencies by running the 'pip install -r requirements.txt' command first." ) from exc diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/setup.cfg b/kedro/templates/project/{{ cookiecutter.repo_name }}/.flake8 similarity index 100% rename from kedro/templates/project/{{ cookiecutter.repo_name }}/setup.cfg rename to kedro/templates/project/{{ cookiecutter.repo_name }}/.flake8 diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/README.md b/kedro/templates/project/{{ cookiecutter.repo_name }}/README.md index 07ae44d46c..1531726699 100644 --- a/kedro/templates/project/{{ cookiecutter.repo_name }}/README.md +++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/README.md @@ -17,12 +17,18 @@ In order to get the best out of the template: ## How to install dependencies -Declare any dependencies in `src/requirements.txt` for `pip` installation and `src/environment.yml` for `conda` installation. +Declare any library dependencies in `pyproject.toml`, and any development dependencies in `requirements.txt`. -To install them, run: +To install all of them, run: ``` -pip install -r src/requirements.txt +pip install -r requirements.txt +``` + +To install only the library dependencies, run: + +``` +pip install --editable . ``` ## How to run your Kedro pipeline @@ -35,7 +41,7 @@ kedro run ## How to test your Kedro project -Have a look at the file `src/tests/test_run.py` for instructions on how to write your tests. You can run your tests as follows: +Have a look at the file `tests/test_run.py` for instructions on how to write your tests. You can run your tests as follows: ``` kedro test @@ -51,9 +57,9 @@ To generate or update the dependency requirements for your project: kedro build-reqs ``` -This will `pip-compile` the contents of `src/requirements.txt` into a new file `src/requirements.lock`. You can see the output of the resolution by opening `src/requirements.lock`. +This will `pip-compile` the contents of `requirements.txt` into a new file `requirements.lock`. You can see the output of the resolution by opening `requirements.lock`. -After this, if you'd like to update your project requirements, please update `src/requirements.txt` and re-run `kedro build-reqs`. +After this, if you'd like to update your project requirements, please update `pyproject.toml` or `requirements.txt` and re-run `kedro build-reqs`. [Further information about project dependencies](https://docs.kedro.org/en/stable/kedro_project_setup/dependencies.html#project-specific-dependencies) @@ -61,15 +67,9 @@ After this, if you'd like to update your project requirements, please update `sr > Note: Using `kedro jupyter` or `kedro ipython` to run your notebook provides these variables in scope: `context`, `catalog`, and `startup_error`. > -> Jupyter, JupyterLab, and IPython are already included in the project requirements by default, so once you have run `pip install -r src/requirements.txt` you will not need to take any extra steps before you use them. +> Jupyter, JupyterLab, and IPython are already included as development dependencies of the project by default, so once you have run `pip install -r requirements.txt` you will not need to take any extra steps before you use them. 
### Jupyter -To use Jupyter notebooks in your Kedro project, you need to install Jupyter: - -``` -pip install jupyter -``` - After installing Jupyter, you can start a local notebook server: ``` @@ -77,12 +77,6 @@ kedro jupyter notebook ``` ### JupyterLab -To use JupyterLab, you need to install it: - -``` -pip install jupyterlab -``` - You can also start JupyterLab: ``` diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/pyproject.toml b/kedro/templates/project/{{ cookiecutter.repo_name }}/pyproject.toml index 7ae06368bd..f39867c572 100644 --- a/kedro/templates/project/{{ cookiecutter.repo_name }}/pyproject.toml +++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/pyproject.toml @@ -1,3 +1,38 @@ +[build-system] +requires = ["setuptools"] +build-backend = "setuptools.build_meta" + +[project] +name = "{{ cookiecutter.python_package }}" +dependencies = [ + "kedro~={{ cookiecutter.kedro_version }}", +] +dynamic = ["version"] + +[project.scripts] +{{ cookiecutter.repo_name }} = "{{ cookiecutter.python_package }}.__main__:main" + +[project.optional-dependencies] +docs = [ + "docutils<0.18.0", + "sphinx~=3.4.3", + "sphinx_rtd_theme==0.5.1", + "nbsphinx==0.8.1", + "nbstripout~=0.4", + "sphinx-autodoc-typehints==1.11.1", + "sphinx_copybutton==0.3.1", + "ipykernel>=5.3, <7.0", + "Jinja2<3.1.0", + "myst-parser~=0.17.2", +] + +[tool.setuptools.dynamic] +version = {attr = "{{ cookiecutter.python_package }}.__version__"} + +[tool.setuptools.packages.find] +where = ["src"] +namespaces = false + [tool.kedro] package_name = "{{ cookiecutter.python_package }}" project_name = "{{ cookiecutter.project_name }}" diff --git a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/requirements.txt b/kedro/templates/project/{{ cookiecutter.repo_name }}/requirements.txt similarity index 76% rename from features/steps/test_starter/{{ cookiecutter.repo_name }}/src/requirements.txt rename to kedro/templates/project/{{ cookiecutter.repo_name }}/requirements.txt index 7e6f29ac16..50cf28b712 100644 --- a/features/steps/test_starter/{{ cookiecutter.repo_name }}/src/requirements.txt +++ b/kedro/templates/project/{{ cookiecutter.repo_name }}/requirements.txt @@ -1,3 +1,7 @@ +# Install library code +-e file:. 
+ +# Development dependencies black~=22.0 flake8>=3.7.9, <5.0 ipython>=7.31.1, <8.0; python_version < '3.8' @@ -6,8 +10,6 @@ isort~=5.0 jupyter~=1.0 jupyterlab_server>=2.11.1, <2.16.0 jupyterlab~=3.0, <3.6.0 -kedro[pandas.CSVDataSet]=={{ cookiecutter.kedro_version }} -kedro-telemetry~=0.2.0 nbstripout~=0.4 pytest-cov~=3.0 pytest-mock>=1.7.1, <2.0 diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/setup.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/src/setup.py deleted file mode 100644 index 8e62d661f8..0000000000 --- a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/setup.py +++ /dev/null @@ -1,39 +0,0 @@ -from setuptools import find_packages, setup - -entry_point = ( - "{{ cookiecutter.repo_name }} = {{ cookiecutter.python_package }}.__main__:main" -) - - -# get the dependencies and installs -with open("requirements.txt", encoding="utf-8") as f: - # Make sure we strip all comments and options (e.g "--extra-index-url") - # that arise from a modified pip.conf file that configure global options - # when running kedro build-reqs - requires = [] - for line in f: - req = line.split("#", 1)[0].strip() - if req and not req.startswith("--"): - requires.append(req) - -setup( - name="{{ cookiecutter.python_package }}", - version="0.1", - packages=find_packages(exclude=["tests"]), - entry_points={"console_scripts": [entry_point]}, - install_requires=requires, - extras_require={ - "docs": [ - "docutils<0.18.0", - "sphinx~=3.4.3", - "sphinx_rtd_theme==0.5.1", - "nbsphinx==0.8.1", - "nbstripout~=0.4", - "sphinx-autodoc-typehints==1.11.1", - "sphinx_copybutton==0.3.1", - "ipykernel>=5.3, <7.0", - "Jinja2<3.1.0", - "myst-parser~=0.17.2", - ] - }, -) diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/__init__.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/tests/__init__.py similarity index 100% rename from kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/__init__.py rename to kedro/templates/project/{{ cookiecutter.repo_name }}/tests/__init__.py diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/pipelines/__init__.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/tests/pipelines/__init__.py similarity index 100% rename from kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/pipelines/__init__.py rename to kedro/templates/project/{{ cookiecutter.repo_name }}/tests/pipelines/__init__.py diff --git a/kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/test_run.py b/kedro/templates/project/{{ cookiecutter.repo_name }}/tests/test_run.py similarity index 100% rename from kedro/templates/project/{{ cookiecutter.repo_name }}/src/tests/test_run.py rename to kedro/templates/project/{{ cookiecutter.repo_name }}/tests/test_run.py diff --git a/tests/framework/cli/micropkg/conftest.py b/tests/framework/cli/micropkg/conftest.py index ff8348b755..faf7b13e91 100644 --- a/tests/framework/cli/micropkg/conftest.py +++ b/tests/framework/cli/micropkg/conftest.py @@ -26,7 +26,7 @@ def cleanup_micropackages(fake_repo_path, fake_package_path): if each.is_file(): each.unlink() - tests = fake_repo_path / "src" / "tests" / micropackage + tests = fake_repo_path / "tests" / micropackage if tests.is_dir(): shutil.rmtree(str(tests)) @@ -35,7 +35,7 @@ def cleanup_micropackages(fake_repo_path, fake_package_path): def cleanup_pipelines(fake_repo_path, fake_package_path): pipes_path = fake_package_path / "pipelines" old_pipelines = {p.name for p in pipes_path.iterdir() if p.is_dir()} - 
requirements_txt = fake_repo_path / "src" / "requirements.txt" + requirements_txt = fake_repo_path / "requirements.txt" requirements = requirements_txt.read_text() yield @@ -53,7 +53,7 @@ def cleanup_pipelines(fake_repo_path, fake_package_path): if each.is_file(): each.unlink() - tests = fake_repo_path / "src" / "tests" / "pipelines" / pipeline + tests = fake_repo_path / "tests" / "pipelines" / pipeline if tests.is_dir(): shutil.rmtree(str(tests)) diff --git a/tests/framework/cli/micropkg/test_micropkg_requirements.py b/tests/framework/cli/micropkg/test_micropkg_requirements.py index b0070a1bee..845f08c0d2 100644 --- a/tests/framework/cli/micropkg/test_micropkg_requirements.py +++ b/tests/framework/cli/micropkg/test_micropkg_requirements.py @@ -81,7 +81,10 @@ def test_existing_complex_project_requirements_txt( self, fake_project_cli, fake_metadata, fake_package_path, fake_repo_path ): """Pipeline requirements.txt and project requirements.txt.""" - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + # FIXME: This assumes that requirements live in `requirements.txt`, + # but in the new project template they are split + # between `pyproject.toml` (library deps) and `requirements.txt` (development deps) + project_requirements_txt = fake_repo_path / "requirements.txt" with open(project_requirements_txt, "a", encoding="utf-8") as file: file.write(COMPLEX_REQUIREMENTS) existing_requirements = _safe_parse_requirements( @@ -112,7 +115,7 @@ def test_existing_project_requirements_txt( self, fake_project_cli, fake_metadata, fake_package_path, fake_repo_path ): """Pipeline requirements.txt and project requirements.txt.""" - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" existing_requirements = _safe_parse_requirements( project_requirements_txt.read_text() ) @@ -146,7 +149,7 @@ def test_missing_project_requirements_txt( project level.""" # Remove project requirements.txt - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" project_requirements_txt.unlink() self.call_pipeline_create(fake_project_cli, fake_metadata) @@ -176,7 +179,7 @@ def test_no_requirements( """No pipeline requirements.txt, and also no requirements.txt at project level.""" # Remove project requirements.txt - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" project_requirements_txt.unlink() self.call_pipeline_create(fake_project_cli, fake_metadata) @@ -195,7 +198,7 @@ def test_all_requirements_already_covered( pipeline_requirements_txt = ( fake_package_path / "pipelines" / PIPELINE_NAME / "requirements.txt" ) - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" pipeline_requirements_txt.write_text(SIMPLE_REQUIREMENTS) project_requirements_txt.write_text(SIMPLE_REQUIREMENTS) @@ -214,7 +217,7 @@ def test_no_pipeline_requirements_txt( create project requirements.txt.""" # Remove project requirements.txt - project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" project_requirements_txt.unlink() self.call_pipeline_create(fake_project_cli, fake_metadata) @@ -231,7 +234,7 @@ def test_empty_pipeline_requirements_txt( create project requirements.txt.""" # Remove project requirements.txt - 
project_requirements_txt = fake_repo_path / "src" / "requirements.txt" + project_requirements_txt = fake_repo_path / "requirements.txt" project_requirements_txt.unlink() self.call_pipeline_create(fake_project_cli, fake_metadata) diff --git a/tests/framework/cli/pipeline/conftest.py b/tests/framework/cli/pipeline/conftest.py index f934ab6939..79480ae1ca 100644 --- a/tests/framework/cli/pipeline/conftest.py +++ b/tests/framework/cli/pipeline/conftest.py @@ -26,7 +26,7 @@ def cleanup_micropackages(fake_repo_path, fake_package_path): if each.is_file(): each.unlink() - tests = fake_repo_path / "src" / "tests" / micropackage + tests = fake_repo_path / "tests" / micropackage if tests.is_dir(): shutil.rmtree(str(tests)) @@ -35,7 +35,7 @@ def cleanup_micropackages(fake_repo_path, fake_package_path): def cleanup_pipelines(fake_repo_path, fake_package_path): pipes_path = fake_package_path / "pipelines" old_pipelines = {p.name for p in pipes_path.iterdir() if p.is_dir()} - requirements_txt = fake_repo_path / "src" / "requirements.txt" + requirements_txt = fake_repo_path / "requirements.txt" requirements = requirements_txt.read_text() yield @@ -58,7 +58,7 @@ def cleanup_pipelines(fake_repo_path, fake_package_path): if dirpath.is_dir() and not any(dirpath.iterdir()): dirpath.rmdir() - tests = fake_repo_path / "src" / "tests" / "pipelines" / pipeline + tests = fake_repo_path / "tests" / "pipelines" / pipeline if tests.is_dir(): shutil.rmtree(str(tests)) diff --git a/tests/framework/cli/pipeline/test_pipeline.py b/tests/framework/cli/pipeline/test_pipeline.py index 4bdd965526..07038c9e31 100644 --- a/tests/framework/cli/pipeline/test_pipeline.py +++ b/tests/framework/cli/pipeline/test_pipeline.py @@ -19,7 +19,7 @@ @pytest.fixture(params=["base"]) def make_pipelines(request, fake_repo_path, fake_package_path, mocker): source_path = fake_package_path / "pipelines" / PIPELINE_NAME - tests_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + tests_path = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME conf_path = fake_repo_path / settings.CONF_SOURCE / request.param / "parameters" for path in (source_path, tests_path, conf_path): @@ -72,7 +72,7 @@ def test_create_pipeline( # pylint: disable=too-many-locals assert actual_configs == expected_configs # tests - test_dir = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + test_dir = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME expected_files = {"__init__.py", "test_pipeline.py"} actual_files = {f.name for f in test_dir.iterdir()} assert actual_files == expected_files @@ -94,7 +94,7 @@ def test_create_pipeline_skip_config( conf_dirs = list((fake_repo_path / settings.CONF_SOURCE).rglob(PIPELINE_NAME)) assert conf_dirs == [] # no configs created for the pipeline - test_dir = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + test_dir = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME assert test_dir.is_dir() def test_catalog_and_params( # pylint: disable=too-many-locals @@ -151,12 +151,7 @@ def test_skip_copy(self, fake_repo_path, fake_project_cli, fake_metadata): # create __init__.py in tests tests_init = ( - fake_repo_path - / "src" - / "tests" - / "pipelines" - / PIPELINE_NAME - / "__init__.py" + fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME / "__init__.py" ) tests_init.parent.mkdir(parents=True) tests_init.touch() @@ -268,7 +263,7 @@ def test_delete_pipeline( ) source_path = fake_package_path / "pipelines" / PIPELINE_NAME - tests_path = fake_repo_path / "src" / "tests" / "pipelines" / 
PIPELINE_NAME + tests_path = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME params_path = ( fake_repo_path / settings.CONF_SOURCE @@ -304,7 +299,7 @@ def test_delete_pipeline_skip( ["pipeline", "delete", "-y", PIPELINE_NAME], obj=fake_metadata, ) - tests_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + tests_path = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME params_path = ( fake_repo_path / settings.CONF_SOURCE @@ -396,7 +391,7 @@ def test_pipeline_delete_confirmation( ) source_path = fake_package_path / "pipelines" / PIPELINE_NAME - tests_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + tests_path = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME params_path = ( fake_repo_path / settings.CONF_SOURCE @@ -437,7 +432,7 @@ def test_pipeline_delete_confirmation_skip( obj=fake_metadata, ) - tests_path = fake_repo_path / "src" / "tests" / "pipelines" / PIPELINE_NAME + tests_path = fake_repo_path / "tests" / "pipelines" / PIPELINE_NAME params_path = ( fake_repo_path / settings.CONF_SOURCE diff --git a/tests/framework/cli/test_jupyter.py b/tests/framework/cli/test_jupyter.py index d5e0b5dbd3..5e5956e456 100644 --- a/tests/framework/cli/test_jupyter.py +++ b/tests/framework/cli/test_jupyter.py @@ -47,7 +47,7 @@ def test_fail_no_jupyter(self, fake_project_cli, mocker): assert result.exit_code error = ( "Module 'notebook' not found. Make sure to install required project " - "dependencies by running the 'pip install -r src/requirements.txt' command first." + "dependencies by running the 'pip install -r requirements.txt' command first." ) assert error in result.output @@ -97,7 +97,7 @@ def test_fail_no_jupyter(self, fake_project_cli, mocker): assert result.exit_code error = ( "Module 'notebook' not found. Make sure to install required project " - "dependencies by running the 'pip install -r src/requirements.txt' command first." + "dependencies by running the 'pip install -r requirements.txt' command first." ) assert error in result.output @@ -147,7 +147,7 @@ def test_fail_no_jupyter(self, fake_project_cli, mocker): assert result.exit_code error = ( "Module 'jupyterlab' not found. Make sure to install required project " - "dependencies by running the 'pip install -r src/requirements.txt' command first." + "dependencies by running the 'pip install -r requirements.txt' command first." 
) assert error in result.output diff --git a/tests/framework/cli/test_project.py b/tests/framework/cli/test_project.py index 92e0d024cd..e71440698e 100644 --- a/tests/framework/cli/test_project.py +++ b/tests/framework/cli/test_project.py @@ -121,7 +121,7 @@ def test_pytest_not_installed( fake_project_cli, ["test", "--random-arg", "value"], obj=fake_metadata ) expected_message = NO_DEPENDENCY_MESSAGE.format( - module="pytest", src=str(fake_repo_path / "src") + module="pytest", src=str(fake_repo_path) ) assert result.exit_code @@ -148,7 +148,7 @@ def test_lint( assert not result.exit_code, result.stdout expected_files = files or ( - str(fake_repo_path / "src/tests"), + str(fake_repo_path / "tests"), str(fake_repo_path / "src/dummy_package"), ) expected_calls = [ @@ -185,7 +185,7 @@ def test_lint_check_only( assert not result.exit_code, result.stdout expected_files = files or ( - str(fake_repo_path / "src/tests"), + str(fake_repo_path / "tests"), str(fake_repo_path / "src/dummy_package"), ) expected_calls = [ @@ -217,7 +217,7 @@ def test_import_not_installed( result = CliRunner().invoke(fake_project_cli, ["lint"], obj=fake_metadata) expected_message = NO_DEPENDENCY_MESSAGE.format( - module=module_name, src=str(fake_repo_path / "src") + module=module_name, src=str(fake_repo_path) ) assert result.exit_code, result.stdout @@ -279,7 +279,7 @@ def test_fail_no_ipython(self, fake_project_cli, mocker): assert result.exit_code error = ( "Module 'IPython' not found. Make sure to install required project " - "dependencies by running the 'pip install -r src/requirements.txt' command first." + "dependencies by running the 'pip install -r requirements.txt' command first." ) assert error in result.output @@ -302,7 +302,7 @@ def test_happy_path( "--outdir", "../dist", ], - cwd=str(fake_repo_path / "src"), + cwd=str(fake_repo_path), ), mocker.call( [ @@ -351,10 +351,10 @@ def test_happy_path( ) python_call_mock.assert_has_calls( [ - mocker.call("pip", ["install", str(fake_repo_path / "src/[docs]")]), + mocker.call("pip", ["install", str(fake_repo_path / "[docs]")]), mocker.call( "pip", - ["install", "-r", str(fake_repo_path / "src/requirements.txt")], + ["install", "-r", str(fake_repo_path / "requirements.txt")], ), mocker.call("ipykernel", ["install", "--user", "--name=dummy_package"]), ] @@ -395,9 +395,9 @@ def test_compile_from_requirements_file( "piptools", [ "compile", - str(fake_repo_path / "src" / "requirements.txt"), + str(fake_repo_path / "requirements.txt"), "--output-file", - str(fake_repo_path / "src" / "requirements.lock"), + str(fake_repo_path / "requirements.lock"), ], ) @@ -410,10 +410,10 @@ def test_compile_from_input_and_to_output_file( fake_metadata, ): # File exists: - input_file = fake_repo_path / "src" / "dev-requirements.txt" + input_file = fake_repo_path / "dev-requirements.txt" with open(input_file, "a", encoding="utf-8") as file: file.write("") - output_file = fake_repo_path / "src" / "dev-requirements.lock" + output_file = fake_repo_path / "dev-requirements.lock" result = CliRunner().invoke( fake_project_cli, @@ -444,7 +444,7 @@ def test_extra_args( extra_args, fake_metadata, ): - requirements_txt = fake_repo_path / "src" / "requirements.txt" + requirements_txt = fake_repo_path / "requirements.txt" result = CliRunner().invoke( fake_project_cli, ["build-reqs"] + extra_args, obj=fake_metadata @@ -457,7 +457,7 @@ def test_extra_args( ["compile"] + extra_args + [str(requirements_txt)] - + ["--output-file", str(fake_repo_path / "src" / "requirements.lock")] + + ["--output-file", 
str(fake_repo_path / "requirements.lock")] ) python_call_mock.assert_called_once_with("piptools", call_args) @@ -466,7 +466,7 @@ def test_missing_requirements_txt( self, fake_project_cli, mocker, fake_metadata, os_name, fake_repo_path ): """Test error when input file requirements.txt doesn't exists.""" - requirements_txt = fake_repo_path / "src" / "requirements.txt" + requirements_txt = fake_repo_path / "requirements.txt" mocker.patch("kedro.framework.cli.project.os").name = os_name mocker.patch.object(Path, "is_file", return_value=False) diff --git a/tests/framework/cli/test_starters.py b/tests/framework/cli/test_starters.py index 26fc6ac3e5..37c0512110 100644 --- a/tests/framework/cli/test_starters.py +++ b/tests/framework/cli/test_starters.py @@ -17,7 +17,7 @@ KedroStarterSpec, ) -FILES_IN_TEMPLATE = 31 +FILES_IN_TEMPLATE = 30 @pytest.fixture @@ -70,9 +70,7 @@ def _assert_template_ok( assert (full_path / ".gitignore").is_file() assert project_name in (full_path / "README.md").read_text(encoding="utf-8") assert "KEDRO" in (full_path / ".gitignore").read_text(encoding="utf-8") - assert kedro_version in (full_path / "src" / "requirements.txt").read_text( - encoding="utf-8" - ) + assert kedro_version in (full_path / "pyproject.toml").read_text(encoding="utf-8") assert (full_path / "src" / python_package / "__init__.py").is_file()
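Since the assertion above now looks for the Kedro version in `pyproject.toml` rather than `src/requirements.txt`, here is a sketch of how a generated project could be inspected to confirm where the pin lives (requires Python 3.11+ for `tomllib`; the project path is illustrative):

```python
# Parse the generated project's pyproject.toml and print the library
# dependencies plus the [tool.kedro] metadata the template now writes there.
import tomllib
from pathlib import Path

data = tomllib.loads(
    Path("new-kedro-project/pyproject.toml").read_text(encoding="utf-8")
)
print(data["project"]["dependencies"])  # e.g. ["kedro~=0.18.x"]
print(data["tool"]["kedro"])            # package_name, project_name, ...
```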