Docs: fixed some grammatical errors #3215

Merged 1 commit on Apr 18, 2022
5 changes: 3 additions & 2 deletions .pre-commit-config.yaml
@@ -15,8 +15,9 @@
# limitations under the License.
# ---------------------------------------------------------------------------

-- repo: git://github.com/dnephin/pre-commit-golang
-  sha: HEAD
+repos:
+- repo: https://github.com/dnephin/pre-commit-golang
+  rev: master
exclude:
- vendor/.*
hooks:
10 changes: 5 additions & 5 deletions docs/modules/ROOT/pages/contributing/e2e.adoc
@@ -14,19 +14,19 @@ integration tests. An integration test should start with the following line:
Look into the https://github.com/apache/camel-k/tree/main/e2e[/e2e] directory for examples of integration tests.

Before running an integration test, you need to be connected to a Kubernetes/OpenShift namespace.
-After you log in into your cluster, you can run the following command to execute **all** integration tests:
+After you log into your cluster, you can run the following command to execute **all** integration tests:

[source]
----
make test-integration
----

-The test script will take care to install the operators needed in a random namespace, execute all expected tests and clean themselves. Cleaning may not be performed if the execution of tests fails or the test process is interrupted. In that case you can look for any namespace similar to `test-29ed8147-c9fc-4c04-9c29-744eaf4750c6` and remove manually.
+The test script will install the operators needed in a random namespace, execute all expected tests and clean up after itself. Cleaning may not be performed if the execution of tests fails or the test process is interrupted. In that case you can look for any namespace similar to `test-29ed8147-c9fc-4c04-9c29-744eaf4750c6` and remove it manually.
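To spot such leftover namespaces, a pattern match on the generated name helps. This is a sketch: sample input is inlined via `printf`, and against a real cluster you would pipe `kubectl get namespaces -o name` into the same `grep` instead.

```shell
# Generated test namespaces look like test-<uuid>; this grep isolates them
# so they can be reviewed and deleted manually.
printf 'namespace/default\nnamespace/test-29ed8147-c9fc-4c04-9c29-744eaf4750c6\n' |
  grep -E 'test-[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}'
# then, once confirmed stale:
# kubectl delete namespace test-29ed8147-c9fc-4c04-9c29-744eaf4750c6
```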

[[testing-operator]]
== Testing Operator under development

-You probably want to test your changes on camel-k `operator` locally after some development. You will need to make the operator docker image available to your cluster registry before launching the tests. We have a script that will take care of that.
+You probably want to test your changes on camel-k `operator` locally after some development. You will need to make the operator docker image available to your cluster registry before launching the tests. We have a script which will take care of that.

First, you must connect and point to the `docker daemon`. If you're on a local environment such as `minikube`, it will be as simple as executing

@@ -42,7 +42,7 @@ For other cluster types you may check the specific documentation. As soon as you
make images
----

-The script will take care to build the operator docker image and push to the underlying docker daemon registry. At this stage, the cluster will be able to pickup this latest image when it executes the tests.
+The script will build the operator docker image and push it to the underlying docker daemon registry. At this stage, the cluster will be able to pick up this latest image when it executes the tests.

You can also execute the following script, if by any chance you have some change applied to the `camel-k-runtime`. You can optionally point to your local Camel K runtime project directory if you need to install any SNAPSHOT dependency:

@@ -56,7 +56,7 @@ make images-dev [CAMEL_K_RUNTIME_DIR=/path/to/camel-k-runtime-project]

To speed up integration testing locally, you may use a https://github.com/sonatype/docker-nexus3[Nexus Repository Manager] for Maven repository mirror.

-You can set the environment variable `TEST_ENABLE_NEXUS=true` to enable the usage of Nexus mirror in e2e testing. If `TEST_ENABLE_NEXUS` is set, e2e tests will try to discover an Nexus instance as `nexus` service in `nexus` namespace and if it is found they will use it as the Maven repository mirror for the `camel-k` platform under test.
+You can set the environment variable `TEST_ENABLE_NEXUS=true` to enable the usage of a Nexus mirror in e2e testing. If `TEST_ENABLE_NEXUS` is set, e2e tests will try to discover a Nexus instance as a `nexus` service in the `nexus` namespace and, if it is found, use it as the Maven repository mirror for the `camel-k` platform under test.

[source]
----
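Putting the Nexus opt-in together, an enabled run might look like this (a sketch; the `echo` line is only a sanity check, and the suite invocation is left commented out):

```shell
# Opt in to the Nexus mirror before launching the e2e suite. The tests discover
# a `nexus` service in the `nexus` namespace; when the variable is unset or the
# service is absent, no mirror is used.
export TEST_ENABLE_NEXUS=true
echo "Nexus mirror enabled: ${TEST_ENABLE_NEXUS}"
# make test-integration   # then run the suite as usual
```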
@@ -8,7 +8,7 @@ The following steps assume that

To perform OLM (Operator Lifecycle Manager) based deployment of camel-k, built from source locally on an Openshift cluster, you can follow the steps below.

-Login to the cluster using the standard "oc" tool, create new project, do other basic setup. Reference commands below
+Log in to the cluster using the standard "oc" tool, create a new project, and complete the basic setup. Reference commands below:

```
oc login -u <user> -p <password>
10 changes: 5 additions & 5 deletions docs/modules/ROOT/pages/contributing/local-development.adoc
@@ -1,14 +1,14 @@
[[development-environment]]
= Local development environment

-If you plan to contribute to Camel K, you will end up needing to run and troubleshoot your operator code locally. Here a guideline that will help you configure your local operator running.
+If you plan on contributing to Camel K, you will end up needing to run and troubleshoot your operator code locally. Here is a guideline that will help you configure and run your operator locally.

[[local-operator]]
== Running operator locally

As soon as you build your operator locally you will ask yourself how to test it. The idea is that you execute it locally and instruct it to **watch** a namespace on a Kubernetes cluster (it may be remote or any local environment). Let's use a namespace called ``operator-test``.

-* We start by setting the environment variable ``WATCH_NAMESPACE`` with the namespace you'd like your operator to watch.
+* You can start by setting the environment variable ``WATCH_NAMESPACE`` to the namespace you'd like your operator to watch.
----
export WATCH_NAMESPACE=operator-test
----
@@ -18,7 +18,7 @@ export WATCH_NAMESPACE=operator-test
./kamel install --skip-operator-setup -n operator-test --registry my-registry:5000
----

-* Finally, assuming you've builded your application correctly we can run the operator:
+* Finally, assuming you've built your application correctly, we can run the operator:
-----
./kamel operator
-----
@@ -28,12 +28,12 @@ export WATCH_NAMESPACE=operator-test
./kamel run xyz.abc -n operator-test
-----

-IMPORTANT: make sure no other Camel K Operators are watching that namespace, neither you have a global Camel K Operator installed on your cluster.
+IMPORTANT: make sure no other Camel K Operators are watching this namespace, and that you don't have a global Camel K Operator installed on your cluster.

[[local-minikube]]
== Local operator and local cluster

-If you want to run a local operator togheter with ``Minikube`` you will need an additional step in order to let the local operator being able to push images in the local registry. We need to expose the local registry as described in https://minikube.sigs.k8s.io/docs/handbook/registry/#docker-on-windows[this procedure]:
+If you want to run a local operator together with ``Minikube``, you will need an additional step to let the local operator push images to the local registry. We need to expose the local registry as described in https://minikube.sigs.k8s.io/docs/handbook/registry/#docker-on-windows[this procedure]:

* Enable the addon registry (this should be already in place):
----
6 changes: 3 additions & 3 deletions docs/modules/ROOT/pages/troubleshooting/debugging.adoc
@@ -44,7 +44,7 @@ As you can see in the logs, the CLI has configured the integration in debug mode

The JVM is suspended and waits for a debugger to attach by default: this behavior can be turned off using the `--suspend=false` option.

-Last thing to do is, with your IDE opened on the **integration file (if using Java, Groovy or Kotlin), the Apache Camel project, or the Camel K Runtime project**, to start a remote debugger on `localhost:5005`.
+The last thing to do is to start a remote debugger on `localhost:5005` with the **integration file (if using Java, Groovy or Kotlin), the Apache Camel project, or the Camel K Runtime project** opened in your IDE.

The following picture shows the configuration of a remote debugger in IntelliJ IDEA:

@@ -59,6 +59,6 @@ When the debugging session is done, hitting kbd:[Ctrl+c] on the terminal where t

As we've seen in the previous section, all `Integration` created in Camel K are finally bundled as a Java application, hence, the possibility to debug via JVM debugger. Any `Kamelet` you will be using directly in your `Route` definition or in a `KameletBinding` is automatically converted in a `yaml` route and injected in the Camel Context to be executed. That means that you cannot directly debug a `Kamelet` as you would do with a Java or any other JVM language `Route`.

-However, you can troubleshoot individually each `Kamelet` definition by focusing on the specification xref:kamelets/kamelets-user.adoc#_flow[`Flow`]. As an example, you can create a simple `yaml` test `Route` substituting the `kamelet:source` or `kamelet:sink` with any mock endpoint that can help you debugging the single `Kamelet` flow. Even using a `timer` and a `log` component may be enough for a basic check.
+However, you can troubleshoot each `Kamelet` definition individually by focusing on the specification xref:kamelets/kamelets-user.adoc#_flow[`Flow`]. As an example, you can create a simple `yaml` test `Route` substituting the `kamelet:source` or `kamelet:sink` with any mock endpoint that can help you debug the single `Kamelet` flow. Even using a `timer` and a `log` component may be enough for a basic check.

-NOTE: the same idea applies for a `KameletBinding` which translates to an `Integration` type under the hood. If you need to debug a `KameletBinding` just apply the same troubleshooting technique you would apply on an `Integration`.
+NOTE: the same idea applies to a `KameletBinding`, which translates to an `Integration` type under the hood. If you need to debug a `KameletBinding`, just apply the same troubleshooting technique that you would apply to an `Integration`.
2 changes: 1 addition & 1 deletion docs/modules/ROOT/pages/uninstalling.adoc
@@ -8,7 +8,7 @@ If you really need to, it is possible to completely uninstall Camel K from OpenS
kamel uninstall
----

-This will uninstall from the cluster namespace all Camel K resources along with the operator.
+This will uninstall all Camel K resources along with the operator from the cluster namespace.

NOTE: By _default_ the resources possibly shared between clusters such as https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources[CustomResourceDefinitions (CRD)], https://kubernetes.io/docs/reference/access-authn-authz/rbac[ClusterRole] and https://docs.openshift.com/container-platform/4.1/applications/operators/olm-understanding-olm.html[Operator Lifecycle Manager(OLM)] will be **excluded**. To force the inclusion of all resources you can use the **--all** flag. If the **--olm=false** option was specified during installation, which is the case when installing Camel K from sources on CRC, then it also must be used with the uninstall command.
