From 1c83b3fc227d72b0c96551513c4acca86456386a Mon Sep 17 00:00:00 2001 From: Jay Bryant Date: Wed, 24 Jun 2020 12:40:05 -0500 Subject: [PATCH] Wording changes Replace potentially insensitive language with more neutral language. Closes gh-25314 --- src/docs/asciidoc/core/core-aop-api.adoc | 24 ++-- src/docs/asciidoc/core/core-aop.adoc | 58 ++++---- src/docs/asciidoc/core/core-beans.adoc | 53 ++++---- src/docs/asciidoc/core/core-resources.adoc | 4 +- src/docs/asciidoc/core/core-validation.adoc | 4 +- src/docs/asciidoc/data-access.adoc | 91 ++++++------- src/docs/asciidoc/integration.adoc | 97 +++++++------- .../asciidoc/languages/dynamic-languages.adoc | 2 +- src/docs/asciidoc/testing.adoc | 126 +++++++++--------- src/docs/asciidoc/web/webflux-functional.adoc | 17 ++- src/docs/asciidoc/web/webflux-websocket.adoc | 4 +- src/docs/asciidoc/web/webflux.adoc | 10 +- src/docs/asciidoc/web/webmvc-functional.adoc | 2 +- src/docs/asciidoc/web/webmvc.adoc | 14 +- 14 files changed, 253 insertions(+), 253 deletions(-) diff --git a/src/docs/asciidoc/core/core-aop-api.adoc b/src/docs/asciidoc/core/core-aop-api.adoc index e6cf6a9618b5..46516b36acbe 100644 --- a/src/docs/asciidoc/core/core-aop-api.adoc +++ b/src/docs/asciidoc/core/core-aop-api.adoc @@ -106,7 +106,7 @@ proxy is created to avoid the need for a test on every method invocation. If the two-argument `matches` method returns `true` for a given method, and the `isRuntime()` method for the MethodMatcher returns `true`, the three-argument matches method is invoked on every method invocation. This lets a pointcut look at the arguments passed to the -method invocation immediately before the target advice is to execute. +method invocation immediately before the target advice starts. Most `MethodMatcher` implementations are static, meaning that their `isRuntime()` method returns `false`. In this case, the three-argument `matches` method is never invoked. @@ -232,7 +232,7 @@ The main example is the `control flow` pointcut. ===== Control Flow Pointcuts Spring control flow pointcuts are conceptually similar to AspectJ `cflow` pointcuts, -although less powerful. (There is currently no way to specify that a pointcut executes +although less powerful. (There is currently no way to specify that a pointcut runs below a join point matched by another pointcut.) A control flow pointcut matches the current call stack. For example, it might fire if the join point was invoked by a method in the `com.mycompany.web` package or by the `SomeCaller` class. Control flow pointcuts @@ -425,7 +425,7 @@ The following listing shows the `MethodBeforeAdvice` interface: .Kotlin ---- interface MethodBeforeAdvice : BeforeAdvice { - + fun before(m: Method, args: Array, target: Any) } ---- @@ -435,8 +435,8 @@ field before advice, although the usual objects apply to field interception and unlikely for Spring to ever implement it.) Note that the return type is `void`. Before advice can insert custom behavior before the join -point executes but cannot change the return value. If a before advice throws an -exception, it aborts further execution of the interceptor chain. The exception +point runs but cannot change the return value. If a before advice throws an +exception, it stops further execution of the interceptor chain. The exception propagates back up the interceptor chain. If it is unchecked or on the signature of the invoked method, it is passed directly to the client. Otherwise, it is wrapped in an unchecked exception by the AOP proxy. 
@@ -465,7 +465,7 @@ The following example shows a before advice in Spring, which counts all method i class CountingBeforeAdvice : MethodBeforeAdvice { var count: Int = 0 - + override fun before(m: Method, args: Array, target: Any?) { ++count } @@ -509,7 +509,7 @@ The following advice is invoked if a `RemoteException` is thrown (including from .Kotlin ---- class RemoteThrowsAdvice : ThrowsAdvice { - + fun afterThrowing(ex: RemoteException) { // Do something with remote exception } @@ -563,7 +563,7 @@ methods can be combined in a single class. The following listing shows the final .Kotlin ---- class CombinedThrowsAdvice : ThrowsAdvice { - + fun afterThrowing(ex: RemoteException) { // Do something with remote exception } @@ -604,7 +604,7 @@ An after returning advice in Spring must implement the .Kotlin ---- interface AfterReturningAdvice : Advice { - + fun afterReturning(returnValue: Any, m: Method, args: Array, target: Any) } ---- @@ -639,7 +639,7 @@ not thrown exceptions: var count: Int = 0 private set - + override fun afterReturning(returnValue: Any?, m: Method, args: Array, target: Any?) { ++count } @@ -707,7 +707,7 @@ rather than the method, level. You can only use introduction advice with the interface IntroductionAdvisor : Advisor, IntroductionInfo { val classFilter: ClassFilter - + @Throws(IllegalArgumentException::class) fun validateInterfaces() } @@ -829,7 +829,7 @@ The following example shows the example `LockMixin` class: fun locked(): Boolean { return this.locked } - + override fun invoke(invocation: MethodInvocation): Any? { if (locked() && invocation.method.name.indexOf("set") == 0) { throw LockedException() diff --git a/src/docs/asciidoc/core/core-aop.adoc b/src/docs/asciidoc/core/core-aop.adoc index 47e3ae3bec7f..f7daf85c7603 100644 --- a/src/docs/asciidoc/core/core-aop.adoc +++ b/src/docs/asciidoc/core/core-aop.adoc @@ -83,9 +83,9 @@ Spring AOP includes the following types of advice: an exception). * After returning advice: Advice to be run after a join point completes normally (for example, if a method returns without throwing an exception). -* After throwing advice: Advice to be executed if a method exits by throwing an +* After throwing advice: Advice to be run if a method exits by throwing an exception. -* After (finally) advice: Advice to be executed regardless of the means by which a +* After (finally) advice: Advice to be run regardless of the means by which a join point exits (normal or exceptional return). * Around advice: Advice that surrounds a join point such as a method invocation. This is the most powerful kind of advice. Around advice can perform custom behavior @@ -219,7 +219,7 @@ To use @AspectJ aspects in a Spring configuration, you need to enable Spring sup configuring Spring AOP based on @AspectJ aspects and auto-proxying beans based on whether or not they are advised by those aspects. By auto-proxying, we mean that, if Spring determines that a bean is advised by one or more aspects, it automatically generates -a proxy for that bean to intercept method invocations and ensures that advice is executed +a proxy for that bean to intercept method invocations and ensures that advice is run as needed. The @AspectJ support can be enabled with XML- or Java-style configuration. In either @@ -334,7 +334,7 @@ hence, excludes it from auto-proxying. === Declaring a Pointcut Pointcuts determine join points of interest and thus enable us to control -when advice executes. Spring AOP only supports method execution join points for Spring +when advice runs. 
Spring AOP only supports method execution join points for Spring beans, so you can think of a pointcut as matching the execution of methods on Spring beans. A pointcut declaration has two parts: a signature comprising a name and any parameters and a pointcut expression that determines exactly which method @@ -395,7 +395,7 @@ expressions: annotation (the execution of methods declared in types with the given annotation when using Spring AOP). * `@annotation`: Limits matching to join points where the subject of the join point - (the method being executed in Spring AOP) has the given annotation. + (the method being run in Spring AOP) has the given annotation. .Other pointcut types **** @@ -1211,8 +1211,8 @@ The following example shows how to use after finally advice: ==== Around Advice The last kind of advice is around advice. Around advice runs "`around`" a matched method's -execution. It has the opportunity to do work both before and after the method executes -and to determine when, how, and even if the method actually gets to execute at all. +execution. It has the opportunity to do work both before and after the method runs +and to determine when, how, and even if the method actually gets to run at all. Around advice is often used if you need to share state before and after a method execution in a thread-safe manner (starting and stopping a timer, for example). Always use the least powerful form of advice that meets your requirements (that is, do not use @@ -1221,7 +1221,7 @@ around advice if before advice would do). Around advice is declared by using the `@Around` annotation. The first parameter of the advice method must be of type `ProceedingJoinPoint`. Within the body of the advice, calling `proceed()` on the `ProceedingJoinPoint` causes the underlying method to -execute. The `proceed` method can also pass in an `Object[]`. The values +run. The `proceed` method can also pass in an `Object[]`. The values in the array are used as the arguments to the method execution when it proceeds. NOTE: The behavior of `proceed` when called with an `Object[]` is a little different than the @@ -1783,15 +1783,15 @@ annotation. Consider the following example: } ---- -In the preceding example, the effect of the `'perthis'` clause is that one aspect -instance is created for each unique service object that executes a business service (each -unique object bound to 'this' at join points matched by the pointcut expression). The -aspect instance is created the first time that a method is invoked on the service object. -The aspect goes out of scope when the service object goes out of scope. Before the aspect -instance is created, none of the advice within it executes. As soon as the aspect -instance has been created, the advice declared within it executes at matched join points, -but only when the service object is the one with which this aspect is associated. See the -AspectJ Programming Guide for more information on `per` clauses. +In the preceding example, the effect of the `perthis` clause is that one aspect instance +is created for each unique service object that performs a business service (each unique +object bound to `this` at join points matched by the pointcut expression). The aspect +instance is created the first time that a method is invoked on the service object. The +aspect goes out of scope when the service object goes out of scope. Before the aspect +instance is created, none of the advice within it runs. 
As soon as the aspect instance +has been created, the advice declared within it runs at matched join points, but only +when the service object is the one with which this aspect is associated. See the AspectJ +Programming Guide for more information on `per` clauses. The `pertarget` instantiation model works in exactly the same way as `perthis`, but it creates one aspect instance for each unique target object at matched join points. @@ -2188,7 +2188,7 @@ significantly improve the readability of your code. The `method` attribute identifies a method (`doAccessCheck`) that provides the body of the advice. This method must be defined for the bean referenced by the aspect element -that contains the advice. Before a data access operation is executed (a method execution +that contains the advice. Before a data access operation is performed (a method execution join point matched by the pointcut expression), the `doAccessCheck` method on the aspect bean is invoked. @@ -2250,7 +2250,7 @@ example, you can declare the method signature as follows: [[aop-schema-advice-after-throwing]] ==== After Throwing Advice -After throwing advice executes when a matched method execution exits by throwing an +After throwing advice runs when a matched method execution exits by throwing an exception. It is declared inside an `` by using the `after-throwing` element, as the following example shows: @@ -2325,8 +2325,8 @@ by using the `after` element, as the following example shows: ==== Around Advice The last kind of advice is around advice. Around advice runs "around" a matched method -execution. It has the opportunity to do work both before and after the method executes -and to determine when, how, and even if the method actually gets to execute at all. +execution. It has the opportunity to do work both before and after the method runs +and to determine when, how, and even if the method actually gets to run at all. Around advice is often used to share state before and after a method execution in a thread-safe manner (starting and stopping a timer, for example). Always use the least powerful form of advice that meets your requirements. Do not use around @@ -2335,7 +2335,7 @@ advice if before advice can do the job. You can declare around advice by using the `aop:around` element. The first parameter of the advice method must be of type `ProceedingJoinPoint`. Within the body of the advice, calling `proceed()` on the `ProceedingJoinPoint` causes the underlying method to -execute. The `proceed` method may also be called with an `Object[]`. The values +run. The `proceed` method may also be called with an `Object[]`. The values in the array are used as the arguments to the method execution when it proceeds. See <> for notes on calling `proceed` with an `Object[]`. The following example shows how to declare around advice in XML: @@ -2563,7 +2563,7 @@ ms % Task name [[aop-ordering]] ==== Advice Ordering -When multiple pieces of advice need to execute at the same join point (executing method) +When multiple pieces of advice need to run at the same join point (executing method) the ordering rules are as described in <>. The precedence between aspects is determined via the `order` attribute in the `` element or by either adding the `@Order` annotation to the bean that backs the aspect or by having @@ -2772,7 +2772,7 @@ call `proceed` multiple times. 
The following listing shows the basic aspect impl class ConcurrentOperationExecutor : Ordered { private val DEFAULT_MAX_RETRIES = 2 - + private var maxRetries = DEFAULT_MAX_RETRIES private var order = 1 @@ -2787,7 +2787,7 @@ call `proceed` multiple times. The following listing shows the basic aspect impl fun setOrder(order: Int) { this.order = order } - + fun doConcurrentOperation(pjp: ProceedingJoinPoint): Any { var numAttempts = 0 var lockFailureException: PessimisticLockingFailureException @@ -3160,13 +3160,13 @@ The key thing to understand here is that the client code inside the `main(..)` m of the `Main` class has a reference to the proxy. This means that method calls on that object reference are calls on the proxy. As a result, the proxy can delegate to all of the interceptors (advice) that are relevant to that particular method call. However, -once the call has finally reached the target object (the `SimplePojo`, reference in +once the call has finally reached the target object (the `SimplePojo` reference in this case), any method calls that it may make on itself, such as `this.bar()` or `this.foo()`, are going to be invoked against the `this` reference, and not the proxy. This has important implications. It means that self-invocation is not going to result -in the advice associated with a method invocation getting a chance to execute. +in the advice associated with a method invocation getting a chance to run. -Okay, so what is to be done about this? The best approach (the term, "`best,`" is used +Okay, so what is to be done about this? The best approach (the term "best" is used loosely here) is to refactor your code such that the self-invocation does not happen. This does entail some work on your part, but it is the best, least-invasive approach. The next approach is absolutely horrendous, and we hesitate to point it out, precisely @@ -3433,7 +3433,7 @@ exact semantics of "`after returning from the initialization of a new object`" a fine. In this context, "`after initialization`" means that the dependencies are injected after the object has been constructed. This means that the dependencies are not available for use in the constructor bodies of the class. If you want the -dependencies to be injected before the constructor bodies execute and thus be +dependencies to be injected before the constructor bodies run and thus be available for use in the body of the constructors, you need to define this on the `@Configurable` declaration, as follows: diff --git a/src/docs/asciidoc/core/core-beans.adoc b/src/docs/asciidoc/core/core-beans.adoc index 9f721f50d31b..47a99e4b18d5 100644 --- a/src/docs/asciidoc/core/core-beans.adoc +++ b/src/docs/asciidoc/core/core-beans.adoc @@ -1471,7 +1471,7 @@ The following example shows various values being set: - + ---- @@ -1491,7 +1491,7 @@ XML configuration: p:driverClassName="com.mysql.jdbc.Driver" p:url="jdbc:mysql://localhost:3306/mydb" p:username="root" - p:password="masterkaoli"/> + p:password="misterkaoli"/> ---- @@ -3193,10 +3193,10 @@ which explains the methods you need to implement in more detail. The `Scope` interface has four methods to get objects from the scope, remove them from the scope, and let them be destroyed. -The session scope -implementation, for example, returns the session-scoped bean (if it does not exist, -the method returns a new instance of the bean, after having bound it to the session for -future reference). 
The following method returns the object from the underlying scope: +The session scope implementation, for example, returns the session-scoped bean (if it +does not exist, the method returns a new instance of the bean, after having bound it to +the session for future reference). The following method returns the object from the +underlying scope: [source,java,indent=0,subs="verbatim,quotes",role="primary"] .Java @@ -3209,10 +3209,10 @@ future reference). The following method returns the object from the underlying s fun get(name: String, objectFactory: ObjectFactory<*>): Any ---- -The session scope -implementation, for example, removes the session-scoped bean from the underlying session. -The object should be returned, but you can return null if the object with the specified -name is not found. The following method removes the object from the underlying scope: +The session scope implementation, for example, removes the session-scoped bean from the +underlying session. The object should be returned, but you can return `null` if the +object with the specified name is not found. The following method removes the object from +the underlying scope: [source,java,indent=0,subs="verbatim,quotes",role="primary"] .Java @@ -3225,7 +3225,7 @@ name is not found. The following method removes the object from the underlying s fun remove(name: String): Any ---- -The following method registers the callbacks the scope should execute when it is +The following method registers a callback that the scope should invoke when it is destroyed or when the specified object in the scope is destroyed: [source,java,indent=0,subs="verbatim,quotes",role="primary"] @@ -3255,7 +3255,6 @@ The following method obtains the conversation identifier for the underlying scop fun getConversationId(): String ---- - This identifier is different for each scope. For a session scoped implementation, this identifier can be the session identifier. @@ -3628,7 +3627,7 @@ following example: class DefaultBlogService : BlogService { private var blogDao: BlogDao? = null - + // this is (unsurprisingly) the initialization callback method fun init() { if (blogDao == null) { @@ -3689,10 +3688,10 @@ As of Spring 2.5, you have three options for controlling bean lifecycle behavior annotations>>. You can combine these mechanisms to control a given bean. NOTE: If multiple lifecycle mechanisms are configured for a bean and each mechanism is -configured with a different method name, then each configured method is executed in the +configured with a different method name, then each configured method is run in the order listed after this note. However, if the same method name is configured -- for example, `init()` for an initialization method -- for more than one of these lifecycle mechanisms, -that method is executed once, as explained in the +that method is run once, as explained in the <>. Multiple lifecycle mechanisms configured for the same bean, with different @@ -3782,7 +3781,7 @@ consider implementing `org.springframework.context.SmartLifecycle` instead. Also, please note that stop notifications are not guaranteed to come before destruction. On regular shutdown, all `Lifecycle` beans first receive a stop notification before the general destruction callbacks are being propagated. However, on hot refresh during a -context's lifetime or on aborted refresh attempts, only destroy methods are called. +context's lifetime or on stopped refresh attempts, only destroy methods are called. ==== The order of startup and shutdown invocations can be important. 
If a "`depends-on`" @@ -4188,7 +4187,7 @@ Spring container finishes instantiating, configuring, and initializing a bean, y plug in one or more custom `BeanPostProcessor` implementations. You can configure multiple `BeanPostProcessor` instances, and you can control the order -in which these `BeanPostProcessor` instances execute by setting the `order` property. +in which these `BeanPostProcessor` instances run by setting the `order` property. You can set this property only if the `BeanPostProcessor` implements the `Ordered` interface. If you write your own `BeanPostProcessor`, you should consider implementing the `Ordered` interface, too. For further details, see the javadoc of the @@ -4449,7 +4448,7 @@ in one container are not post-processed by `BeanFactoryPostProcessor` instances container, even if both containers are part of the same hierarchy. ==== -A bean factory post-processor is automatically executed when it is declared inside an +A bean factory post-processor is automatically run when it is declared inside an `ApplicationContext`, in order to apply changes to the configuration metadata that define the container. Spring includes a number of predefined bean factory post-processors, such as `PropertyOverrideConfigurer` and @@ -4996,7 +4995,7 @@ The same applies for typed collections, as the following example shows: @Autowired lateinit var movieCatalogs: Set - + // ... } ---- @@ -5045,7 +5044,7 @@ corresponding bean names, as the following example shows: @Autowired lateinit var movieCatalogs: Map - + // ... } ---- @@ -5851,7 +5850,7 @@ configuration: @Bean fun stringStore() = StringStore() - + @Bean fun integerStore() = IntegerStore() } @@ -5999,7 +5998,7 @@ named `movieFinder` injected into its setter method: @Resource private lateinit var movieFinder: MovieFinder - + } ---- @@ -6487,7 +6486,7 @@ You can also override the value for the `proxyMode`, as the following example sh @Service @SessionScope(proxyMode = ScopedProxyMode.INTERFACES) class SessionScopedUserService : UserService { - // ... + // ... } ---- @@ -6537,7 +6536,7 @@ are eligible for such autodetection: ---- @Repository class JpaMovieFinder : MovieFinder { - // implementation elided for clarity + // implementation elided for clarity } ---- @@ -7162,7 +7161,7 @@ technique: @Component @Genre("Action") class ActionMovieCatalog : MovieCatalog { - // ... + // ... } ---- @@ -9896,7 +9895,7 @@ it programmatically against the `Environment` API which is available through an val ctx = AnnotationConfigApplicationContext().apply { environment.setActiveProfiles("development") register(SomeConfig::class.java, StandaloneDataConfig::class.java, JndiDataConfig::class.java) - refresh() + refresh() } ---- @@ -10335,7 +10334,7 @@ are as follows: argument.required=The {0} argument is required. ---- -The next example shows a program to execute the `MessageSource` functionality. +The next example shows a program to run the `MessageSource` functionality. Remember that all `ApplicationContext` implementations are also `MessageSource` implementations and so can be cast to the `MessageSource` interface. diff --git a/src/docs/asciidoc/core/core-resources.adoc b/src/docs/asciidoc/core/core-resources.adoc index ff2b1dd2cbfd..760c889202e5 100644 --- a/src/docs/asciidoc/core/core-resources.adoc +++ b/src/docs/asciidoc/core/core-resources.adoc @@ -273,7 +273,7 @@ application contexts may be used to obtain `Resource` instances. 
When you call `getResource()` on a specific application context, and the location path specified doesn't have a specific prefix, you get back a `Resource` type that is appropriate to that particular application context. For example, assume the following -snippet of code was executed against a `ClassPathXmlApplicationContext` instance: +snippet of code was run against a `ClassPathXmlApplicationContext` instance: [source,java,indent=0,subs="verbatim,quotes",role="primary"] .Java @@ -286,7 +286,7 @@ snippet of code was executed against a `ClassPathXmlApplicationContext` instance val template = ctx.getResource("some/resource/path/myTemplate.txt") ---- -Against a `ClassPathXmlApplicationContext`, that code returns a `ClassPathResource`. If the same method were executed +Against a `ClassPathXmlApplicationContext`, that code returns a `ClassPathResource`. If the same method were run against a `FileSystemXmlApplicationContext` instance, it would return a `FileSystemResource`. For a `WebApplicationContext`, it would return a `ServletContextResource`. It would similarly return appropriate objects for each context. diff --git a/src/docs/asciidoc/core/core-validation.adoc b/src/docs/asciidoc/core/core-validation.adoc index 58979e4cbea0..1ac316cdc322 100644 --- a/src/docs/asciidoc/core/core-validation.adoc +++ b/src/docs/asciidoc/core/core-validation.adoc @@ -1097,7 +1097,7 @@ might match only if the target entity type declares a static finder method (for === The `ConversionService` API `ConversionService` defines a unified API for executing type conversion logic at -runtime. Converters are often executed behind the following facade interface: +runtime. Converters are often run behind the following facade interface: [source,java,indent=0,subs="verbatim,quotes",role="primary"] .Java @@ -1218,7 +1218,7 @@ it like you would for any other bean. The following example shows how to do so: ---- @Service class MyService(private val conversionService: ConversionService) { - + fun doIt() { conversionService.convert(...) } diff --git a/src/docs/asciidoc/data-access.adoc b/src/docs/asciidoc/data-access.adoc index a8fd91252b72..f5312a7b0acf 100644 --- a/src/docs/asciidoc/data-access.adoc +++ b/src/docs/asciidoc/data-access.adoc @@ -250,9 +250,9 @@ mocked or stubbed as necessary. The `TransactionDefinition` interface specifies: -* Propagation: Typically, all code executed within a transaction scope runs in +* Propagation: Typically, all code within a transaction scope runs in that transaction. However, you can specify the behavior if - a transactional method is executed when a transaction context already exists. For + a transactional method is run when a transaction context already exists. For example, code can continue running in the existing transaction (the common case), or the existing transaction can be suspended and a new transaction created. Spring offers all of the transaction propagation options familiar from EJB CMT. To read @@ -715,9 +715,9 @@ The following example shows an implementation of the preceding interface: ---- Assume that the first two methods of the `FooService` interface, `getFoo(String)` and -`getFoo(String, String)`, must execute in the context of a transaction with read-only -semantics, and that the other methods, `insertFoo(Foo)` and `updateFoo(Foo)`, must -execute in the context of a transaction with read-write semantics. 
The following +`getFoo(String, String)`, must run in the context of a transaction with read-only +semantics and that the other methods, `insertFoo(Foo)` and `updateFoo(Foo)`, must +run in the context of a transaction with read-write semantics. The following configuration is explained in detail in the next few paragraphs: [source,xml,indent=0,subs="verbatim"] @@ -778,8 +778,8 @@ configuration is explained in detail in the next few paragraphs: Examine the preceding configuration. It assumes that you want to make a service object, the `fooService` bean, transactional. The transaction semantics to apply are encapsulated in the `` definition. The `` definition reads as "all methods -starting with `get` are to execute in the context of a read-only transaction, and all -other methods are to execute with the default transaction semantics". The +starting with `get` are to run in the context of a read-only transaction, and all +other methods are to run with the default transaction semantics". The `transaction-manager` attribute of the `` tag is set to the name of the `TransactionManager` bean that is going to drive the transactions (in this case, the `txManager` bean). @@ -791,7 +791,7 @@ you want to wire in has any other name, you must use the `transaction-manager` attribute explicitly, as in the preceding example. The `` definition ensures that the transactional advice defined by the -`txAdvice` bean executes at the appropriate points in the program. First, you define a +`txAdvice` bean runs at the appropriate points in the program. First, you define a pointcut that matches the execution of any operation defined in the `FooService` interface (`fooServiceOperation`). Then you associate the pointcut with the `txAdvice` by using an advisor. The result indicates that, at the execution of a `fooServiceOperation`, @@ -1899,14 +1899,14 @@ transactions. See Spring's {api-spring-framework}/jdbc/datasource/DataSourceTran [[transaction-declarative-applying-more-than-just-tx-advice]] ==== Advising Transactional Operations -Suppose you want to execute both transactional operations and some basic profiling advice. +Suppose you want to run both transactional operations and some basic profiling advice. How do you effect this in the context of ``? When you invoke the `updateFoo(Foo)` method, you want to see the following actions: * The configured profiling aspect starts. -* The transactional advice executes. -* The method on the advised object executes. +* The transactional advice runs. +* The method on the advised object runs. * The transaction commits. * The profiling aspect reports the exact duration of the whole transactional method invocation. @@ -2011,14 +2011,14 @@ transactional aspects applied to it in the desired order: - + - + @@ -2065,13 +2065,13 @@ declarative approach: - + - + @@ -2098,7 +2098,7 @@ declarative approach: The result of the preceding configuration is a `fooService` bean that has profiling and transactional aspects applied to it in that order. If you want the profiling advice -to execute after the transactional advice on the way in and before the +to run after the transactional advice on the way in and before the transactional advice on the way out, you can swap the value of the profiling aspect bean's `order` property so that it is higher than the transactional advice's order value. @@ -2194,10 +2194,10 @@ couples you to Spring's transaction infrastructure and APIs. 
Whether or not prog transaction management is suitable for your development needs is a decision that you have to make yourself. -Application code that must execute in a transactional context and that explicitly uses the +Application code that must run in a transactional context and that explicitly uses the `TransactionTemplate` resembles the next example. You, as an application developer, can write a `TransactionCallback` implementation (typically expressed as an -anonymous inner class) that contains the code that you need to execute in the context of +anonymous inner class) that contains the code that you need to run in the context of a transaction. You can then pass an instance of your custom `TransactionCallback` to the `execute(..)` method exposed on the `TransactionTemplate`. The following example shows how to do so: @@ -2216,7 +2216,7 @@ a transaction. You can then pass an instance of your custom `TransactionCallback public Object someServiceMethod() { return transactionTemplate.execute(new TransactionCallback() { - // the code in this method executes in a transactional context + // the code in this method runs in a transactional context public Object doInTransaction(TransactionStatus status) { updateOperation1(); return resultOfUpdateOperation2(); @@ -2377,7 +2377,7 @@ couples you to Spring's transaction infrastructure and APIs. Whether or not prog transaction management is suitable for your development needs is a decision that you have to make yourself. -Application code that must execute in a transactional context and that explicitly uses +Application code that must run in a transactional context and that explicitly uses the `TransactionOperator` resembles the next example: [source,java,indent=0,subs="verbatim,quotes",role="primary"] @@ -2395,7 +2395,7 @@ the `TransactionOperator` resembles the next example: public Mono someServiceMethod() { - // the code in this method executes in a transactional context + // the code in this method runs in a transactional context Mono update = updateOperation1(); @@ -2454,7 +2454,7 @@ method on the supplied `ReactiveTransaction` object, as follows: [[tx-prog-operator-cancel]] ===== Cancel Signals -In Reactive Streams, a `Subscriber` can cancel its `Subscription` and terminate its +In Reactive Streams, a `Subscriber` can cancel its `Subscription` and stop its `Publisher`. Operators in Project Reactor, as well as in other libraries, such as `next()`, `take(long)`, `timeout(Duration)`, and others can issue cancellations. 
There is no way to know the reason for the cancellation, whether it is due to an error or a simply lack of @@ -2536,7 +2536,7 @@ following example shows how to do so: TransactionStatus status = txManager.getTransaction(def); try { - // execute your business logic here + // put your business logic here } catch (MyException ex) { txManager.rollback(status); @@ -2554,7 +2554,7 @@ following example shows how to do so: val status = txManager.getTransaction(def) try { - // execute your business logic here + // put your business logic here } catch (ex: MyException) { txManager.rollback(status) throw ex @@ -2585,8 +2585,8 @@ following example shows how to do so: Mono reactiveTx = txManager.getReactiveTransaction(def); reactiveTx.flatMap(status -> { - - Mono tx = ...; // execute your business logic here + + Mono tx = ...; // put your business logic here return tx.then(txManager.commit(status)) .onErrorResume(ex -> txManager.rollback(status).then(Mono.error(ex))); @@ -2603,7 +2603,7 @@ following example shows how to do so: val reactiveTx = txManager.getReactiveTransaction(def) reactiveTx.flatMap { status -> - val tx = ... // execute your business logic here + val tx = ... // put your business logic here tx.then(txManager.commit(status)) .onErrorResume { ex -> txManager.rollback(status).then(Mono.error(ex)) } @@ -2978,7 +2978,7 @@ takes care of and which actions are your responsibility. | | X -| Prepare and execute the statement. +| Prepare and run the statement. | X | @@ -3030,11 +3030,12 @@ advanced features require a JDBC 3.0 driver. the column names. This works only if the database provides adequate metadata. If the database does not provide this metadata, you have to provide explicit configuration of the parameters. -* RDBMS objects, including `MappingSqlQuery`, `SqlUpdate` and `StoredProcedure`, require - you to create reusable and thread-safe objects during initialization of your data-access - layer. This approach is modeled after JDO Query, wherein you define your query - string, declare parameters, and compile the query. Once you do that, execute methods - can be called multiple times with various parameter values. +* RDBMS objects — including `MappingSqlQuery`, `SqlUpdate`, and `StoredProcedure` — + require you to create reusable and thread-safe objects during initialization of your + data-access layer. This approach is modeled after JDO Query, wherein you define your + query string, declare parameters, and compile the query. Once you do that, + `execute(...)`, `update(...)`, and `findObject(...)` methods can be called multiple + times with various parameter values. @@ -4520,14 +4521,14 @@ The following example shows a batch update that uses a batch size of 100: } ---- -The batch update methods for this call returns an array of `int` arrays that contain an array -entry for each batch with an array of the number of affected rows for each update. The top -level array's length indicates the number of batches executed and the second level array's -length indicates the number of updates in that batch. The number of updates in each batch -should be the batch size provided for all batches (except that the last one that might -be less), depending on the total number of update objects provided. The update count for -each update statement is the one reported by the JDBC driver. If the count is not -available, the JDBC driver returns a value of `-2`. 
+The batch update methods for this call returns an array of `int` arrays that contains an +array entry for each batch with an array of the number of affected rows for each update. +The top-level array's length indicates the number of batches run, and the second level +array's length indicates the number of updates in that batch. The number of updates in +each batch should be the batch size provided for all batches (except that the last one +that might be less), depending on the total number of update objects provided. The update +count for each update statement is the one reported by the JDBC driver. If the count is +not available, the JDBC driver returns a value of `-2`. @@ -5088,7 +5089,7 @@ You can call a stored function in almost the same way as you call a stored proce that you provide a function name rather than a procedure name. You use the `withFunctionName` method as part of the configuration to indicate that you want to make a call to a function, and the corresponding string for a function call is generated. A -specialized execute call (`executeFunction`) is used to execute the function, and it +specialized call (`executeFunction`) is used to run the function, and it returns the function return value as an object of a specified type, which means you do not have to retrieve the return value from the results map. A similar convenience method (named `executeObject`) is also available for stored procedures that have only one `out` @@ -5244,7 +5245,7 @@ The list of actors is then retrieved from the results map and returned to the ca === Modeling JDBC Operations as Java Objects The `org.springframework.jdbc.object` package contains classes that let you access -the database in a more object-oriented manner. As an example, you can execute queries +the database in a more object-oriented manner. As an example, you can run queries and get the results back as a list that contains business objects with the relational column data mapped to the properties of the business object. You can also run stored procedures and run update, delete, and insert statements. @@ -5325,7 +5326,7 @@ data from the `t_actor` relation to an instance of the `Actor` class: The class extends `MappingSqlQuery` parameterized with the `Actor` type. The constructor for this customer query takes a `DataSource` as the only parameter. In this constructor, you can call the constructor on the superclass with the `DataSource` and the SQL -that should be executed to retrieve the rows for this query. This SQL is used to +that should be run to retrieve the rows for this query. This SQL is used to create a `PreparedStatement`, so it may contain placeholders for any parameters to be passed in during execution. You must declare each parameter by using the `declareParameter` method passing in an `SqlParameter`. The `SqlParameter` takes a name, and the JDBC type @@ -6439,7 +6440,7 @@ boolean value from system properties or from an environment bean). The following The second option to control what happens with existing data is to be more tolerant of failures. 
To this end, you can control the ability of the initializer to ignore certain -errors in the SQL it executes from the scripts, as the following example shows: +errors in the SQL it runs from the scripts, as the following example shows: [source,xml,indent=0,subs="verbatim,quotes"] ---- diff --git a/src/docs/asciidoc/integration.adoc b/src/docs/asciidoc/integration.adoc index b1da44a06ddb..80c51376a75c 100644 --- a/src/docs/asciidoc/integration.adoc +++ b/src/docs/asciidoc/integration.adoc @@ -1954,7 +1954,7 @@ MapMessage={ While the send operations cover many common usage scenarios, you might sometimes want to perform multiple operations on a JMS `Session` or `MessageProducer`. The `SessionCallback` and `ProducerCallback` expose the JMS `Session` and `Session` / -`MessageProducer` pair, respectively. The `execute()` methods on `JmsTemplate` execute +`MessageProducer` pair, respectively. The `execute()` methods on `JmsTemplate` run these callback methods. @@ -5725,8 +5725,8 @@ callback interface. In the following example, the `mailSender` property is of ty ---- NOTE: The mail code is a crosscutting concern and could well be a candidate for -refactoring into a <>, which then could -be executed at appropriate joinpoints on the `OrderManager` target. +refactoring into a <>, which could then +be run at appropriate joinpoints on the `OrderManager` target. The Spring Framework's mail support ships with the standard JavaMail implementation. See the relevant javadoc for more information. @@ -5900,7 +5900,7 @@ In all likelihood, you should never need to implement your own. The variants that Spring provides are as follows: * `SyncTaskExecutor`: - This implementation does not execute invocations asynchronously. Instead, each + This implementation does not run invocations asynchronously. Instead, each invocation takes place in the calling thread. It is primarily used in situations where multi-threading is not necessary, such as in simple test cases. * `SimpleAsyncTaskExecutor`: @@ -5972,7 +5972,7 @@ out a set of messages: As you can see, rather than retrieving a thread from the pool and executing it yourself, you add your `Runnable` to the queue. Then the `TaskExecutor` uses its internal rules to -decide when the task gets executed. +decide when the task gets run. To configure the rules that the `TaskExecutor` uses, we expose simple bean properties: @@ -6186,7 +6186,7 @@ invocation: ---- @Scheduled(fixedDelay=5000) public void doSomething() { - // something that should execute periodically + // something that should run periodically } ---- @@ -6199,7 +6199,7 @@ successive start times of each invocation): ---- @Scheduled(fixedRate=5000) public void doSomething() { - // something that should execute periodically + // something that should run periodically } ---- @@ -6212,19 +6212,19 @@ number of milliseconds to wait before the first execution of the method, as the ---- @Scheduled(initialDelay=1000, fixedRate=5000) public void doSomething() { - // something that should execute periodically + // something that should run periodically } ---- If simple periodic scheduling is not expressive enough, you can provide a cron expression. 
-For example, the following executes only on weekdays: +The following example runs only on weekdays: [source,java,indent=0] [subs="verbatim"] ---- @Scheduled(cron="*/5 * * * * MON-FRI") public void doSomething() { - // something that should execute on weekdays only + // something that should run on weekdays only } ---- @@ -6263,7 +6263,7 @@ to a method that returns `void`, as the following example shows: ---- @Async void doSomething() { - // this will be executed asynchronously + // this will be run asynchronously } ---- @@ -6277,7 +6277,7 @@ a legitimate application of the `@Async` annotation: ---- @Async void doSomething(String s) { - // this will be executed asynchronously + // this will be run asynchronously } ---- @@ -6292,7 +6292,7 @@ that returns a value: ---- @Async Future returnSomething(int i) { - // this will be executed asynchronously + // this will be run asynchronously } ---- @@ -6355,7 +6355,7 @@ used when executing a given method. The following example shows how to do so: ---- @Async("otherExecutor") void doSomething(String s) { - // this will be executed asynchronously by "otherExecutor" + // this will be run asynchronously by "otherExecutor" } ---- @@ -6499,9 +6499,9 @@ various behaviors: ---- Finally, the `keep-alive` setting determines the time limit (in seconds) for which threads -may remain idle before being terminated. If there are more than the core number of threads +may remain idle before being stopped. If there are more than the core number of threads currently in the pool, after waiting this amount of time without processing a task, excess -threads get terminated. A time value of zero causes excess threads to terminate +threads get stopped. A time value of zero causes excess threads to stop immediately after executing a task without remaining follow-up work in the task queue. The following example sets the `keep-alive` value to two minutes: @@ -6539,7 +6539,7 @@ The scheduler is referenced by the outer element, and each individual task includes the configuration of its trigger metadata. In the preceding example, that metadata defines a periodic trigger with a fixed delay indicating the number of milliseconds to wait after each task execution has completed. Another option is -`fixed-rate`, indicating how often the method should be executed regardless of how long +`fixed-rate`, indicating how often the method should be run regardless of how long any previous execution takes. Additionally, for both `fixed-delay` and `fixed-rate` tasks, you can specify an 'initial-delay' parameter, indicating the number of milliseconds to wait before the first execution of the method. For more control, you can instead provide a `cron` attribute. @@ -6786,17 +6786,17 @@ https://en.wikipedia.org/wiki/Cache_(computing)#The_difference_between_buffer_an At its core, the cache abstraction applies caching to Java methods, thus reducing the number of executions based on the information available in the cache. That is, each time a targeted method is invoked, the abstraction applies a caching behavior that checks -whether the method has been already executed for the given arguments. If it has been -executed, the cached result is returned without having to execute the actual method. -If the method has not been executed, then it is executed, and the result is cached and +whether the method has been already invoked for the given arguments. If it has been +invoked, the cached result is returned without having to invoke the actual method. 
+If the method has not been invoked, then it is invoked, and the result is cached and returned to the user so that, the next time the method is invoked, the cached result is -returned. This way, expensive methods (whether CPU- or IO-bound) can be executed only +returned. This way, expensive methods (whether CPU- or IO-bound) can be invoked only once for a given set of parameters and the result reused without having to actually -execute the method again. The caching logic is applied transparently without any +invoke the method again. The caching logic is applied transparently without any interference to the invoker. IMPORTANT: This approach works only for methods that are guaranteed to return the same -output (result) for a given input (or arguments) no matter how many times it is executed. +output (result) for a given input (or arguments) no matter how many times it is invoked. The caching abstraction provides other cache-related operations, such as the ability to update the content of the cache or to remove one or all entries. These are useful if @@ -6854,7 +6854,7 @@ For caching declaration, Spring's caching abstraction provides a set of Java ann As the name implies, you can use `@Cacheable` to demarcate methods that are cacheable -- that is, methods for which the result is stored in the cache so that, on subsequent invocations (with the same arguments), the value in the cache is returned without -having to actually execute the method. In its simplest form, the annotation declaration +having to actually invoke the method. In its simplest form, the annotation declaration requires the name of the cache associated with the annotated method, as the following example shows: @@ -6867,13 +6867,13 @@ example shows: In the preceding snippet, the `findBook` method is associated with the cache named `books`. Each time the method is called, the cache is checked to see whether the invocation has -already been executed and does not have to be repeated. While in most cases, only one +already been run and does not have to be repeated. While in most cases, only one cache is declared, the annotation lets multiple names be specified so that more than one -cache is being used. In this case, each of the caches is checked before executing the +cache is being used. In this case, each of the caches is checked before invoking the method -- if at least one cache is hit, the associated value is returned. NOTE: All the other caches that do not contain the value are also updated, even though -the cached method was not actually executed. +the cached method was not actually invoked. The following example uses `@Cacheable` on the `findBook` method: @@ -7061,13 +7061,13 @@ documentation of your cache provider for more details. [[cache-annotations-cacheable-condition]] ===== Conditional Caching -Sometimes, a method might not be suitable for caching all the time (for example, it -might depend on the given arguments). The cache annotations support such functionality -through the `condition` parameter, which takes a `SpEL` expression that is evaluated to -either `true` or `false`. If `true`, the method is cached. If not, it behaves as if the -method is not cached (that is, the method is executed every time no matter what values are in the cache -or what arguments are used). For example, the following method is cached only -if the argument `name` has a length shorter than 32: +Sometimes, a method might not be suitable for caching all the time (for example, it might +depend on the given arguments). 
The cache annotations support such use cases through the +`condition` parameter, which takes a `SpEL` expression that is evaluated to either `true` +or `false`. If `true`, the method is cached. If not, it behaves as if the method is not +cached (that is, the method is invoked every time no matter what values are in the cache +or what arguments are used). For example, the following method is cached only if the +argument `name` has a length shorter than 32: [source,java,indent=0] [subs="verbatim,quotes"] @@ -7080,8 +7080,8 @@ if the argument `name` has a length shorter than 32: In addition to the `condition` parameter, you can use the `unless` parameter to veto the adding of a value to the cache. Unlike `condition`, `unless` expressions are evaluated -after the method has been called. To expand on the previous example, perhaps we -only want to cache paperback books, as the following example does: +after the method has been invoked. To expand on the previous example, perhaps we only +want to cache paperback books, as the following example does: [source,java,indent=0] [subs="verbatim,quotes"] @@ -7146,7 +7146,7 @@ available to the context so that you can use them for key and conditional comput | `caches` | Root object -| Collection of caches against which the current method is executed +| Collection of caches against which the current method is run | `#root.caches[0].name` | Argument name @@ -7170,7 +7170,7 @@ available to the context so that you can use them for key and conditional comput ==== The `@CachePut` Annotation When the cache needs to be updated without interfering with the method execution, -you can use the `@CachePut` annotation. That is, the method is always executed and its +you can use the `@CachePut` annotation. That is, the method is always invoked and its result is placed into the cache (according to the `@CachePut` options). It supports the same options as `@Cacheable` and should be used for cache population rather than method flow optimization. The following example uses the `@CachePut` annotation: @@ -7184,8 +7184,8 @@ method flow optimization. The following example uses the `@CachePut` annotation: IMPORTANT: Using `@CachePut` and `@Cacheable` annotations on the same method is generally strongly discouraged because they have different behaviors. While the latter causes the -method execution to be skipped by using the cache, the former forces the execution in -order to execute a cache update. This leads to unexpected behavior and, with the exception +method invocation to be skipped by using the cache, the former forces the invocation in +order to run a cache update. This leads to unexpected behavior and, with the exception of specific corner-cases (such as annotations having conditions that exclude them from each other), such declarations should be avoided. Note also that such conditions should not rely on the result object (that is, the `#result` variable), as these are validated up-front to @@ -7222,17 +7222,18 @@ Note that the framework ignores any key specified in this scenario as it does no (the entire cache is evicted, not only one entry). You can also indicate whether the eviction should occur after (the default) or before -the method executes by using the `beforeInvocation` attribute. The former provides the +the method is invoked by using the `beforeInvocation` attribute. The former provides the same semantics as the rest of the annotations: Once the method completes successfully, -an action (in this case, eviction) on the cache is executed. 
If the method does not -execute (as it might be cached) or an exception is thrown, the eviction does not occur. +an action (in this case, eviction) on the cache is run. If the method does not +run (as it might be cached) or an exception is thrown, the eviction does not occur. The latter (`beforeInvocation=true`) causes the eviction to always occur before the method is invoked. This is useful in cases where the eviction does not need to be tied to the method outcome. -Note that `void` methods can be used with `@CacheEvict` - as the methods act as a trigger, -the return values are ignored (as they do not interact with the cache). This is not the case -with `@Cacheable` which adds or updates data into the cache and, thus, requires a result. +Note that `void` methods can be used with `@CacheEvict` - as the methods act as a +trigger, the return values are ignored (as they do not interact with the cache). This is +not the case with `@Cacheable` which adds data to the cache or updates data in the cache +and, thus, requires a result. [[cache-annotations-caching]] @@ -7817,7 +7818,7 @@ declarations without having an actual backing cache configured. As this is an in configuration, an exception is thrown at runtime, since the caching infrastructure is unable to find a suitable store. In situations like this, rather than removing the cache declarations (which can prove tedious), you can wire in a simple dummy cache that -performs no caching -- that is, it forces the cached methods to be executed every time. +performs no caching -- that is, it forces the cached methods to be invoked every time. The following example shows how to do so: [source,xml,indent=0] @@ -7839,7 +7840,7 @@ through the `fallbackToNoOpCache` flag, adds a no-op cache for all the definitio handled by the configured cache managers. That is, every cache definition not found in either `jdkCache` or `gemfireCache` (configured earlier in the example) is handled by the no-op cache, which does not store any information, causing the target method to be -executed every time. +invoked every time. diff --git a/src/docs/asciidoc/languages/dynamic-languages.adoc b/src/docs/asciidoc/languages/dynamic-languages.adoc index b255356df28b..8464f980cac1 100644 --- a/src/docs/asciidoc/languages/dynamic-languages.adoc +++ b/src/docs/asciidoc/languages/dynamic-languages.adoc @@ -596,7 +596,7 @@ description: ---- BeanShell is a small, free, embeddable Java source interpreter with dynamic language -features, written in Java. BeanShell dynamically executes standard Java syntax and +features, written in Java. BeanShell dynamically runs standard Java syntax and extends it with common scripting conveniences such as loose types, commands, and method closures like those in Perl and JavaScript. ---- diff --git a/src/docs/asciidoc/testing.adoc b/src/docs/asciidoc/testing.adoc index e4ceb0231323..d9b78d552647 100644 --- a/src/docs/asciidoc/testing.adoc +++ b/src/docs/asciidoc/testing.adoc @@ -1219,7 +1219,7 @@ The following example shows how to use the `@BeforeTransaction` annotation: ---- @BeforeTransaction // <1> void beforeTransaction() { - // logic to be executed before a transaction is started + // logic to be run before a transaction is started } ---- <1> Run this method before a transaction. 
@@ -1229,7 +1229,7 @@ The following example shows how to use the `@BeforeTransaction` annotation: ---- @BeforeTransaction // <1> fun beforeTransaction() { - // logic to be executed before a transaction is started + // logic to be run before a transaction is started } ---- <1> Run this method before a transaction. @@ -1249,7 +1249,7 @@ methods. ---- @AfterTransaction // <1> void afterTransaction() { - // logic to be executed after a transaction has ended + // logic to be run after a transaction has ended } ---- <1> Run this method after a transaction. @@ -1259,7 +1259,7 @@ methods. ---- @AfterTransaction // <1> fun afterTransaction() { - // logic to be executed after a transaction has ended + // logic to be run after a transaction has ended } ---- <1> Run this method after a transaction. @@ -1278,7 +1278,7 @@ it: @Test @Sql({"/test-schema.sql", "/test-user-data.sql"}) // <1> void userTest() { - // execute code that relies on the test schema and test data + // run code that relies on the test schema and test data } ---- <1> Run two scripts for this test. @@ -1289,7 +1289,7 @@ it: @Test @Sql("/test-schema.sql", "/test-user-data.sql") // <1> fun userTest() { - // execute code that relies on the test schema and test data + // run code that relies on the test schema and test data } ---- <1> Run two scripts for this test. @@ -1312,7 +1312,7 @@ configured with the `@Sql` annotation. The following example shows how to use it config = @SqlConfig(commentPrefix = "`", separator = "@@") // <1> ) void userTest() { - // execute code that relies on the test data + // run code that relies on the test data } ---- <1> Set the comment prefix and the separator in SQL scripts. @@ -1323,7 +1323,7 @@ configured with the `@Sql` annotation. The following example shows how to use it @Test @Sql("/test-user-data.sql", config = SqlConfig(commentPrefix = "`", separator = "@@")) // <1> fun userTest() { - // execute code that relies on the test data + // run code that relies on the test data } ---- <1> Set the comment prefix and the separator in SQL scripts. @@ -1352,7 +1352,7 @@ The following example shows how to use `@SqlMergeMode` at the class level. @Test @Sql("/user-test-data-001.sql") void standardUserProfile() { - // execute code that relies on test data set 001 + // run code that relies on test data set 001 } } ---- @@ -1369,7 +1369,7 @@ The following example shows how to use `@SqlMergeMode` at the class level. @Test @Sql("/user-test-data-001.sql") fun standardUserProfile() { - // execute code that relies on test data set 001 + // run code that relies on test data set 001 } } ---- @@ -1388,7 +1388,7 @@ The following example shows how to use `@SqlMergeMode` at the method level. @Sql("/user-test-data-001.sql") @SqlMergeMode(MERGE) // <1> void standardUserProfile() { - // execute code that relies on test data set 001 + // run code that relies on test data set 001 } } ---- @@ -1405,7 +1405,7 @@ The following example shows how to use `@SqlMergeMode` at the method level. @Sql("/user-test-data-001.sql") @SqlMergeMode(MERGE) // <1> fun standardUserProfile() { - // execute code that relies on test data set 001 + // run code that relies on test data set 001 } } ---- @@ -1430,7 +1430,7 @@ annotation. The following example shows how to declare an SQL group: @Sql("/test-user-data.sql") )} void userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } ---- <1> Declare a group of SQL scripts. @@ -1443,7 +1443,7 @@ annotation. 
The following example shows how to declare an SQL group: Sql("/test-schema.sql", config = SqlConfig(commentPrefix = "`")), Sql("/test-user-data.sql")) fun userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } ---- <1> Declare a group of SQL scripts. @@ -1610,7 +1610,7 @@ example shows how to use it: ---- @Timed(millis = 1000) // <1> public void testProcessWithOneSecondTimeout() { - // some logic that should not take longer than 1 second to execute + // some logic that should not take longer than 1 second to run } ---- <1> Set the time period for the test to one second. @@ -1620,7 +1620,7 @@ example shows how to use it: ---- @Timed(millis = 1000) // <1> fun testProcessWithOneSecondTimeout() { - // some logic that should not take longer than 1 second to execute + // some logic that should not take longer than 1 second to run } ---- <1> Set the time period for the test to one second. @@ -1637,7 +1637,7 @@ before failing. ===== `@Repeat` `@Repeat` indicates that the annotated test method must be run repeatedly. The number of -times that the test method is to be executed is specified in the annotation. +times that the test method is to be run is specified in the annotation. The scope of execution to be repeated includes execution of the test method itself as well as any setting up or tearing down of the test fixture. The following example shows @@ -1889,7 +1889,7 @@ example, you can create a custom `@EnabledOnMac` annotation as follows: ===== `@DisabledIf` `@DisabledIf` is used to signal that the annotated JUnit Jupiter test class or test -method is disabled and should not be executed if the supplied `expression` evaluates to +method is disabled and should not be run if the supplied `expression` evaluates to `true`. Specifically, if the expression evaluates to `Boolean.TRUE` or a `String` equal to `true` (ignoring case), the test is disabled. When applied at the class level, all test methods within that class are automatically disabled as well. @@ -2241,7 +2241,7 @@ Spring test suite for further information and examples of various implementation ===== `TestContext` -`TestContext` encapsulates the context in which a test is executed (agnostic of the +`TestContext` encapsulates the context in which a test is run (agnostic of the actual testing framework in use) and provides context management and caching support for the test instance for which it is responsible. The `TestContext` also delegates to a `SmartContextLoader` to load an `ApplicationContext` if requested. @@ -3696,7 +3696,7 @@ Furthermore, it is sometimes necessary to resolve active profiles for tests programmatically instead of declaratively -- for example, based on: * The current operating system. -* Whether tests are being executed on a continuous integration build server. +* Whether tests are being run on a continuous integration build server. * The presence of certain environment variables. * The presence of custom class-level annotations. * Other concerns. @@ -4363,7 +4363,7 @@ faster. ==== The Spring TestContext framework stores application contexts in a static cache. This means that the context is literally stored in a `static` variable. In other words, if -tests execute in separate processes, the static cache is cleared between each test +tests run in separate processes, the static cache is cleared between each test execution, which effectively disables the caching mechanism. 
To benefit from the caching mechanism, all tests must run within the same process or test @@ -4385,7 +4385,7 @@ alternative, you can set the same property programmatically by using the `SpringProperties` API. Since having a large number of application contexts loaded within a given test suite can -cause the suite to take an unnecessarily long time to execute, it is often beneficial to +cause the suite to take an unnecessarily long time to run, it is often beneficial to know exactly how many contexts have been loaded and cached. To view the statistics for the underlying context cache, you can set the log level for the `org.springframework.test.context.cache` logging category to `DEBUG`. @@ -5086,7 +5086,7 @@ JUnit Jupiter's `@BeforeAll` or `@AfterAll` and methods annotated with TestNG's `@BeforeSuite`, `@AfterSuite`, `@BeforeClass`, or `@AfterClass` — are _not_ run within a test-managed transaction. -If you need to execute code in a suite-level or class-level lifecycle method within a +If you need to run code in a suite-level or class-level lifecycle method within a transaction, you may wish to inject a corresponding `PlatformTransactionManager` into your test class and then use that with a `TransactionTemplate` for programmatic transaction management. @@ -5275,7 +5275,7 @@ for further details. [[testcontext-tx-before-and-after-tx]] ===== Running Code Outside of a Transaction -Occasionally, you may need to execute certain code before or after a transactional test +Occasionally, you may need to run certain code before or after a transactional test method but outside the transactional context -- for example, to verify the initial database state prior to running your test or to verify expected transactional commit behavior after your test runs (if the test was configured to commit the transaction). @@ -5343,7 +5343,7 @@ following example shows the relevant annotations: @AfterEach void tearDownWithinTransaction() { - // execute "tear down" logic within the transaction + // run "tear down" logic within the transaction } @AfterTransaction @@ -5381,7 +5381,7 @@ following example shows the relevant annotations: @AfterEach fun tearDownWithinTransaction() { - // execute "tear down" logic within the transaction + // run "tear down" logic within the transaction } @AfterTransaction @@ -5521,7 +5521,7 @@ The following example shows matching methods for JPA: ==== Executing SQL Scripts When writing integration tests against a relational database, it is often beneficial to -execute SQL scripts to modify the database schema or insert test data into tables. The +run SQL scripts to modify the database schema or insert test data into tables. The `spring-jdbc` module provides support for _initializing_ an embedded or existing database by executing SQL scripts when the Spring `ApplicationContext` is loaded. See <> and @@ -5530,7 +5530,7 @@ embedded database>> for details. Although it is very useful to initialize a database for testing _once_ when the `ApplicationContext` is loaded, sometimes it is essential to be able to modify the -database _during_ integration tests. The following sections explain how to execute SQL +database _during_ integration tests. The following sections explain how to run SQL scripts programmatically and declaratively during integration tests. [[testcontext-executing-sql-programmatically]] @@ -5546,7 +5546,7 @@ integration test methods. `ScriptUtils` provides a collection of static utility methods for working with SQL scripts and is mainly intended for internal use within the framework. 
However, if you
-require full control over how SQL scripts are parsed and executed, `ScriptUtils` may suit
+require full control over how SQL scripts are parsed and run, `ScriptUtils` may suit
 your needs better than some of the other alternatives described later. See the
 {api-spring-framework}/jdbc/datasource/init/ScriptUtils.html[javadoc] for individual
 methods in `ScriptUtils` for further details.
@@ -5560,10 +5560,10 @@ default value. See the
 {api-spring-framework}/jdbc/datasource/init/ResourceDatabasePopulator.html[javadoc] for
 details on default values. To run the scripts configured in a
 `ResourceDatabasePopulator`, you can invoke either the `populate(Connection)` method to
-execute the populator against a `java.sql.Connection` or the `execute(DataSource)` method
-to execute the populator against a `javax.sql.DataSource`. The following example
+run the populator against a `java.sql.Connection` or the `execute(DataSource)` method
+to run the populator against a `javax.sql.DataSource`. The following example
 specifies SQL scripts for a test schema and test data, sets the statement separator to
-`@@`, and executes the scripts against a `DataSource`:
+`@@`, and runs the scripts against a `DataSource`:

 [source,java,indent=0,subs="verbatim,quotes",role="primary"]
 .Java
@@ -5576,7 +5576,7 @@ specifies SQL scripts for a test schema and test data, sets the statement separa
 			new ClassPathResource("test-data.sql"));
 		populator.setSeparator("@@");
 		populator.execute(this.dataSource);
-		// execute code that uses the test schema and data
+		// run code that uses the test schema and data
 	}
 ----
 [source,kotlin,indent=0,subs="verbatim,quotes",role="secondary"]
@@ -5590,7 +5590,7 @@ specifies SQL scripts for a test schema and test data, sets the statement separa
 				ClassPathResource("test-data.sql"))
 		populator.setSeparator("@@")
 		populator.execute(dataSource)
-		// execute code that uses the test schema and data
+		// run code that uses the test schema and data
 	}
 ----

@@ -5598,7 +5598,7 @@ Note that `ResourceDatabasePopulator` internally delegates to `ScriptUtils` for parsing
 and running SQL scripts. Similarly, the `executeSqlScript(..)` methods in
 <> and
 <>
-internally use a `ResourceDatabasePopulator` to run SQL scripts. See the javadoc for the
+internally use a `ResourceDatabasePopulator` to run SQL scripts. See the Javadoc for the
 various `executeSqlScript(..)` methods for further details.
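
If you do need the lower-level control that `ScriptUtils` offers, the following sketch
shows one way to run a script directly against a JDBC `Connection`. It is illustrative
only: it reuses the `dataSource` field and the `test-data.sql` script from the example
above and relies on the default comment prefix, statement separator, and block comment
delimiters that `ScriptUtils` applies.

[source,java,indent=0]
----
	@Test
	void databaseTestWithScriptUtils() throws SQLException {
		try (Connection connection = this.dataSource.getConnection()) {
			// parse and run the script with ScriptUtils' default settings
			ScriptUtils.executeSqlScript(connection, new ClassPathResource("test-data.sql"));
		}
		// run code that uses the test data
	}
----

For most tests, the declarative `@Sql` support described in the next section is more
convenient than working with `ScriptUtils` directly.
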
[[testcontext-executing-sql-declaratively]] @@ -5638,13 +5638,13 @@ within a JUnit Jupiter based integration test class: @Test void emptySchemaTest() { - // execute code that uses the test schema without any test data + // run code that uses the test schema without any test data } @Test @Sql({"/test-schema.sql", "/test-user-data.sql"}) void userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } } ---- @@ -5658,13 +5658,13 @@ within a JUnit Jupiter based integration test class: @Test fun emptySchemaTest() { - // execute code that uses the test schema without any test data + // run code that uses the test schema without any test data } @Test @Sql("/test-schema.sql", "/test-user-data.sql") fun userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } } ---- @@ -5701,7 +5701,7 @@ The following example shows how to use `@Sql` as a repeatable annotation with Ja @Sql(scripts = "/test-schema.sql", config = @SqlConfig(commentPrefix = "`")) @Sql("/test-user-data.sql") void userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } ---- [source,kotlin,indent=0,subs="verbatim,quotes",role="secondary"] @@ -5727,7 +5727,7 @@ other JVM languages such as Kotlin. @Sql("/test-user-data.sql") )} void userTest() { - // execute code that uses the test schema and test data + // run code that uses the test schema and test data } ---- [source,kotlin,indent=0,subs="verbatim,quotes",role="secondary"] @@ -5738,14 +5738,14 @@ other JVM languages such as Kotlin. Sql("/test-schema.sql", config = SqlConfig(commentPrefix = "`")), Sql("/test-user-data.sql")) fun userTest() { - // execute code that uses the test schema and test data + // Run code that uses the test schema and test data } ---- [[testcontext-executing-sql-declaratively-script-execution-phases]] ====== Script Execution Phases -By default, SQL scripts are executed before the corresponding test method. However, if +By default, SQL scripts are run before the corresponding test method. However, if you need to run a particular set of scripts after the test method (for example, to clean up database state), you can use the `executionPhase` attribute in `@Sql`, as the following example shows: @@ -5764,7 +5764,7 @@ following example shows: executionPhase = AFTER_TEST_METHOD ) void userTest() { - // execute code that needs the test data to be committed + // run code that needs the test data to be committed // to the database outside of the test's transaction } ---- @@ -5779,7 +5779,7 @@ following example shows: config = SqlConfig(transactionMode = ISOLATED), executionPhase = AFTER_TEST_METHOD)) fun userTest() { - // execute code that needs the test data to be committed + // run code that needs the test data to be committed // to the database outside of the test's transaction } ---- @@ -5857,7 +5857,7 @@ that uses JUnit Jupiter and transactional tests with `@Sql`: void usersTest() { // verify state in test database: assertNumUsers(2); - // execute code that uses the test data... + // run code that uses the test data... } int countRowsInTable(String tableName) { @@ -5884,7 +5884,7 @@ that uses JUnit Jupiter and transactional tests with `@Sql`: fun usersTest() { // verify state in test database: assertNumUsers(2) - // execute code that uses the test data... + // run code that uses the test data... 
} fun countRowsInTable(tableName: String): Int { @@ -5922,7 +5922,7 @@ via `@SqlMergeMode(OVERRIDE)`. Consult the < route = route() === Nested Routes -It is common for a group of router functions to have a shared predicate, for instance a shared -path. -In the example above, the shared predicate would be a path predicate that matches `/person`, -used by three of the routes. -When using annotations, you would remove this duplication by using a type-level `@RequestMapping` - annotation that maps to `/person`. -In WebFlux.fn, path predicates can be shared through the `path` method on the router function builder. -For instance, the last few lines of the example above can be improved in the following way by using nested routes: +It is common for a group of router functions to have a shared predicate, for instance a +shared path. In the example above, the shared predicate would be a path predicate that +matches `/person`, used by three of the routes. When using annotations, you would remove +this duplication by using a type-level `@RequestMapping` annotation that maps to +`/person`. In WebFlux.fn, path predicates can be shared through the `path` method on the +router function builder. For instance, the last few lines of the example above can be +improved in the following way by using nested routes: [source,java,indent=0,subs="verbatim,quotes",role="primary"] .Java @@ -848,7 +847,7 @@ The following example shows how to do so: ---- The preceding example demonstrates that invoking the `next.handle(ServerRequest)` is optional. -We allow only the handler function to be executed when access is allowed. +We only let the handler function be run when access is allowed. Besides using the `filter` method on the router function builder, it is possible to apply a filter to an existing router function via `RouterFunction.filter(HandlerFilterFunction)`. diff --git a/src/docs/asciidoc/web/webflux-websocket.adoc b/src/docs/asciidoc/web/webflux-websocket.adoc index 435e891fff6a..4ad5281f8fa9 100644 --- a/src/docs/asciidoc/web/webflux-websocket.adoc +++ b/src/docs/asciidoc/web/webflux-websocket.adoc @@ -128,7 +128,7 @@ requirements, the unified flow completes when: * At a chosen point, through the `close` method of `WebSocketSession`. When inbound and outbound message streams are composed together, there is no need to -check if the connection is open, since Reactive Streams signals terminate activity. +check if the connection is open, since Reactive Streams signals end activity. The inbound stream receives a completion or error signal, and the outbound stream receives a cancellation signal. @@ -369,7 +369,7 @@ WebSocket options when running on Tomcat: @Bean fun webSocketService(): WebSocketService { val strategy = TomcatRequestUpgradeStrategy().apply { - setMaxSessionIdleTimeout(0L) + setMaxSessionIdleTimeout(0L) } return HandshakeWebSocketService(strategy) } diff --git a/src/docs/asciidoc/web/webflux.adoc b/src/docs/asciidoc/web/webflux.adoc index a497d30510a6..9f4da0327731 100644 --- a/src/docs/asciidoc/web/webflux.adoc +++ b/src/docs/asciidoc/web/webflux.adoc @@ -101,7 +101,7 @@ operations on the output, but you need to adapt the output for use with another Whenever feasible (for example, annotated controllers), WebFlux adapts transparently to the use of RxJava or another reactive library. See <> for more details. 
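
To illustrate that adaptation, the following sketch (the controller, mapping, and values
are made up here, and RxJava is assumed to be on the classpath) returns an RxJava type
from an annotated controller and lets WebFlux adapt it:

[source,java,indent=0]
----
	@RestController
	public class NumbersController {

		@GetMapping("/numbers")
		public Flowable<Integer> numbers() {
			// an RxJava return type; WebFlux adapts it without manual conversion to Flux
			return Flowable.just(1, 2, 3);
		}
	}
----
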
-NOTE: In addition to Reactive APIs, WebFlux can also be used with +NOTE: In addition to Reactive APIs, WebFlux can also be used with <> APIs in Kotlin which provides a more imperative style of programming. The following Kotlin code samples will be provided with Coroutines APIs. @@ -218,7 +218,7 @@ For Undertow, Spring WebFlux uses Undertow APIs directly without the Servlet API Performance has many characteristics and meanings. Reactive and non-blocking generally do not make applications run faster. They can, in some cases, (for example, if using the -`WebClient` to execute remote calls in parallel). On the whole, it requires more work to do +`WebClient` to run remote calls in parallel). On the whole, it requires more work to do things the non-blocking way and that can slightly increase the required processing time. The key expected benefit of reactive and non-blocking is the ability to scale with a small, @@ -893,7 +893,7 @@ not meet the stated goals, please let us know. [[webflux-logging-id]] ==== Log Id -In WebFlux, a single request can be executed over multiple threads and the thread ID +In WebFlux, a single request can be run over multiple threads and the thread ID is not useful for correlating log messages that belong to a specific request. This is why WebFlux log messages are prefixed with a request-specific ID by default. @@ -1104,7 +1104,7 @@ many extra convenient options. `DispatcherHandler` processes requests as follows: * Each `HandlerMapping` is asked to find a matching handler, and the first match is used. -* If a handler is found, it is executed through an appropriate `HandlerAdapter`, which +* If a handler is found, it is run through an appropriate `HandlerAdapter`, which exposes the return value from the execution as `HandlerResult`. * The `HandlerResult` is given to an appropriate `HandlerResultHandler` to complete processing by writing to the response directly or by using a view to render. @@ -4022,7 +4022,7 @@ underlying FreeMarker view technology): @Configuration @EnableWebFlux class WebConfig : WebFluxConfigurer { - + override fun configureViewResolvers(registry: ViewResolverRegistry) { registry.freeMarker() } diff --git a/src/docs/asciidoc/web/webmvc-functional.adoc b/src/docs/asciidoc/web/webmvc-functional.adoc index 4b8dc554df25..435fc960812c 100644 --- a/src/docs/asciidoc/web/webmvc-functional.adoc +++ b/src/docs/asciidoc/web/webmvc-functional.adoc @@ -781,7 +781,7 @@ The following example shows how to do so: ---- The preceding example demonstrates that invoking the `next.handle(ServerRequest)` is optional. -We allow only the handler function to be executed when access is allowed. +We only let the handler function be run when access is allowed. Besides using the `filter` method on the router function builder, it is possible to apply a filter to an existing router function via `RouterFunction.filter(HandlerFilterFunction)`. diff --git a/src/docs/asciidoc/web/webmvc.adoc b/src/docs/asciidoc/web/webmvc.adoc index 129fa2827673..6c1cd68586f5 100644 --- a/src/docs/asciidoc/web/webmvc.adoc +++ b/src/docs/asciidoc/web/webmvc.adoc @@ -518,7 +518,7 @@ The `DispatcherServlet` processes requests as follows: information about multipart handling. * An appropriate handler is searched for. If a handler is found, the execution chain associated with the handler (preprocessors, postprocessors, and controllers) is - executed in order to prepare a model or rendering. Alternatively, for annotated + run to prepare a model for rendering. 
Alternatively, for annotated controllers, the response can be rendered (within the `HandlerAdapter`) instead of returning a view. * If a model is returned, the view is rendered. If no model is returned (maybe due to @@ -584,8 +584,8 @@ a principal. Interceptors must implement `HandlerInterceptor` from the `org.springframework.web.servlet` package with three methods that should provide enough flexibility to do all kinds of pre-processing and post-processing: -* `preHandle(..)`: Before the actual handler is executed -* `postHandle(..)`: After the handler is executed +* `preHandle(..)`: Before the actual handler is run +* `postHandle(..)`: After the handler is run * `afterCompletion(..)`: After the complete request has finished The `preHandle(..)` method returns a boolean value. You can use this method to break or @@ -955,7 +955,7 @@ The `SessionLocaleResolver` lets you retrieve `Locale` and `TimeZone` from the session that might be associated with the user's request. In contrast to `CookieLocaleResolver`, this strategy stores locally chosen locale settings in the Servlet container's `HttpSession`. As a consequence, those settings are temporary -for each session and are, therefore, lost when each session terminates. +for each session and are, therefore, lost when each session ends. Note that there is no direct relationship with external session management mechanisms, such as the Spring Session project. This `SessionLocaleResolver` evaluates and @@ -4615,7 +4615,7 @@ TIP: Spring MVC supports Reactor and RxJava through the `spring-core`, which lets it adapt from multiple reactive libraries. For streaming to the response, reactive back pressure is supported, but writes to the -response are still blocking and are executed on a separate thread through the +response are still blocking and are run on a separate thread through the <> `TaskExecutor`, to avoid blocking the upstream source (such as a `Flux` returned from `WebClient`). By default, `SimpleAsyncTaskExecutor` is used for the blocking writes, but that is not @@ -5432,7 +5432,7 @@ The following example shows how to achieve the same configuration in XML: This is a shortcut for defining a `ParameterizableViewController` that immediately forwards to a view when invoked. You can use it in static cases when there is no Java controller -logic to execute before the view generates the response. +logic to run before the view generates the response. The following example of Java configuration forwards a request for `/` to a view called `home`: @@ -5976,7 +5976,7 @@ hook of the Spring `ApplicationContext`, as the following example shows: ---- @Component class MyPostProcessor : BeanPostProcessor { - + override fun postProcessBeforeInitialization(bean: Any, name: String): Any { // ... }