🚀 Feature Request
I would appreciate a driver that takes a function closing over multiple property definitions and tests each given property in a round-robin, allowing side effects from each property check to bleed into the next one.
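To make the shape of the request concrete, here is a minimal sketch of what such a driver could look like. Everything here is hypothetical (`roundRobinDriver`, the `property` registrar, the `numRuns` option are all made-up names, not an existing fast-check API), and generator plumbing is omitted; the point is only the interleaved execution order and the shared closure state.

```javascript
// Hypothetical round-robin driver (not an existing API).
// `define` receives a `property` registrar; registration is deferred, then
// run 0 executes every property once in order, then run 1, and so on, so
// state written by one property body is visible to the next.
function roundRobinDriver(define, { numRuns = 10 } = {}) {
  const properties = [];
  define((name, body) => properties.push({ name, body }));
  return (async () => {
    for (let run = 0; run < numRuns; run++) {
      for (const { name, body } of properties) {
        try {
          await body(run);
        } catch (err) {
          throw new Error(`"${name}" failed on run ${run}: ${err}`);
        }
      }
    }
  })();
}

// Demo: two properties sharing state through the enclosing closure.
const order = [];
roundRobinDriver((property) => {
  property('first', (run) => { order.push(`a${run}`); });
  property('second', (run) => { order.push(`b${run}`); });
}, { numRuns: 2 }).then(() => console.log(order.join(','))); // a0,b0,a1,b1
```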
Motivation
This would allow us to re-use values from slow generators and re-use intermediate values that take a long time to produce. It would primarily be useful in integration tests.
Example
Suppose we are designing a REST API for a to-do list app (or, we could be testing its data access layer). We want to test the following properties of the task resource:
- `GET` after `POST` returns the same field values from the request body (or DTO). (2 requests)
- `POST` creates a unique task on every request. (4 requests)
- `PUT` is idempotent. (5 requests)
- `GET` after `DELETE` results in a miss. (3 requests)
Checking even 10 examples of each property will require 140(!) requests to the backend.
With this proposed feature, on the other hand:
```js
the_new_feature(() => {
  const genFirstPostRequest = arbitraryCreateTaskDto();
  let firstPostRequest;
  let firstTaskId;
  let firstTask;

  property('GET after POST returns same field values', [genFirstPostRequest], async (firstRequest) => {
    firstPostRequest = firstRequest;
    firstTaskId = await myApi.createTask(firstRequest);
    firstTask = await myApi.getTask(firstTaskId);
    expect(firstTask).toMatchObject(firstPostRequest);
    // 3 requests
  });

  property('POST creates a unique task on every request', async () => {
    const secondTaskId = await myApi.createTask(firstPostRequest);
    const secondTask = await myApi.getTask(secondTaskId);
    expect(firstTask.id).not.toEqual(secondTask.id);
    // 2 requests
  });

  property('PUT is idempotent', [arbitraryPutTaskDto({ id: firstTaskId })], async (putRequestData) => {
    // Yes, this would seem to require chaining the POST data arbitrary into the PUT data arbitrary.
    // I think there is a way around that but I'll leave it alone for now.
    await myApi.updateTask(putRequestData);
    const updatedTask = await myApi.getTask(firstTaskId);
    await myApi.updateTask(putRequestData);
    const reUpdatedTask = await myApi.getTask(firstTaskId);
    expect(reUpdatedTask).toEqual(updatedTask);
    // 4 requests
  });

  property('GET after DELETE causes a miss', async () => {
    await myApi.deleteTask(firstTaskId);
    expect(await myApi.getTask(firstTaskId)).toBeNull();
    // 2 requests
  });
});
```
(I apologize for any syntax or semantic errors; I haven't checked this code, but I think it is good enough to get the idea across.)
Now we are testing the same four properties in 110 requests, an improvement of about 20% -- and that was just from re-using a single request!
We also save time generating the test data, which can make a significant difference in large enterprises with complex data types, as you discovered when profiling #2650.