feat: Return failed/successful operations count together with batch operation #18171
Conversation
Operate Test Results: 778 tests, 759 ✅, 16m 48s ⏱️. Results for commit c644d45.
@@ -313,9 +324,52 @@ public List<OperationDto> getOperationsByBatchOperationId(String batchOperationId
    }
  }

  @Override
  public List<BatchOperationDto> enrichBatchEntitiesWithMetadata(
This feels like it would be better in its own component dedicated to data transformation of this type. OperationReader and the opensearch version are pretty large as-is.
You're right. There are some parts in the code around here that don't quite follow the single responsibility principle, but I shouldn't be adding on top of that.
I'll change this tomorrow. Let me know if there's anything else you notice meanwhile.
Thank you!
I moved the data transformation to its own component (DataAggregator), but kept the requests for reading operations in the OperationReader, as I thought that's the most fitting place.
I also changed the requests to use ID batches rather than requesting every ID individually, and to work more with aggregations.
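The batched approach described here boils down to a single search with a terms aggregation over the operation index, instead of one request per batch ID. Roughly, the request shape is (an illustrative query; the index and field names are assumptions, not the actual Operate schema):

```json
POST operations/_search
{
  "size": 0,
  "query": {
    "terms": { "batchOperationId": ["id-1", "id-2", "id-3"] }
  },
  "aggs": {
    "batchId": {
      "terms": { "field": "batchOperationId" },
      "aggs": {
        "state": { "terms": { "field": "state" } }
      }
    }
  }
}
```

Each outer bucket then carries per-batch sub-buckets (e.g. failed vs. completed operation counts), so the metadata for a whole scroll page comes back in one round trip.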
Force-pushed from c644d45 to 454b4fa.
Tasklist Test Results: 549 tests ±0, 544 ✅ ±0, 1h 34m 14s ⏱️ (+2m 35s). Results for commit ea03334, compared against base commit 0d2e6d2. This pull request removes 13 and adds 13 tests; renamed tests count towards both.
Force-pushed from b3f16ef to f4ea6b7.
There are integration test failures in two files; please take a look at those. You can run the integration tests locally with:
mvn clean install -PoperateItStage1
The operateItStage1 tests passed on my local machine now.
      final List<BatchOperationEntity> batchEntities) {

    if (batchEntities.isEmpty()) {
      return new ArrayList<>();
A few suggestions here (these apply to the OpensearchDataAggregator too):
- If there's a chance that batchEntities might be null and not just empty, use CollectionUtils.isEmpty() for the check.
- Return List.of(), which creates an immutable empty list instance. This makes it consistent with the toList() return value below, which also creates an immutable list instance.
- If you do need to create a new concrete list type, consider whether you actually want ArrayList or LinkedList. ArrayList allocates a backing array larger than the initial list size, and has to allocate a new block and copy elements as the list grows. The benefit of ArrayList (and the reason for that overhead) is constant-time indexed access. If your usage only iterates over the whole list, or adds/removes elements at the end, consider a LinkedList. It's a very minor optimization, but as we start scaling it can end up making a difference.
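The first two suggestions above could look like this in the enrichment method (a minimal sketch; the entity/DTO types are stand-ins for the real Operate classes, and the null check is written out by hand rather than via Spring's CollectionUtils.isEmpty() to keep the snippet dependency-free):

```java
import java.util.List;

class EnrichmentSketch {

  // Stand-ins for the real Operate classes; names are illustrative only.
  record BatchOperationEntity(String id) {}
  record BatchOperationDto(String id) {}

  static List<BatchOperationDto> enrich(final List<BatchOperationEntity> batchEntities) {
    // Covers null as well as empty input -- CollectionUtils.isEmpty() does the same check.
    if (batchEntities == null || batchEntities.isEmpty()) {
      // List.of() returns a shared immutable instance, consistent with toList() below.
      return List.of();
    }
    return batchEntities.stream()
        .map(e -> new BatchOperationDto(e.id()))
        .toList(); // Stream.toList() also produces an unmodifiable list
  }
}
```

Note that List.of() reuses one shared empty instance, so there is no per-call allocation at all on the empty path.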
@Conditional(ElasticsearchCondition.class)
@Component
public class ElasticsearchDataAggregator implements DataAggregator {
Suggestion: Would it make sense to make DataAggregator an abstract class instead of an interface, and put the common code that checks for null and creates the DTOs in it?
That definitely makes sense and I should have seen that myself to be honest. I changed that.
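The abstract-class shape being agreed on here is essentially the template-method pattern; a rough sketch (class and method names are stand-ins for the real DataAggregator hierarchy):

```java
import java.util.List;

// Template-method shape: the base class owns the null/empty handling and DTO
// assembly, and each datastore subclass only supplies the backend-specific query.
abstract class DataAggregatorSketch {

  record BatchOperationDto(String id, int failedCount, int completedCount) {}

  final List<BatchOperationDto> enrichBatchEntitiesWithMetadata(final List<String> batchIds) {
    if (batchIds == null || batchIds.isEmpty()) {
      return List.of();
    }
    return fetchMetadata(batchIds); // backend-specific (Elasticsearch or OpenSearch)
  }

  // Implemented by the Elasticsearch/OpenSearch variants.
  protected abstract List<BatchOperationDto> fetchMetadata(List<String> batchIds);
}
```

Making the shared method final keeps the null/empty contract identical across both backends.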
final Terms batchIdAggregation =
    operationReader.getOperationsAggregatedByBatchOperationId(idList, metadataAggregation);
for (final Terms.Bucket bucket : batchIdAggregation.getBuckets()) {
Is there any chance that batchIdAggregation can be null? Or batchIdAggregation.getBuckets() return null? (Same for OpensearchDataAggregator)
I looked into this, and no, this should not happen. When sending requests with aggregations, the response should always contain the aggregations. They may be empty (with docCount == 0), but they shouldn't be null.
Same for the buckets.
I'll leave this as-is for now, but will keep it in mind as a case for the tests to verify.
Sounds good, just wanted to double-check, as we have had issues with random NPEs in edge cases with customers.
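If we ever did want to guard against that edge case anyway, the defensive pattern is cheap: treat a missing aggregation or bucket list as "no results". A sketch with stand-in types (the real code iterates Terms.getBuckets() from the search response):

```java
import java.util.List;

// Stand-ins for the search client's aggregation types; illustrative only.
class AggregationGuardSketch {

  record Bucket(String key, long docCount) {}
  record Terms(List<Bucket> buckets) {}

  static long totalDocCount(final Terms batchIdAggregation) {
    // Cheap guard against the NPE edge case: a null aggregation or a null
    // bucket list is treated as an empty result instead of being dereferenced.
    if (batchIdAggregation == null || batchIdAggregation.buckets() == null) {
      return 0L;
    }
    long total = 0L;
    for (final Bucket bucket : batchIdAggregation.buckets()) {
      total += bucket.docCount();
    }
    return total;
  }
}
```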
@@ -28,6 +28,8 @@ public class BatchOperationDto {
  private Integer instancesCount = 0;
  private Integer operationsTotalCount = 0;
  private Integer operationsFinishedCount = 0;
  private Integer failedOperationsCount = 0;
hashCode() needs to be updated as well with these two new fields
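Updating hashCode() for the new fields usually means extending the Objects.hash(...) call (a sketch showing only a subset of the DTO's fields; the second new field name is assumed from the commit message, not confirmed):

```java
import java.util.Objects;

class BatchOperationDtoHashSketch {
  private Integer operationsTotalCount = 0;
  private Integer operationsFinishedCount = 0;
  // New fields from this PR; "completedOperationsCount" is an assumed name.
  private Integer failedOperationsCount = 0;
  private Integer completedOperationsCount = 0;

  @Override
  public int hashCode() {
    // Both new fields must participate, so that objects considered equal
    // by equals() keep producing equal hashes.
    return Objects.hash(
        operationsTotalCount,
        operationsFinishedCount,
        failedOperationsCount,
        completedOperationsCount);
  }
}
```

Forgetting this breaks the equals/hashCode contract: two equal DTOs could land in different HashMap buckets.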
@@ -199,6 +219,16 @@ public boolean equals(Object o) {
        : that.operationsFinishedCount != null) {
      return false;
    }
    if (failedOperationsCount != null
IntelliJ had a suggestion to use Objects.equals for a lot of the fields in here (not just the new ones), which would make the code a bit simpler.
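Concretely, Objects.equals(a, b) is null-safe, so the verbose "x != null ? !x.equals(that.x) : that.x != null" chains collapse to one call per field. A sketch with two representative fields (field names taken from the diff above):

```java
import java.util.Objects;

class BatchOperationEqualsSketch {
  private Integer operationsFinishedCount = 0;
  private Integer failedOperationsCount = 0;

  @Override
  public boolean equals(final Object o) {
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    final BatchOperationEqualsSketch that = (BatchOperationEqualsSketch) o;
    // One null-safe comparison per field instead of a ternary null dance.
    return Objects.equals(operationsFinishedCount, that.operationsFinishedCount)
        && Objects.equals(failedOperationsCount, that.failedOperationsCount);
  }

  @Override
  public int hashCode() {
    return Objects.hash(operationsFinishedCount, failedOperationsCount);
  }
}
```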
package io.camunda.operate.webapp.transform;

/*
 * Copyright Camunda Services GmbH
I think this might be the old license header, you might want to check to see if this is supposed to be in here.
I left a few minor suggestions but approved otherwise. I'm not sure why CodeQL is failing a check on a file you didn't change; that particular issue was scanned over a month ago. You may need to rebase from main and re-run all the checks.
Force-pushed from d416b6a to 51a2e98.
LGTM, don't forget to rebase from main if you haven't yet, to pick up the repository rename changes.
Force-pushed from 4619361 to 07317cb.
Commits:
- added method to get failed/successful operation count to OperationReader; added fields for operationsFailedCount and operationsCompletedCount to BatchOperationDto; changed API call to add failed/successful operation count to response; closes camunda/operate/6294
- …s count; added method to get failed/successful operation count to OperationReader (conflicts: operate/webapp/src/main/java/io/camunda/operate/webapp/opensearch/reader/OpensearchOperationReader.java)
- moved data transformation/aggregation to new class DataTransformer; changed search requests and interfaces to use nested aggregations rather than multiple ID requests to elasticsearch/opensearch (conflicts: operate/webapp/src/main/java/io/camunda/operate/webapp/opensearch/reader/OpensearchOperationReader.java)
- due to missing DataAggregator Bean
- fixed BatchOperations without finished/failed operations getting ignored; fixed initial batch operation sorting getting messed up by metadata addition
- and refactored equals method
- moved duplicated code to abstract class; removed old license header; adjusted extending classes of DataAggregator
Force-pushed from 07317cb to 3f88f8d.
Description
Added a feature that returns the failed and successfully completed operation counts for batch operations, so they can be displayed in the UI.
Notes for reviewer
I decided to fetch the additional data from the Operation index individually instead of for all currently active operations (as suggested in the issue), since the batch operations are loaded progressively while scrolling.
Performance could probably be improved by requesting information for all the operations that are being loaded in one scroll at once. I'd like to hear opinions on that (and on whether it should be done before we merge the feature or if we can improve that later).
Related issues
closes https://github.com/camunda/operate/issues/6294