Working on the TTC KMEHR to FHIR case today, I noticed that the benchmark driver in its reference solution transforms a File into a Resource, rather than a File into a File. This is done so that the "Run" phase of the measurements does not include the time spent saving the model.
To make the results comparable, I decided to make the same change and have the ETL transformation go from an EmfModel to an InMemoryEmfModel. When I did that, however, I noticed a significant slowdown. VisualVM points to the maintenance of the allContents cache:
This wasn't an issue with EmfModel. It turns out that at some point I added code to register CachedContentsAdapters automatically in the initialisation of InMemoryEmfModel. I wonder why I didn't check whether caching was enabled at the time - I can't remember now.
Later on, Sina changed the code to just use setCachingEnabled(true), which performs the same work but is also consistent with the cached flag. This came after a commit in which he fixed EmfModel::setCachingEnabled to add/remove the CachedContentsAdapter itself (as it should have done all along).
Looking at this again, I wonder if we should drop this call altogether from InMemoryEmfModel and just let users decide for themselves whether they want to turn on caching:
// Since 1.6, having CachedContentsAdapter implies cached=true, otherwise it's inconsistent.
setCachingEnabled(true);
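For illustration, the flag/adapter consistency that setCachingEnabled is meant to maintain can be sketched roughly as follows. This is a minimal standalone sketch with made-up names, not the actual Epsilon implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the cached-flag/adapter consistency discussed above.
// All names are hypothetical; this is not Epsilon code.
class SketchModel {
    private boolean cachingEnabled = false;
    private List<String> contentsCache; // stands in for the CachedContentsAdapter

    // Flipping the flag attaches/detaches the cache in the same place,
    // so the flag and the adapter can never drift out of sync.
    public void setCachingEnabled(boolean enabled) {
        if (enabled && contentsCache == null) {
            contentsCache = new ArrayList<>();
        } else if (!enabled) {
            contentsCache = null;
        }
        cachingEnabled = enabled;
    }

    public boolean isCachingEnabled() { return cachingEnabled; }
    public boolean hasCacheAttached()  { return contentsCache != null; }
}
```

Registering the adapter directly (as the old InMemoryEmfModel initialisation did) would be the equivalent of setting up contentsCache without touching cachingEnabled: the model would report caching as disabled while still paying the cache-maintenance cost.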
I think the user should always decide. Otherwise, users will perceive the performance/memory hit and be confused, since they never selected the cached option.