
Enhancement: better mesh support with weighted averaging of random field values #249

Michael-P-Crisp opened this issue Aug 6, 2022 · 2 comments

Michael-P-Crisp commented Aug 6, 2022

Hi, I'm running into issues where random field values for mesh elements are evaluated only at the element centroid, and for larger elements this single value isn't representative of the whole element.

I've implemented a solution in my script like the one below: srf calculates the values at the nodes, and srf_centroid calculates them at the centroids. The result is a weighted average (50% centroid, 50% element nodes). The problem is that it's not very generalised to different distributions; for example, a lognormal random field should use the geometric average rather than the arithmetic average.

import numpy as np
X1 = srf(seed=seed, store=False)[self.connectivity]  # field values at the element nodes: (n_elements, n_nodes_per_element)
X2 = np.atleast_2d(srf_centroid(seed=seed, store=False)).T  # field values at the element centroids: (n_elements, 1)
# weighted average with the centroid weighted by the number of element nodes,
# i.e. 50% centroid value and 50% mean of the node values
X = np.hstack((X1, X2 * X1.shape[1])).sum(1) / (X1.shape[1] * 2)
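
(For a lognormal field, a minimal sketch of the same 50/50 weighting done as a geometric mean, i.e. an arithmetic mean in log space, could look like the following; it reuses srf, srf_centroid and self.connectivity from above and assumes all field values are strictly positive.)

# same 50/50 node/centroid weighting, but as a geometric mean
# (arithmetic mean of the logs), the natural choice for a lognormal field
logX1 = np.log(srf(seed=seed, store=False)[self.connectivity])
logX2 = np.log(np.atleast_2d(srf_centroid(seed=seed, store=False)).T)
n_nodes = logX1.shape[1]
X_geo = np.exp(np.hstack((logX1, logX2 * n_nodes)).sum(1) / (2 * n_nodes))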

I was wondering whether something like this could be implemented in a more robust and generalised way in the package itself? It's fairly efficient, since a lot of elements share nodes. I imagine there wouldn't be much need for a point_volumes input, since some variance reduction through local averaging is already occurring?

LSchueler (Member) commented

Hi Michael-P-Crisp,

Depending on the application you have in mind, you could have a look at the already implemented coarse-graining procedure. For the application it is intended for, it is the mathematically "correct" way of doing upscaling.
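
(As a rough sketch of how that looks in practice, assuming the element centroids and areas have already been computed from the FEM mesh, one could pass the element areas as point_volumes; the model parameters below are only placeholders.)

import numpy as np
import gstools as gs

# toy element data standing in for a real FEM mesh:
# centroids with shape (2, n_elements) and element areas in m^2
centroids = np.array([[0.5, 1.5, 2.5], [0.5, 0.5, 0.5]])
areas = np.array([0.32, 0.45, 0.28])

model = gs.Exponential(dim=2, var=1.0, len_scale=3.0)
srf = gs.SRF(model, seed=20220806)

# passing the element areas as point_volumes activates the built-in
# coarse-graining (variance reduction) at each evaluation point
field = srf(centroids, point_volumes=areas)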

Did you check, using a high-resolution reference field and a low-resolution field, whether your solution gives accurate estimates of the reference when applied to the low-resolution field?

I think my approach would be to subdivide the given mesh and then calculate the SRF on that. We have a good binding to PyVista, which has an example of how to apply such a subdivision (a rough sketch follows below). Of course, this approach would be much more computationally demanding.
We have decided against implementing mesh manipulation routines, as there are already excellent Python packages out there for exactly this kind of task, and we treat it as a preprocessing step before using GSTools.
But scaling SRFs (whether they were generated by GSTools or loaded from somewhere else doesn't matter) is of course a completely different topic, and that would fit into GSTools very nicely.
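
(A minimal sketch of the subdivision route mentioned above, assuming a triangulated PyVista PolyData surface in the x-y plane as a stand-in for the real FEM mesh; the exact call signatures should be double-checked against the PyVista and GSTools docs.)

import pyvista as pv
import gstools as gs

# toy triangulated surface standing in for the real FEM mesh
mesh = pv.Plane(i_size=10, j_size=10, i_resolution=20, j_resolution=20).triangulate()

# linear subdivision splits every triangle into four smaller ones,
# refining the mesh before the field is evaluated (the expensive part)
fine = mesh.subdivide(nsub=2, subfilter="linear")

model = gs.Exponential(dim=2, var=1.0, len_scale=3.0)
srf = gs.SRF(model, seed=20220806)

# evaluate the field on the refined mesh points; "xy" selects the in-plane
# axes of the 3D mesh points for the 2D model
srf.mesh(fine, points="points", direction="xy", name="srf")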

Maybe it would help me understand your problem better if you could share a bit of background on your problem/application?

Michael-P-Crisp (Author) commented

Hi LSchueler,

Thank you for the quick reply. I'm working on plastic deformation of solids under loading in a finite element analysis program; this is for civil engineering work. I'm hesitant to increase the mesh resolution, since this can impact the solver time.

I've done a comparison between values generated at the centroids (with coarse_graining and appropriate areas of the mesh elements) and an equivalent mesh field derived from averaging the point statistics within each element (from a 0.1 m grid, much finer than the element sizes).

The correlation length is 3 m in both directions, using an exponential model (no nugget) and a lognormal distribution (I took the log of the values here for better plotting). A comparison of the two cases is given in the image below. The mean and standard deviation of the two are very similar, but you can see from the plot that the mesh with the grid averaging has an overall smoother appearance, with more continuous values across elements compared to the centroid values.

[image: side-by-side plot of the grid-averaged element field and the centroid-based field]
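
(For reference, a minimal sketch of this kind of per-element averaging of a fine reference field, on a toy triangular mesh; the real node coordinates and connectivity would come from the FEM model, and a lognormal field would be averaged geometrically rather than arithmetically.)

import numpy as np
import gstools as gs
from matplotlib.tri import Triangulation

# toy triangular mesh standing in for the FEM mesh: node coordinates and connectivity
nodes = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 4.0], [2.0, 2.0]])
elems = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])
finder = Triangulation(nodes[:, 0], nodes[:, 1], elems).get_trifinder()

model = gs.Exponential(dim=2, var=1.0, len_scale=3.0)
srf = gs.SRF(model, seed=20220806)

# reference field on a fine 0.1 m grid covering the mesh
gx, gy = np.meshgrid(np.arange(0.0, 4.0, 0.1), np.arange(0.0, 4.0, 0.1))
fine = srf((gx.ravel(), gy.ravel()))

# assign each fine-grid point to its containing element and average per element;
# the result can then be compared with the centroid + point_volumes field
idx = finder(gx.ravel(), gy.ravel())
inside = idx >= 0
counts = np.bincount(idx[inside], minlength=len(elems))
elem_avg = np.bincount(idx[inside], weights=fine[inside], minlength=len(elems)) / counts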

A discrepancy, of course, is that the mesh uses irregular triangles while the coarse_graining assumes regular squares. However, I'm wondering whether the exponential model could be the reason for the slight difference? The page you linked to indicates that it is for a Gaussian covariance function.

I've noticed the field's variance when using point_volumes is higher than expected compared to the plot below (variance reduction as a function of the ratio of element length to correlation length). The dashed line is a Gaussian covariance, and the solid line is an exponential covariance. Using a correlation length of 0.4 m, I'm getting a reduction of 80% and 50% when using point_volumes of 0.4^2 and 0.8^2 respectively. That's much closer to, but still higher than, the Gaussian covariance curve (70% and 45%). I've taken the image from the book "Risk Assessment in Geotechnical Engineering", full reference:
Fenton, Gordon A., and D. Vaughan Griffiths. Risk Assessment in Geotechnical Engineering. Vol. 461. New York: John Wiley & Sons, 2008.
These curves are based on the properties of each element being an average value, which is what I gather the coarse_graining is meant to do? I admit I haven't come across the term "upscaling" in this context. The book doesn't use it, although it references a paper which does use it in the title: https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1029/WR026i004p00691

[image: variance reduction factor vs. element size/correlation length; dashed line = Gaussian covariance, solid line = exponential covariance (Fenton & Griffiths, 2008)]
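
(To reproduce such curves as a sanity check, a minimal sketch that numerically evaluates Vanmarcke's 1D variance function gamma(T) = 2/T^2 * integral_0^T (T - tau) * rho(tau) dtau. The exponential/Gaussian correlation forms below are written in terms of the scale of fluctuation theta following the textbook convention, so the exact parameterisation relative to GSTools' length scale is an assumption to double-check.)

import numpy as np
from scipy.integrate import quad

def variance_reduction(rho, T):
    """Vanmarcke's 1D variance function: gamma(T) = 2/T**2 * int_0^T (T - tau) * rho(tau) dtau."""
    integral, _ = quad(lambda tau: (T - tau) * rho(tau), 0.0, T)
    return 2.0 / T**2 * integral

theta = 0.8  # scale of fluctuation [m] (assumed parameterisation)

# Markov (exponential) and Gaussian correlation functions in terms of theta
rho_exp = lambda tau: np.exp(-2.0 * np.abs(tau) / theta)
rho_gau = lambda tau: np.exp(-np.pi * (tau / theta) ** 2)

for T in (0.4, 0.8):  # element side lengths [m]
    print(T, variance_reduction(rho_exp, T), variance_reduction(rho_gau, T))

(For square elements with a separable correlation, the 2D reduction factor is the product of the two 1D factors; for an isotropic exponential model that product is only an approximation, which may contribute to the discrepancy.)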

MuellerSeb added this to the 1.6 milestone on Jun 15, 2023