Setting memory to more than 1 GiB still only allows 1 GiB to be used #3716
Comments
I found a few problems with this issue:

Thanks for reporting this issue, we're going to look into it.
@ibobo This is a great feature request. I'm moving this issue to firebase-tools, where I think the implementation will need to happen.
Any news on this?
@ikusuki Good news - the Google Cloud Functions team recently made a patch to the Node.js runtime that will automatically set the appropriate `--max_old_space_size` value. For example:

```js
const functions = require("firebase-functions");
const v8 = require("v8");

exports.heap = functions
  .runWith({ memory: "8GB" })
  .https.onRequest((request, response) => {
    functions.logger.log("memory", v8.getHeapStatistics());
    response.send(v8.getHeapStatistics());
  });
```

This now returns a correctly sized heap for me.
Please give your functions another deploy and let us know if you are not seeing this on your side.
Hey @ibobo. We need more information to resolve this issue but there hasn't been an update in 7 weekdays. I'm marking the issue as stale and if there are no new updates in the next 3 days I will close it automatically. If you have more information that will help us get to the bottom of this, just add a comment!
Since there haven't been any recent updates here, I am going to close this issue. @ibobo if you're still experiencing this problem and want to continue the discussion just leave a comment here and we are happy to re-open this. |
This doesn't appear to be working for us. We need to manually specify the `max_old_space_size` value via the environment variable.
@scblaze Can you share the Node.js version you are using? I'm on the Node.js 14 runtime, and the heap size does seem to be correctly sized in production.
It's the node16 runtime. This fix should be happening on the backend, right? It should not be related to the firebase-tools or firebase-functions package versions, should it? Also, it shouldn't matter whether I deploy a function by itself or a batch of functions, each of which has different memory requirements. When deploying all the functions in a batch, the correct limit should be applied automatically for each function, right?
I just deployed my function (in #3716 (comment)) with the nodejs16 runtime, and I'm seeing the heap size match the memory limit without having to set the environment variable. Can you share how you are checking the memory size?
This appears to be related to whether I deploy a single function or a batch of functions. Deploying that single 4GB function on its own, I get the expected heap size. Deploying all my functions in one batch (most without a specified limit, or with limits less than 4GB), that specific 4GB function now gets roughly a tenth of what it had before. The code I am using to get the available size runs at the top of the function handler.
@scblaze I think I see what's happening here. To shorten function deploy time, the Firebase CLI re-uses a previously built container to avoid having to build the same container multiple times. I'm guessing that the Node flag for changing the heap size (`--max_old_space_size`) is baked into that re-used container. Let me dig a bit more to confirm.
Hi there, I had the same issue deploying 75 functions with different memory configurations (256MB, 4GB), and it crashed. Regarding your theory @taeold: I deployed all functions with 4GB as a temporary fix, and it works well now; no more out-of-memory errors from Node. So it might confirm that re-using the Docker container causes the wrong `max_old_space_size`, even though the VM has the right amount of memory. We will need to re-open this issue, or open another one, to fix it.
@VladimirKosmalaBAL Thanks for providing another data point. I think we do have the issue as described in the comment above, so I'll go ahead and re-open the issue.
Today, function deployments are made in batches of functions grouped by region. Combined with a source token to re-use built containers across deployments, this gives a meaningful decrease in deploy time and in Cloud Build resource usage.

Unfortunately, this setup has a peculiar bug: Google Cloud Functions bakes the flag that expands the heap size to match the memory configuration of a function (`--max_old_space_size`) into the container itself. That means that if you batch two functions with differing memory configurations (e.g. 256MB vs 4GB), it's guaranteed that one function will have a wrongly configured `--max_old_space_size` flag value (follow #3716 (comment) for how we discovered this issue).

This PR proposes batching functions by region AND `availableMemoryMb` to fix this bug. We do this by refactoring `calculateRegionalChanges` to generate a collection of `Changeset` (previously known as `RegionalChanges`) that sub-divides functions in a region by their memory. In fact, we generalize the approach by allowing arbitrary subdivision by `keyFn` (e.g. `keyFn: (endpoint) => endpoint.availableMemoryMb`), as I anticipate that we will revisit this section of the code once I start working on "codebases".

Fixes #3716
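The batching change described above boils down to a plain grouping step. A hedged sketch follows; `groupBy` and the endpoint objects are illustrative stand-ins, not the actual firebase-tools `Changeset` implementation:

```javascript
// Illustrative sketch of sub-dividing one region's functions into batches
// keyed by memory, so every function in a batch shares the same
// --max_old_space_size baked into its container. Names and object shapes
// here are assumptions, not the real firebase-tools code.
function groupBy(endpoints, keyFn) {
  const groups = new Map();
  for (const endpoint of endpoints) {
    const key = keyFn(endpoint);
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key).push(endpoint);
  }
  return groups;
}

const endpoints = [
  { id: "resize", region: "us-central1", availableMemoryMb: 256 },
  { id: "heap", region: "us-central1", availableMemoryMb: 4096 },
  { id: "thumb", region: "us-central1", availableMemoryMb: 256 },
];

// Batch by memory: the two 256MB functions share a container build,
// and the 4GB function gets its own.
const batches = groupBy(endpoints, (endpoint) => endpoint.availableMemoryMb);
```

Grouping by an arbitrary `keyFn` rather than hard-coding memory matches the PR's stated plan to reuse this subdivision for other keys later.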
Related issues
None.
[REQUIRED] Version info
node: v12.22.1
firebase-functions: 3.15.4
firebase-tools: 9.16.5
firebase-admin: 9.11.1
[REQUIRED] Test case
[REQUIRED] Steps to reproduce
Just try to deploy the above-mentioned function.
[REQUIRED] Expected behavior
The function gets deployed with the indicated memory and can fully use the 8 GiB of memory.
[REQUIRED] Actual behavior
The function gets deployed with the indicated memory, but it won't be able to use it all, since the runtime is configured to only use 1 or 2 GiB.

As stated in the Google Cloud documentation here, you need to set the `NODE_OPTIONS` environment variable to include a `max_old_space_size` value (8192 for 8 GiB), but this is not possible with the current firebase-functions deployment process. I suggest that when a function is configured with more than 1 GiB of memory, the `NODE_OPTIONS` environment variable is set automatically. At this stage, to have it set, I need to go to the Google Cloud Console, edit the function, and manually add the environment variable (or at least check that it was retained) after every deploy.

Were you able to successfully deploy your functions?
Yes.