
Setting memory to more than 1 GiB still only allows 1 GiB to be used #3716

Closed
ibobo opened this issue Aug 24, 2021 · 17 comments · Fixed by #4253

Comments


ibobo commented Aug 24, 2021

Related issues

None.

[REQUIRED] Version info

node: v12.22.1

firebase-functions: 3.15.4

firebase-tools: 9.16.5

firebase-admin: 9.11.1

[REQUIRED] Test case

import * as functions from 'firebase-functions';

export const test = functions
  .runWith({
    memory: '8GB',
  })
  .https.onRequest(async (req, res) => {
    console.log(process.env);
  });

[REQUIRED] Steps to reproduce

Just try to deploy the above-mentioned function.

[REQUIRED] Expected behavior

The function gets deployed with the indicated memory and it can fully use the 8 GiB memory.

[REQUIRED] Actual behavior

The function gets deployed with the indicated memory, but it won't be able to use it all, since the Node.js runtime is configured to use only 1 or 2 GiB.

As stated in the Google Cloud documentation here, you need to specify the NODE_OPTIONS environment variable with a max_old_space_size value (8192 for 8 GiB), but this is not possible with the current firebase-functions deployment process.

I suggest that when a function is configured with more than 1 GiB of memory, the corresponding NODE_OPTIONS environment variable be set automatically. At the moment, to have it set, I need to go to the Google Cloud Console, edit the function, and manually add the environment variable (or at least check that it was retained) after every deploy.
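
For reference, here is a minimal sketch of how one could confirm from inside a deployed function whether the larger heap is actually available. The heapCheck name is hypothetical; reading process.env.NODE_OPTIONS and v8.getHeapStatistics() uses standard Node.js APIs, and heap_size_limit stays near the default (about 1 GiB) unless max_old_space_size is raised.

import * as functions from 'firebase-functions';
import * as v8 from 'v8';

// Hypothetical verification function: report NODE_OPTIONS and the V8 heap limit.
export const heapCheck = functions
  .runWith({ memory: '8GB' })
  .https.onRequest(async (req, res) => {
    const stats = v8.getHeapStatistics();
    res.json({
      nodeOptions: process.env.NODE_OPTIONS || '(not set)',
      heapSizeLimitBytes: stats.heap_size_limit,
    });
  });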

Were you able to successfully deploy your functions?

Yes.

@google-oss-bot

I found a few problems with this issue:

  • I couldn't figure out how to label this issue, so I've labeled it for a human to triage. Hang tight.
  • This issue does not seem to follow the issue template. Make sure you provide all the required information.

@colerogers

Thanks for reporting this issue, we're going to look into it.

elvisun assigned elvisun and taeold and unassigned elvisun Aug 30, 2021

taeold commented Aug 30, 2021

@ibobo This is a great feature request. I'm moving this issue to firebase-tools, where I think the implementation will need to happen.

taeold transferred this issue from firebase/firebase-functions Aug 30, 2021
taeold linked a pull request Nov 5, 2021 that will close this issue
taeold removed a link to a pull request Nov 5, 2021

ikusuki commented Nov 25, 2021

So right now we can add the NODE_OPTIONS runtime env var with --max_old_space_size=2048 (for 2 GB, for example), but it will be reset on every deploy, right?

Any news on this?


taeold commented Nov 26, 2021

@ikusuki Good news: the Google Cloud Functions team recently made a patch to the Node.js runtime that automatically sets the appropriate --max_old_space_size based on the memory you selected when deploying the function, e.g.

const functions = require("firebase-functions");
const v8 = require('v8');

exports.heap = functions.runWith({ memory: "8GB" }).https.onRequest((request, response) => {
  functions.logger.log("memory", v8.getHeapStatistics());
  response.send(v8.getHeapStatistics());
});

returns for me

{
  "total_heap_size": 27226112,
  "total_heap_size_executable": 573440,
  "total_physical_size": 17225552,
  "total_available_size": 8539261544, /* LOOK - 8GB */
  "used_heap_size": 8431848,
  "heap_size_limit": 8547991552,
  "malloced_memory": 8192,
  "peak_malloced_memory": 1255904,
  "does_zap_garbage": 0,
  "number_of_native_contexts": 1,
  "number_of_detached_contexts": 0
}

Please give your functions another deploy and let us know if you are not seeing this on your side.

@google-oss-bot

Hey @ibobo. We need more information to resolve this issue but there hasn't been an update in 7 weekdays. I'm marking the issue as stale and if there are no new updates in the next 3 days I will close it automatically.

If you have more information that will help us get to the bottom of this, just add a comment!

@google-oss-bot

Since there haven't been any recent updates here, I am going to close this issue.

@ibobo if you're still experiencing this problem and want to continue the discussion just leave a comment here and we are happy to re-open this.


scblaze commented Jan 12, 2022

This doesn't appear to be working for us. We need to manually specify the max_old_space_size environment variable.


taeold commented Jan 12, 2022

@scblaze Can you share the Node.js version you are using? I'm on Node.js 14 and the heap size does seem to be correctly sized in production.


scblaze commented Jan 12, 2022

It's the node16 runtime.

This fix should be happening on the backend, right? It should not be related to the firebase-tools or functions package versions, should it?

Also, it shouldn't matter whether I deploy a function by itself or deploy a batch of functions, each of which has different memory requirements, right? When deploying all the functions in a batch, the correct limit should be applied automatically for each function, right?


taeold commented Jan 12, 2022

@scblaze

  1. Yes. The change was made in Google Cloud Functions and does not require any updates to the Firebase CLI or the Functions SDK.
  2. Yes, that should not matter.

I just deployed my function (in #3716 (comment)) with the nodejs16 runtime, but I'm seeing the heap size match the memory limit without having to set the environment variables. Can you share with me how you are checking the memory size?


scblaze commented Jan 13, 2022

This appears to be related to whether I deploy a single function or a batch of functions.

Deploying a single 4GB function with:

firebase --project xxx deploy --only functions:xxx

I get:

"total_available_size":4242527344,

Deploying all my functions (most without a specified limit or with limits less than 4GB), with:

firebase --project xxx deploy --only functions 

That specific 4GB function now gets:

"total_available_size":484723728,

Roughly a 10th of what it was before.

The code I am using to get the available size is the following, at the top of the function body:

  const v8 = require('v8');
  console.log('[MEMORY STATISTICS]', JSON.stringify(v8.getHeapStatistics()));


taeold commented Jan 13, 2022

@scblaze I think I see what's happening here.

To shorten function deploy time, the Firebase CLI re-uses a previously built container to avoid building the same container multiple times. I'm guessing that the Node flag controlling the heap size (max_old_space_size) is baked into the image and shared across each function instance. So if the container is built for a function configured with a small amount of RAM, then a function configured with more RAM won't get the correct old-space size.

Let me dig a bit more to confirm.

@VladimirKosmalaBAL

Hi there,

I had the same issue deploying 75 functions with different memory configurations (256 MB, 4 GB), and it crashed.

Regarding your theory @taeold, I deployed all functions with 4 GB as a temporary fix and it works well now; no more out-of-memory errors from Node.

So this seems to confirm that reusing the Docker container causes the wrong max_old_space_size, even though the VM has the right amount of memory. This issue will need to be reopened, or another one opened, to fix it.


taeold commented Jan 17, 2022

@VladimirKosmalaBAL Thanks for providing another data point.

I think we do have the issue described in the comment above, so I'll go ahead and re-open the issue.

@google-oss-bot

Hey @ibobo. We need more information to resolve this issue but there hasn't been an update in 7 weekdays. I'm marking the issue as stale and if there are no new updates in the next 3 days I will close it automatically.

If you have more information that will help us get to the bottom of this, just add a comment!


ghost commented Feb 15, 2022

What about firebase.json? It has a runtime parameter. Why not include the runtime specifications in that file?
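
For context, a minimal sketch of what this seems to suggest: firebase.json already accepts a runtime field under functions (shown below); extending it to carry per-function memory or NODE_OPTIONS settings would be new behavior and is only this comment's suggestion, not an existing option.

{
  "functions": {
    "runtime": "nodejs16"
  }
}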

taeold added a commit that referenced this issue Mar 7, 2022
Today, function deployments are made in batches of functions grouped by region. Combined with a source token to re-use built containers across deployments, we get a meaningful decrease in deploy time and in Cloud Build resource usage.

Unfortunately, this setup has a peculiar bug: Google Cloud Functions bakes the flag that expands the heap size appropriate to a function's memory configuration (`--max_old_space_size`) into the container itself. That means that if you batch two functions with differing memory configurations (e.g. 256 MB vs 4 GB), it's guaranteed that one function will end up with a wrongly configured `--max_old_space_size` value (see #3716 (comment) for how we discovered this issue).

This PR proposes batching functions by region AND `availableMemoryMb` to fix this bug. We do this by refactoring `calculateRegionalChanges` to generate a collection of `Changeset` objects (previously known as `RegionalChanges`) that sub-divide the functions in a region by their memory. In fact, we generalize the approach by allowing arbitrary subdivision via a `keyFn` (e.g. `keyFn: (endpoint) => endpoint.availableMemoryMb`), as I anticipate that we will revisit this section of the code once I start working on "codebases".

Fixes #3716
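
Below is a minimal sketch (in plain JavaScript, not the actual firebase-tools code) of the grouping idea described in the commit message. The groupEndpoints helper and the simplified endpoint objects are hypothetical; only the `keyFn: (endpoint) => endpoint.availableMemoryMb` shape comes from the commit message above.

// Sketch only: group endpoints by region AND a caller-provided key (e.g. memory),
// so every batch shares a single --max_old_space_size baked into its container.
function groupEndpoints(endpoints, keyFn) {
  const changesets = new Map();
  for (const endpoint of endpoints) {
    const key = `${endpoint.region}-${keyFn(endpoint)}`;
    if (!changesets.has(key)) {
      changesets.set(key, []);
    }
    changesets.get(key).push(endpoint);
  }
  return changesets;
}

// Example usage with the keyFn from the commit message:
const byMemory = groupEndpoints(
  [
    { id: 'small', region: 'us-central1', availableMemoryMb: 256 },
    { id: 'big', region: 'us-central1', availableMemoryMb: 4096 },
  ],
  (endpoint) => endpoint.availableMemoryMb
);
// byMemory now holds two separate changesets, one per memory configuration.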