
Proxy Error and JS heap out of memory, PLEASE HELP!! #1332

Open
gagan4580 opened this issue Apr 19, 2024 · 2 comments

Comments

@gagan4580

[0] webpack compiled successfully
[0] Files successfully emitted, waiting for typecheck results...
[0] Issues checking in progress...
[1] 2024-04-19T19:27:01.322Z [api] info: Footprint API request started.
[1] 2024-04-19T19:27:01.328Z [Cache] info: Using local cache file...
[1] 2024-04-19T19:27:01.356Z [App] info: Starting AWS Estimations
[1] 2024-04-19T19:27:01.986Z [CostAndUsageReports] info: Started Athena Query Execution
[1] 2024-04-19T19:27:01.986Z [CostAndUsageReports] info: Getting Athena Query Execution
[1] 2024-04-19T19:27:04.507Z [api] info: Footprint API request started.
[1] 2024-04-19T19:27:04.508Z [Cache] info: Using local cache file...
[1] 2024-04-19T19:27:04.527Z [App] info: Starting AWS Estimations
[1] 2024-04-19T19:27:04.793Z [CostAndUsageReports] info: Started Athena Query Execution
[1] 2024-04-19T19:27:04.794Z [CostAndUsageReports] info: Getting Athena Query Execution
[0] No issues found.
[1] 2024-04-19T19:29:29.181Z [CostAndUsageReports] info: Getting Athena Query Result Sets
[1] 2024-04-19T19:29:36.040Z [CostAndUsageReports] info: Getting Athena Query Result Sets
[0] Proxy error: Could not proxy request /api/footprint?start=2023-04-19&end=2024-04-19&ignoreCache=false&groupBy=day&limit=50000&skip=0 from localhost:3000 to http://localhost:4000/.
[0] See https://nodejs.org/api/errors.html#errors_common_system_errors for more information (ECONNRESET).
[0]
[1] 2024-04-19T19:37:04.745Z [api] info: Regions emissions factors API request started
[1]
[1] <--- Last few GCs --->
[1]
[1] [94756:0000018DF6E8D680] 1300898 ms: Mark-Compact 16123.1 (16428.3) -> 16108.6 (16429.4) MB, 9931.14 / 0.04 ms (average mu = 0.103, current mu = 0.024) task; scavenge might not succeed
[1] [94756:0000018DF6E8D680] 1322339 ms: Mark-Compact 16123.9 (16431.7) -> 16112.6 (16433.4) MB, 21200.51 / 0.03 ms (average mu = 0.042, current mu = 0.011) task; scavenge might not succeed
[1]
[1]
[1] <--- JS stacktrace --->
[1]
[1] FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
[1] ----- Native stack trace -----
[1]
[1] 1: 00007FF7D18EC94B node::SetCppgcReference+17979
[1] 2: 00007FF7D1856764 v8::base::CPU::num_virtual_address_bits+89316
[1] 3: 00007FF7D22D49A1 v8::Isolate::ReportExternalAllocationLimitReached+65
[1] 4: 00007FF7D22BE0F8 v8::Function::Experimental_IsNopFunction+1336
[1] 5: 00007FF7D211FBA0 v8::Platform::SystemClockTimeMillis+659328
[1] 6: 00007FF7D211CC28 v8::Platform::SystemClockTimeMillis+647176
[1] 7: 00007FF7D20D8828 v8::Platform::SystemClockTimeMillis+367624
[1] 8: 00007FF7D17DE4D3 DH_get0_priv_key+4803
[1] 9: 00007FF7D17DCE86 node::TriggerNodeReport+81462
[1] 10: 00007FF7D1951E0B uv_update_time+491
[1] 11: 00007FF7D1951984 uv_run+900
[1] 12: 00007FF7D19228B5 node::SpinEventLoop+405
[1] 13: 00007FF7D1802CD8 DH_get0_priv_key+154312
[1] 14: 00007FF7D18A35BD node::Start+4909
[1] 15: 00007FF7D18A22C0 node::Start+48
[1] 16: 00007FF7D165D90C AES_cbc_encrypt+151356
[1] 17: 00007FF7D2AD975C inflateValidate+19196
[1] 18: 00007FFE6C3484D4 BaseThreadInitThunk+20
[1] 19: 00007FFE6C5B1791 RtlUserThreadStart+33
[1] yarn start-api exited with code 1

@dragonscypher

Did you try increasing the heap size?
node --max_old_space_size=8192 your_script.js
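
One way to apply this in a yarn-based setup like this one is the NODE_OPTIONS environment variable, which every Node process spawned from that shell picks up. A minimal sketch, assuming the API side is the yarn start-api process shown in the log above:

NODE_OPTIONS="--max-old-space-size=8192" yarn start-api

On Windows cmd (the stack trace above suggests Windows), set the variable before starting the process:

set NODE_OPTIONS=--max-old-space-size=8192
yarn start-api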

@4upz
Member

4upz commented May 16, 2024

@gagan4580 Sorry for the late assistance on this. Are you expecting a large amount of usage data within your AWS account? It seems that you are querying for a year's worth of data with daily granularity. This means that CCF will attempt to fetch and calculate usage rows for every service in your account for each day of the year, which can easily add up to thousands or even millions of rows if you have a sizeable account. As you can imagine, processing all of this in memory can be pretty expensive. We're working on ways to mitigate this; one idea is limiting the request range based on the grouping method.

I recommend adjusting the granularity of your request (e.g. from daily to monthly or quarterly grouping) or narrowing the date range itself (from a year to a single month); see the illustrative requests below. Alternatively, you could try @dragonscypher's recommendation of increasing the heap size (you will need to add this flag to the appropriate yarn start command).
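
For reference, the failing request in the proxy error above was grouped by day over a full year. As a sketch of how the same query parameters change under this advice (illustrative values only; the exact groupBy values the API accepts are an assumption based on the grouping options mentioned above):

Original request (a full year, grouped by day):
/api/footprint?start=2023-04-19&end=2024-04-19&ignoreCache=false&groupBy=day&limit=50000&skip=0

Coarser grouping over the same period:
/api/footprint?start=2023-04-19&end=2024-04-19&ignoreCache=false&groupBy=month&limit=50000&skip=0

Same daily grouping over a single month:
/api/footprint?start=2024-03-19&end=2024-04-19&ignoreCache=false&groupBy=day&limit=50000&skip=0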
