
BUG: Memory leak on v1.5.0 when using pd.to_datetime() #49164

Closed
morteymike opened this issue Oct 18, 2022 · 6 comments
Labels
Timestamp (pd.Timestamp and associated methods)

Comments

morteymike commented Oct 18, 2022

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
import numpy as np
import time

big_num = 30_000_000
num_loops = 5_000_000
for i in range(num_loops):
    time.sleep(.1)
    # Build a large dict with a datetime column and an integer column
    d = {
        'date_col': np.random.choice(pd.date_range('1970-10-01', '2021-10-31'), big_num),
        'another_col': range(big_num)
    }

    # Null out every 20th date to mimic missing values in the real data
    for y in range(0, big_num, 20):
        d['date_col'][y] = None

    df = pd.DataFrame(d)

    # Convert the mixed object column back to datetime; this is the line
    # that appears to leak on 1.5.0
    df['date_col'] = df['date_col'].astype(object).where(df['date_col'].notnull(), None)
    df['date_col'] = pd.to_datetime(df['date_col'], infer_datetime_format=True)

    # Drop the reference so the frame is eligible for garbage collection
    df = None

Issue Description

There seems to be a memory leak in the latest 1.5.0 release when using pd.to_datetime() on a column. I traced my production code down to the pd.to_datetime() line and can verify that this line is the cause. My production code calls a function similar to the example above several hundred times, each time with a different df, so I've created this example to help replicate my situation.

Running the above code on my personal machine, I can see a significant difference in memory consumption between 1.4.0 and 1.5.0 (I haven't tested any version between 1.4.0 and 1.5.0). However, my machine does appear to run the garbage collector and reduce memory periodically (something the production Docker container does not do with 1.5.0, but does do with 1.4.0).

I've provided an example above, which is a stripped down version of my production application. The process quickly consumes all available memory in my production Docker container and is Killed by the host machine with 1.5.0. Reverting to 1.4.0, the application does not have a memory leak, and everything else is identical.
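
For anyone trying to confirm the growth, here is a minimal per-iteration memory check I would suggest, assuming psutil is installed (it is not part of pandas; the smaller array size and iteration count are placeholders so the loop finishes quickly):

import gc
import numpy as np
import pandas as pd
import psutil

proc = psutil.Process()

for i in range(20):
    df = pd.DataFrame({
        'date_col': np.random.choice(pd.date_range('1970-10-01', '2021-10-31'), 1_000_000)
    })
    df['date_col'] = pd.to_datetime(df['date_col'].astype(object), infer_datetime_format=True)
    df = None
    gc.collect()  # force collection so any remaining growth is a genuine leak
    print(f"iteration {i}: rss = {proc.memory_info().rss / 1e6:.1f} MB")

If the resident set size keeps climbing after gc.collect(), the growth is not just delayed garbage collection.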

Expected Behavior

Memory consumption should be more or less identical between 1.4.0 and 1.5.0 when using pd.to_datetime(); there should not be a memory leak in 1.5.0 when using this function.

Installed Versions

INSTALLED VERSIONS

commit : 87cfe4e
python : 3.10.6.final.0
python-bits : 64
OS : Darwin
OS-release : 21.6.0
Version : Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 1.5.0
numpy : 1.23.4
pytz : 2022.4
dateutil : 2.8.2
setuptools : 63.4.3
pip : 22.2.2
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
brotli : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyreadstat : None
pyxlsb : None
s3fs : None
scipy : None
snappy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
zstandard : None
tzdata : None

morteymike added the Bug and Needs Triage labels Oct 18, 2022
MarcoGorelli (Member) commented Oct 18, 2022

thanks @mortymike for the report - fixed in #49053, and version 1.5.1 is coming soon

sorry, this might be unrelated, as there are no warnings here

MarcoGorelli reopened this Oct 18, 2022
MarcoGorelli added the Regression label and removed the Bug and Needs Triage labels Oct 18, 2022
phofl (Member) commented Oct 18, 2022

Can you reproduce this, @MarcoGorelli? The memory usage is constant on my machine after the garbage collector kicks in. This is consistent with 1.4.0.

MarcoGorelli (Member) commented

I haven't tried to reproduce it yet. @mortymike, 1.5.0 did introduce some warnings in to_datetime, and those did introduce a memory leak. Does your production code produce any warnings? If so, then #49053 has likely fixed it.
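
If it helps, one quick way to check is something like this sketch (record every warning around the call; the small series here is just a stand-in for the production column):

import warnings
import pandas as pd

s = pd.Series(['2021-10-01', None, '2021-10-03'], dtype=object)  # stand-in for the production column

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # don't let any filter hide a warning
    pd.to_datetime(s, infer_datetime_format=True)

for w in caught:
    print(w.category.__name__, w.message)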

phofl added the Needs Info label Oct 18, 2022
morteymike (Author) commented

Thanks for looking into this. There are no errors or warnings on the production instance; with 1.5.0, memory consumption just slowly increases with each iteration of the loop until the VM kills the process.

The example I provided shows higher peak and average memory consumption on my MacBook (it may help to increase the wait time between loops to see the memory profile per iteration), but maybe this is not the root cause, since the garbage collector is actually running there.
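
As a rough sketch of the kind of check I have in mind (this only counts objects the Python garbage collector tracks, so memory held by a C extension would not show up here, which is why watching RSS still matters; the column size is a placeholder):

import gc
import numpy as np
import pandas as pd

gc.collect()
before = len(gc.get_objects())

s = pd.Series(np.random.choice(pd.date_range('1970-10-01', '2021-10-31'), 100_000)).astype(object)
pd.to_datetime(s, infer_datetime_format=True)
del s

gc.collect()
after = len(gc.get_objects())
print(f"tracked objects grew by {after - before}")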

Since the failure on my production instance is 100% reproducible and fatal every time on 1.5.0, I can try a couple of things later today:

  1. Pull the latest main branch into the production instance and try with that (I only tried this on my personal machine previously)
  2. Try with 1.4.4 (only tried with 1.4.0 previously)
  3. Spend some more time to create a better reproducible example if no luck on the above.

morteymike (Author) commented

Updates on the above:

  1. The latest main branch does NOT have the memory leak in my testing. Interestingly, none of the latest main branch, 1.5.0, 1.4.4, or 1.4.0 logs any errors or warnings in my application, and I'm logging info-level messages before and after the call to to_datetime().
  2. 1.4.4 does NOT have the memory leak
  3. I did not spend time on this one because the issue is no longer present in the latest main branch.

All this to say, it looks like the latest main branch has fixed this issue, and I think we can close this. I'll use 1.4.4 until 1.5.1 is released. Thanks for looking into this!

phofl (Member) commented Oct 19, 2022

Thanks for investigating; it's possible that warnings were raised and then caught internally.

phofl closed this as completed Oct 19, 2022
phofl added the Timestamp label and removed the Regression and Needs Info labels Oct 19, 2022