
Update pos_constants.hpp #201

Open: wants to merge 1 commit into base: main
Conversation

@xorinox (Contributor) commented Mar 31, 2021

Allow for up to 1024 buckets
@arvidn (Contributor) commented Apr 1, 2021

looks good to me, but I'd like to defer approval to someone who knows where that limit comes from.

@arvidn (Contributor) commented Apr 1, 2021

@rostislav do you know?

@rostislav (Contributor) commented

Similarly to PR #202, I would suggest making the commit message more descriptive.
Regarding the reason for the original limit, it might be related to the limit on open file handles in Windows, but I'm not sure.

@arvidn (Contributor) commented Apr 1, 2021

Indeed, it seems we would need this for Windows:

C run-time I/O now supports up to 8,192 files open simultaneously at the low I/O level. This level includes files opened and accessed using the _open, _read, and _write family of I/O functions. By default, up to 512 files can be open simultaneously at the stream I/O level. This level includes files opened and accessed using the fopen, fgetc, and fputc family of functions. The limit of 512 open files at the stream I/O level can be increased to a maximum of 8,192 by use of the _setmaxstdio function.
Because stream I/O-level functions, such as fopen, are built on top of the low I/O-level functions, the maximum of 8,192 is a hard upper limit for the number of simultaneously open files accessed through the C run-time library.

@mariano54 (Contributor) commented

I think this is a good idea and hopefully should allow us to drastically reduce memory requirements.

  1. The reason I set it to 128 was the following: the idea was that splitting writes into too many buckets would severely slow down HDD plotting, because the disk would constantly be seeking between them. However, this was just a theory; I never actually tested anything above 128.
  2. If we make this change, I think we should tweak the sort manager's memory split: increase the amount allocated for writes and decrease the amount going to reads. That way more data is cached before being written to disk, which should hopefully improve HDD performance, though it might increase total memory requirements slightly.
  3. Do you have any performance data, @xorinox? I think we should test this on an HDD.
  4. I'm curious what the minimum RAM requirements will become with different -u sizes. We should confirm that the code actually runs with the decreased -b parameters and doesn't crash.

@xorinox (Contributor, Author) commented Apr 1, 2021

A lot of buckets is certainly not the idea for spinner plotting, but for SSDs I found increasing from 128 to 512 beneficial: approximately 10% plotting-throughput gains with multi-process plotting. More importantly, I can comfortably plot with no speed loss at -b 1024 with 8 concurrent processes on a machine with only 32 GB of RAM, thanks to the disk read/write buffers shrinking from 4 MB to 1 MB per bucket file. My testing was all done on 0.12.41, but I am on my way to check the differences on chiapos commit a614cee as well.

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 17286 133 9.2 3755712 3017696 pts/6 SLl+ Mar31 1450:45 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 5 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk03
root 17945 132 7.5 3165916 2476448 pts/7 SLl+ Mar31 1438:53 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 5 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk04
root 5463 134 3.0 1625768 991732 pts/3 RL+ Mar31 1622:20 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 5 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk02
root 4734 134 3.0 1625772 984764 pts/2 RL+ Mar31 1629:11 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 5 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk01
root 24196 130 3.0 1625740 984820 pts/9 RL+ Mar31 1336:41 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 4 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk04
root 11637 137 2.9 1625760 965336 pts/5 RL+ Mar31 1568:19 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 4 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk02
root 10995 137 2.9 1625760 967108 pts/4 RL+ Mar31 1576:23 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 4 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk01
root 23544 130 1.6 1152696 548224 pts/8 SLl+ Mar31 1339:46 /opt/chia-blockchain/venv/bin/python3.9 /opt/chia-blockchain/venv/bin/chia plots create -e -k 32 -b 1024 -r 8 -u 512 -n 4 -t /chia/scratch/disk01 -2 /chia/scratch/disk01 -d /chia/plots/disk03

[root@node-1d ~]# free -h
               total        used        free      shared  buff/cache   available
Mem:            31Gi        11Gi       286Mi       1.0Mi        19Gi        19Gi
Swap:          4.0Gi       255Mi       3.8Gi

node-1 t_plots,a_plots,procs,avg sec,TiB/day: 22 4 8 21232.6 3.2225
t_cpu, t_mem: 1067.0/1600, 34.3
screens, procs, logs, stale: 8, 8, 8, 0

I can share the log files of this current run as well.

@wjblanke (Contributor) left a comment


I think we need to check the system ulimit (platform-specific?) and use that to determine the maximum number of buckets. Otherwise there will be failures later when file handles are exhausted, and a lot of support questions; some machines have a very low setting. If we add that check, it is fine by me.

@github-actions commented

This PR has been flagged as stale due to no activity for over 60 days. It will not be automatically closed, but it has been given a stale-pr label and should be manually reviewed.

5 participants