Provide lower-level API to write/read value by writer's position #1714
Comments
Did you know about MessagePackReader.Skip()? If you want precise "seek" control over where the reader will read next, why not just create a MessagePackReader over just the slice you want to read?

As for writing by position, I don't think there's any way we could extend MessagePackWriter to support that.

There has been talk of possibly bringing back the old v1-style APIs.
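The two read-side options mentioned above (skipping forward sequentially versus constructing a reader over a known slice) can be sketched in a language-agnostic way. The following is a hypothetical Python illustration over a toy length-prefixed format, not the MessagePack wire format or the MessagePack-CSharp API; skip_record, reader_over_slice, and the 4-byte little-endian prefix are all assumptions made for illustration.

```python
import struct

def skip_record(buf: bytes, pos: int) -> int:
    """Advance past one length-prefixed record without materializing it
    (the analogue of Skip())."""
    (length,) = struct.unpack_from("<i", buf, pos)
    return pos + 4 + length

def reader_over_slice(buf: bytes, start: int, end: int) -> memoryview:
    """Construct a 'reader' limited to a known region of the buffer,
    the analogue of creating a reader over just the slice you want.
    memoryview avoids copying the underlying bytes."""
    return memoryview(buf)[start:end]
```

The key design difference: skipping is O(number of records passed over), while a reader over a slice is O(1) once the slice boundaries are known from somewhere else (for example, an index kept by the application).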
If you have your own custom file header that gives you direct indexes into the stream to start reading from, I feel like creating a MessagePackReader limited to just the slice you need would serve you well.

Do you actually need a seeking writer? And if so, can you do something similar by seeking the underlying Stream and creating a fresh writer over it at that position?
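The "custom file header with direct indexes" idea above can be sketched generically. This is a hypothetical Python illustration of the file layout, not MessagePack-CSharp code; the header layout (a count followed by (offset, length) pairs) and the helper names are assumptions made for illustration.

```python
import struct
from io import BytesIO

def write_indexed_file(blocks: list[bytes]) -> bytes:
    """Write a header of (offset, length) pairs followed by the raw blocks."""
    header_size = 4 + 8 * len(blocks)  # 4-byte count + one (offset, length) pair per block
    out = BytesIO()
    out.write(struct.pack("<i", len(blocks)))
    offset = header_size
    for b in blocks:
        out.write(struct.pack("<ii", offset, len(b)))
        offset += len(b)
    for b in blocks:
        out.write(b)
    return out.getvalue()

def read_block_by_index(data: bytes, i: int) -> bytes:
    """Seek directly to block i via the header, touching no other block."""
    offset, length = struct.unpack_from("<ii", data, 4 + 8 * i)
    return data[offset:offset + length]
```

With such a header, a consumer can seek the underlying stream straight to one block and hand just those bytes to a deserializer, which matches the "deserialize only the 1% you need" goal described later in the thread.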
Thank you. It seems that we have to adjust our storage logic to adapt to the new version of the MessagePack framework. Our original intention was to serialize large objects into binary files using MessagePack. When the data is first serialized, memory usage may reach up to 10 GB (which is why lower versions of MessagePack can no longer be used), but that memory usage is not the focus (it happens only once and will not be as high in the future). After the large object has been serialized, we will only ever need about 1% of it (100 MB) at a time. The other objects should not be deserialized, and the file should not be scanned in an attempt to deserialize them; we want to skip that part directly, but the current version of the API seems to keep preventing us from doing so. We have tried MessagePackReader.Skip(), but memory for all objects is still allocated.
We also found that MessagePack.IgnoreMemberAttribute is not inherited (after overriding a property) in 2.5.140.
Yes. Use the …
I guess I don't understand, then, where you're storing the indexes that would let you seek around the file and deserialize pieces of it without loading it all.
That sounds like a bug, actually. Can you file a separate issue for that?
IIRC that behavior isn't supposed to have changed either. But 1.9 was early on in my contribution to the library. If you can file a separate issue for this (or is it the same one I suggested filing above?) with a repro, I'll be happy to look into it.
This is a free and open source library: all or most of its development is done without any monetary benefit to its authors, and all of it is made available free of charge. Sponsorship is very much appreciated, though, and with agreement from a particular contributor and/or the owning team it can be a great way to support the project and get your favorite features or bugs addressed in an upcoming version. As for your special 10 GB read/write requirement, that wouldn't fall under free support (from me, anyway). If you're interested in paid support, I'm confident we can find (or create) a solution for you.
Thanks, I filed a separate issue: #1717
Is your feature request related to a problem? Please describe.
Our team migrated from 1.9.3 to 2.5.4, but there are some issues.
In 1.9.3, we wrote many custom blocks when serializing. For each block:
First, we write 0 at the block start and record that position as offset1 (WriteInt32ForceInt32Block).
Second, we write the block itself (some objects and structs) and record the end position as offset2.
Last, we write (offset2 - offset1) back over the block start (WriteInt32ForceInt32Block).
When deserializing, we selectively skip some blocks to speed up deserialization
(some blocks don't need to be deserialized, and we don't use them).
But in 2.5.4, we cannot control the offset, the position, or the position at which a value is written.
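The write pattern described above (write a placeholder length, write the payload, then backpatch the length prefix) can be sketched generically over a seekable buffer. This is a hypothetical Python illustration of the pattern, not MessagePack-CSharp code; the 4-byte little-endian prefix and the helper names are assumptions made for illustration, with the prefix value covering both itself and the payload, as (offset2 - offset1) does above.

```python
import struct
from io import BytesIO

def write_block(out: BytesIO, payload: bytes) -> None:
    """Write a length-prefixed block using the backpatch pattern."""
    offset1 = out.tell()
    out.write(struct.pack("<i", 0))                   # placeholder at the block start
    out.write(payload)                                # the block body
    offset2 = out.tell()
    out.seek(offset1)
    out.write(struct.pack("<i", offset2 - offset1))   # backpatch the real length
    out.seek(offset2)                                 # restore position for the next block

def skip_block(src: BytesIO) -> None:
    """Skip one block without reading or allocating its payload."""
    (length,) = struct.unpack("<i", src.read(4))
    src.seek(length - 4, 1)  # length includes the 4-byte prefix itself

def read_block(src: BytesIO) -> bytes:
    """Read one block's payload."""
    (length,) = struct.unpack("<i", src.read(4))
    return src.read(length - 4)
```

The backpatch step is exactly what requires a writer that can revisit an earlier position, which the thread discusses: in a purely forward-only writer, the length would instead have to be computed before the payload is written, or stored in a separate index.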
Describe the solution you'd like
Provide a lower-level API to write/read a value by the writer's position.