
[WIP] MarshalSerializer for structs #20

Open · wants to merge 2 commits into `dev`

Conversation


@Horusiath (Contributor) commented Jan 5, 2017

Work in progress

This is an experimental serializer for structs that uses the `Marshal` class for direct struct → byte array mapping instead of writing values field by field. Some initial benchmarks:

BenchmarkDotNet=v0.10.1, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i5-6300HQ CPU 2.30GHz, ProcessorCount=4
Frequency=2249998 Hz, Resolution=444.4448 ns, Timer=TSC
  [Host]     : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1586.0
  DefaultJob : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1586.0
| Method | Mean | StdDev | Min | Max | Op/s | Gen 0 | Allocated |
|---|---|---|---|---|---|---|---|
| OldStructSerializer | 2.1599 us | 0.0201 us | 2.1340 us | 2.1913 us | 462974.75 | 0.1226 | 652 B |
| NewStructSerializer | 1.9715 us | 0.0120 us | 1.9572 us | 2.0021 us | 507222.93 | 0.1144 | 664 B |
| OldStructSerializerKnownTypes | 1.8513 us | 0.0046 us | 1.8456 us | 1.8609 us | 540153.52 | 0.0748 | 516 B |
| NewStructSerializerKnownTypes | 1.9560 us | 0.0070 us | 1.9461 us | 1.9732 us | 511259.87 | 0.1144 | 664 B |
| OldStructSerializerKnownTypesReuseSession | 1.6177 us | 0.0152 us | 1.6026 us | 1.6491 us | 618171.67 | 0.0366 | 404 B |
| NewStructSerializerKnownTypesReuseSession | 1.5210 us | 0.0034 us | 1.5138 us | 1.5259 us | 657446.69 | 0.0341 | 416 B |
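For context, the direct struct → byte array mapping can be sketched roughly like this (an illustrative snippet, not the PR's actual code; `Point3` and `MarshalSketch.Serialize` are hypothetical names):

```csharp
using System;
using System.Runtime.InteropServices;

// Hypothetical blittable test struct - any struct with sequential layout works.
[StructLayout(LayoutKind.Sequential)]
public struct Point3
{
    public int X, Y, Z;
}

public static class MarshalSketch
{
    // Copy the struct's raw in-memory layout into a byte array in one call,
    // instead of writing each field separately.
    public static byte[] Serialize<T>(T value) where T : struct
    {
        var bytes = new byte[Marshal.SizeOf(typeof(T))];
        var gch = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        try
        {
            // false: no previous structure at the target address to release.
            Marshal.StructureToPtr(value, gch.AddrOfPinnedObject(), false);
        }
        finally
        {
            gch.Free();
        }
        return bytes;
    }
}
```

Pinning the target array keeps the GC from moving it while `Marshal.StructureToPtr` writes into it.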

The new struct serializer is around 10% faster, which is honestly a little disappointing. I'm probably missing something important here - maybe the `Marshal` methods are slower than expected, or maybe the same goal could be achieved with a different technique. @Scooletz, would you like to review and give some tips?

Scooletz commented Jan 6, 2017

Unfortunately, considering my current workload I can't make it.

@Horusiath (author) commented

I've found a very helpful article by Sasha Goldshtein here.

Given three different deserializer methods:

// Requires: using System; using System.Runtime.InteropServices;

// Option 1 - using the marshaller
public static T MarshallerSerializer<T>(byte[] data) where T : struct
{
    var gch = GCHandle.Alloc(data, GCHandleType.Pinned);
    try
    {
        return (T) Marshal.PtrToStructure(gch.AddrOfPinnedObject(), typeof(T));
    }
    finally
    {
        gch.Free();
    }
}

// Option 2 - pinning the array with fixed and marshalling from the raw pointer
public static T PtrCastSerializer<T>(byte[] data) where T : struct
{
    unsafe
    {
        fixed (byte* p = &data[0])
        {
            return (T)Marshal.PtrToStructure(new IntPtr(p), typeof(T));
        }
    }
}

// Option 3 - direct casting from byte array to struct (non-generic)
public static A DirectPtrSerializer(byte[] data)
{
    unsafe
    {
        fixed (byte* packet = &data[0])
        {
            return *(A*) packet;
        }
    }
}
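The definition of the test struct `A` isn't included in this thread; a plausible shape matching its description (3 readonly int fields), given here purely for illustration, would be:

```csharp
using System.Runtime.InteropServices;

// Hypothetical definition of the benchmark's test struct - the actual one
// is not shown in this thread. Sequential layout, three readonly int fields.
[StructLayout(LayoutKind.Sequential)]
public struct A
{
    public readonly int X;
    public readonly int Y;
    public readonly int Z;

    public A(int x, int y, int z)
    {
        X = x;
        Y = y;
        Z = z;
    }
}
```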

each run against a test struct with 3 readonly int fields, the results are:

BenchmarkDotNet=v0.10.1, OS=Microsoft Windows NT 6.2.9200.0
Processor=Intel(R) Core(TM) i5-6300HQ CPU 2.30GHz, ProcessorCount=4
Frequency=2250001 Hz, Resolution=444.4442 ns, Timer=TSC
  [Host]     : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1586.0
  DefaultJob : Clr 4.0.30319.42000, 32bit LegacyJIT-v4.6.1586.0
| Method | Mean | StdDev | Min | Max | Op/s | Allocated |
|---|---|---|---|---|---|---|
| MarshallerSerializer | 911.9199 ns | 3.1450 ns | 908.0449 ns | 920.0977 ns | 1096587.49 | 20 B |
| PtrCastSerializer | 622.2314 ns | 0.6368 ns | 621.1142 ns | 623.6268 ns | 1607119.24 | 20 B |
| DirectPtrSerializer | 7.3451 ns | 0.0235 ns | 7.3148 ns | 7.3806 ns | 136145347.6 | 0 B |

This test only covers the deserialization process - the 1st option is essentially what this PR brings. The 2nd option can be used with generic code and is around 50% faster. The 3rd option is the most interesting one, but it cannot be used with generic types - we'd need code generation here. However, I think the potential results may be worth the effort: 123 times faster than the first option, with no heap allocations!
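One possible way to get option 3 generically without per-type code generation, assuming a dependency on the `System.Runtime.CompilerServices.Unsafe` package is acceptable (an untested sketch, not part of this PR):

```csharp
using System;
using System.Runtime.CompilerServices; // System.Runtime.CompilerServices.Unsafe package

public static class UnsafeReadSketch
{
    // Reinterprets the bytes starting at data[0] as a T - the generic
    // counterpart of the non-generic pointer cast in option 3.
    public static T Deserialize<T>(byte[] data) where T : struct
    {
        // No pinning, boxing, or reflection; reads sizeof(T) bytes unaligned.
        return Unsafe.ReadUnaligned<T>(ref data[0]);
    }
}
```

This is essentially what `*(A*)packet` does, but `Unsafe.ReadUnaligned<T>` carries the type as a generic parameter, so a single method covers all struct types.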
