Performance Optimizations #51

Open
philipbjorge opened this issue Jun 21, 2022 · 0 comments

Background
We use Olive Branch on an API that serves 1+ million requests per hour.
We noticed the Olive Branch middleware was consuming 200-300 milliseconds on some of our large response bodies.

Optimizations for Oj
After benchmarking the implementation, we noticed two major opportunities for optimization:

  1. Calling Oj.load and Oj.dump directly instead of via MultiJson -- This cut a synthetic benchmark (serializing a problematic JSON response 100 times) from 13.2 seconds to 4.77 seconds. See the sketch after this list.
  2. Replacing the recursive, Ruby-based inflector with one built on Oj's SC (simple callback) parser -- This gave an additional speed-up, from 4.77 seconds to 2.40 seconds.
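
As a rough illustration of the first change, the swap amounts to the sketch below. The benchmark shape, the sample file name, and the Oj mode options are illustrative assumptions rather than our exact harness:

require "benchmark"
require "multi_json"
require "oj"

body = File.read("large_response.json") # hypothetical sample payload

# Before: every call goes through MultiJson's adapter indirection
with_multi_json = Benchmark.realtime do
  100.times { MultiJson.dump(MultiJson.load(body)) }
end

# After: call Oj directly (pick a mode that matches your app's JSON semantics)
with_oj = Benchmark.realtime do
  100.times { Oj.dump(Oj.load(body, mode: :compat), mode: :compat) }
end

puts format("MultiJson: %.2fs, Oj direct: %.2fs", with_multi_json, with_oj)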

Our implementation:

module OurCompany
  class FastOliveBranchMiddleware
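    # Streaming handler: Oj invokes these callbacks as it parses, so hash keys
    # are inflected in a single pass instead of re-walking the result in Ruby.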
    class OliveBranchHandler < Oj::ScHandler
      def initialize(inflection)
        @inflection = inflection || :camel
      end

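      # Called for every hash key; the value returned here is what hash_set
      # receives below. FastCamel.camelize is defined elsewhere in our codebase
      # and is not shown here.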
      def hash_key(key)
        return FastCamel.camelize(key) if @inflection == :camel
        return key.underscore if @inflection == :snake
        return key.dasherize if @inflection == :dash
        return key.underscore.camelize(:upper) if @inflection == :pascal
        key
      end

      def hash_start
        {}
      end

      def hash_set(h, k, v)
        h[k] = v
      end

      def array_start
        []
      end

      def array_append(a, v)
        a << v
      end
    end

    def initialize(app)
      @app = app
    end

    def call(env)
      underscore_params(env)
      status, headers, response = @app.call(env)
      [status, headers, format_responses(env, response)]
    end

    private

    def underscore_params(env)
      req = ActionDispatch::Request.new(env)
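      # Touch the parsed parameters so ActionDispatch memoizes them into the
      # env keys that are mutated below.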
      req.request_parameters
      req.query_parameters

      env["action_dispatch.request.request_parameters"].deep_transform_keys!(&:underscore)
      env["action_dispatch.request.query_parameters"].deep_transform_keys!(&:underscore)
    end

    def format_responses(env, response)
      new_responses = []

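      # The X-Key-Inflection request header selects which inflection to apply.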
      handler = OliveBranchHandler.new(env["HTTP_X_KEY_INFLECTION"]&.to_sym)
      response.each do |body|
        begin
          new_response = Oj.sc_parse(handler, body)
        rescue Oj::ParseError, JSON::ParserError
          # Pass non-JSON bodies through untouched.
          new_responses << body
          next
        end

        new_responses << Oj.dump(new_response)
      end

      new_responses
    end
  end
end
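
For anyone wanting to do something similar, the middleware gets wired in roughly like this. This is only a sketch and assumes the stock middleware is mounted as OliveBranch::Middleware, as in the gem's README:

# config/application.rb -- sketch only
config.middleware.delete OliveBranch::Middleware if defined?(OliveBranch::Middleware)
config.middleware.use OurCompany::FastOliveBranchMiddleware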

This has resulted in a dramatic reduction in the time spent applying the correct inflection to our JSON documents -- cutting out ~200+ ms on responses with large payloads.

Goals

We recognize this is a general-purpose library and our optimizations are specific to Oj, so we don't expect this project to incorporate them.

That said, I wanted to post them in case someone else runs into performance issues with this library in the future and wants some ideas for remediation ✌️.
