doc: update syntax-highlighter scalability info #32359
Conversation
Force-pushed from b75139d to 9ce6c73.
That's true, but the connection code does not currently allow horizontal scaling of this service. There are also some major issues with the service that make vertical scaling the better choice for now, regardless of whether we could scale horizontally. I can provide more details if useful, but, yeah, we can't scale this service right now except vertically.
I would love to learn more about this. Are you talking about how clients talk to the service? Assuming I am looking at the right place, it looks like a pretty standard HTTP client. That said, I don't know golang or our product code well enough.
My intention is more about the ability to run multiple replicas across multiple nodes/zones to avoid a single point of failure (the worker's node/zone goes down), rather than to scale horizontally for performance.
OK, I see. Yeah, I am talking about how clients connect. Right now we connect using an env var, so it's a simple HTTP connection to a single address. There is no sharding logic in the code (like, for example, what we do for zoekt/gitserver), so you would need to have that point to a load balancer in order to get traffic directed to both pods, I believe (but my k8s is rusty; maybe the ingress does that for us?). If the intention is just redundancy in case one zone goes down, then yeah, that should work. The three things to note if we are talking about scaling for perf, though:
Those are the reasons vertical scaling for this service is better than horizontal in general: the service becomes more reliable the more resources it is given, due to its internal worker processes. But it sounds like your goal is to have redundancy across AZs, which is great, so feel free to go ahead on that; we may need some LB in front that can divide up the requests or something. The only thing I'd be worried about is people in the future seeing we've got multiple replicas and assuming the service scales horizontally for performance.
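For illustration, here is a minimal sketch of that env-var-based connection as it might appear in a deployment manifest. The variable name SRC_SYNTECT_SERVER, the port, and the image tag are assumptions based on common Sourcegraph defaults, not details taken from this thread:

```yaml
# Hypothetical excerpt from the frontend Deployment. The client finds
# syntect-server through a single URL in one env var, so every request
# goes to one address; there is no client-side sharding across replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sourcegraph-frontend
spec:
  selector:
    matchLabels:
      app: sourcegraph-frontend
  template:
    metadata:
      labels:
        app: sourcegraph-frontend
    spec:
      containers:
        - name: frontend
          image: sourcegraph/frontend:insiders # assumed tag
          env:
            # Assumed variable name and port; the key point is that this is
            # a single address, ideally a k8s Service name rather than a pod IP.
            - name: SRC_SYNTECT_SERVER
              value: http://syntect-server:9238
```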
;) k8s Service does all the magic you mention for you. To clarify, all inter-service communication in Sourcegraph routes through k8s Services, which handle load balancing and route traffic intelligently for you (if a pod is unhealthy, the Service stops routing traffic to the broken pod).
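As a concrete sketch of that behavior (the names, port, and health endpoint are assumptions, not copied from the actual manifests): a Service selects pods by label and only routes to replicas whose readiness probe is passing, so a broken pod drops out of rotation automatically.

```yaml
# Sketch: the Service load-balances across all *ready* syntect-server pods.
apiVersion: v1
kind: Service
metadata:
  name: syntect-server
spec:
  selector:
    app: syntect-server          # matches the pod labels below
  ports:
    - port: 9238
      targetPort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syntect-server
spec:
  replicas: 2                    # redundancy across nodes/zones
  selector:
    matchLabels:
      app: syntect-server
  template:
    metadata:
      labels:
        app: syntect-server
    spec:
      containers:
        - name: syntect-server
          image: sourcegraph/syntect_server:insiders # assumed tag
          ports:
            - name: http
              containerPort: 9238
          readinessProbe:        # failing pods are removed from the Service
            httpGet:
              path: /health      # assumed health endpoint
              port: http
```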
lol, you wouldn't think I was on Delivery for ~3yrs and forgot that. My brain's RAM is running out :)
Good point. Also, this would be a good addition to our scaling guide: https://docs.sourcegraph.com/admin/install/kubernetes/scale. We probably won't be able to give recommendations based on repo size (at least I don't have the data), but maybe we can just say: increase replicas and tune the worker count.
I would phrase it as:
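For illustration, a hedged sketch of what "increase replicas and tune the worker count" could translate to, written as a kustomize-style strategic merge patch. WORKERS as the worker-process knob and the resource values are assumptions for illustration only:

```yaml
# Patch sketch: add a replica for redundancy, and scale each replica
# vertically (more CPU/memory plus more worker processes per pod).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syntect-server
spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: syntect-server
          env:
            - name: WORKERS      # assumed env var for worker-process count
              value: "8"
          resources:
            limits:
              cpu: "4"
              memory: 6Gi
```

A patch like this could be applied with `kubectl patch` or referenced from a kustomization, keeping the base manifest untouched.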
Force-pushed from 9ce6c73 to bff885b.
Follow-up on https://sourcegraph.slack.com/archives/CHXHX7XAS/p1646783148867999: syntax-highlighter is a stateless service and has no external dependencies (e.g. Postgres), so it should be horizontally scalable.

Test plan
Go to http://localhost:5080/dev/background-information/architecture#diagram
syntect server should be surrounded by a box instead of a rectangle.

Should we experiment with it on dogfood? https://github.com/sourcegraph/deploy-sourcegraph-dogfood-k8s/pull/4008 (merged)

dogfood syntax highlighting seems fine to me
container logs look fine too