Allow changing options in live client #97

Open · wants to merge 1 commit into master

Conversation

vorner (Contributor) commented Nov 14, 2018

Hello

I'd like to have a way to change all the configuration of my application at runtime. That would also mean being able to change the configuration of sentry, and I didn't find a way to do it with the current API. I noticed there's a way to bind a new client into the main Hub. However, if I read the code correctly, derived thread-local hubs clone the Arc to the client on creation. That would mean that if I ever change the client in the main hub, only new threads would get the new client, while previous threads would keep the old one.

This addition would allow me to replace the „insides“ of a client ‒ change its options and transport ‒ and the change would be reflected in all the hubs.

Does that make sense? Or is there a better way to reach the goal, or some other API change that would make it possible?

Unfortunately, because of the changes inside Client, I had to change the return types of two reader methods, which would make this a semver-incompatible change ☹.

* New Client::change_options method that changes the options and reopens
  a new transport using the new options.
* Change of return types of Client::options() and Client::dsn() because
  of internal changes.
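A hedged usage sketch of the proposed method (the exact signature of change_options is assumed from the description above; Hub::main() and client() are used purely for illustration):

```rust
use sentry::{ClientOptions, Hub};

// Hypothetical config-reload hook: the main hub's client is shared (via Arc
// clones) by every derived hub, so mutating it in place would be visible in
// all threads.
fn on_config_reload(new_options: ClientOptions) {
    if let Some(client) = Hub::main().client() {
        // Proposed by this PR: replace the options and reopen the transport.
        client.change_options(new_options);
    }
}
```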
mitsuhiko (Member)

It's currently quite intentional that changes to the options are not possible. I would be curious how this would be useful, because it introduces things that make the integration code much more complex (integrations can no longer rely on the options being immutable).

vorner (Contributor, Author) commented Nov 22, 2018

Well, a lot of services are fine with being restarted, because they hold little state and start up fast.

However, this is not always the case. A few examples:

  • A DNS resolver at an ISP would have a big in-memory cache. When restarted, it would start fresh, generating much more upstream traffic and responding more slowly until it re-fills the cache. That can take something like 30 minutes, sometimes even more.
  • A database is going to do sanity checks when starting up.
  • Something like nginx should be able to add more endpoints at runtime, without disturbing the current ones if at all possible.
  • A VPN server drops all client connections on restart, degrading the users' experience because they all have to reconnect.

In these cases, it is desirable that the service or daemon can reload and update its configuration at runtime.

While the sentry configuration is probably not the part you usually want to update at runtime in these cases, it is surprising when some subset of the configuration does not update when requested.

As for my own motivation, I'm trying to create a library that helps with these kinds of reloadable daemons (https://crates.io/crates/spirit). I'd like to add support for sentry too ‒ the application would just include a configuration fragment in its config definition and the library would take care of the rest. However, I can't do that if there's no way to either update the configuration or somehow tear it down and re-initialize it at runtime.

mitsuhiko (Member)

@vorner I'm just not sure why you need to be able to reconfigure the client instead of just making a new client and binding it. That enables the hot-reload situation with configuration changes, and we're using it that way ourselves.
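For reference, a minimal sketch of that suggestion, assuming the Hub::main() and bind_client APIs:

```rust
use std::sync::Arc;
use sentry::{Client, Hub};

// Build a fresh client from the new configuration and bind it to the main
// hub. The catch discussed below: hubs already cloned into other threads
// keep their old Arc<Client>.
fn rebind(new_client: Arc<Client>) {
    Hub::main().bind_client(Some(new_client));
}
```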

vorner (Contributor, Author) commented Nov 30, 2018

That's what I intended to do at first. But then I started to read the documentation and code, came to the conclusion that it wouldn't work, and started to look for other options. Maybe the conclusion is wrong, but this is what I seem to be reading:

  1. I create the very first client and bind it to the main hub.
  2. A new thread is spawned and it accesses its „current“ hub.
  3. This is the first access to the THREAD_HUB thread-local variable. So it'll initialize it by calling Hub::new_from_top(&PROCESS_HUB.0) (hub.rs:29).
  4. The new_from_top clones the client handle. The client is full of Arcs, so the clone more or less points to the same client.
  5. Now, I try to create a new client and again bind it to the main hub. This replaces it in the main hub. However, the thread-local hub of the second thread still points to the old one and uses that one, so I haven't updated the client for it.
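A minimal sketch of those steps, assuming the Hub::main(), Hub::current() and bind_client APIs:

```rust
use std::sync::Arc;
use sentry::{Client, Hub};

fn scenario(first: Arc<Client>, second: Arc<Client>) {
    // Step 1: bind the very first client to the main hub.
    Hub::main().bind_client(Some(first));

    // Steps 2-4: the first access from a new thread initializes THREAD_HUB
    // as a clone of the main hub, so it holds an Arc to the first client.
    let worker = std::thread::spawn(|| {
        let hub = Hub::current();
        // ... long-running work using `hub`, which keeps the old client ...
        drop(hub);
    });

    // Step 5: rebinding only affects the main hub; the worker's thread-local
    // hub still points at the first client.
    Hub::main().bind_client(Some(second));

    worker.join().unwrap();
}
```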

Or do I read the code wrong? How is it supposed to work or be used?

Thank you

mitsuhiko (Member)

@vorner That is correct. Each thread will hold on to the old hub until the end of its lifetime, which is intentional. If you have long-running threads on which you want to reconfigure the hub, you would have to keep track of all the hubs created.

I'm not sure yet what an improvement here would look like, but I worry about making the options writable. That causes all kinds of concurrency issues.

vorner (Contributor, Author) commented Dec 1, 2018

Hmm. Many applications have long-running threads (especially the ones I mentioned above), but keeping track of them could be challenging. For example, the default tokio runtime keeps a thread pool and spawns and shuts down threads as needed, but most of them will live long. The reqwest HTTP client spawns its own thread that stays alive as long as one holds on to the Client ‒ which is recommended to reuse. And even if it were possible, my position would be that of a library ‒ I won't have access to all the threads a user spawns, and asking the user to keep track of them for me seems wrong.

I agree that my solution is a bit hacky, but it felt like the smallest change I could make.

Brainstorming some ideas about what might be helpful from sentry's side:

  • Providing a list of the top-level Hubs, so I could iterate through them and bind the new client onto them. But that kind of smells too.
  • Putting the main hub at the bottom of the hub stack (do I get it right that there is some kind of stack of hubs, so something can be reconfigured in a local scope?), then placing the thread-local hub on top of it, but leaving the thread-local hub without its own client ‒ so when searching for a client to use, the lookup would reach the main one. This would allow replacing the „main“ client in the whole application. A thread, if it wanted, could still bind its own client and override the main client for itself, but if it did nothing, it would get the up-to-date client. (A rough sketch follows below.)

Would anything like that make sense?
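A rough sketch of that second idea, with purely hypothetical internals (none of these types, fields or methods exist in the crate):

```rust
use std::sync::Arc;

// Hypothetical: a hub only optionally owns a client.
struct HubState {
    client: Option<Arc<sentry::Client>>,
}

impl HubState {
    // When this hub has no client of its own, fall back to the main hub's
    // client, which can be rebound at runtime and is then seen everywhere.
    fn effective_client(&self, main: &HubState) -> Option<Arc<sentry::Client>> {
        self.client.clone().or_else(|| main.client.clone())
    }
}
```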

vorner (Contributor, Author) commented Jan 8, 2019

Ping?

I don't insist on this particular approach, but I'd appreciate a suggestion about what a viable way forward would be.

mitsuhiko (Member)

@vorner we will revisit this soon. I don't have a good solution to this yet :(

There is no stack of hubs, just a stack within the hub. I agree that enumerating hubs is not a great solution. My best suggestion right now is more indirection, which I'm not super happy with.

vorner (Contributor, Author) commented Dec 26, 2019

Hello

Was there a conclusion?

Swatinem (Member) commented May 6, 2020

Hi!
So I have come quite some way with my restructuring of the crate, and I have a better outlook and ideas about what to do.
I discussed some of those, and also your use case, with the team.

While I do think it's a valid use case, it is kind of niche, and we don't know yet how it really fits into the Unified API design.

However, I noticed that the way the current Client works is a bit suboptimal right now. I imagine making the Client a newtype around Arc<RwLock<Option<…>>>, so it's internally clonable and mutable. When you close it, the internals really get cleaned up and dropped, and with that, you can also reconfigure it across all threads. Not sure how efficient that would be, however. And it's quite a change either way.
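A rough sketch of that shape, where ClientInner is a made-up stand-in for the elided internals:

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the real client internals (options, transport, ...).
struct ClientInner;

#[derive(Clone)]
pub struct Client(Arc<RwLock<Option<ClientInner>>>);

impl Client {
    // Closing really drops the internals for every clone on every thread.
    pub fn close(&self) {
        *self.0.write().unwrap() = None;
    }

    // Reconfiguring swaps the internals; all clones observe the change.
    pub fn reconfigure(&self, new: ClientInner) {
        *self.0.write().unwrap() = Some(new);
    }
}
```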

vorner (Contributor, Author) commented May 7, 2020

Thank you for looking at that. I'd say that reconfiguring doesn't really need to be efficient, as it happens only as a result of user interaction ‒ therefore only once in a blue moon.

Swatinem (Member) commented May 7, 2020

My concern is rather the overhead of locking and Arc-ing all the time, even when you never reconfigure.

I think the best way forward here would be to actually measure and benchmark all this. I would love to have at least some benchmarks/numbers for:

  • no with_client_implementation <- presumably the fastest noop case
  • with_client_implementation, but with a disabled client: should be a noop in most cases, but has some overhead from thread locals, checks and Arcs
  • regular, active client
  • basically this case, which does more locking and Arc-ing

I would especially love to prove with numbers that the client implementation feature is useless, and be able to remove it, since it is super painful to maintain. Especially now that the core is getting more modular, compilation times and dependencies are less of a concern.
I would also like to get rid of more unsafe code. One example is the use of UnsafeCell to switch the thread-local hub, which I guess could only be done safely with a lock, so I would like to measure that as well.
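A minimal sketch of what one of those benchmarks could look like using criterion (the other cases would toggle Cargo features and init options; the benchmark name is made up):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Measures capture overhead when no client is bound; with the client
// implementation disabled or compiled out this should be close to a noop.
fn capture_overhead(c: &mut Criterion) {
    c.bench_function("capture_message_no_client", |b| {
        b.iter(|| sentry::capture_message("bench", sentry::Level::Info))
    });
}

criterion_group!(benches, capture_overhead);
criterion_main!(benches);
```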

vorner (Contributor, Author) commented May 8, 2020

I'll admit I don't remember many of the details and haven't looked under the hood recently, so I'm guessing a bit about a lot of what you describe. Nevertheless, if the concern is about the Arc and RwLock, I'll do a shameless plug here: I have this arc-swap library that could replace them while possibly being much faster. It's less powerful (you probably could not outright mutate the client, only replace it as a whole).

Furthermore, with the Cache it's possible to make it even faster, but with some other drawbacks (for example, old, never-to-be-used-again clients may stay around for a longer time).
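For illustration, a minimal sketch of that approach with arc-swap (the holder type and method names here are made up, not the crate's API):

```rust
use std::sync::Arc;
use arc_swap::ArcSwapOption;

// Hypothetical holder for the globally shared client.
struct ClientSlot {
    client: ArcSwapOption<sentry::Client>,
}

impl ClientSlot {
    // Hot path: a cheap, lock-free snapshot of the current client.
    fn current(&self) -> Option<Arc<sentry::Client>> {
        self.client.load_full()
    }

    // Cold path (config reload): replace the client as a whole; readers
    // holding the old Arc keep it alive until they drop it.
    fn replace(&self, new: Arc<sentry::Client>) {
        self.client.store(Some(new));
    }
}
```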
