
refactor: extracted PortForwarderWebsocketListener and added tests #4159

Merged
merged 1 commit into fabric8io:master from refactor/port-forwarder on May 19, 2022

Conversation

@manusa (Member) commented May 18, 2022

Description

refactor: extracted PortForwarderWebsocketListener and added tests

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Chore (non-breaking change which doesn't affect codebase;
    test, version modification, documentation, etc.)

Checklist

  • Code contributed by me aligns with current project license: Apache 2.0
  • I added a CHANGELOG entry regarding this change
  • I have implemented unit tests to cover my changes
  • I have added/updated the javadocs and other documentation accordingly
  • No new bugs, code smells, etc. in SonarCloud report
  • I tested my code in Kubernetes
  • I tested my code in OpenShift

@manusa mentioned this pull request May 18, 2022
@manusa marked this pull request as ready for review May 18, 2022 13:58
@shawkins (Contributor) left a comment

This looks good. Just a couple of thoughts. It might be simpler if PortForwarderWebsocketListener implemented PortForward - that tends to be the way it's done for the other listeners.

This won't induce the failure mentioned with #4115 (comment) - but it seems like your goal was just to have it be more testable in general.

There will be some rebasing to do depending on what is committed first given the changes in #4148
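
For illustration, a rough sketch of what that suggested shape could look like - hypothetical, simplified types only: PortForwardHandle stands in for the client's PortForward interface, and the JDK java.net.http API stands in for the client's own WebSocket abstraction.

```java
import java.io.IOException;
import java.net.http.WebSocket;
import java.nio.ByteBuffer;
import java.util.concurrent.CompletionStage;

// Stand-in for the client's PortForward interface; the real contract in the
// client is richer than this.
interface PortForwardHandle extends AutoCloseable {
  boolean isOpen();
}

// Hypothetical combined listener/handle: the object handed back to the caller
// is the same object that receives the websocket events.
class PortForwardingListener implements WebSocket.Listener, PortForwardHandle {

  private volatile WebSocket webSocket;
  private volatile boolean open;

  @Override
  public void onOpen(WebSocket webSocket) {
    this.webSocket = webSocket;
    this.open = true;
    webSocket.request(1);
  }

  @Override
  public CompletionStage<?> onBinary(WebSocket webSocket, ByteBuffer data, boolean last) {
    // forward the payload to the local channel here, then ask for the next frame
    webSocket.request(1);
    return null;
  }

  @Override
  public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
    open = false;
    return null;
  }

  @Override
  public boolean isOpen() {
    return open;
  }

  @Override
  public void close() throws IOException {
    open = false;
    WebSocket ws = webSocket;
    if (ws != null) {
      ws.sendClose(WebSocket.NORMAL_CLOSURE, "closed by client").join();
    }
  }
}
```

The trade-off, as discussed below, is that keeping the listener as its own class separates concerns and makes its behaviors easier to specify and test in isolation.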

@manusa (Member, Author) commented May 18, 2022

It might be simpler if PortForwarderWebsocketListener implemented PortForward - that tends to be the way it's done for the other listeners.

I considered that option; however, I decided to proceed this way to properly separate concerns and keep things simple.

It also simplifies specifying the behaviors to be tested.

This won't induce the failure mentioned with #4115 (comment) - but it seems like your goal was just to have it be more testable in general.

I didn't try altering the method calls, but part of the goal should be to cover that scenario and make sure the issue won't get reintroduced. I'll give it another spin in case that's not prevented.

There will be some rebasing to do depending on what is committed first given the changes in #4148

I couldn't check #4148, so I'm not sure if you decided on one of the suggested approaches. I'll check tomorrow and ping you.

@shawkins (Contributor) commented

I'll give it another spin in case that's not prevented.

Causing it depends on a timing issue in the WebSocket implementation. See the breakup into partial writes in PodTest - "Hell" and "o world" are emitted separately. From there, if the remote websocket close/done is processed in a racy way relative to the final write, you can get truncated output.
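
For concreteness, a minimal sketch of the ordering that makes that truncation possible - hypothetical code, using the JDK java.net.http API rather than the client's own WebSocket abstraction: if the next event is requested before the current payload has been written, a pending close can be delivered and close the channel while the final fragment is still being written.

```java
import java.io.IOException;
import java.net.http.WebSocket;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.CompletionStage;

// Hypothetical listener, for illustration only.
class RacyForwardListener implements WebSocket.Listener {

  private final WritableByteChannel localChannel;

  RacyForwardListener(WritableByteChannel localChannel) {
    this.localChannel = localChannel;
  }

  @Override
  public CompletionStage<?> onBinary(WebSocket webSocket, ByteBuffer data, boolean last) {
    // Racy: depending on the websocket implementation, a pending close may now
    // be delivered on another thread while the payload below is still in flight.
    webSocket.request(1);
    try {
      while (data.hasRemaining()) {
        localChannel.write(data); // the final fragment ("o world") can lose this race
      }
    } catch (IOException e) {
      webSocket.abort();
    }
    return null;
  }

  @Override
  public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
    try {
      localChannel.close(); // closing here can truncate a write still in progress
    } catch (IOException ignored) {
      // best effort
    }
    return null;
  }
}
```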

@manusa (Member, Author) commented May 19, 2022

So I've checked moving the webSocket.request() call around.

The current tests do verify that more data is only requested once we're ready to process it. So moving the webSocket.request() invocation causes a few test failures.

There is no specific test for the race condition, but I'm not even sure that this is valid behavior. Clients should be responsible for closing the websocket when they've received the requested data or when they meet any other condition. IMO the PodTest#testPortForward is not OK, since it assumes that the transmission is complete when the SocketChannel is empty, which might or might not be true. The tests also verify the synchronous nature of onMessage, ensuring the websocket is processed in an orderly fashion, which is important too.

The current onMessage implementation, where the data is processed asynchronously in the SerialExecutor, would also suffer from moving the webSocket.request() call out, since the "backpressure" is lost and we might get flooded with messages from the server (even if the message processing order is still guaranteed by the SerialExecutor). This is also verified.
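
A minimal sketch of that backpressure idea - hypothetical code, with the JDK java.net.http API and a plain single-threaded executor standing in for the client's own WebSocket abstraction and SerialExecutor: the next frame is requested only after the current one has been fully written, and queuing the close on the same executor keeps ordering intact.

```java
import java.io.IOException;
import java.net.http.WebSocket;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical listener, for illustration only.
class BackpressuredForwardListener implements WebSocket.Listener {

  private final WritableByteChannel localChannel;
  private final ExecutorService serialExecutor = Executors.newSingleThreadExecutor();

  BackpressuredForwardListener(WritableByteChannel localChannel) {
    this.localChannel = localChannel;
  }

  @Override
  public CompletionStage<?> onBinary(WebSocket webSocket, ByteBuffer data, boolean last) {
    // Copy the payload: the implementation may reuse the buffer after this method returns.
    ByteBuffer copy = ByteBuffer.allocate(data.remaining());
    copy.put(data).flip();
    serialExecutor.execute(() -> {
      try {
        while (copy.hasRemaining()) {
          localChannel.write(copy);
        }
        // Backpressure: ask for more only once this message has been consumed.
        webSocket.request(1);
      } catch (IOException e) {
        webSocket.abort();
      }
    });
    return null;
  }

  @Override
  public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
    serialExecutor.execute(() -> {
      try {
        localChannel.close(); // runs only after all queued writes have completed
      } catch (IOException ignored) {
        // best effort
      } finally {
        serialExecutor.shutdown();
      }
    });
    return null;
  }
}
```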

In any case, I'll try to add a few more tests to verify additional scenarios.

Signed-off-by: Marc Nuri <marc@marcnuri.com>
sonarcloud bot commented May 19, 2022

SonarCloud Quality Gate failed.

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 1 (A)
Coverage: 73.8%
Duplication: 0.0%

@shawkins (Contributor) commented

So moving the webSocket.request() invocation causes a few test failures.

My initial mistake was not accounting for this behavior being dependent on the websocket implementation. Based upon the JDK logic, my initial understanding was that it didn't matter where you called websocket.request in the listener - the events were delivered serially because the current thread, once it finished the listener method, would trigger the processing of the next event. With OkHttp it appears close can be delivered via the other websocket thread - the one processing timed events and ping/pong - so as soon as you call request, a pending close could get delivered.

@manusa manusa added this to the 6.0.0 milestone May 19, 2022
@manusa manusa merged commit a1f44db into fabric8io:master May 19, 2022
@manusa manusa deleted the refactor/port-forwarder branch May 19, 2022 12:41
shawkins added a commit to shawkins/kubernetes-client that referenced this pull request Oct 25, 2022