Add WebSocket and SSE options #1272

Closed
wants to merge 7 commits into from
Closed
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Jump to
Jump to file
Failed to load files.
Diff view
Diff view
22 changes: 22 additions & 0 deletions SPEC
@@ -113,6 +113,28 @@ be implemented by the server.
fatal(message, &block)
<tt>rack.multipart.buffer_size</tt>:: An Integer hint to the multipart parser as to what chunk size to use for reads and writes.
<tt>rack.multipart.tempfile_factory</tt>:: An object responding to #call with two arguments, the filename and content_type given for the multipart form field, and returning an IO-like object that responds to #<< and optionally #rewind. This factory will be used to instantiate the tempfile for each multipart form file upload field, rather than the default class of Tempfile.

Servers that choose to support WebSocket and SSE connections should follow these additional environment specifications:
<tt>rack.upgrade?</tt>:: Is nil or missing if no connection upgrade is possible, :websocket for a WebSocket upgrade, or :sse for an SSE upgrade.
<tt>rack.upgrade</tt>:: Is used to pass a handler object back to the server for WebSocket and SSE connections. The handler MAY implement any or all of the following callbacks.
on_open(client) # called when the connection is complete with a client object.
on_message(client, message) # called when a WebSocket message is received by the server.
# <tt>message</tt> will be UTF-8 encoded for text messages
# and Binary encoded for binary messages.
on_shutdown(client) # may be called before a connection is closed due to server shutdown.
on_close(client) # called after the connection is closed.
on_drained(client) # may be called when the number of pending writes drops to zero.

The <tt>client</tt> is used for writing and checking the status of the upgraded connection. It has these methods.
write(message) # writes to the WebSocket or SSE connection
close() # forces a close of the WebSocket or SSE connection

After reading through @ioquatix's and @matthewd's comments, I wonder:

Perhaps close() should "schedule the connection to close once all pending write calls have been performed"?

Member

If you want to have that, implement it as close_write, which is what Ruby calls shutdown(SHUT_WR). close should guarantee that, after returning, the underlying socket is closed. Otherwise, you are in for a world of pain.
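
For reference, a rough sketch of that distinction with a plain Ruby TCPSocket, outside of any Rack server; the endpoint and request are purely illustrative:

    require 'socket'

    sock = TCPSocket.new('example.com', 80)                    # illustrative endpoint
    sock.write("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    sock.close_write    # half-close: shutdown(SHUT_WR); the peer sees EOF, reads still work
    body = sock.read    # drain whatever the peer sends back
    sock.close          # full close: the underlying socket is released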

Contributor Author

Agoo treats a call to close as a request, just like a write, and places the close on the queue.

@boazsegev May 4, 2018

@ohler55, so does iodine. I place a "close" marker on the packet queue, so close is always performed after all scheduled data is sent.

@ioquatix, I think you're missing the point of abstracting away the network and the protocol.

My suggestion was about clarifying this little bit, not changing what both iodine and agoo already implement.

We aren't authoring a network layer. We are authoring an abstracted application side API.

The reasonable exception is that write is performed before close, i.e., if my code is:

write "hello"
close

The reasonable expectation is that "hello" is actually written.

There's no force_close or close_write in the specification because the application shouldn't be concerned with these things. If the application doesn't want data written, it can avoid writing it.

Member

You can make close call flush, or you can flush after every write. But if you make close call flush, you had better be careful about EPIPE.

Member

For a high level protocol like this, calling flush after each write would make sense to me.

It provides the user with a strong expectation: after calling write, the data has been sent and, barring a network failure, will arrive, or else the write fails right then with, say, EPIPE. Otherwise you'll just end up with a spaghetti state machine trying to handle all these conditions.

@boazsegev May 4, 2018

@ioquatix,

I hope I don't seem too blunt or crude. I very much appreciate the interest and willingness to polish the specification and make it both as clear and as practical as can be.

However, please consider the model to be totally separate from the network - there is no network. There's only this API.

We can change the API if we need features, but we don't expose network bits or logic because part of our job is to abstract these things away - so there is no network, there is no protocol (as much as possible).

In this sense, flush doesn't exist. It's a network / server detail that the application never sees, abstracted away by the server.

The closest an application can come to asking about these things is to ask about all the pending outgoing write events that haven't yet completed. This allows an application to know if the on_drained callback is somewhere in its future.

The pending query doesn't expose the network, it exposes the progress of existing API calls. This provides important information about the possibility of a slow client (or a slowly clearing "queue"), allowing an application to stop serving a resource-hungry "client".
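
As a rough illustration of that use of pending and on_drained for backpressure (a sketch only; the class name, MAX_BACKLOG, and next_chunk are hypothetical, and only the callbacks and client methods come from the proposed API):

    class StreamHandler
      MAX_BACKLOG = 16                  # hypothetical threshold for "too many queued writes"

      def on_open(client)
        push(client)
      end

      def on_drained(client)
        push(client)                    # write queue emptied; safe to send more
      end

      private

      def push(client)
        while client.open? && client.pending < MAX_BACKLOG
          chunk = next_chunk
          return client.close unless chunk
          client.write(chunk)
        end
      end

      def next_chunk
        nil   # hypothetical data source: return the next piece of the stream, or nil when done
      end
    end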

Contributor Author

If I understand your response, you are answering yes to the question of whether or not a flush method should be added. Then in any application you wrote you would block until the write completes instead of making use of the on_drained callback. That is your choice of course.

Member

However, please consider the model to be totally separate from the network - there is no network. There's only this API.

Fair enough.

Member

If I understand your response, you are answering yes to the question of whether or not a flush method should be added. Then in any application you wrote you would block until the write completes instead of making use of the on_drained callback. That is your choice of course.

I really find the inverted flow control of callback-style programming horrible. So, I prefer #flush over an #on_drained callback. Callbacks = state machine spaghetti = hard to maintain/buggy code. It's just my personal opinion, FYI.

open?() # returns true if the connection is open, false otherwise
pending() # returns the number of pending writes or -1 if the connection is closed
env() # returns the <tt>env</tt> Hash from the initial call to <tt>#call()</tt>. Note some elements may no longer be relevant.
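
For illustration only, a minimal handler under this proposal might look like the following sketch (the class name is hypothetical; the callbacks and client methods are the ones listed above):

    class EchoHandler
      def on_open(client)
        client.write("welcome") if client.open?
      end

      def on_message(client, message)
        client.write(message)            # echo text or binary payloads back
      end

      def on_shutdown(client)
        client.write("server shutting down")
      end

      def on_close(client)
        # connection already closed; release any per-connection state here
      end
    end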

The <tt>env['rack.upgrade']</tt> option should only be set if the environment has a non-nil value for the <tt>rack.upgrade?</tt> option.
If the response status is 300 or higher, the server MUST ignore the <tt>rack.upgrade</tt> value (send the response without performing an upgrade).
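
A sketch of how an application might opt in from #call, assuming a server that implements this proposal (EchoHandler is the hypothetical handler sketched above):

    class App
      def call(env)
        case env['rack.upgrade?']
        when :websocket, :sse
          env['rack.upgrade'] = EchoHandler.new   # hand the handler back to the server
          [200, {}, []]                           # status below 300, so the upgrade proceeds
        else
          [200, { 'Content-Type' => 'text/plain' }, ['Hello']]
        end
      end
    end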

The server or the application can store their own data in the
environment, too. The keys must contain at least one dot,
and should be prefixed uniquely. The prefix <tt>rack.</tt>