Paginated requests for the file list #13915
Once we have a solution for #983, this should no longer be an issue, because we know the content did change and we can invalidate the pagination.
Hmmm true, nice idea 😄
research topic => 9.0
So far there has been no need for paginated requests; the current solution seems to work fine. Close? (Can be reopened if the need arises.)
Reopening to reconsider for 9.2. One idea is to use the REPORT method with pagination, like for the comments, and use the PROPFIND-like response from the "tags filter" section. On the JS side, more work will be required because some parts of the code require the whole list to be known upfront, for example for detecting duplicate file names. CC @butonic
For reference: the comments implementation for REPORT LIMIT and OFFSET filters is in https://github.com/owncloud/core/blob/master/apps/dav/lib/Comments/CommentsPlugin.php#L49
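For illustration, a file-list REPORT body modeled on that comments filter might carry the limit and offset like this. Note the element names (`oc:filter-files`, `oc:limit`, `oc:offset`) are an assumption sketched from the comments plugin, not a confirmed server schema; the parsing below shows roughly what a SabreDAV plugin would do with them:

```python
import xml.etree.ElementTree as ET

OC_NS = "http://owncloud.org/ns"

# Hypothetical REPORT body in the style of the comments LIMIT/OFFSET filter;
# the real element names would have to match whatever the server registers.
report_body = (
    '<?xml version="1.0"?>'
    f'<oc:filter-files xmlns:oc="{OC_NS}">'
    '<oc:limit>20</oc:limit>'
    '<oc:offset>40</oc:offset>'
    '</oc:filter-files>'
)

def parse_pagination(xml_text):
    """Extract limit/offset the way a DAV server plugin would read them."""
    root = ET.fromstring(xml_text)
    limit = root.findtext(f"{{{OC_NS}}}limit")
    offset = root.findtext(f"{{{OC_NS}}}offset")
    return int(limit), int(offset)

print(parse_pagination(report_body))  # (20, 40)
```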
Simply using multiple AJAX requests to load the whole list may not be the solution, because the sheer number of files might be too much to handle in a browser.
Also note that with pagination and multiple AJAX requests per page, there is a risk of missing files in case new files have appeared in the list. Now that I think of it, we might be able to use the WebDAV precondition headers to specify the last known folder etag. In case the list changed between two paginated calls, we need to find a way to recover from it.
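The recovery idea can be sketched as: remember the folder etag seen on the first page, and restart from offset 0 whenever a later page comes back with a different etag. A minimal simulation, where `FakeFolder` is a stand-in for a DAV collection rather than the actual ownCloud API:

```python
class FakeFolder:
    """Stand-in for a DAV collection: its etag changes on every write."""
    def __init__(self, names):
        self.names = list(names)
        self._rev = 0
        self.etag = "etag-0"

    def add(self, name):
        self.names.insert(0, name)
        self._rev += 1
        self.etag = f"etag-{self._rev}"

    def page(self, offset, limit):
        """One paginated call: returns (current folder etag, slice)."""
        return self.etag, self.names[offset:offset + limit]


def fetch_all(folder, limit=2, max_restarts=5):
    """Paginate; restart from scratch if the folder etag moved mid-listing."""
    for _ in range(max_restarts):
        base_etag, first = folder.page(0, limit)
        result, offset = list(first), limit
        while len(result) == offset:  # last page was full -> maybe more
            etag, chunk = folder.page(offset, limit)
            if etag != base_etag:
                break  # list changed underneath us -> restart pagination
            result.extend(chunk)
            offset += limit
        else:
            return result
    raise RuntimeError("folder kept changing, giving up")


f = FakeFolder(["a.txt", "b.txt", "c.txt", "d.txt", "e.txt"])
print(fetch_all(f))  # ['a.txt', 'b.txt', 'c.txt', 'd.txt', 'e.txt']
```

A real implementation could carry the known etag in an `If` precondition header instead of comparing it client-side, but the retry logic would look the same.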
@DeepDiver1975 pointed out that if we had a different data structure where everything the user views is on a single table, filtering and pagination would be more accurate and faster. |
... because currently the result of whatever the user views is pulled from different tables, especially when dealing with mount points. So it's not a simple flat list. |
Would need to add pagination to all layers: #6103. I also thought about #4936 a bit. If we keep an etag history (WebDAV sync?!), it means it should still be possible to query an older version of the file list and paginate on that. Well, but that only works for the full file list, not for search.
I'll try to give pagination a shot using JS. Since I'm fairly new to OC, I don't know how to talk to the API just yet, but virgin eyes are always good :-) Is there a way to talk JSON with/to the API?
Have a look at the PROPFIND calls in the network console after browsing into a folder in the web UI. It seems that WebDAV should be extensible enough to allow us to make it read JSON and output JSON. But I'm not sure whether SabreDAV is extensible enough. We'll likely need to add layers there.
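As a rough illustration of the "read XML, output JSON" idea, an extra layer could flatten a DAV multistatus response into plain JSON. This is a minimal sketch; real PROPFIND responses carry many more properties per entry:

```python
import json
import xml.etree.ElementTree as ET

DAV = "{DAV:}"

# A trimmed-down multistatus body as a PROPFIND might return it.
multistatus = """<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/remote.php/webdav/docs/</d:href>
    <d:propstat>
      <d:prop><d:getetag>"abc123"</d:getetag></d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>"""

def multistatus_to_json(xml_text):
    """Flatten each <d:response> into a small JSON object."""
    root = ET.fromstring(xml_text)
    entries = []
    for resp in root.findall(f"{DAV}response"):
        entry = {"href": resp.findtext(f"{DAV}href")}
        for etag in resp.iter(f"{DAV}getetag"):
            entry["etag"] = etag.text
        entries.append(entry)
    return json.dumps(entries)

print(multistatus_to_json(multistatus))
```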
The main problem still is that we currently need to generate the full file list on the server before that list can be paginated. Short-term idea: cache the generated file list for a short time frame and reuse it on subsequent calls.
The problem is less showing the list than fetching it from the server. XML serialization of 10000 entries alone takes a few seconds. Also, pagination can be passed down to the db layer, allowing filecache queries to be paginated as well.
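Pushing pagination down to the db layer essentially means a LIMIT/OFFSET (or keyset) clause on the filecache query. A toy version against SQLite; the table layout here is invented for illustration and is not the real `oc_filecache` schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE filecache (fileid INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO filecache (name) VALUES (?)",
    [(f"file{i:04d}.txt",) for i in range(10000)],
)

def page(limit, offset):
    # A stable ORDER BY is essential, otherwise pages may overlap or skip rows.
    rows = conn.execute(
        "SELECT name FROM filecache ORDER BY name LIMIT ? OFFSET ?",
        (limit, offset),
    )
    return [r[0] for r in rows]

print(page(3, 0))     # ['file0000.txt', 'file0001.txt', 'file0002.txt']
print(page(3, 9998))  # ['file9998.txt', 'file9999.txt']
```

Only the requested rows ever leave the database, which is the whole point: the server no longer materializes 10000 entries per request.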
Some day... not now... because, as we already know, the filecache doesn't contain all the entries we want to return; these need to be combined with mount points, and pagination becomes quite complex then.
We can improve this, because the current implementation is suboptimal (all in memory instead of streaming); see #14531 (comment) and owncloud/client#3111 (comment)
But then the client has to support streaming as well... otherwise we still have to wait until the whole response has arrived; that is not how jQuery AJAX requests work. And if we were to implement comet-style long polling, we would quickly leave the WebDAV spec.
In any case, with streaming you should still get a speed improvement, as you save all the huge in-memory tree overhead and just stream out the XML. I think nginx in its default configuration is even tempted to spool the reply to a file before sending it to the client socket :[ For real streamed reading:
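The streaming point can be sketched like this: instead of building a 10000-node document tree and serializing it at the end, yield one `<d:response>` fragment per entry, so memory use stays flat and the first bytes leave the server early. This is illustrative only, not SabreDAV's actual API:

```python
from xml.sax.saxutils import escape

def stream_multistatus(hrefs):
    """Generator: emits the multistatus document piece by piece."""
    yield '<?xml version="1.0"?><d:multistatus xmlns:d="DAV:">'
    for href in hrefs:
        # Only one small fragment is held in memory at any time.
        yield f"<d:response><d:href>{escape(href)}</d:href></d:response>"
    yield "</d:multistatus>"

# A consumer (e.g. the web server layer) would write each chunk straight
# to the client socket instead of joining them in memory like this demo.
body = "".join(stream_multistatus(f"/webdav/file{i}.txt" for i in range(3)))
print(body.count("<d:response>"))  # 3
```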
Currently the only thing I think we can offer:
- Server side
- Web UI side
I think I'm happier to provide the server part, but I'm a bit scared of how cramming in this new pagination will scar the legacy FileList code even more, considering that a lot of code relies on the assumption that the whole list is available.
👍 We might re-use this for the sync client. But in general I still disagree with doing multiple requests per page. Everyone is trying to move away from multiple requests for latency/mobile reasons (e.g. CSS sprites, HTTP/2, bundling, ...) and you want to introduce them :-( Just saying...
Multiple requests only happen if you actually scroll down. I wouldn't mind giving up all the pagination stuff if "fetching 1000 entries in one go" is the way to go, because pagination is complicated, especially when the crappy FS layer cannot support it.
moving to triage for rescheduling |
What is the status of this issue? Since searching in mobile clients depends on it, it would be really helpful for usability. |
new effort in owncloud/web#116 |
This is about API pagination, not about the frontend. This case is also valid for ocis because WebDAV will also be the file access protocol.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 10 days if no further activity occurs. Thank you for your contributions. |
This issue has been automatically closed. |
At some point we'll move the files web UI to WebDAV #12353
We might want to introduce real paginated requests, but we will have to extend the WebDAV protocol in some way.
Please note that the current list.php doesn't support pagination: it still loads the whole list, and the JS code then cuts it into pages.
One problem with pagination is always updates. If a file is inserted into the first page between the request for the first page and the request for the second, the second page's contents shift and that file is missed.
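The shift can be demonstrated in a few lines: when paginating live data by offset alone, an insert into an already-served page makes a later entry repeat and pushes another out of view (plain Python simulation):

```python
files = sorted(f"file{i}.txt" for i in range(1, 6))  # file1..file5

def page(offset, limit):
    """Offset-based pagination over the *live* list, no snapshot."""
    return files[offset:offset + limit]

first = page(0, 2)            # ['file1.txt', 'file2.txt']
files.insert(0, "file0.txt")  # a new file lands at the top of the listing
second = page(2, 2)           # ['file2.txt', 'file3.txt'] -> file2 repeats

seen = first + second
print(seen)
# The client sees file2.txt twice, and a naive client that stops here
# never learns about file0.txt at all; file4.txt arrives a page late.
```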
@DeepDiver1975 @icewind1991