
Desired features/changes for Spark 3.0 #1105

Open
perwendel opened this issue Mar 22, 2019 · 29 comments

Comments

@perwendel
Owner

Hi,
A 2.9.0 release will be done shortly and after that my work will be fully focused on 3.0.
Any input on what would be fitting for Spark 3.0 is much appreciated. Please post in this thread.
Thanks!

@fwgreen

fwgreen commented Mar 23, 2019

Please consider reopening the issues labeled Fix in version 3: They were closed two years ago without actually being resolved.

@perwendel
Owner Author

@fwgreen I'll go through them and check if there are any that shouldn't have been closed.

@RyanSusana

Native support for uploaded files instead of getting the raw request.

Multiple static file locations

@mcgivrer
Contributor

mcgivrer commented Apr 4, 2019

Would it be possible to add internal metrics (usage, performance, custom) to satisfy my personal need for control :) More seriously, in a world of containers, metrics are mandatory for monitoring services. Maybe a look at microprofile-metrics and its annotations could inspire developers? A /metrics output in Prometheus format (or some other standard) would be a must ;)
I clearly understand the need to KISS, and not using annotations makes sense, but having an easy way to declare metrics would be a killer feature (a config file, a fluent API extension of get(), post(), etc.?).
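
For illustration, a minimal sketch of what hand-rolled metrics look like with Spark's current API, counting requests per path and exposing them at /metrics in Prometheus text format (the metric name and label are just examples, not a proposed Spark API):

    import static spark.Spark.afterAfter;
    import static spark.Spark.get;

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    public class MetricsSketch {
        // Request counts per path; incremented by an afterAfter filter.
        static final Map<String, LongAdder> requests = new ConcurrentHashMap<>();

        public static void main(String[] args) {
            afterAfter((req, res) -> {
                if (!"/metrics".equals(req.pathInfo())) {
                    requests.computeIfAbsent(req.pathInfo(), p -> new LongAdder()).increment();
                }
            });

            get("/hello", (req, res) -> "Hello!");

            // Prometheus-style plain-text exposition.
            get("/metrics", (req, res) -> {
                res.type("text/plain; version=0.0.4");
                StringBuilder out = new StringBuilder("# TYPE http_requests_total counter\n");
                requests.forEach((path, count) -> out.append("http_requests_total{path=\"")
                        .append(path).append("\"} ").append(count.sum()).append('\n'));
                return out.toString();
            });
        }
    }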

@johnnybigoode-zz

@RyanSusana I was wondering the same for static files (#568)

Could you explain more about your use case?

@RyanSusana

RyanSusana commented Apr 9, 2019

@johnnybigoode
Well, I would like a single Spark instance to be able to hook into various static-file locations.

One for the JS/CSS and one for /uploads or something.

This would allow me to split my application up better.

For my specific use case:
I am developing a CMS framework, and the Admin UI has its own static resources. I would like my framework's users to be able to hook in their own static files.

Right now I solve it by traversing the classpath/jar and adding a route for every file I have.

@laliluna

I have two ideas and if there is interest, I could try to provide pull requests.

  1. Enhance testability
    In order to test routes and their output, you currently have to change the way you declare routes; as it stands, you cannot test routing in combination with testing the output. If we change the Service to implement an interface and allow swapping it in via something like Spark.enableMock(), that becomes possible.
    This allows writing tests as demoed here: Add proper testing lib #1085

  2. Allow decorating the response and answer
    If I could decorate a response with a custom class extending Response, I could add behaviour and implement routes more elegantly.

Once somewhere

    Spark.decorateResponse(response -> new MySuperDuperResponse(response));

In your routes

    Spark.get("sample", (request, response) -> {
        return response.json(loadWhatever()).httpOk404IfNull();
    });
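
For illustration only, a rough sketch of what the decorator from this example could look like; MySuperDuperResponse, json() and httpOk404IfNull() are the hypothetical names from above, not existing Spark API, and this version wraps spark.Response rather than extending it:

    import com.google.gson.Gson;

    import spark.Response;

    public class MySuperDuperResponse {
        private final Response delegate;
        private Object body;

        public MySuperDuperResponse(Response delegate) {
            this.delegate = delegate;
        }

        // Remember the pending body and set the JSON content type.
        public MySuperDuperResponse json(Object value) {
            this.body = value;
            delegate.type("application/json");
            return this;
        }

        // Render the body, or answer 404 with an empty body if it was null.
        public String httpOk404IfNull() {
            if (body == null) {
                delegate.status(404);
                return "";
            }
            delegate.status(200);
            return new Gson().toJson(body); // any JSON mapper would do here
        }
    }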

@perwendel
Owner Author

@RyanSusana @mcgivrer @laliluna
Good suggestions. We'll evaluate! Some of them will likely be part of 3.0.

@mlitcher
Contributor

Two big things on my wish list: break apart core and jetty, to allow for other embeddable servers (#137), and leverage servlet vs filter (#193).

@OzzyTheGiant

CSRF tokens would be a nice, simple feature. I use them for single-page web apps, storing them in sessions. Normally, in other languages, there are standalone libraries or packages that provide this functionality for use with any framework. In the Java world, CSRF tokens are either already integrated into other frameworks (Spring Security, for example) or are part of old packages that are no longer maintained, or that have complex XML configurations that, frankly, I don't understand how to set up. Do you think this is something that could be added? Or do you happen to know of a standalone library with little to no configuration that I could pick up? I tried searching Maven Central but had no luck.
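
For what it's worth, a minimal sketch of session-stored CSRF tokens with Spark's existing API; the header name X-CSRF-Token and the /csrf-token endpoint are just conventions picked for this example:

    import static spark.Spark.before;
    import static spark.Spark.get;
    import static spark.Spark.halt;
    import static spark.Spark.post;

    import java.security.SecureRandom;
    import java.util.Base64;

    public class CsrfSketch {
        static final SecureRandom RANDOM = new SecureRandom();

        static String newToken() {
            byte[] bytes = new byte[32];
            RANDOM.nextBytes(bytes);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        }

        public static void main(String[] args) {
            // Hand a token to the single-page app once, e.g. right after login.
            get("/csrf-token", (req, res) -> {
                String token = newToken();
                req.session(true).attribute("csrfToken", token);
                return token;
            });

            // Reject state-changing requests that do not echo the token back.
            before((req, res) -> {
                if ("GET".equals(req.requestMethod())) {
                    return;
                }
                String expected = req.session(true).attribute("csrfToken");
                String actual = req.headers("X-CSRF-Token");
                if (expected == null || !expected.equals(actual)) {
                    halt(403, "Invalid CSRF token");
                }
            });

            post("/update", (req, res) -> "Updated!");
        }
    }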

@Technerder

Request: Method to respond with a File
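
For context, a sketch of what responding with a file currently involves via the raw servlet response (the path and content type are illustrative); a built-in helper could hide this boilerplate:

    import static spark.Spark.get;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class FileResponseSketch {
        public static void main(String[] args) {
            get("/report", (req, res) -> {
                Path file = Paths.get("reports/latest.pdf"); // illustrative path
                res.type("application/pdf");
                res.header("Content-Disposition", "attachment; filename=latest.pdf");
                // Stream the file through the underlying HttpServletResponse.
                Files.copy(file, res.raw().getOutputStream());
                return res.raw();
            });
        }
    }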

@robax
Contributor

robax commented Jul 23, 2019

One thing that might be useful is the option to use JAX-RS style annotations on routes. This way, instead of reaching into the request object and grabbing seemingly random fields, you can define the expected inputs via annotations.

If there's any interest in this, we've already developed something we use internally. I could spin it out into a PR easily!
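
As an illustration of the style, using the standard javax.ws.rs annotations on a plain controller class; how Spark would scan and register such a class is exactly the open design question, and nothing here is existing Spark behaviour:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.QueryParam;

    // Inputs are declared up front instead of being pulled out of the request object.
    @Path("/users")
    public class UserResource {

        @GET
        @Path("/{id}")
        @Produces("application/json")
        public String getUser(@PathParam("id") String id,
                              @QueryParam("verbose") boolean verbose) {
            // Illustrative body only; a real handler would look the user up.
            return "{\"id\": \"" + id + "\", \"verbose\": " + verbose + "}";
        }
    }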

@ontehfritz

A big thing that would be nice to have is OpenAPI/Swagger support, or a plugin/Maven package to add it. Most frameworks out there have this to auto-generate OpenAPI specs and have Swagger UI integrated; it makes testing and auto-generating interfaces from the spec for your APIs really awesome!

@rbygrave
Contributor

Jax-RS style annotations

Note that I have done an APT-based code generation project for Javalin and would look to do the same for Spark. The Javalin one is documented at: https://dinject.io/docs/javalin/ ... I just need to adapt the code generation for Spark request/response.

OpenApi/Swagger support,

As part of the APT code generation for controllers it also generates OpenApi/Swagger docs. The nice thing here is that APT has access to javadoc/kotlindoc so actually we just javadoc our controller methods and that goes into the generated swagger.

This approach is more similar to the jax-rs style with dependency injection and controllers. Note that the DI also uses APT code generation so it is fast and light (but people could swap it out for slower heavier DI like Guice or Spring if they wanted to).

@Technerder

A way for get, post, and the other methods alike to listen only for requests with a specific host. Something like:

Spark.get("/", "test.example.com", (request, response) -> {
    return "Hello!";
});
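
A rough approximation with today's API would be to filter on the Host header (test.example.com is just the host from the example above):

    import static spark.Spark.before;
    import static spark.Spark.get;
    import static spark.Spark.halt;

    public class HostFilterSketch {
        public static void main(String[] args) {
            // Only serve "/" when the request was addressed to test.example.com.
            before("/", (req, res) -> {
                String host = req.headers("Host");
                if (host == null || !host.startsWith("test.example.com")) {
                    halt(404);
                }
            });

            get("/", (req, res) -> "Hello!");
        }
    }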

@perwendel
Owner Author

Thanks everyone for your suggestions. It's been a long summer vacation with a resulting dip in project activity. Ramping up will begin within a month!

@Chlorek

Chlorek commented Aug 25, 2019

I am just now working on my first project with Spark and I like its minimalism. As time goes on I will probably find more things, but these are some features I found missing early in development:

  • separable URLs for static files with prefixes:
    staticFiles.externalLocation("resources", "static");
    would result in /static/* serving files from resources (see the sketch at the end of this comment)
  • proper file upload support

These are not deal-breakers, so I am continuing development and it's really good so far.
However, I would like to add my two cents on the matter of supporting multiple HTTP server solutions: in my (maybe not so popular) opinion, Spark should handle just one HTTP server very well, because, well, it is literally just an HTTP server; let's not make it more complicated than it is.
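
Until something like a prefixed staticFiles overload exists, a workaround sketch for the first bullet above is a catch-all route that serves the external "resources" directory under /static/*:

    import static spark.Spark.get;
    import static spark.Spark.halt;

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class PrefixedStaticFilesSketch {
        public static void main(String[] args) {
            Path baseDir = Paths.get("resources").toAbsolutePath().normalize();

            get("/static/*", (req, res) -> {
                Path file = baseDir.resolve(req.splat()[0]).normalize();
                // Reject path traversal and anything that is not a regular file.
                if (!file.startsWith(baseDir) || !Files.isRegularFile(file)) {
                    halt(404);
                }
                String contentType = Files.probeContentType(file);
                if (contentType != null) {
                    res.type(contentType);
                }
                Files.copy(file, res.raw().getOutputStream());
                return res.raw();
            });
        }
    }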

@brixzen

brixzen commented Oct 1, 2019

Please add an option to disable GZip in the staticFiles response.

@sid-ihycq

Allowing other embeddable servers would be great!!

@jlorenzen

A little late to the game, but here are some improvements I'd like to suggest. I ran into these hurdles when I used sparkjava to implement a basic REST service that only had a few endpoints. The overall experience was great and I loved the simplicity of sparkjava.

  • Add support for content negotiation. I like how ninja has implemented it.
  • Refactor error handling to be more functional and have fewer side effects. What I mean by this is that my service had a before filter that handled authorization. If authorization failed, I wanted to immediately halt the request and return an error response. Spark accomplishes this by calling Spark.halt or throwing a HaltException. Ideally I could just return a response in the before filter, but the handler returns void, so that's not possible. The other downside is that since we are using OAuth 2, according to RFC 6750 we must return a WWW-Authenticate response header, and the halt/HaltException solution doesn't allow me to set a response header. So I had to resort to throwing an exception and using an error handler to catch it (see the sketch below). It all worked, but in a codebase where we are trying to avoid side effects and be more functional, it felt dirty.

That's about it. I appreciate all the hard work, and if these suggestions sound interesting I think I'd be able to submit some patches given some direction.
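
For reference, the exception-based workaround described in the second bullet looks roughly like this (UnauthorizedException is just an illustrative name, and the Bearer check is simplified):

    import static spark.Spark.before;
    import static spark.Spark.exception;
    import static spark.Spark.get;

    public class AuthErrorSketch {

        static class UnauthorizedException extends RuntimeException {
        }

        public static void main(String[] args) {
            // Authorization check as a before filter; failures are signalled by exception.
            before((req, res) -> {
                String auth = req.headers("Authorization");
                if (auth == null || !auth.startsWith("Bearer ")) {
                    throw new UnauthorizedException();
                }
            });

            // The exception handler sets the WWW-Authenticate header required by RFC 6750.
            exception(UnauthorizedException.class, (e, req, res) -> {
                res.status(401);
                res.header("WWW-Authenticate", "Bearer realm=\"api\"");
                res.body("{\"error\": \"unauthorized\"}");
            });

            get("/things", (req, res) -> "[]");
        }
    }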

@skedastik

skedastik commented Mar 17, 2020

An option to disable automatic gzip compression based on the presence of a Content-Type: gzip response header would be extremely useful. To wax philosophical for a second, I'm generally opposed to magic in frameworks. This is one of the reasons I gravitated toward Spark in the first place: It's thin, transparent, and almost entirely free of magic. Except for this feature which has no opt-out or clean workaround of any kind. Example use case: I have an endpoint that serves as an authenticated gateway to resources in S3. These resources are gzipped for good reason (consume less storage and less data over the wire). If I want to stream these resources I'm forced to wrap the InputStream in a GZIPInputStream, otherwise Spark will forcibly zip my resource twice when I include the relevant HTTP header.
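
For reference, the workaround described above looks roughly like this; fetchFromS3 is a placeholder for the real S3 client call, and it assumes Spark's automatic compression kicks in when the gzip header is present, as described:

    import static spark.Spark.get;

    import java.io.InputStream;
    import java.util.zip.GZIPInputStream;

    public class GzipWorkaroundSketch {
        public static void main(String[] args) {
            get("/assets/:key", (req, res) -> {
                InputStream alreadyGzipped = fetchFromS3(req.params(":key"));
                res.header("Content-Encoding", "gzip");
                // Decompress on the fly so Spark's automatic gzip compresses it only once.
                return new GZIPInputStream(alreadyGzipped);
            });
        }

        // Placeholder for the real S3 client call.
        static InputStream fetchFromS3(String key) {
            throw new UnsupportedOperationException("illustrative placeholder");
        }
    }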

@RyanSusana

RyanSusana commented Mar 17, 2020

@skedastik

I ran into that same issue TODAY. How did you solve it?

@skedastik

skedastik commented Mar 17, 2020

@RyanSusana I posted my (grotesque) workaround on Stack Overflow.

@JusticeN

A plugin system like in Javalin would make Spark extensible. Then plugins could be created for common tasks like:

  • GraphQL support
  • production features like metrics about route calls, endpoints, health, rpc/grpc ...
  • OAuth2

@realkarmakun

GraphQL would be very nice

@grishka

grishka commented Jun 11, 2021

A response type transformer, as I've described in detail in #1181.

@sid-ihycq

Will 3.0 be released?

@Typografikon

What about HTTP/2 support, per PR #1183? Also, is there a release plan for 3.0?

@lepe

lepe commented May 16, 2023

> What about HTTP/2 support, per PR #1183? Also, is there a release plan for 3.0?

Already implemented in the Unofficial Build, along with other features. As far as I know @perwendel is planning to come back and keep going with this project, but meanwhile I'm merging and fixing what I can in that repository.
