From 3988ecc213ffe3587f5b0146e743722edbf4a66e Mon Sep 17 00:00:00 2001 From: Uma Annamalai Date: Mon, 28 Aug 2023 12:26:46 -0700 Subject: [PATCH] GraphQL Async Instrumentation Support (#908) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Add GraphQL Server Sanic Instrumentation * Co-authored-by: Timothy Pansino Co-authored-by: Uma Annamalai * Add co-authors Co-authored-by: Timothy Pansino Co-authored-by: Uma Annamalai * Comment out Copyright notice message Co-authored-by: Timothy Pansino Co-authored-by: Uma Annamalai * Finalize Sanic testing * Fix flask framework details with callable * Parametrized testing for graphql-server * GraphQL Async Resolvers Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai * GraphQL Proper Coro and Promise Support (#508) * Fix GraphQL async issues * Fix nonlocal binding issues in python 2 * Fix promises with async graphql * Issues with promises * Fix promises in graphql2 * Fixed all graphql async issues * Fix Py27 quirks * Update tox * Fix importing paths of graphqlserver * Fix broken import path * Unpin pypy37 * Fix weird import issues * Fix graphql impl coros (#522) * Strawberry Async Updates (#521) * Parameterize strawberry tests * Remove duplicate functions * Fix strawberry version testing * Updates * Finalize strawberry updates * Clean out code * Ariadne Async Testing (#523) * Parameterize ariadne tests * Fixing ariadne tests * Fixing ariadne middleware * Set 0 extra spans for graphql core tests * Add spans attr to strawberry tests * Graphene Async Testing (#524) * Graphene Async Testing * Fix missing extra spans numbers * Graphene promise tests * Fix py2 imports * Removed unused __init__ * Update code level metrics validator for py2 * Unify graphql testing imports * Fix ariadne imports * Fix other imports * Fix import issues * Merge main into develop-graphql-async (#892) * Update Versioning Scheme (#651) * Update versioning scheme to 3 semver digits * Fix version indexing 
Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei * Remove version truncation * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: TimPansino * Fix Trace Finalizer Crashes (#652) * Patch crashes in various traces with None settings * Add tests for graphql trace types to unittests * Add test to ensure traces don't crash in finalizer * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Add usage tracking metrics for Kafka clients. (#658) * Add usage tracking metrics for Kafka clients. * Fix double import lint error * [Mega-Linter] Apply linters fixes * Create version util file and add metrics to consumer. * Address linting errors. * Add missing semi-colon. * [Mega-Linter] Apply linters fixes * Bump tests. Co-authored-by: Hannah Stepanek Co-authored-by: hmstepanek Co-authored-by: umaannamalai * Deprecate add_custom_parameter(s) API (#655) * Deprecate add_custom_parameter(s) API * Fix unicode tests and some pylint errors * Fix more pylint errors * Revert "Fix more pylint errors" This reverts commit 807ec1c5c40fe421300ccdcd6fedd81f288dce2c. * Edit deprecation message in add_custom_parameters * Add usage metrics for Daphne and Hypercorn. (#665) * Add usage metrics for Daphne and Hypercorn. 
* [Mega-Linter] Apply linters fixes Co-authored-by: umaannamalai * Fix Flask view support in Code Level Metrics (#664) * Fix Flask view support in Code Level Metrics Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * [Mega-Linter] Apply linters fixes * Bump tests * Fix CLM tests for flaskrest * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: TimPansino Co-authored-by: Uma Annamalai * Fix aioredis version crash (#661) Co-authored-by: Uma Annamalai * Add double wrapped testing for Hypercorn and Daphne and dispatcher argument to WSGI API. (#667) * Add double wrapped app tests. * Fix linting errors. * [Mega-Linter] Apply linters fixes * Add co-authors. Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Add Python 3.11 Support (#654) * Add py311 tests * Fix typo * Added 3.11 support for aiohttp framework Co-authored-by: Timothy Pansino * Set up environment to run Python 3.11 Co-authored-by: Timothy Pansino * Add Python 3.11 support for agent_features Co-authored-by: Timothy Pansino * Partial Python 3.11 support added for Tornado Co-authored-by: Timothy Pansino * Adjust postgres versions * Fix tornado install path locally * Remove aioredis py311 tests * Update 3.11 to dev in tests * Fix sanic instrumentation and imp/importlib deprecation Co-authored-by: Timothy Pansino * Simplify wheel build options * Update cibuildwheel for 3.11 * Remove falconmaster py311 test Co-authored-by: Lalleh Rafeei Co-authored-by: Timothy Pansino * Remove devcontainer submodule (#669) * Uncomment NewRelicContextFormatter from agent.py (#676) * Fix botocore tests for botocore v1.28.1+ (#675) * Fix botocore tests for botocore v1.28.1+ Co-authored-by: Timothy Pansino * Fix boto3 tests for 
botocore v1.28.1+ Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Fix boto3 tests for python 2.7 Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Feature increased custom event limit (#674) * Update reservoir size for custom events. * [Mega-Linter] Apply linters fixes * Increase custom event limit. (#666) * Remove duplicated CUSTOM_EVENT_RESERVOIR_SIZE Co-authored-by: Tim Pansino Co-authored-by: TimPansino Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add python 3.11 stable release to GHA (#671) * Double kafka test runners (#677) Co-authored-by: Hannah Stepanek * Fix failing flask_rest tests (#683) * Pin flask-restx in flask_rest tests for 2.7 flask-restx dropped support for 2.7 in 1.0.1. * Drop support for flask-restplus flask-restx replaced flask-restplus. flask-restplus's latest version supports 3.6 which we don't even support anymore. * Fix failing botocore tests (#684) * Change queue url for botocore>=1.29.0 botocore >=1.29.0 uses sqs.us-east-1.amazonaws.com url instead of queue.amazonaws.com. * Use tuple version instead of str * Change botocore129->botocore128 * Add record_log_event to public api (#681) * Add patch for sentry SDK to correct ASGI v2/v3 detection. (#680) * Add patch for sentry to correct ASGI v2/v3 detection. 
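The botocore fix above ("Use tuple version instead of str") addresses a classic pitfall: comparing dotted version strings lexicographically misorders them, so a `>= 1.29.0` check done on strings breaks. A minimal sketch of the pitfall and the fix (`version_tuple` is an illustrative helper, not the agent's exact one):

```python
def version_tuple(version):
    """Convert a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in version.split("."))

# Lexicographic string comparison gets this wrong: "1.3.0" sorts after "1.29.0"
# because '3' > '2' character-wise.
assert "1.3.0" > "1.29.0"

# Tuple comparison orders numerically, so the >= 1.29.0 check behaves.
assert version_tuple("1.3.0") < version_tuple("1.29.0")
assert version_tuple("1.29.0") >= version_tuple("1.29.0")
```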
Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * [Mega-Linter] Apply linters fixes Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Update pip install command (#688) * Validator transfer from fixtures.py to validators directory, Part 1 (#672) * Move validate_transaction_metrics to validators directory * Comment out original validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Fix import issues * Fix (more) import issues * Fix validate_transaction_metrics import in aioredis * Remove commented code from fixtures.py * Initialize ExternalNode properties (#687) Co-authored-by: Hannah Stepanek * Fix package_version_utils.py logic (#689) * Fix package_version_utils.py logic Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Move description of func into func itself * typecast lists into tuples * Remove breakpoints * Empty _test_package_version_utils.py * Make changes to the test Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Pin Github Actions Runner to Ubuntu 20 for Py27 (#698) * Pin Github Actions runner to ubuntu 20 for Py27 * Upgrade setup-python * Fix Confluent Kafka Producer Arguments (#699) * Add confluentkafka test for posargs/kwargs * Fix 
confluent kafka topic argument bug * More sensible producer arguments * Fix tornado master tests & instrument redis 4.3.5 (#695) * Remove 3.7 testing of tornado master tornadomaster dropped support for 3.7 * Instrument new redis 4.3.5 client methods Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Remove pylint codes from flake8 config (#701) * Validator transfer from fixtures.py to validators directory, Part 2 (#690) * Move validate_transaction_metrics to validators directory * Comment out original validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Move validate_application_error_trace_count into validators directory * Move validate_application_error_event_count into validators directory * Move validate_synthetics_transaction_trace into validators directory * Move validate_tt_collector_json to validators directory * Move validate_transaction_trace_attributes into validator directory * Move validate_transaction_error_trace_attributes into validator directory * Move validate_error_trace_collector_json into validator directory * Move validate_error_event_collector_json into validator directory * Move validate_transaction_event_collector_json into validator directory * Fix import issues from merge * Fix some pylint errors * Revert 'raise ValueError' to be PY2 compatible * Delete commented lines * Fix bug in celery where 
workers don't report data (#696) This fixes the issue of missing information from Celery workers when MAX_TASKS_PER_CHILD is used. Previously, if Celery was run with the --loglevel=INFO flag, an agent instance would be created for the main Celery process, and after the first worker shutdown all subsequent workers' agent instances would point to that agent instance instead of creating a new one. This was root caused to an agent instance being incorrectly created when the application had not been activated. Now no agent instance is created for the main Celery process. * Reverts removal of flask_restful hooks. (#705) * Update instrumented methods in redis. (#707) Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Add TraceCache Guarded Iteration (#704) * Add MutableMapping API to TraceCache * Update trace cache usage to use guarded APIs. * [Mega-Linter] Apply linters fixes * Bump tests * Fix keys iterator * Comments for trace cache methods * Reorganize tests * Fix fixture refs * Fix testing refs * [Mega-Linter] Apply linters fixes * Bump tests * Upper case constant Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Fix Type Constructor Classes in Code Level Metrics (#708) * Fix CLM exception catching * Reorganize CLM Tests * Add type constructor tests to CLM * Fix line number * Pin tox version * Fix lambda tests in CLM * Fix lint issues * Turn helper func into pytest fixture Co-authored-by: Hannah Stepanek * Fix sanic and starlette tests (#734) * Fix sanic tests * Tweak test fix for sanic * Remove test for v18.12 in sanic (no longer supported) * Pin starlette latest to v0.23.1 (for now) * Add comment in tox about pinned starlette version * Add methods to instrument (#738) * Add card to instrumented methods in Redis (#740) * Add DevContainer (#711) * Add devcontainer setup * Add newrelic env vars
to passenv * Add default extensions * Add devcontainer instructions to contributing docs * Convert smart quotes in contributing docs. * Apply proper RST formatting * [Mega-Linter] Apply linters fixes * Add GHCR to prerequisites * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: TimPansino * Module classmethod fix (#662) * Fix function_wrapper calls to module * Fix wrapper in pika hook * Revert elasticsearch instrumentation * Revert some wrap_function_wrappers to orig * Remove comments/breakpoints * Fix hooks in elasticsearch Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Fix log decorating to be JSON compatible (#736) * Initial addition of JSON capability * Add NR-LINKING metadata JSON compatibility * Remove breakpoint * Hardcode local log decorating tests * Tweak linking metadata parsing/adding * Revert "Fix log decorating to be JSON compatible" (#746) * Revert "Fix log decorating to be JSON compatible (#736)" This reverts commit 0db5fee1e5d44b0791dc517ac9f5d88d1240a340. * [Mega-Linter] Apply linters fixes * Trigger tests Co-authored-by: hmstepanek * Add apdexPerfZone attribute to Transaction.
(#753) Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Fix tests in starlette v0.23.1 (#752) * Fix tests in starlette v0.23.1 * Fix conditional tests * Add comment to bg_task test * Support `redis.asyncio` (#744) * Support `redis.asyncio` * Fix `flake8` issues Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Redis Asyncio Testing (#750) * Add standardized method for package version tuples * Adapt aioredis tests to redis.asyncio * Standardize version tuple * Refresh uninstrumented redis methods * Fix aioredis version checking * Remove aioredis version function * CodeCov Integration (#710) * Add aggregate coverage settings to tox.ini * Refactor coverage fixture for GHA * Send coverage data files * Linter fixes * Configure codecov report * Yield cov handle from fixture * Fix empty coverage fixture * Specify artifact download dir * Find coverage files with find command * Add concurrency cancelling to github actions * uncomment test deps * Fix or symbol * Fix concurrency groups * Linter fixes * Add comment for yield None in fixture * [Mega-Linter] Apply linters fixes * Bump Tests --------- Co-authored-by: TimPansino * Mergify (#761) * Add mergify config file * Remove priority * Clean up mergify rules * Add non-draft requirement for merge * Add merge method * [Mega-Linter] Apply linters fixes * Don't update draft PRs. 
* Remove merge rules for develop branches * Linting --------- Co-authored-by: TimPansino * Elasticsearch v8 support (#741) * Fix function_wrapper calls to module * Fix wrapper in pika hook * Revert elasticsearch instrumentation * Revert some wrap_function_wrappers to orig * Remove comments/breakpoints * Fix hooks in elasticsearch * Add new client methods from v8 and their hooks * Add elasticsearch v8 to workflow and tox * Fix indices for elasticsearch01 * Disable xpack security in elasticsearch v8.0 * Start to add try/except blocks in tests * Add support for v8 transport * Add support for v8 connection * Add tests-WIP * Clean up most tests * Clean up unused instrumentation Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Remove elastic search source code * Elasticsearch v8 testing Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Scope ES fixture * ES v8 only supports Python 3.6+ * Refactor transport tests for v8 Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Kate Anderson Co-authored-by: Enriqueta De Leon * Remove extra comments * Added perform_request_kwargs to test_transport * Fix some linter issues * Remove extra newline * Group es v7 v8 process modules together * Add auto signature detection & binding * Use bind_arguments in ES * Add test for wrapped function * Add validator for datastore trace inputs * Use common bind_arguments for PY3 * Fix tests in starlette v0.23.1 (#752) * Fix tests in starlette v0.23.1 * Fix conditional tests * Add comment to bg_task test * Split below-es8 methods from es8 methods Note: the previous tests in this file, which checked whether a method was instrumented, did not actually test anything: they verified that the methods on our instrumented list were wrapped, rather than looking for uninstrumented methods on the ES client that we had missed. Because our customers have reported no bugs, we decided not to support the buggy wrapping on ES versions below 8, and we only added tests asserting that all methods are wrapped on es8+. We also only test method wrapping on es8+, since the wrapping behavior on earlier versions may have been incorrect: method signatures could have changed without us detecting it, given the lack of tests. As customers have not reported any issues, it does not seem worth going back to fix those versions at this time. * Remove signature auto detection implementation * Fixup: remove signature autodetection * Fixup: cleanup * Test method calls on all es versions * Fixup: don't run some methods on es7 --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: mary-martinez Co-authored-by: enriqueta Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Update contributors workspace link in CONTRIBUTING.rst. (#760) * Update link in CONTRIBUTING.rst. * Update to RST syntax.
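The Elasticsearch testing note above describes asserting that no uninstrumented methods remain on the client, rather than re-checking a fixed list of methods that were chosen for wrapping. A minimal sketch of that style of check, assuming a hypothetical client class and a `__wrapped__` marker like the one wrapt leaves on wrapped functions:

```python
import inspect

class FakeClient:
    """Hypothetical stand-in for an Elasticsearch client (not the real API)."""
    def search(self):
        return "search"
    def index(self):
        return "index"

def instrument(cls, names):
    # Wrap each named method, leaving a __wrapped__ marker as wrapt does.
    for name in names:
        original = getattr(cls, name)
        def wrapper(self, _orig=original):
            return _orig(self)
        wrapper.__wrapped__ = original
        setattr(cls, name, wrapper)

def uninstrumented_methods(obj):
    # Walk the client's public callables and report any without a wrapper.
    # This catches a method added in a new ES release -- unlike only
    # re-checking the fixed list of methods we chose to wrap.
    return [
        name for name, member in inspect.getmembers(obj, callable)
        if not name.startswith("_") and not hasattr(member, "__wrapped__")
    ]

instrument(FakeClient, ["search", "index"])
assert uninstrumented_methods(FakeClient()) == []

# A method added later without instrumentation is caught:
FakeClient.delete = lambda self: None
assert uninstrumented_methods(FakeClient()) == ["delete"]
```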
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add Retry to Pip Install (#763) * Add retry to pip install * Fix retry backoff constant * Fix script failures --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add aiohttp support for expected status codes (#735) * Add aiohttp support for expected status codes * Adjust naming convention * Fix expected tests for new validator behavior --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Tim Pansino * Fix PyPy Priority Sampling Test (#766) * Fix pypy priority sampling * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: TimPansino * Config linter fixes (#768) * Fix default value and lazy logging pylint * Fix default value and lazy logging pylint * Fix unnecessary 'else' in pylint * Fix logging-not-lazy in pylint * Fix redefined built-in error in Pylint * Fix implicit string concatenation in Pylint * Fix dict() to {} in Pylint * Make sure eval is OK to use for Pylint * Fix logging format string for Pylint * Change list comprehension to generator expression * [Mega-Linter] Apply linters fixes * Rerun tests --------- Co-authored-by: lrafeei * Sync tests w/ agents/cross_agent_tests/pull/150 (#770) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Infinite Tracing Batching & Compression (#762) * Infinite Tracing Batching and Compression settings (#756) * Add compression setting * Add batching setting * Infinite Tracing Compression (#758) * Initial commit * Add compression option in StreamingRPC * Add compression default to tests * Add test to confirm compression settings * Remove commented out code * Set compression settings from settings override * Infinite Tracing Batching (#759) * Initial infinite tracing batching implementation * Add RecordSpanBatch method to mock grpc 
server * Span batching settings and testing. Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Add final 8t batching tests * Rename serialization test * Formatting * Guard unittests from failing due to batching * Linting * Simplify batching algorithm * Properly wire batching parametrization * Fix incorrect validator use * Data loss on reconnect regression testing Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Test stream buffer batch sizes * Fix logic in supportability metrics for spans * Clean up nested conditionals in stream buffer * Compression parametrization in serialization test * Formatting * Update 8t test_no_delay_on_ok * Update protobufs * Remove unnecessary patching from test * Fix waiting in supportability metric tests * Add sleep to waiting in test * Reorder sleep and condition check * Mark no data loss xfail for py2. * Fix conditional check * Fix flake8 linter issues --------- Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Infinite Tracing Supportability Feature Toggle Metrics (#769) * Add 8T feature toggle supportability metrics * Remove supportability metrics when 8t is disabled. * Formatting --------- Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Fix DT settings for txn feature tests (#771) * Fix pyramid testing versions (#764) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix Ariadne Middleware Testing (#776) * Fix ariadne middleware testing Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei Co-authored-by: TimPansino * Exclude merged PRs from automatic mergify actions. 
(#774) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Refactor Code Coverage (#765) * Reorder dependency of code coverage fixture * Fix tests with coverage disabled * Refactor code coverage fixture * Clean out old coverage settings * Fix missing code coverage fixture * Fix pypy priority sampling * Start coverage from pytest-cov for better tracking * Refactor coverage config file * Ripping out coverage fixtures * Move tool config to bottom of tox.ini * Disabling py27 warning * Renaming env var --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add GraphQL Introspection Setting (#783) * Add graphql introspection setting * Sort settings object hierarchy * Add test for introspection queries setting * Expand introspection queries testing * [Mega-Linter] Apply linters fixes * Adjust introspection detection for graphql --------- Co-authored-by: TimPansino * Fix instance info tests for redis. (#784) * Fix instance info tests for redis. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai * Fix Redis Instance Info (#790) * Fix failing redis test for new default behavior * Revert "Fix instance info tests for redis. (#784)" This reverts commit f7108e3c2a54ab02a1104f6c16bd5fd799b9fc7e. 
* Guard GraphQL Settings Lookup (#787) * Guard graphql settings lookup * [Mega-Linter] Apply linters fixes * Bump tests * Update graphql settings test --------- Co-authored-by: TimPansino Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Errors Inbox Improvements (#791) * Errors inbox attributes and tests (#778) * Initial errors inbox commit Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Add enduser.id field * Move validate_error_trace_attributes into validators directory * Add error callback attributes test * Add tests for enduser.id & error.group.name Co-authored-by: Timothy Pansino * Uncomment code_coverage * Drop commented out line --------- Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Error Group Callback API (#785) * Error group initial implementation * Rewrite error callback to pass map of info * Fixed incorrect validators causing errors Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Fix validation of error trace attributes * Expanded error callback test * Add incorrect type to error callback testing * Change error group callback to private setting * Add testing for error group callback inputs * Separate error group callback tests * Add explicit testing for the set API * Ensure error group is string * Fix python 2 type validation --------- Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * User Tracking for Errors Inbox (#789) * Add user tracking feature for errors inbox. * Address review comments, * Add high_security test. * Cleanup invalid tests test. * Update user_id string check. * Remove set_id outside txn test. 
--------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Uma Annamalai * Update Packages (#793) * Update urllib3 to v1.26.15 * Update six to v1.16.0 * Update coverage exclude for newrelic/packages * [Mega-Linter] Apply linters fixes * Drop removed package from urllib3 * Update pytest * Downgrade websockets version for old sanic testing --------- Co-authored-by: TimPansino * Remove Unused Instrumentation and Tests (#794) * Remove unused instrumentation files * Remove testing for deprecated CherryPy versions * Remove unused pyelasticsearch tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix Loguru Instrumentation for v0.7.0 (#798) * Add autosignature implementation * Fix loguru with auto-signature * [Mega-Linter] Apply linters fixes * Fix tests for Py2 * [Mega-Linter] Apply linters fixes * Bump tests * Remove unwrap from signature utils * Fix arg unpacking * Remove unwrap arg from bind_args * Fix linter errors --------- Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei * Remove Twisted framework (#800) * Initial twisted commit * Remove Twisted Framework * Pin virtualenv, fix pip arg deprecation & disable kafka tests (#803) * Pin virtualenv * Fixup: use 20.21.1 instead * Replace install-options with config-settings See https://github.com/pypa/pip/issues/11358. 
* Temporarily disable kafka tests * Add tests for pyodbc (#796) * Add tests for pyodbc * Move imports into tests to get import coverage * Fixup: remove time import * Trigger tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add tests for Waitress (#797) * Change import format * Initial commit * Add more tests to adapter_waitress * Remove commented out code * [Mega-Linter] Apply linters fixes * Add assertions to all tests * Add more NR testing to waitress --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add testing for genshi and mako. (#799) * Add testing for genshi and mako. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Omit some frameworks from coverage analysis (#810) * Omit some frameworks from coverage analysis * Remove commas * Change format of omit * Add relative_files option to coverage * Add absolute directory * Add envsitepackagedir * Add coveragerc file * Add codecov.yml * [Mega-Linter] Apply linters fixes * Revert coveragerc file settings * Add files in packages and more frameworks * Remove commented line --------- Co-authored-by: lrafeei Co-authored-by: Hannah Stepanek * Run coverage around pytest (#813) * Run coverage around pytest * Trigger tests * Fixup * Add redis client_no_touch to ignore list * Temporarily remove kafka from coverage * Remove coverage for old libs * Add required option for tox v4 (#795) * Add required option for tox v4 * Update tox in GHA * Remove py27 no-cache-dir * Fix Testing Failures (#828) * Fix tastypie tests * Adjust asgiref pinned version * Make aioredis key PID unique * Pin more asgiref versions * Fix pytest test filtering when running tox (#823) Co-authored-by: Uma Annamalai * Validator transfer p3 (#745) * Move validate_transaction_metrics to validators directory * Comment out original 
validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Move validate_application_error_trace_count into validators directory * Move validate_application_error_event_count into validators directory * Move validate_synthetics_transaction_trace into validators directory * Move validate_tt_collector_json to validators directory * Move validate_transaction_trace_attributes into validator directory * Move validate_transaction_error_trace_attributes into validator directory * Move validate_error_trace_collector_json into validator directory * Move validate_error_event_collector_json into validator directory * Move validate_transaction_event_collector_json into validator directory * Move validate_custom_event_collector_json into validator directory * Move validate_tt_parameters into validator directory * Move validate_tt_parameters into validator directory * Move validate_tt_segment_params into validator directory * Move validate_browser_attributes into validators directory * Move validate_error_event_attributes into validators directory * Move validate_error_trace_attributes_outside_transaction into validators directory * Move validate_error_event_attributes_outside_transaction into validators directory * Fix some pylint errors * Redirect check_error_attributes * Fix more Pylint errors * Fix import issues from move * Fix more import shuffle errors * Sort logging JSON test for PY2 consistency * Fix Pylint errors in validators * Fix 
import error --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix set output warning using new GHA syntax (#833) * Fix set output warning using new GHA syntax * Fix quoting * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352. * Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. 
* Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Instrument Redis waitaof (#851) * Add uninstrumented command to redis * Update logic for datastore_aioredis instance info * [Mega-Linter] Apply linters fixes * Bump tests * Update defaults for aioredis port --------- Co-authored-by: TimPansino * Ignore patched hooks files. (#849) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix local scoped package reporting (#837) * Include isort stdlibs for determining stdlib modules * Use isort & sys to eliminate std & builtin modules Previously, the logic would fail to identify third party modules installed within the local user scope. This fixes that issue by skipping builtin and stdlib modules by name, instead of attempting to identify third party modules based on file paths. * Handle importlib_metadata.version being a callable * Add isort into third party notices * [Mega-Linter] Apply linters fixes * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352.
* Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. * Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Trigger tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: hmstepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Uma Annamalai * MSSQL Testing (#852) * For mysql tests into mssql * Add tox envs for mssql * Add mssql DB settings * Add correct MSSQL tests * Add mssql to GHA * Add MSSQL libs to CI image * Pin to dev CI image sha 
* Swap SQLServer container image * Fix healthcheck * Put MSSQL image back * Drop pypy37 tests * Unpin dev image sha * Exclude command line functionality from test coverage (#855) * FIX: resilient environment settings (#825) If the application uses generalimport to manage optional dependencies, it's possible that generalimport.MissingOptionalDependency is raised. In this case, we should not report the module as it is not actually loaded and is not a runtime dependency of the application. Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Replace drop_transaction logic by using transaction context manager (#832) * Replace drop_transaction call * [Mega-Linter] Apply linters fixes * Empty commit to start tests * Change logic in BG Wrappers --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Upgrade to Pypy38 for TypedDict (#861) * Fix base branch * Revert tox dependencies * Replace all pypy37 with pypy38 * Remove action.yml file * Push Empty Commit * Fix skip_missing_interpreters behavior * Fix skip_missing_interpreters behavior * Pin dev CI image sha * Remove unsupported Tornado tests * Add latest tests to Tornado * Remove pypy38 (for now) --------- Co-authored-by: Tim Pansino * Add profile_trace testing (#858) * Include isort stdlibs for determining stdlib modules * Use isort & sys to eliminate std & builtin modules Previously, the logic would fail to identify third party modules installed within the local user scope. This fixes that issue by skipping builtin and stdlib modules by name, instead of attempting to identify third party modules based on file paths.
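The resilient environment settings fix (#825) above boils down to treating any exception raised while probing a loaded module as "not really loaded". A rough sketch under that assumption (the helper name is illustrative; the agent's real plugin-reporting code differs):

```python
# Skip modules that blow up on attribute access, e.g. generalimport
# placeholders that raise MissingOptionalDependency when touched.
import sys


def collect_module_versions():
    """Map loaded module names to __version__, skipping broken placeholders."""
    versions = {}
    for name, module in list(sys.modules.items()):
        if module is None:
            continue
        try:
            version = getattr(module, "__version__", None)
        except Exception:
            # The module is a lazy-import stub whose dependency is missing;
            # it is not actually loaded, so don't report it.
            continue
        if isinstance(version, str):
            versions[name] = version
    return versions
```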
* Handle importlib_metadata.version being a callable * Add isort into third party notices * [Mega-Linter] Apply linters fixes * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352. * Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. * Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Trigger tests * Add testing for profile trace. 
* [Mega-Linter] Apply linters fixes * Ignore __call__ from coverage on profile_trace. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: hmstepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: umaannamalai * Add Transaction API Tests (#857) * Test for suppress_apdex_metric * Add custom_metrics tests * Add distributed_trace_headers testing in existing tests * [Mega-Linter] Apply linters fixes * Remove redundant if-statement * Ignore deprecated transaction function from coverage * [Mega-Linter] Apply linters fixes * Push empty commit * Update newrelic/api/transaction.py --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add tests for jinja2. (#842) * Add tests for jinja2. * [Mega-Linter] Apply linters fixes * Update tox.ini Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Add tests for newrelic/config.py (#860) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix starlette testing matrix for updated behavior. 
(#869) Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Correct Serverless Distributed Tracing Logic (#870) * Fix serverless logic for distributed tracing * Test stubs * Collapse testing changes * Add negative testing to regular DT test suite * Apply linter fixes * [Mega-Linter] Apply linters fixes --------- Co-authored-by: TimPansino * Fix Kafka CI (#863) * Reenable kafka testing * Add kafka dev lib * Sync install python with devcontainer * Fix kafka local host setting * Drop set -u flag * Pin CI image dev sha * Add parallel flag to kafka * Fix proper exit status * Build librdkafka from source * Updated dev image sha * Remove coverage exclusions * Add new options to better emulate GHA * Reconfigure kafka networking Co-authored-by: Hannah Stepanek * Fix kafka ports on GHA * Run kafka tests serially * Separate kafka consumer groups * Put CI container makefile back * Remove confluent kafka Py27 for latest * Roll back ubuntu version update * Update dev ci sha --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Change image tag to latest (#871) * Change image tag to latest * Use built sha * Fixup * Replace w/ latest * Add full version for pypy3.8 to tox (#872) * Add full version for pypy3.8 * Remove solrpy from tests * Instrument RedisCluster (#809) * Add instrumentation for RedisCluster * Add tests for redis cluster * Ignore Django instrumentation from older versions (#859) * Ignore Django instrumentation from older versions * Ignore Django instrumentation from older versions * Fix text concatenation * Update newrelic/hooks/framework_django.py Co-authored-by: Hannah Stepanek * Update newrelic/hooks/framework_django.py Co-authored-by: Hannah Stepanek --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Modify postgresql tests to include WITH query (#885) * Modify postgresql tests to 
include WITH * [Mega-Linter] Apply linters fixes --------- Co-authored-by: lrafeei * Develop redis addons (#888) * Added separate instrumentation for redis.asyncio.client (#808) * Added separate instrumentation for redis.asyncio.client Merge main branch updates Add tests for newrelic/config.py (#860) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Modify redis tests * Removed redis.asyncio from aioredis instrumentation * Removed aioredis instrumentation in redis.asyncio client --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Redis asyncio testing (#881) * Add/modify redis asyncio tests * Change to psubscribe * Tweak redis async tests/instrumentation * [Mega-Linter] Apply linters fixes * Push empty commit * Exclude older instrumentation from coverage * Resolve requested testing changes * Tweak async pubsub test * Fix pubsub test --------- Co-authored-by: lrafeei * Remove aioredis and aredis from tox (#891) * Remove aioredis and aredis from tox * Add aredis and aioredis to coverage ignore * Push empty commit * Fix codecov ignore file --------- Co-authored-by: Ahmed Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: lrafeei * Fix drift * Fix ariadne middleware tests for all versions.
--------- Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: Lalleh Rafeei Co-authored-by: Kevin Morey Co-authored-by: Kate Anderson <90657569+kanderson250@users.noreply.github.com> Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Dmitry Kolyagin Co-authored-by: mary-martinez Co-authored-by: enriqueta Co-authored-by: Mary Martinez Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Justin Richert Co-authored-by: Ahmed Helil Co-authored-by: Ahmed * Remove graphql 2 & promise support (#875) * Update Versioning Scheme (#651) * Update versioning scheme to 3 semver digits * Fix version indexing Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei * Remove version truncation * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: TimPansino * Fix Trace Finalizer Crashes (#652) * Patch crashes in various traces with None settings * Add tests for graphql trace types to unittests * Add test to ensure traces don't crash in finalizer * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Add usage tracking metrics for Kafka clients. (#658) * Add usage tracking metrics for Kafka clients. * Fix double import lint error * [Mega-Linter] Apply linters fixes * Create version util file and add metrics to consumer. * Address linting errors. * Add missing semi-colon. * [Mega-Linter] Apply linters fixes * Bump tests. 
Co-authored-by: Hannah Stepanek Co-authored-by: hmstepanek Co-authored-by: umaannamalai * Deprecate add_custom_parameter(s) API (#655) * Deprecate add_custom_parameter(s) API * Fix unicode tests and some pylint errors * Fix more pylint errors * Revert "Fix more pylint errors" This reverts commit 807ec1c5c40fe421300ccdcd6fedd81f288dce2c. * Edit deprecation message in add_custom_parameters * Add usage metrics for Daphne and Hypercorn. (#665) * Add usage metrics for Daphne and Hypercorn. * [Mega-Linter] Apply linters fixes Co-authored-by: umaannamalai * Fix Flask view support in Code Level Metrics (#664) * Fix Flask view support in Code Level Metrics Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * [Mega-Linter] Apply linters fixes * Bump tests * Fix CLM tests for flaskrest * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: TimPansino Co-authored-by: Uma Annamalai * Fix aioredis version crash (#661) Co-authored-by: Uma Annamalai * Add double wrapped testing for Hypercorn and Daphne and dispatcher argument to WSGI API. (#667) * Add double wrapped app tests. * Fix linting errors. * [Mega-Linter] Apply linters fixes * Add co-authors. 
Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Add Python 3.11 Support (#654) * Add py311 tests * Fix typo * Added 3.11 support for aiohttp framework Co-authored-by: Timothy Pansino * Set up environment to run Python 3.11 Co-authored-by: Timothy Pansino * Add Python 3.11 support for agent_features Co-authored-by: Timothy Pansino * Partial Python 3.11 support added for Tornado Co-authored-by: Timothy Pansino * Adjust postgres versions * Fix tornado install path locally * Remove aioredis py311 tests * Update 3.11 to dev in tests * Fix sanic instrumentation and imp/importlib deprecation Co-authored-by: Timothy Pansino * Simplify wheel build options * Update cibuildwheel for 3.11 * Remove falconmaster py311 test Co-authored-by: Lalleh Rafeei Co-authored-by: Timothy Pansino * Remove devcontainer submodule (#669) * Uncomment NewRelicContextFormatter from agent.py (#676) * Fix botocore tests for botocore v1.28.1+ (#675) * Fix botocore tests for botocore v1.28.1+ Co-authored-by: Timothy Pansino * Fix boto3 tests for botocore v1.28.1+ Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Fix boto3 tests for python 2.7 Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Feature increased custom event limit (#674) * Update reservoir size for custom events. * [Mega-Linter] Apply linters fixes * Increase custom event limit. 
(#666) * Remove duplicated CUSTOM_EVENT_RESERVOIR_SIZE Co-authored-by: Tim Pansino Co-authored-by: TimPansino Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add python 3.11 stable release to GHA (#671) * Double kafka test runners (#677) Co-authored-by: Hannah Stepanek * Fix failing flask_rest tests (#683) * Pin flask-restx in flask_rest tests for 2.7 flask-restx dropped support for 2.7 in 1.0.1. * Drop support for flask-restplus flask-restx replaced flask-restplus. flask-restplus's latest version supports 3.6 which we don't even support anymore. * Fix failing botocore tests (#684) * Change queue url for botocore>=1.29.0 botocore >=1.29.0 uses sqs.us-east-1.amazonaws.com url instead of queue.amazonaws.com. * Use tuple version instead of str * Change botocore129->botocore128 * Add record_log_event to public api (#681) * Add patch for sentry SDK to correct ASGI v2/v3 detection. (#680) * Add patch for sentry to correct ASGI v2/v3 detection. 
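The "Use tuple version instead of str" bullet in the botocore fix above addresses a classic pitfall: comparing version strings lexicographically. A small example of why the tuple form is needed for checks like botocore>=1.29.0:

```python
# String comparison orders character by character, so "1.9" sorts after
# "1.29". Converting to a tuple of ints compares each component numerically.
def version_tuple(version):
    return tuple(int(part) for part in version.split("."))


assert "1.9.0" > "1.29.0"  # lexicographic: wrong for versions
assert version_tuple("1.9.0") < version_tuple("1.29.0")  # numeric: correct
assert version_tuple("1.29.0") >= (1, 29)  # the botocore>=1.29.0 check
```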
Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * [Mega-Linter] Apply linters fixes Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: umaannamalai Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Update pip install command (#688) * Validator transfer from fixtures.py to validators directory, Part 1 (#672) * Move validate_transaction_metrics to validators directory * Comment out original validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Fix import issues * Fix (more) import issues * Fix validate_transaction_metrics import in aioredis * Remove commented code from fixtures.py * Initialize ExternalNode properties (#687) Co-authored-by: Hannah Stepanek * Fix package_version_utils.py logic (#689) * Fix package_version_utils.py logic Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Move description of func into func itself * typecast lists into tuples * Remove breakpoints * Empty _test_package_version_utils.py * Make changes to the test Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Pin Github Actions Runner to Ubuntu 20 for Py27 (#698) * Pin Github Actions runner to ubuntu 20 for Py27 * Upgrade setup-python * Fix Confluent Kafka Producer Arguments (#699) * Add confluentkafka test for posargs/kwargs * Fix 
confluent kafka topic argument bug * More sensible producer arguments * Fix tornado master tests & instrument redis 4.3.5 (#695) * Remove 3.7 testing of tornado master tornadomaster dropped support for 3.7 * Instrument new redis 4.3.5 client methods Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Remove pylint codes from flake8 config (#701) * Validator transfer from fixtures.py to validators directory, Part 2 (#690) * Move validate_transaction_metrics to validators directory * Comment out original validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Move validate_application_error_trace_count into validators directory * Move validate_application_error_event_count into validators directory * Move validate_synthetics_transaction_trace into validators directory * Move validate_tt_collector_json to validators directory * Move validate_transaction_trace_attributes into validator directory * Move validate_transaction_error_trace_attributes into validator directory * Move validate_error_trace_collector_json into validator directory * Move validate_error_event_collector_json into validator directory * Move validate_transaction_event_collector_json into validator directory * Fix import issues from merge * Fix some pylint errors * Revert 'raise ValueError' to be PY2 compatible * Delete commented lines * Fix bug in celery where 
workers don't report data (#696) This fixes the "Missing information from Celery workers when using MAX_TASKS_PER_CHILD" issue. Previously, if Celery was run with the --loglevel=INFO flag, an agent instance would be created for the main Celery process, and after the first worker shutdown all subsequent workers' agent instances would point to that instance instead of creating a new one. This was root-caused to incorrectly creating an agent instance when application activate was not set. Now no agent instance is created for the main Celery process. * Reverts removal of flask_restful hooks. (#705) * Update instrumented methods in redis. (#707) Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Add TraceCache Guarded Iteration (#704) * Add MutableMapping API to TraceCache * Update trace cache usage to use guarded APIs. * [Mega-Linter] Apply linters fixes * Bump tests * Fix keys iterator * Comments for trace cache methods * Reorganize tests * Fix fixture refs * Fix testing refs * [Mega-Linter] Apply linters fixes * Bump tests * Upper case constant Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Fix Type Constructor Classes in Code Level Metrics (#708) * Fix CLM exception catching * Reorganize CLM Tests * Add type constructor tests to CLM * Fix line number * Pin tox version * Fix lambda tests in CLM * Fix lint issues * Turn helper func into pytest fixture Co-authored-by: Hannah Stepanek * Fix sanic and starlette tests (#734) * Fix sanic tests * Tweak test fix for sanic * Remove test for v18.12 in sanic (no longer supported) * Pin starlette latest to v0.23.1 (for now) * Add comment in tox about pinned starlette version * Add methods to instrument (#738) * Add card to instrumented methods in Redis (#740) * Add DevContainer (#711) * Add devcontainer setup * Add newrelic env vars
to passenv * Add default extensions * Add devcontainer instructions to contributing docs * Convert smart quotes in contributing docs. * Apply proper RST formatting * [Mega-Linter] Apply linters fixes * Add GHCR to prerequisites * [Mega-Linter] Apply linters fixes * Bump tests Co-authored-by: TimPansino * Module classmethod fix (#662) * Fix function_wrapper calls to module * Fix wrapper in pika hook * Revert elasticsearch instrumentation * Revert some wrap_function_wrappers to orig * Remove comments/breakpoints * Fix hooks in elasticsearch Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Fix log decorating to be JSON compatible (#736) * Initial addition of JSON capability * Add NR-LINKING metadata JSON compatibility * Remove breakpoint * Hardcode local log decorating tests * Tweak linking metadata parsing/adding * Revert "Fix log decorating to be JSON compatible" (#746) * Revert "Fix log decorating to be JSON compatible (#736)" This reverts commit 0db5fee1e5d44b0791dc517ac9f5d88d1240a340. * [Mega-Linter] Apply linters fixes * Trigger tests Co-authored-by: hmstepanek * Add apdexPerfZone attribute to Transaction.
(#753) Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Fix tests in starlette v0.23.1 (#752) * Fix tests in starlette v0.23.1 * Fix conditional tests * Add comment to bg_task test * Support `redis.asyncio` (#744) * Support `redis.asyncio` * Fix `flake8` issues Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Redis Asyncio Testing (#750) * Add standardized method for package version tuples * Adapt aioredis tests to redis.asyncio * Standardize version tuple * Refresh uninstrumented redis methods * Fix aioredis version checking * Remove aioredis version function * CodeCov Integration (#710) * Add aggregate coverage settings to tox.ini * Refactor coverage fixture for GHA * Send coverage data files * Linter fixes * Configure codecov report * Yield cov handle from fixture * Fix empty coverage fixture * Specify artifact download dir * Find coverage files with find command * Add concurrency cancelling to github actions * uncomment test deps * Fix or symbol * Fix concurrency groups * Linter fixes * Add comment for yield None in fixture * [Mega-Linter] Apply linters fixes * Bump Tests --------- Co-authored-by: TimPansino * Mergify (#761) * Add mergify config file * Remove priority * Clean up mergify rules * Add non-draft requirement for merge * Add merge method * [Mega-Linter] Apply linters fixes * Don't update draft PRs. 
* Remove merge rules for develop branches * Linting --------- Co-authored-by: TimPansino * Elasticsearch v8 support (#741) * Fix function_wrapper calls to module * Fix wrapper in pika hook * Revert elasticsearch instrumentation * Revert some wrap_function_wrappers to orig * Remove comments/breakpoints * Fix hooks in elasticsearch * Add new client methods from v8 and their hooks * Add elasticsearch v8 to workflow and tox * Fix indices for elasticsearch01 * Disable xpack security in elasticsearch v8.0 * Start to add try/except blocks in tests * Add support for v8 transport * add support for v8 connection * Add tests-WIP * Clean up most tests * Clean up unused instrumentation Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Remove elastic search source code * Elasticsearch v8 testing Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek * Scope ES fixture * ES v8 only supports Python3.6+ * Refactor transport tests for v8 Co-authored-by: Lalleh Rafeei Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: Kate Anderson Co-authored-by: Enriqueta De Leon * Remove extra comments * Added perform_request_kwargs to test_transport * Fix some linter issues * Remove extra newline * Group es v7 v8 process modules together * Add auto signature detection & binding * Use bind_arguments in ES * Add test for wrapped function * Add validator for datastore trace inputs * Use common bind_arguments for PY3 * Fix tests in starlette v0.23.1 (#752) * Fix tests in starlette v0.23.1 * Fix conditional tests * Add comment to bg_task test * Split below es 8 methods from es 8 methods Note: the previous tests in this file that checked whether a method was instrumented did not actually test anything, because they were
checking whether the list of methods we instrumented was instrumented, rather than whether the es client had uninstrumented methods we had missed. Because our customers have not reported bugs, we decided not to support the buggy wrapping on ES versions below 8, and only added tests asserting that all methods are wrapped on ES 8+. We are likewise only testing ES 8+ method wrapping, since the wrapping behavior on earlier versions may not have been correct: method signatures could have changed without us detecting it, due to the lack of tests. Since no issues have been reported, going back to fix these bugs does not seem worthwhile at this time. * Remove signature auto detection implementation * Fixup: remove signature autodetection * Fixup: cleanup * Test method calls on all es versions * Fixup: don't run some methods on es7 --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: mary-martinez Co-authored-by: enriqueta Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Enriqueta De Leon Co-authored-by: Uma Annamalai Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Hannah Stepanek Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Update contributors workspace link in CONTRIBUTING.rst. (#760) * Update link in CONTRIBUTING.rst. * Update to RST syntax.
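The "bind_arguments" bullets in the Elasticsearch work above refer to normalizing a wrapped call's arguments with the standard inspect module, so instrumentation can look up a parameter by name no matter how the caller passed it. A self-contained sketch (the search signature here is invented for illustration):

```python
# Map *args/**kwargs onto parameter names using the callable's signature.
import inspect


def bind_arguments(func, *args, **kwargs):
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()
    return bound.arguments  # mapping of parameter name -> value


def search(index, body=None, size=10):  # stand-in for an ES client method
    pass


arguments = bind_arguments(search, "my-index", body={"query": {}})
assert arguments["index"] == "my-index"  # found whether positional or keyword
assert arguments["size"] == 10  # defaults applied
```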
* [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add Retry to Pip Install (#763) * Add retry to pip install * Fix retry backoff constant * Fix script failures --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add aiohttp support for expected status codes (#735) * Add aiohttp support for expected status codes * Adjust naming convention * Fix expected tests for new validator behavior --------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Tim Pansino * Fix PyPy Priority Sampling Test (#766) * Fix pypy priority sampling * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: TimPansino * Config linter fixes (#768) * Fix default value and lazy logging pylint * Fix default value and lazy logging pylint * Fix unnecessary 'else' in pylint * Fix logging-not-lazy in pylint * Fix redefined built-in error in Pylint * Fix implicit string concatenation in Pylint * Fix dict() to {} in Pylint * Make sure eval is OK to use for Pylint * Fix logging format string for Pylint * Change list comprehension to generator expression * [Mega-Linter] Apply linters fixes * Rerun tests --------- Co-authored-by: lrafeei * Sync tests w/ agents/cross_agent_tests/pull/150 (#770) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Infinite Tracing Batching & Compression (#762) * Infinite Tracing Batching and Compression settings (#756) * Add compression setting * Add batching setting * Infinite Tracing Compression (#758) * Initial commit * Add compression option in StreamingRPC * Add compression default to tests * Add test to confirm compression settings * Remove commented out code * Set compression settings from settings override * Infinite Tracing Batching (#759) * Initial infinite tracing batching implementation * Add RecordSpanBatch method to mock grpc 
server * Span batching settings and testing. Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Add final 8t batching tests * Rename serialization test * Formatting * Guard unittests from failing due to batching * Linting * Simplify batching algorithm * Properly wire batching parametrization * Fix incorrect validator use * Data loss on reconnect regression testing Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Test stream buffer batch sizes * Fix logic in supportability metrics for spans * Clean up nested conditionals in stream buffer * Compression parametrization in serialization test * Formatting * Update 8t test_no_delay_on_ok * Update protobufs * Remove unnecessary patching from test * Fix waiting in supportability metric tests * Add sleep to waiting in test * Reorder sleep and condition check * Mark no data loss xfail for py2. * Fix conditional check * Fix flake8 linter issues --------- Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Infinite Tracing Supportability Feature Toggle Metrics (#769) * Add 8T feature toggle supportability metrics * Remove supportability metrics when 8t is disabled. * Formatting --------- Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Fix DT settings for txn feature tests (#771) * Fix pyramid testing versions (#764) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix Ariadne Middleware Testing (#776) * Fix ariadne middleware testing Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei * [Mega-Linter] Apply linters fixes * Bump tests --------- Co-authored-by: Uma Annamalai Co-authored-by: Lalleh Rafeei Co-authored-by: TimPansino * Exclude merged PRs from automatic mergify actions. 
(#774) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Refactor Code Coverage (#765) * Reorder dependency of code coverage fixture * Fix tests with coverage disabled * Refactor code coverage fixture * Clean out old coverage settings * Fix missing code coverage fixture * Fix pypy priority sampling * Start coverage from pytest-cov for better tracking * Refactor coverage config file * Ripping out coverage fixtures * Move tool config to bottom of tox.ini * Disabling py27 warning * Renaming env var --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add GraphQL Introspection Setting (#783) * Add graphql introspection setting * Sort settings object hierarchy * Add test for introspection queries setting * Expand introspection queries testing * [Mega-Linter] Apply linters fixes * Adjust introspection detection for graphql --------- Co-authored-by: TimPansino * Fix instance info tests for redis. (#784) * Fix instance info tests for redis. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai * Fix Redis Instance Info (#790) * Fix failing redis test for new default behavior * Revert "Fix instance info tests for redis. (#784)" This reverts commit f7108e3c2a54ab02a1104f6c16bd5fd799b9fc7e. 
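The GraphQL introspection setting (#783) hinges on classifying an operation as an introspection query. A minimal sketch of that idea, assuming top-level field names have already been extracted from the parsed AST (the agent does not work on raw query strings, and the setting name gating this behavior may differ from what ships):

```python
# Sketch, not the agent's code: an operation is treated as introspection
# when every top-level field is a dunder field such as __schema or
# __type. The list of field names is assumed to come from the parsed
# GraphQL document.
def is_introspection(top_level_fields):
    return bool(top_level_fields) and all(
        field.startswith("__") for field in top_level_fields
    )

print(is_introspection(["__schema"]))            # True
print(is_introspection(["hero", "__typename"]))  # False
```

With a setting like the one added here disabled, such operations would be kept out of transaction naming rather than dominating it.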
* Guard GraphQL Settings Lookup (#787) * Guard graphql settings lookup * [Mega-Linter] Apply linters fixes * Bump tests * Update graphql settings test --------- Co-authored-by: TimPansino Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Errors Inbox Improvements (#791) * Errors inbox attributes and tests (#778) * Initial errors inbox commit Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Add enduser.id field * Move validate_error_trace_attributes into validators directory * Add error callback attributes test * Add tests for enduser.id & error.group.name Co-authored-by: Timothy Pansino * Uncomment code_coverage * Drop commented out line --------- Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Error Group Callback API (#785) * Error group initial implementation * Rewrite error callback to pass map of info * Fixed incorrect validators causing errors Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek * Fix validation of error trace attributes * Expanded error callback test * Add incorrect type to error callback testing * Change error group callback to private setting * Add testing for error group callback inputs * Separate error group callback tests * Add explicit testing for the set API * Ensure error group is string * Fix python 2 type validation --------- Co-authored-by: Uma Annamalai Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * User Tracking for Errors Inbox (#789) * Add user tracking feature for errors inbox. * Address review comments, * Add high_security test. * Cleanup invalid tests test. * Update user_id string check. * Remove set_id outside txn test. 
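The Error Group Callback API (#785) above registers a user callback that receives a map of error info ("Rewrite error callback to pass map of info") and returns a group name. A hedged sketch of such a callback; registration would go through `newrelic.agent.set_error_group_callback`, and the `"http.statusCode"` key below is an illustrative assumption, not a documented schema:

```python
# Sketch of an error group callback: given the exception and a dict of
# error details, return a string group name, or None to keep the default
# grouping (non-string returns are rejected per "Ensure error group is
# string").
def error_group_callback(exc, data):
    if data.get("http.statusCode") == 404:  # hypothetical key for illustration
        return "Not Found"
    if isinstance(exc, TimeoutError):
        return "Timeouts"
    return None  # fall back to default grouping

print(error_group_callback(TimeoutError("upstream timed out"), {}))  # Timeouts
```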
--------- Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Uma Annamalai * Update Packages (#793) * Update urllib3 to v1.26.15 * Update six to v1.16.0 * Update coverage exclude for newrelic/packages * [Mega-Linter] Apply linters fixes * Drop removed package from urllib3 * Update pytest * Downgrade websockets version for old sanic testing --------- Co-authored-by: TimPansino * Remove Unused Instrumentation and Tests (#794) * Remove unused instrumentation files * Remove testing for deprecated CherryPy versions * Remove unused pyelasticsearch tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix Loguru Instrumentation for v0.7.0 (#798) * Add autosignature implementation * Fix loguru with auto-signature * [Mega-Linter] Apply linters fixes * Fix tests for Py2 * [Mega-Linter] Apply linters fixes * Bump tests * Remove unwrap from signature utils * Fix arg unpacking * Remove unwrap arg from bind_args * Fix linter errors --------- Co-authored-by: TimPansino Co-authored-by: Lalleh Rafeei * Remove Twisted framework (#800) * Initial twisted commit * Remove Twisted Framework * Pin virtualenv, fix pip arg deprecation & disable kafka tests (#803) * Pin virtualenv * Fixup: use 20.21.1 instead * Replace install-options with config-settings See https://github.com/pypa/pip/issues/11358. 
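The user tracking feature for errors inbox (#789) above adds a user ID to the current transaction. A standalone sketch of the rules its bullets imply ("Update user_id string check", "Remove set_id outside txn test"); the dict standing in for a transaction and this helper are illustrative, with the real entry point being `newrelic.agent.set_user_id(user_id)`:

```python
# Minimal sketch: only a non-empty string is recorded, and only when a
# transaction is active; the value lands on the enduser.id attribute
# mentioned in the errors inbox work.
def set_user_id_sketch(transaction, user_id):
    if transaction is None:
        return False  # no active transaction: nothing to attach to
    if not isinstance(user_id, str) or not user_id:
        return False  # reject non-strings and the empty string
    transaction["enduser.id"] = user_id
    return True

txn = {}
print(set_user_id_sketch(txn, "user-123"), txn)  # True {'enduser.id': 'user-123'}
print(set_user_id_sketch(None, "user-123"))      # False
```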
* Temporarily disable kafka tests * Add tests for pyodbc (#796) * Add tests for pyodbc * Move imports into tests to get import coverage * Fixup: remove time import * Trigger tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add tests for Waitress (#797) * Change import format * Initial commit * Add more tests to adapter_waitress * Remove commented out code * [Mega-Linter] Apply linters fixes * Add assertions to all tests * Add more NR testing to waitress --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Add testing for genshi and mako. (#799) * Add testing for genshi and mako. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Omit some frameworks from coverage analysis (#810) * Omit some frameworks from coverage analysis * Remove commas * Change format of omit * Add relative_files option to coverage * Add absolute directory * Add envsitepackagedir * Add coveragerc file * Add codecov.yml * [Mega-Linter] Apply linters fixes * Revert coveragerc file settings * Add files in packages and more frameworks * Remove commented line --------- Co-authored-by: lrafeei Co-authored-by: Hannah Stepanek * Run coverage around pytest (#813) * Run coverage around pytest * Trigger tests * Fixup * Add redis client_no_touch to ignore list * Temporarily remove kafka from coverage * Remove coverage for old libs * Add required option for tox v4 (#795) * Add required option for tox v4 * Update tox in GHA * Remove py27 no-cache-dir * Fix Testing Failures (#828) * Fix tastypie tests * Adjust asgiref pinned version * Make aioredis key PID unique * Pin more asgiref versions * Fix pytest test filtering when running tox (#823) Co-authored-by: Uma Annamalai * Validator transfer p3 (#745) * Move validate_transaction_metrics to validators directory * Comment out original 
validate_transaction_metrics from fixtures.py * Move validate_time_metrics_outside_transaction to validators directory * Move validate_internal_metrics into validators directory and fixed validate_transaction_metrics * Move validate_transaction_errors into validators directory * Move validate_application_errors into validators directory * Move validate_custom_parameters into validators directory * Move validate_synthetics_event into validators directory * Move validate_transaction_event_attributes into validators directory * Move validate_non_transaction_error_event into validators directory * Move validate_application_error_trace_count into validators directory * Move validate_application_error_event_count into validators directory * Move validate_synthetics_transaction_trace into validators directory * Move validate_tt_collector_json to validators directory * Move validate_transaction_trace_attributes into validator directory * Move validate_transaction_error_trace_attributes into validator directory * Move validate_error_trace_collector_json into validator directory * Move validate_error_event_collector_json into validator directory * Move validate_transaction_event_collector_json into validator directory * Move validate_custom_event_collector_json into validator directory * Move validate_tt_parameters into validator directory * Move validate_tt_parameters into validator directory * Move validate_tt_segment_params into validator directory * Move validate_browser_attributes into validators directory * Move validate_error_event_attributes into validators directory * Move validate_error_trace_attributes_outside_transaction into validators directory * Move validate_error_event_attributes_outside_transaction into validators directory * Fix some pylint errors * Redirect check_error_attributes * Fix more Pylint errors * Fix import issues from move * Fix more import shuffle errors * Sort logging JSON test for PY2 consistency * Fix Pylint errors in validators * Fix 
import error --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix set output warning using new GHA syntax (#833) * Fix set output warning using new GHA syntax * Fix quoting * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352. * Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. 
* Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Instrument Redis waitaof (#851) * Add uninstrumented command to redis * Update logic for datastore_aioredis instance info * [Mega-Linter] Apply linters fixes * Bump tests * Update defaults for aioredis port --------- Co-authored-by: TimPansino * Ignore patched hooks files. (#849) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix local scoped package reporting (#837) * Include isort stdlibs for determining stdlib modules * Use isort & sys to eliminate std & builtin modules Previously, the logic would fail to identify third party modules installed within the local user scope. This fixes that issue by skipping builtin and stdlib modules by name, instead of attempting to identify third party modules based on file paths. * Handle importlib_metadata.version being a callable * Add isort into third party notices * [Mega-Linter] Apply linters fixes * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352.
* Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. * Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Trigger tests --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: hmstepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: Uma Annamalai * MSSQL Testing (#852) * For mysql tests into mssql * Add tox envs for mssql * Add mssql DB settings * Add correct MSSQL tests * Add mssql to GHA * Add MSSQL libs to CI image * Pin to dev CI image sha 
* Swap SQLServer container image * Fix healthcheck * Put MSSQL image back * Drop pypy37 tests * Unpin dev image sha * Exclude command line functionality from test coverage (#855) * FIX: resilient environment settings (#825) If the application uses generalimport to manage optional dependencies, it's possible that generalimport.MissingOptionalDependency is raised. In this case, we should not report the module as it is not actually loaded and is not a runtime dependency of the application. Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Replace drop_transaction logic by using transaction context manager (#832) * Replace drop_transaction call * [Mega-Linter] Apply linters fixes * Empty commit to start tests * Change logic in BG Wrappers --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Upgrade to Pypy38 for TypedDict (#861) * Fix base branch * Revert tox dependencies * Replace all pypy37 with pypy38 * Remove action.yml file * Push Empty Commit * Fix skip_missing_interpreters behavior * Fix skip_missing_interpreters behavior * Pin dev CI image sha * Remove unsupported Tornado tests * Add latest tests to Tornado * Remove pypy38 (for now) --------- Co-authored-by: Tim Pansino * Add profile_trace testing (#858) * Include isort stdlibs for determining stdlib modules * Use isort & sys to eliminate std & builtin modules Previously, the logic would fail to identify third party modules installed within the local user scope. This fixes that issue by skipping builtin and stdlib modules by name, instead of attempting to identify third party modules based on file paths.
* Handle importlib_metadata.version being a callable * Add isort into third party notices * [Mega-Linter] Apply linters fixes * Remove Python 2.7 and pypy2 testing (#835) * Change setup-python to @v2 for py2.7 * Remove py27 and pypy testing * Fix syntax errors * Fix comma related syntax errors * Fix more issues in tox * Remove gearman test * Containerized CI Pipeline (#836) * Revert "Remove Python 2.7 and pypy2 testing (#835)" This reverts commit abb6405d2bfd629ed83f48e8a17b4a28e3a3c352. * Containerize CI process * Publish new docker container for CI images * Rename github actions job * Copyright tag scripts * Drop debug line * Swap to new CI image * Move pip install to just main python * Remove libcurl special case from tox * Install special case packages into main image * Remove unused packages * Remove all other triggers besides manual * Add make run command * Cleanup small bugs * Fix CI Image Tagging (#838) * Correct templated CI image name * Pin pypy2.7 in image * Fix up scripting * Temporarily Restore Old CI Pipeline (#841) * Restore old pipelines * Remove python 2 from setup-python * Rework CI Pipeline (#839) Change pypy to pypy27 in tox. Fix checkout logic Pin tox requires * Fix Tests on New CI (#843) * Remove non-root user * Test new CI image * Change pypy to pypy27 in tox. * Fix checkout logic * Fetch git tags properly * Pin tox requires * Adjust default db settings for github actions * Rename elasticsearch services * Reset to new pipelines * [Mega-Linter] Apply linters fixes * Fix timezone * Fix docker networking * Pin dev image to new sha * Standardize gearman DB settings * Fix elasticsearch settings bug * Fix gearman bug * Add missing odbc headers * Add more debug messages * Swap out dev ci image * Fix required virtualenv version * Swap out dev ci image * Swap out dev ci image * Remove aioredis v1 for EOL * Add coverage paths for docker container * Unpin ci container --------- Co-authored-by: TimPansino * Trigger tests * Add testing for profile trace. 
* [Mega-Linter] Apply linters fixes * Ignore __call__ from coverage on profile_trace. * [Mega-Linter] Apply linters fixes --------- Co-authored-by: Hannah Stepanek Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: hmstepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: TimPansino Co-authored-by: umaannamalai * Add Transaction API Tests (#857) * Test for suppress_apdex_metric * Add custom_metrics tests * Add distributed_trace_headers testing in existing tests * [Mega-Linter] Apply linters fixes * Remove redundant if-statement * Ignore deprecated transaction function from coverage * [Mega-Linter] Apply linters fixes * Push empty commit * Update newrelic/api/transaction.py --------- Co-authored-by: lrafeei Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Uma Annamalai * Add tests for jinja2. (#842) * Add tests for jinja2. * [Mega-Linter] Apply linters fixes * Update tox.ini Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> --------- Co-authored-by: umaannamalai Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Add tests for newrelic/config.py (#860) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Fix starlette testing matrix for updated behavior. 
(#869) Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Uma Annamalai * Correct Serverless Distributed Tracing Logic (#870) * Fix serverless logic for distributed tracing * Test stubs * Collapse testing changes * Add negative testing to regular DT test suite * Apply linter fixes * [Mega-Linter] Apply linters fixes --------- Co-authored-by: TimPansino * Fix Kafka CI (#863) * Reenable kafka testing * Add kafka dev lib * Sync install python with devcontainer * Fix kafka local host setting * Drop set -u flag * Pin CI image dev sha * Add parallel flag to kafka * Fix proper exit status * Build librdkafka from source * Updated dev image sha * Remove coverage exclusions * Add new options to better emulate GHA * Reconfigure kafka networking Co-authored-by: Hannah Stepanek * Fix kafka ports on GHA * Run kafka tests serially * Separate kafka consumer groups * Put CI container makefile back * Remove confluent kafka Py27 for latest * Roll back ubuntu version update * Update dev ci sha --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Change image tag to latest (#871) * Change image tag to latest * Use built sha * Fixup * Replace w/ latest * Add full version for pypy3.8 to tox (#872) * Add full version for pypy3.8 * Remove solrpy from tests * Remove graphql 2 support & promise support * Instrument RedisCluster (#809) * Add instrumentation for RedisCluster * Add tests for redis cluster * Ignore Django instrumentation from older versions (#859) * Ignore Django instrumentation from older versions * Ignore Django instrumentation from older versions * Fix text concatenation * Update newrelic/hooks/framework_django.py Co-authored-by: Hannah Stepanek * Update newrelic/hooks/framework_django.py Co-authored-by: Hannah Stepanek --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Hannah Stepanek * Remove py2_namespace from validator * Fix 
differences w/ main * Fixup * Modify postgresql tests to include WITH query (#885) * Modify postgresql tests to include WITH * [Mega-Linter] Apply linters fixes --------- Co-authored-by: lrafeei * Fixup: merge * Fixup tests * Fixup * Fixup: remove old test files * Remove app specific ariadne tests * Fix ariadne middleware tests for all versions. * Fix broken graphene tests and update strawberry version logic. * Remove graphql starlette code. * Develop redis addons (#888) * Added separate instrumentation for redis.asyncio.client (#808) * Added separate instrumentation for redis.asyncio.client Merge main branch updates Add tests for newrelic/config.py (#860) Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Modify redis tests * removed redis.asyncio from aioredis instrumentation removed aioredis instrumentation in redis asyncio client removed redis.asyncio from aioredis instrumentation --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Lalleh Rafeei Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> * Redis asyncio testing (#881) * Add/modify redis asyncio tests * Change to psubscribe * Tweak redis async tests/instrumentation * [Mega-Linter] Apply linters fixes * Push empty commit * Exclude older instrumentation from coverage * Resolve requested testing changes * Tweak async pubsub test * Fix pubsub test --------- Co-authored-by: lrafeei * Remove aioredis and aredis from tox (#891) * Remove aioredis and aredis from tox * Add aredis and aioredis to coverage ignore * Push empty commit * Fix codecov ignore file --------- Co-authored-by: Ahmed Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: lrafeei * Add google firestore instrumentation (#893) * Add instrumentation for Google Firestore documents and collections (#876) * Initial GCP firestore instrumentation commit. 
* Add testing for documents and collections + test generators Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Add co-authors. Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Add co-authors. Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek * Trim whitespace --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Hannah Stepanek Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> * Firestore CI (#877) * Add firestore CI runner * Correct hook file name * Setup emulator credentials * Swap dependency to firestore alone * Hacky setup for firestore * Fix firestore hostname * Ensure firestore connection * Fix CI issues * Refactor Firestore Hooks (#879) * Remove unnecessary instrumentation * Simplify existing instrumentation * Remove unnecessary settings lookups * Firestore Sync Client Instrumentation (#880) * Remove unnecessary instrumentation * Simplify existing instrumentation * Remove unnecessary settings lookups * Client instrumentation * Add query and aggregation query instrumentation * Fix deprecation warning * Simplify collection lookup * Combine query test files * Rename methods for clarity * Instrument Firestore batching * Add transaction instrumentation * Consumer iterators on <=Py38 * Allow better parallelization in firestore tests * Clean out unnecessary code * [Mega-Linter] Apply linters fixes * Better parallelization safeguards * Add collection group instrumentation * [Mega-Linter] Apply linters fixes * Change imports to native APIs * Swap target functions to lambdas * Convert exercise functions to fixtures --------- Co-authored-by: TimPansino * Update datastore_trace wrapper to take instance info (#883) * Update datastore trace wrapper to take instance info. 
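The datastore trace change above (#883) threads instance info through the decorator. A stand-in sketch that records trace parameters instead of reporting them; the real decorator is `newrelic.api.datastore_trace.datastore_trace`, and the parameter names shown are taken on the assumption they match its instance-info arguments:

```python
# Sketch: a datastore trace decorator with optional instance-info
# parameters, so instrumentation such as Firestore's can report where a
# query ran. TRACES collects what the real agent would attach to a trace.
import functools

TRACES = []

def datastore_trace_sketch(product, target, operation,
                           host=None, port_path_or_id=None, database_name=None):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            TRACES.append({
                "product": product, "target": target, "operation": operation,
                "host": host, "port_path_or_id": port_path_or_id,
                "database_name": database_name,
            })
            return func(*args, **kwargs)
        return wrapper
    return decorator

@datastore_trace_sketch("Firestore", "users", "stream",
                        host="localhost", port_path_or_id=8080,
                        database_name="(default)")
def stream_users():
    return ["alice", "bob"]

print(stream_users())     # ['alice', 'bob']
print(TRACES[0]["host"])  # localhost
```

Making the instance-info arguments optional, as the commit does, keeps every existing call site working unchanged.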
* [Mega-Linter] Apply linters fixes * Make instance info args optional. * [Mega-Linter] Apply linters fixes * Add datastore trace testing. * Add background task decorator. * [Mega-Linter] Apply linters fixes * Fix typo in validator. --------- Co-authored-by: umaannamalai * Async Generator Wrapper (#884) * Add async generator wrapper * Add no harm test * Remove anext calls * Add graphql traces to decorator testing * Remove pypy generator gc logic --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Trace Async Wrapper Argument (#886) * Add async_wrapper to datastore_trace api * Add async wrapper argument to all trace APIs * Add testing for automatic and manual asyncwrappers * Firestore Async Instrumentation (#882) * Remove unnecessary instrumentation * Simplify existing instrumentation * Remove unnecessary settings lookups * Client instrumentation * Add query and aggregation query instrumentation * Fix deprecation warning * Simplify collection lookup * Combine query test files * Rename methods for clarity * Instrument Firestore batching * Add transaction instrumentation * Consumer iterators on <=Py38 * Add async generator wrapper * Allow better parallelization in firestore tests * Fix issue in async generator wrapper * Add async client instrumentation * Squashed commit of the following: commit 9d411e00e37476be4ce0c40c7e64e71c4a09cfc6 Author: Tim Pansino Date: Wed Jul 26 15:57:39 2023 -0700 Clean out unnecessary code commit cb550bad9bb9e15edfdcef5dd361022448e0348f Author: Tim Pansino Date: Wed Jul 26 14:27:01 2023 -0700 Allow better parallelization in firestore tests * Add async collection instrumentation * Add async document instrumentation * Async Query instrumentation * Add async batch instrumentation * Add instrumentation for AsyncTransaction * Squashed commit of the following: commit c836f8f377f9391af86452e81b59f834330b18fb Author: TimPansino Date: Thu Jul 27 19:54:35 2023 +0000 [Mega-Linter] Apply linters fixes commit
02a55a11017fd27b45f06ab719a33917cf185aac Author: Tim Pansino Date: Thu Jul 27 12:46:46 2023 -0700 Add collection group instrumentation commit ab1f4ff5d2e88e6def42fa3c99c619f9673ce918 Author: Tim Pansino Date: Thu Jul 27 12:00:33 2023 -0700 Better parallelization safeguards commit fa5f39a2b037421cf017a062901c0ea1ec2b9723 Author: TimPansino Date: Wed Jul 26 22:59:11 2023 +0000 [Mega-Linter] Apply linters fixes commit 9d411e00e37476be4ce0c40c7e64e71c4a09cfc6 Author: Tim Pansino Date: Wed Jul 26 15:57:39 2023 -0700 Clean out unnecessary code commit cb550bad9bb9e15edfdcef5dd361022448e0348f Author: Tim Pansino Date: Wed Jul 26 14:27:01 2023 -0700 Allow better parallelization in firestore tests * Remove reset_firestore * Re-merge of test_query * Use public API imports * Add async collection group instrumentation * Refactor exercise functions to fixtures * Squashed commit of the following: commit 09c5e11498b4c200057190e859f8151241c421f3 Author: Tim Pansino Date: Wed Aug 2 14:33:24 2023 -0700 Add testing for automatic and manual asyncwrappers commit fc3ef6bfb8cb2f9cd6c8ffdf7bfd953be41cc974 Author: Tim Pansino Date: Wed Aug 2 14:33:05 2023 -0700 Add async wrapper argument to all trace APIs commit 479f9e236e2212e0f9cdf51627996068027acd82 Merge: faf3cccea edd1f94b0 Author: Tim Pansino Date: Wed Aug 2 13:44:24 2023 -0700 Merge remote-tracking branch 'origin/develop-google-firestore-instrumentation' into feature-async-wrapper-argument commit edd1f94b0f601a2674da4e594b777bae0eed6643 Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Wed Aug 2 13:40:51 2023 -0700 Async Generator Wrapper (#884) * Add async generator wrapper * Add no harm test * Remove anext calls * Add graphql traces to decorator testing * Remove pypy generator gc logic --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> commit faf3ccceae127128aff81fc59d95dd3f49699a3c Author: Tim Pansino Date: Mon Jul 31 15:10:56 2023 -0700 Add async_wrapper to 
datastore_trace api * Remove custom wrapper code from firestore * Undo wrapper edits --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> * Firestore Instance Info (#887) * Add instance info testing to query * Instance info for query.stream * Squashed commit of the following: commit 1c426c84b2c8ee36c6a40bf6bbfcb862c90db1cf Author: umaannamalai Date: Mon Jul 31 23:01:49 2023 +0000 [Mega-Linter] Apply linters fixes commit 7687c0695783fe40a86e705ec9790c19248f0c1e Author: Uma Annamalai Date: Mon Jul 31 15:47:09 2023 -0700 Make instance info args optional. commit 53f8400ce0d0e8b53bfcaba4b54f898a63e3d68b Author: umaannamalai Date: Mon Jul 31 22:23:20 2023 +0000 [Mega-Linter] Apply linters fixes commit d95d477cdfd54de4490211e3c4dd7de2504057f3 Author: Uma Annamalai Date: Mon Jul 31 15:20:41 2023 -0700 Update datastore trace wrapper to take instance info. * Add instance info testing to all apis * Separate transaction instance info tests * Implement all instance info getters * Squashed commit of the following: commit db3561e54f773730f269455ae323865b6230a613 Merge: 844e556ab edd1f94b0 Author: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Date: Wed Aug 2 22:10:32 2023 +0000 Merge branch 'develop-google-firestore-instrumentation' into feature-firstore-async-instrumentation commit 844e556abfbca63573e51a2647141e07ce9e942f Author: Tim Pansino Date: Wed Aug 2 15:09:49 2023 -0700 Remove custom wrapper code from firestore commit ad2999ff50b6b17b5774f69ca8116ee901f47474 Author: Tim Pansino Date: Wed Aug 2 14:58:38 2023 -0700 Squashed commit of the following: commit 09c5e11498b4c200057190e859f8151241c421f3 Author: Tim Pansino Date: Wed Aug 2 14:33:24 2023 -0700 Add testing for automatic and manual asyncwrappers commit fc3ef6bfb8cb2f9cd6c8ffdf7bfd953be41cc974 Author: Tim Pansino Date: Wed Aug 2 14:33:05 2023 -0700 Add async wrapper argument to all trace APIs commit 479f9e236e2212e0f9cdf51627996068027acd82 Merge: faf3cccea edd1f94b0 
Author: Tim Pansino Date: Wed Aug 2 13:44:24 2023 -0700 Merge remote-tracking branch 'origin/develop-google-firestore-instrumentation' into feature-async-wrapper-argument commit edd1f94b0f601a2674da4e594b777bae0eed6643 Author: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Date: Wed Aug 2 13:40:51 2023 -0700 Async Generator Wrapper (#884) * Add async generator wrapper * Add no harm test * Remove anext calls * Add graphql traces to decorator testing * Remove pypy generator gc logic --------- Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> commit faf3ccceae127128aff81fc59d95dd3f49699a3c Author: Tim Pansino Date: Mon Jul 31 15:10:56 2023 -0700 Add async_wrapper to datastore_trace api commit 29579fc2ecd8199b0227922425556d3279f17e57 Merge: 4a8a3fe04 7596fb40d Author: mergify[b… * Remove solrpy merge conflict.
--------- Co-authored-by: Lalleh Rafeei Co-authored-by: Timothy Pansino Co-authored-by: Uma Annamalai Co-authored-by: Tim Pansino Co-authored-by: Lalleh Rafeei Co-authored-by: Timothy Pansino <11214426+TimPansino@users.noreply.github.com> Co-authored-by: Hannah Stepanek Co-authored-by: Lalleh Rafeei <84813886+lrafeei@users.noreply.github.com> Co-authored-by: Hannah Stepanek Co-authored-by: Kevin Morey Co-authored-by: Kate Anderson <90657569+kanderson250@users.noreply.github.com> Co-authored-by: Enriqueta De Leon Co-authored-by: Kate Anderson Co-authored-by: Mary Martinez Co-authored-by: Dmitry Kolyagin Co-authored-by: mary-martinez Co-authored-by: enriqueta Co-authored-by: Mary Martinez Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com> Co-authored-by: Justin Richert Co-authored-by: Ahmed Helil Co-authored-by: Ahmed Co-authored-by: Tim Pansino Co-authored-by: Renan Ivo --- newrelic/api/graphql_trace.py | 8 +- newrelic/hooks/component_graphqlserver.py | 23 +- newrelic/hooks/framework_ariadne.py | 34 +- newrelic/hooks/framework_graphql.py | 196 ++++--- newrelic/hooks/framework_graphql_py3.py | 68 +++ newrelic/hooks/framework_strawberry.py | 37 +- .../test_application.py | 180 +++--- .../component_flask_rest/test_application.py | 8 +- tests/component_graphqlserver/__init__.py | 13 + .../_target_schema_async.py | 155 +++++ .../component_graphqlserver/_test_graphql.py | 14 +- tests/component_graphqlserver/test_graphql.py | 21 +- tests/datastore_bmemcached/test_memcache.py | 2 + .../test_async_batching.py | 2 +- .../datastore_firestore/test_async_client.py | 2 +- .../test_async_collections.py | 2 +- .../test_async_documents.py | 2 +- tests/datastore_firestore/test_async_query.py | 2 +- tests/datastore_firestore/test_client.py | 2 +- tests/framework_ariadne/__init__.py | 13 + .../framework_ariadne/_target_application.py | 195 +++---- .../framework_ariadne/_target_schema_async.py | 94 ++++ .../framework_ariadne/_target_schema_sync.py | 106 ++++ 
tests/framework_ariadne/conftest.py | 14 +- tests/framework_ariadne/schema.graphql | 7 +- tests/framework_ariadne/test_application.py | 529 +----------------- .../test_application_async.py | 106 ---- tests/framework_ariadne/test_asgi.py | 118 ---- tests/framework_ariadne/test_wsgi.py | 115 ---- tests/framework_graphene/__init__.py | 13 + .../framework_graphene/_target_application.py | 161 +----- .../_target_schema_async.py | 72 +++ .../framework_graphene/_target_schema_sync.py | 162 ++++++ tests/framework_graphene/test_application.py | 525 +---------------- tests/framework_graphql/__init__.py | 13 + .../framework_graphql/_target_application.py | 277 ++------- .../framework_graphql/_target_schema_async.py | 155 +++++ .../framework_graphql/_target_schema_sync.py | 210 +++++++ tests/framework_graphql/conftest.py | 21 +- tests/framework_graphql/test_application.py | 373 ++++++------ .../test_application_async.py | 101 +--- tests/framework_starlette/test_graphql.py | 87 --- tests/framework_strawberry/__init__.py | 13 + .../_target_application.py | 221 +++----- .../_target_schema_async.py | 84 +++ .../_target_schema_sync.py | 169 ++++++ tests/framework_strawberry/conftest.py | 20 +- .../framework_strawberry/test_application.py | 434 +------------- .../test_application_async.py | 144 ----- tests/framework_strawberry/test_asgi.py | 123 ---- .../test_pika_async_connection_consume.py | 287 +++++----- .../test_pika_blocking_connection_consume.py | 189 +++---- .../validators/validate_code_level_metrics.py | 10 +- tox.ini | 24 +- 54 files changed, 2359 insertions(+), 3597 deletions(-) create mode 100644 newrelic/hooks/framework_graphql_py3.py create mode 100644 tests/component_graphqlserver/__init__.py create mode 100644 tests/component_graphqlserver/_target_schema_async.py create mode 100644 tests/framework_ariadne/__init__.py create mode 100644 tests/framework_ariadne/_target_schema_async.py create mode 100644 tests/framework_ariadne/_target_schema_sync.py delete mode 100644 
tests/framework_ariadne/test_application_async.py delete mode 100644 tests/framework_ariadne/test_asgi.py delete mode 100644 tests/framework_ariadne/test_wsgi.py create mode 100644 tests/framework_graphene/__init__.py create mode 100644 tests/framework_graphene/_target_schema_async.py create mode 100644 tests/framework_graphene/_target_schema_sync.py create mode 100644 tests/framework_graphql/__init__.py create mode 100644 tests/framework_graphql/_target_schema_async.py create mode 100644 tests/framework_graphql/_target_schema_sync.py delete mode 100644 tests/framework_starlette/test_graphql.py create mode 100644 tests/framework_strawberry/__init__.py create mode 100644 tests/framework_strawberry/_target_schema_async.py create mode 100644 tests/framework_strawberry/_target_schema_sync.py delete mode 100644 tests/framework_strawberry/test_application_async.py delete mode 100644 tests/framework_strawberry/test_asgi.py diff --git a/newrelic/api/graphql_trace.py b/newrelic/api/graphql_trace.py index 3d6ae6b09..e8803fa68 100644 --- a/newrelic/api/graphql_trace.py +++ b/newrelic/api/graphql_trace.py @@ -139,7 +139,7 @@ def wrap_graphql_operation_trace(module, object_path, async_wrapper=None): class GraphQLResolverTrace(TimeTrace): - def __init__(self, field_name=None, **kwargs): + def __init__(self, field_name=None, field_parent_type=None, field_return_type=None, field_path=None, **kwargs): parent = kwargs.pop("parent", None) source = kwargs.pop("source", None) if kwargs: @@ -148,6 +148,9 @@ def __init__(self, field_name=None, **kwargs): super(GraphQLResolverTrace, self).__init__(parent=parent, source=source) self.field_name = field_name + self.field_parent_type = field_parent_type + self.field_return_type = field_return_type + self.field_path = field_path self._product = None def __repr__(self): @@ -175,6 +178,9 @@ def product(self): def finalize_data(self, *args, **kwargs): self._add_agent_attribute("graphql.field.name", self.field_name) + 
self._add_agent_attribute("graphql.field.parentType", self.field_parent_type) + self._add_agent_attribute("graphql.field.returnType", self.field_return_type) + self._add_agent_attribute("graphql.field.path", self.field_path) return super(GraphQLResolverTrace, self).finalize_data(*args, **kwargs) diff --git a/newrelic/hooks/component_graphqlserver.py b/newrelic/hooks/component_graphqlserver.py index 29004c11f..ebc62a34d 100644 --- a/newrelic/hooks/component_graphqlserver.py +++ b/newrelic/hooks/component_graphqlserver.py @@ -1,19 +1,18 @@ -from newrelic.api.asgi_application import wrap_asgi_application from newrelic.api.error_trace import ErrorTrace from newrelic.api.graphql_trace import GraphQLOperationTrace from newrelic.api.transaction import current_transaction -from newrelic.api.transaction_name import TransactionNameWrapper from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version from newrelic.core.graphql_utils import graphql_statement from newrelic.hooks.framework_graphql import ( - framework_version as graphql_framework_version, + GRAPHQL_VERSION, + ignore_graphql_duplicate_exception, ) -from newrelic.hooks.framework_graphql import ignore_graphql_duplicate_exception -def framework_details(): - import graphql_server - return ("GraphQLServer", getattr(graphql_server, "__version__", None)) +GRAPHQL_SERVER_VERSION = get_package_version("graphql-server") +graphql_server_major_version = int(GRAPHQL_SERVER_VERSION.split(".")[0]) + def bind_query(schema, params, *args, **kwargs): return getattr(params, "query", None) @@ -30,9 +29,8 @@ def wrap_get_response(wrapped, instance, args, kwargs): except TypeError: return wrapped(*args, **kwargs) - framework = framework_details() - transaction.add_framework_info(name=framework[0], version=framework[1]) - transaction.add_framework_info(name="GraphQL", version=graphql_framework_version()) + 
transaction.add_framework_info(name="GraphQLServer", version=GRAPHQL_SERVER_VERSION) + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) if hasattr(query, "body"): query = query.body @@ -45,5 +43,8 @@ def wrap_get_response(wrapped, instance, args, kwargs): with ErrorTrace(ignore=ignore_graphql_duplicate_exception): return wrapped(*args, **kwargs) + def instrument_graphqlserver(module): - wrap_function_wrapper(module, "get_response", wrap_get_response) + if graphql_server_major_version <= 2: + return + wrap_function_wrapper(module, "get_response", wrap_get_response) diff --git a/newrelic/hooks/framework_ariadne.py b/newrelic/hooks/framework_ariadne.py index 498c662c4..4927abe0b 100644 --- a/newrelic/hooks/framework_ariadne.py +++ b/newrelic/hooks/framework_ariadne.py @@ -21,17 +21,12 @@ from newrelic.api.wsgi_application import wrap_wsgi_application from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version from newrelic.core.graphql_utils import graphql_statement -from newrelic.hooks.framework_graphql import ( - framework_version as graphql_framework_version, -) -from newrelic.hooks.framework_graphql import ignore_graphql_duplicate_exception +from newrelic.hooks.framework_graphql import GRAPHQL_VERSION, ignore_graphql_duplicate_exception - -def framework_details(): - import ariadne - - return ("Ariadne", getattr(ariadne, "__version__", None)) +ARIADNE_VERSION = get_package_version("ariadne") +ariadne_version_tuple = tuple(map(int, ARIADNE_VERSION.split("."))) def bind_graphql(schema, data, *args, **kwargs): @@ -49,9 +44,8 @@ def wrap_graphql_sync(wrapped, instance, args, kwargs): except TypeError: return wrapped(*args, **kwargs) - framework = framework_details() - transaction.add_framework_info(name=framework[0], version=framework[1]) # No version info available on ariadne - 
transaction.add_framework_info(name="GraphQL", version=graphql_framework_version()) + transaction.add_framework_info(name="Ariadne", version=ARIADNE_VERSION) + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) query = data["query"] if hasattr(query, "body"): @@ -83,9 +77,8 @@ async def wrap_graphql(wrapped, instance, args, kwargs): result = await result return result - framework = framework_details() - transaction.add_framework_info(name=framework[0], version=framework[1]) # No version info available on ariadne - transaction.add_framework_info(name="GraphQL", version=graphql_framework_version()) + transaction.add_framework_info(name="Ariadne", version=ARIADNE_VERSION) + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) query = data["query"] if hasattr(query, "body"): @@ -104,6 +97,9 @@ async def wrap_graphql(wrapped, instance, args, kwargs): def instrument_ariadne_execute(module): + # v0.9.0 is the version where ariadne started using graphql-core v3 + if ariadne_version_tuple < (0, 9): + return if hasattr(module, "graphql"): wrap_function_wrapper(module, "graphql", wrap_graphql) @@ -112,10 +108,14 @@ def instrument_ariadne_execute(module): def instrument_ariadne_asgi(module): + if ariadne_version_tuple < (0, 9): + return if hasattr(module, "GraphQL"): - wrap_asgi_application(module, "GraphQL.__call__", framework=framework_details()) + wrap_asgi_application(module, "GraphQL.__call__", framework=("Ariadne", ARIADNE_VERSION)) def instrument_ariadne_wsgi(module): + if ariadne_version_tuple < (0, 9): + return if hasattr(module, "GraphQL"): - wrap_wsgi_application(module, "GraphQL.__call__", framework=framework_details()) + wrap_wsgi_application(module, "GraphQL.__call__", framework=("Ariadne", ARIADNE_VERSION)) diff --git a/newrelic/hooks/framework_graphql.py b/newrelic/hooks/framework_graphql.py index d261b2e9f..df86e6984 100644 --- a/newrelic/hooks/framework_graphql.py +++ b/newrelic/hooks/framework_graphql.py @@ -13,7 
+13,10 @@ # limitations under the License. import logging +import sys +import time from collections import deque +from inspect import isawaitable from newrelic.api.error_trace import ErrorTrace from newrelic.api.function_trace import FunctionTrace @@ -22,7 +25,14 @@ from newrelic.api.transaction import current_transaction, ignore_transaction from newrelic.common.object_names import callable_name, parse_exc_info from newrelic.common.object_wrapper import function_wrapper, wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version from newrelic.core.graphql_utils import graphql_statement +from newrelic.hooks.framework_graphql_py3 import ( + nr_coro_execute_name_wrapper, + nr_coro_graphql_impl_wrapper, + nr_coro_resolver_error_wrapper, + nr_coro_resolver_wrapper, +) _logger = logging.getLogger(__name__) @@ -32,23 +42,8 @@ VERSION = None -def framework_version(): - """Framework version string.""" - global VERSION - if VERSION is None: - from graphql import __version__ as version - - VERSION = version - - return VERSION - - -def graphql_version(): - """Minor version tuple.""" - version = framework_version() - - # Take first two values in version to avoid ValueErrors with pre-releases (ex: 3.2.0a0) - return tuple(int(v) for v in version.split(".")[:2]) +GRAPHQL_VERSION = get_package_version("graphql-core") +major_version = int(GRAPHQL_VERSION.split(".")[0]) def ignore_graphql_duplicate_exception(exc, val, tb): @@ -98,10 +93,6 @@ def bind_operation_v3(operation, root_value): return operation -def bind_operation_v2(exe_context, operation, root_value): - return operation - - def wrap_execute_operation(wrapped, instance, args, kwargs): transaction = current_transaction() trace = current_trace() @@ -118,15 +109,9 @@ def wrap_execute_operation(wrapped, instance, args, kwargs): try: operation = bind_operation_v3(*args, **kwargs) except TypeError: - try: - operation = bind_operation_v2(*args, **kwargs) - except TypeError: - return wrapped(*args, 
**kwargs) + return wrapped(*args, **kwargs) - if graphql_version() < (3, 0): - execution_context = args[0] - else: - execution_context = instance + execution_context = instance trace.operation_name = get_node_value(operation, "name") or "" @@ -145,12 +130,17 @@ def wrap_execute_operation(wrapped, instance, args, kwargs): transaction.set_transaction_name(callable_name(wrapped), "GraphQL", priority=11) result = wrapped(*args, **kwargs) - if not execution_context.errors: - if hasattr(trace, "set_transaction_name"): + + def set_name(value=None): + if not execution_context.errors and hasattr(trace, "set_transaction_name"): # Operation trace sets transaction name trace.set_transaction_name(priority=14) + return value - return result + if isawaitable(result): + return nr_coro_execute_name_wrapper(wrapped, result, set_name) + else: + return set_name(result) def get_node_value(field, attr, subattr="value"): @@ -161,39 +151,25 @@ def get_node_value(field, attr, subattr="value"): def is_fragment_spread_node(field): - # Resolve version specific imports - try: - from graphql.language.ast import FragmentSpread - except ImportError: - from graphql import FragmentSpreadNode as FragmentSpread + from graphql.language.ast import FragmentSpreadNode - return isinstance(field, FragmentSpread) + return isinstance(field, FragmentSpreadNode) def is_fragment(field): - # Resolve version specific imports - try: - from graphql.language.ast import FragmentSpread, InlineFragment - except ImportError: - from graphql import FragmentSpreadNode as FragmentSpread - from graphql import InlineFragmentNode as InlineFragment - - _fragment_types = (InlineFragment, FragmentSpread) + from graphql.language.ast import FragmentSpreadNode, InlineFragmentNode + _fragment_types = (InlineFragmentNode, FragmentSpreadNode) return isinstance(field, _fragment_types) def is_named_fragment(field): - # Resolve version specific imports - try: - from graphql.language.ast import NamedType - except ImportError: - from 
graphql import NamedTypeNode as NamedType + from graphql.language.ast import NamedTypeNode return ( is_fragment(field) and getattr(field, "type_condition", None) is not None - and isinstance(field.type_condition, NamedType) + and isinstance(field.type_condition, NamedTypeNode) ) @@ -321,12 +297,25 @@ def wrap_resolver(wrapped, instance, args, kwargs): if transaction is None: return wrapped(*args, **kwargs) - name = callable_name(wrapped) + base_resolver = getattr(wrapped, "_nr_base_resolver", wrapped) + + name = callable_name(base_resolver) transaction.set_transaction_name(name, "GraphQL", priority=13) + trace = FunctionTrace(name, source=base_resolver) - with FunctionTrace(name, source=wrapped): - with ErrorTrace(ignore=ignore_graphql_duplicate_exception): - return wrapped(*args, **kwargs) + with ErrorTrace(ignore=ignore_graphql_duplicate_exception): + sync_start_time = time.time() + result = wrapped(*args, **kwargs) + + if isawaitable(result): + # Grab any async resolvers and wrap with traces + return nr_coro_resolver_error_wrapper( + wrapped, name, trace, ignore_graphql_duplicate_exception, result, transaction + ) + else: + with trace: + trace.start_time = sync_start_time + return result def wrap_error_handler(wrapped, instance, args, kwargs): @@ -368,19 +357,12 @@ def bind_resolve_field_v3(parent_type, source, field_nodes, path): return parent_type, field_nodes, path -def bind_resolve_field_v2(exe_context, parent_type, source, field_asts, parent_info, field_path): - return parent_type, field_asts, field_path - - def wrap_resolve_field(wrapped, instance, args, kwargs): transaction = current_transaction() if transaction is None: return wrapped(*args, **kwargs) - if graphql_version() < (3, 0): - bind_resolve_field = bind_resolve_field_v2 - else: - bind_resolve_field = bind_resolve_field_v3 + bind_resolve_field = bind_resolve_field_v3 try: parent_type, field_asts, field_path = bind_resolve_field(*args, **kwargs) @@ -390,18 +372,34 @@ def wrap_resolve_field(wrapped, 
instance, args, kwargs): field_name = field_asts[0].name.value field_def = parent_type.fields.get(field_name) field_return_type = str(field_def.type) if field_def else "" + if isinstance(field_path, list): + field_path = field_path[0] + else: + field_path = field_path.key - with GraphQLResolverTrace(field_name) as trace: - with ErrorTrace(ignore=ignore_graphql_duplicate_exception): - trace._add_agent_attribute("graphql.field.parentType", parent_type.name) - trace._add_agent_attribute("graphql.field.returnType", field_return_type) + trace = GraphQLResolverTrace( + field_name, field_parent_type=parent_type.name, field_return_type=field_return_type, field_path=field_path + ) + start_time = time.time() - if isinstance(field_path, list): - trace._add_agent_attribute("graphql.field.path", field_path[0]) - else: - trace._add_agent_attribute("graphql.field.path", field_path.key) + try: + result = wrapped(*args, **kwargs) + except Exception: + # Synchronous resolver with exception raised + with trace: + trace.start_time = start_time + notice_error(ignore=ignore_graphql_duplicate_exception) + raise - return wrapped(*args, **kwargs) + if isawaitable(result): + # Asynchronous resolvers (returned coroutines from non-coroutine functions) + # Return a coroutine that handles wrapping in a resolver trace + return nr_coro_resolver_wrapper(wrapped, trace, ignore_graphql_duplicate_exception, result) + else: + # Synchronous resolver with no exception raised + with trace: + trace.start_time = start_time + return result def bind_graphql_impl_query(schema, source, *args, **kwargs): @@ -428,11 +426,8 @@ def wrap_graphql_impl(wrapped, instance, args, kwargs): if not transaction: return wrapped(*args, **kwargs) - transaction.add_framework_info(name="GraphQL", version=framework_version()) - if graphql_version() < (3, 0): - bind_query = bind_execute_graphql_query - else: - bind_query = bind_graphql_impl_query + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) + bind_query
= bind_graphql_impl_query try: schema, query = bind_query(*args, **kwargs) @@ -444,17 +439,34 @@ def wrap_graphql_impl(wrapped, instance, args, kwargs): transaction.set_transaction_name(callable_name(wrapped), "GraphQL", priority=10) - with GraphQLOperationTrace() as trace: - trace.statement = graphql_statement(query) + trace = GraphQLOperationTrace() + + trace.statement = graphql_statement(query) - # Handle Schemas created from frameworks - if hasattr(schema, "_nr_framework"): - framework = schema._nr_framework - trace.product = framework[0] - transaction.add_framework_info(name=framework[0], version=framework[1]) + # Handle Schemas created from frameworks + if hasattr(schema, "_nr_framework"): + framework = schema._nr_framework + trace.product = framework[0] + transaction.add_framework_info(name=framework[0], version=framework[1]) + # Trace must be manually started and stopped to ensure it exists prior to and during the entire duration of the query. + # Otherwise subsequent instrumentation will not be able to find an operation trace and will have issues. + trace.__enter__() + try: with ErrorTrace(ignore=ignore_graphql_duplicate_exception): result = wrapped(*args, **kwargs) + except Exception as e: + # Execution finished synchronously, exit immediately. + trace.__exit__(*sys.exc_info()) + raise + else: + if isawaitable(result): + # Asynchronous implementations + # Return a coroutine that handles closing the operation trace + return nr_coro_graphql_impl_wrapper(wrapped, trace, ignore_graphql_duplicate_exception, result) + else: + # Execution finished synchronously, exit immediately. 
+ trace.__exit__(None, None, None) return result @@ -480,11 +492,15 @@ def instrument_graphql_execute(module): def instrument_graphql_execution_utils(module): + if major_version == 2: + return if hasattr(module, "ExecutionContext"): wrap_function_wrapper(module, "ExecutionContext.__init__", wrap_executor_context_init) def instrument_graphql_execution_middleware(module): + if major_version == 2: + return if hasattr(module, "get_middleware_resolvers"): wrap_function_wrapper(module, "get_middleware_resolvers", wrap_get_middleware_resolvers) if hasattr(module, "MiddlewareManager"): @@ -492,20 +508,26 @@ def instrument_graphql_execution_middleware(module): def instrument_graphql_error_located_error(module): + if major_version == 2: + return if hasattr(module, "located_error"): wrap_function_wrapper(module, "located_error", wrap_error_handler) def instrument_graphql_validate(module): + if major_version == 2: + return wrap_function_wrapper(module, "validate", wrap_validate) def instrument_graphql(module): + if major_version == 2: + return if hasattr(module, "graphql_impl"): wrap_function_wrapper(module, "graphql_impl", wrap_graphql_impl) - if hasattr(module, "execute_graphql"): - wrap_function_wrapper(module, "execute_graphql", wrap_graphql_impl) def instrument_graphql_parser(module): + if major_version == 2: + return wrap_function_wrapper(module, "parse", wrap_parse) diff --git a/newrelic/hooks/framework_graphql_py3.py b/newrelic/hooks/framework_graphql_py3.py new file mode 100644 index 000000000..3931aa6ed --- /dev/null +++ b/newrelic/hooks/framework_graphql_py3.py @@ -0,0 +1,68 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +import functools +import sys + +from newrelic.api.error_trace import ErrorTrace +from newrelic.api.function_trace import FunctionTrace + + +def nr_coro_execute_name_wrapper(wrapped, result, set_name): + @functools.wraps(wrapped) + async def _nr_coro_execute_name_wrapper(): + result_ = await result + set_name() + return result_ + + return _nr_coro_execute_name_wrapper() + + +def nr_coro_resolver_error_wrapper(wrapped, name, trace, ignore, result, transaction): + @functools.wraps(wrapped) + async def _nr_coro_resolver_error_wrapper(): + with trace: + with ErrorTrace(ignore=ignore): + try: + return await result + except Exception: + transaction.set_transaction_name(name, "GraphQL", priority=15) + raise + + return _nr_coro_resolver_error_wrapper() + + +def nr_coro_resolver_wrapper(wrapped, trace, ignore, result): + @functools.wraps(wrapped) + async def _nr_coro_resolver_wrapper(): + with trace: + with ErrorTrace(ignore=ignore): + return await result + + return _nr_coro_resolver_wrapper() + +def nr_coro_graphql_impl_wrapper(wrapped, trace, ignore, result): + @functools.wraps(wrapped) + async def _nr_coro_graphql_impl_wrapper(): + try: + with ErrorTrace(ignore=ignore): + result_ = await result + except: + trace.__exit__(*sys.exc_info()) + raise + else: + trace.__exit__(None, None, None) + return result_ + + + return _nr_coro_graphql_impl_wrapper() \ No newline at end of file diff --git a/newrelic/hooks/framework_strawberry.py b/newrelic/hooks/framework_strawberry.py index 92a0ea8b4..e6d06bb04 100644 --- a/newrelic/hooks/framework_strawberry.py +++ 
b/newrelic/hooks/framework_strawberry.py @@ -16,20 +16,14 @@ from newrelic.api.error_trace import ErrorTrace from newrelic.api.graphql_trace import GraphQLOperationTrace from newrelic.api.transaction import current_transaction -from newrelic.api.transaction_name import TransactionNameWrapper from newrelic.common.object_names import callable_name from newrelic.common.object_wrapper import wrap_function_wrapper +from newrelic.common.package_version_utils import get_package_version from newrelic.core.graphql_utils import graphql_statement -from newrelic.hooks.framework_graphql import ( - framework_version as graphql_framework_version, -) -from newrelic.hooks.framework_graphql import ignore_graphql_duplicate_exception +from newrelic.hooks.framework_graphql import GRAPHQL_VERSION, ignore_graphql_duplicate_exception - -def framework_details(): - import strawberry - - return ("Strawberry", getattr(strawberry, "__version__", None)) +STRAWBERRY_GRAPHQL_VERSION = get_package_version("strawberry-graphql") +strawberry_version_tuple = tuple(map(int, STRAWBERRY_GRAPHQL_VERSION.split("."))) def bind_execute(query, *args, **kwargs): @@ -47,9 +41,8 @@ def wrap_execute_sync(wrapped, instance, args, kwargs): except TypeError: return wrapped(*args, **kwargs) - framework = framework_details() - transaction.add_framework_info(name=framework[0], version=framework[1]) - transaction.add_framework_info(name="GraphQL", version=graphql_framework_version()) + transaction.add_framework_info(name="Strawberry", version=STRAWBERRY_GRAPHQL_VERSION) + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) if hasattr(query, "body"): query = query.body @@ -74,9 +67,8 @@ async def wrap_execute(wrapped, instance, args, kwargs): except TypeError: return await wrapped(*args, **kwargs) - framework = framework_details() - transaction.add_framework_info(name=framework[0], version=framework[1]) - transaction.add_framework_info(name="GraphQL", version=graphql_framework_version()) + 
transaction.add_framework_info(name="Strawberry", version=STRAWBERRY_GRAPHQL_VERSION) + transaction.add_framework_info(name="GraphQL", version=GRAPHQL_VERSION) if hasattr(query, "body"): query = query.body @@ -98,19 +90,20 @@ def wrap_from_resolver(wrapped, instance, args, kwargs): result = wrapped(*args, **kwargs) try: - field = bind_from_resolver(*args, **kwargs) + field = bind_from_resolver(*args, **kwargs) except TypeError: pass else: if hasattr(field, "base_resolver"): if hasattr(field.base_resolver, "wrapped_func"): - resolver_name = callable_name(field.base_resolver.wrapped_func) - result = TransactionNameWrapper(result, resolver_name, "GraphQL", priority=13) + result._nr_base_resolver = field.base_resolver.wrapped_func return result def instrument_strawberry_schema(module): + if strawberry_version_tuple < (0, 23, 3): + return if hasattr(module, "Schema"): if hasattr(module.Schema, "execute"): wrap_function_wrapper(module, "Schema.execute", wrap_execute) @@ -119,11 +112,15 @@ def instrument_strawberry_schema(module): def instrument_strawberry_asgi(module): + if strawberry_version_tuple < (0, 23, 3): + return if hasattr(module, "GraphQL"): - wrap_asgi_application(module, "GraphQL.__call__", framework=framework_details()) + wrap_asgi_application(module, "GraphQL.__call__", framework=("Strawberry", STRAWBERRY_GRAPHQL_VERSION)) def instrument_strawberry_schema_converter(module): + if strawberry_version_tuple < (0, 23, 3): + return if hasattr(module, "GraphQLCoreConverter"): if hasattr(module.GraphQLCoreConverter, "from_resolver"): wrap_function_wrapper(module, "GraphQLCoreConverter.from_resolver", wrap_from_resolver) diff --git a/tests/component_djangorestframework/test_application.py b/tests/component_djangorestframework/test_application.py index 9ed60aa33..29861dca8 100644 --- a/tests/component_djangorestframework/test_application.py +++ b/tests/component_djangorestframework/test_application.py @@ -12,190 +12,168 @@ # See the License for the specific language 
governing permissions and # limitations under the License. +import django import pytest import webtest +from testing_support.fixtures import function_not_called, override_generic_settings +from testing_support.validators.validate_code_level_metrics import ( + validate_code_level_metrics, +) +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) -from newrelic.packages import six from newrelic.core.config import global_settings +from newrelic.packages import six -from testing_support.fixtures import ( - override_generic_settings, - function_not_called) -from testing_support.validators.validate_transaction_errors import validate_transaction_errors -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_code_level_metrics import validate_code_level_metrics -import django -DJANGO_VERSION = tuple(map(int, django.get_version().split('.')[:2])) - +DJANGO_VERSION = tuple(map(int, django.get_version().split(".")[:2])) -@pytest.fixture(scope='module') +@pytest.fixture(scope="module") def target_application(): from wsgi import application + test_application = webtest.TestApp(application) return test_application if DJANGO_VERSION >= (1, 10): - url_module_path = 'django.urls.resolvers' + url_module_path = "django.urls.resolvers" # Django 1.10 new style middleware removed individual process_* methods. # All middleware in Django 1.10+ is called through the __call__ methods on # middlewares.
- process_request_method = '' - process_view_method = '' - process_response_method = '' + process_request_method = "" + process_view_method = "" + process_response_method = "" else: - url_module_path = 'django.core.urlresolvers' - process_request_method = '.process_request' - process_view_method = '.process_view' - process_response_method = '.process_response' + url_module_path = "django.core.urlresolvers" + process_request_method = ".process_request" + process_view_method = ".process_view" + process_response_method = ".process_response" if DJANGO_VERSION >= (2, 0): - url_resolver_cls = 'URLResolver' + url_resolver_cls = "URLResolver" else: - url_resolver_cls = 'RegexURLResolver' + url_resolver_cls = "RegexURLResolver" _scoped_metrics = [ - ('Function/django.core.handlers.wsgi:WSGIHandler.__call__', 1), - ('Python/WSGI/Application', 1), - ('Python/WSGI/Response', 1), - ('Python/WSGI/Finalize', 1), - (('Function/django.middleware.common:' - 'CommonMiddleware' + process_request_method), 1), - (('Function/django.contrib.sessions.middleware:' - 'SessionMiddleware' + process_request_method), 1), - (('Function/django.contrib.auth.middleware:' - 'AuthenticationMiddleware' + process_request_method), 1), - (('Function/django.contrib.messages.middleware:' - 'MessageMiddleware' + process_request_method), 1), - (('Function/%s:' % url_module_path + - '%s.resolve' % url_resolver_cls), 1), - (('Function/django.middleware.csrf:' - 'CsrfViewMiddleware' + process_view_method), 1), - (('Function/django.contrib.messages.middleware:' - 'MessageMiddleware' + process_response_method), 1), - (('Function/django.middleware.csrf:' - 'CsrfViewMiddleware' + process_response_method), 1), - (('Function/django.contrib.sessions.middleware:' - 'SessionMiddleware' + process_response_method), 1), - (('Function/django.middleware.common:' - 'CommonMiddleware' + process_response_method), 1), + ("Function/django.core.handlers.wsgi:WSGIHandler.__call__", 1), + ("Python/WSGI/Application", 1), + 
("Python/WSGI/Response", 1), + ("Python/WSGI/Finalize", 1), + (("Function/django.middleware.common:CommonMiddleware%s" % process_request_method), 1), + (("Function/django.contrib.sessions.middleware:SessionMiddleware%s" % process_request_method), 1), + (("Function/django.contrib.auth.middleware:AuthenticationMiddleware%s" % process_request_method), 1), + (("Function/django.contrib.messages.middleware:MessageMiddleware%s" % process_request_method), 1), + (("Function/%s:%s.resolve" % (url_module_path, url_resolver_cls)), 1), + (("Function/django.middleware.csrf:CsrfViewMiddleware%s" % process_view_method), 1), + (("Function/django.contrib.messages.middleware:MessageMiddleware%s" % process_response_method), 1), + (("Function/django.middleware.csrf:CsrfViewMiddleware%s" % process_response_method), 1), + (("Function/django.contrib.sessions.middleware:SessionMiddleware%s" % process_response_method), 1), + (("Function/django.middleware.common:CommonMiddleware%s" % process_response_method), 1), ] _test_application_index_scoped_metrics = list(_scoped_metrics) -_test_application_index_scoped_metrics.append(('Function/views:index', 1)) +_test_application_index_scoped_metrics.append(("Function/views:index", 1)) if DJANGO_VERSION >= (1, 5): - _test_application_index_scoped_metrics.extend([ - ('Function/django.http.response:HttpResponse.close', 1)]) + _test_application_index_scoped_metrics.extend([("Function/django.http.response:HttpResponse.close", 1)]) @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('views:index', - scoped_metrics=_test_application_index_scoped_metrics) +@validate_transaction_metrics("views:index", scoped_metrics=_test_application_index_scoped_metrics) @validate_code_level_metrics("views", "index") def test_application_index(target_application): - response = target_application.get('') - response.mustcontain('INDEX RESPONSE') + response = target_application.get("") + response.mustcontain("INDEX RESPONSE") 
_test_application_view_scoped_metrics = list(_scoped_metrics) -_test_application_view_scoped_metrics.append(('Function/urls:View.get', 1)) +_test_application_view_scoped_metrics.append(("Function/urls:View.get", 1)) if DJANGO_VERSION >= (1, 5): - _test_application_view_scoped_metrics.extend([ - ('Function/rest_framework.response:Response.close', 1)]) + _test_application_view_scoped_metrics.extend([("Function/rest_framework.response:Response.close", 1)]) @validate_transaction_errors(errors=[]) -@validate_transaction_metrics('urls:View.get', - scoped_metrics=_test_application_view_scoped_metrics) +@validate_transaction_metrics("urls:View.get", scoped_metrics=_test_application_view_scoped_metrics) @validate_code_level_metrics("urls.View", "get") def test_application_view(target_application): - response = target_application.get('/view/') + response = target_application.get("/view/") assert response.status_int == 200 - response.mustcontain('restframework view response') + response.mustcontain("restframework view response") _test_application_view_error_scoped_metrics = list(_scoped_metrics) -_test_application_view_error_scoped_metrics.append( - ('Function/urls:ViewError.get', 1)) +_test_application_view_error_scoped_metrics.append(("Function/urls:ViewError.get", 1)) -@validate_transaction_errors(errors=['urls:Error']) -@validate_transaction_metrics('urls:ViewError.get', - scoped_metrics=_test_application_view_error_scoped_metrics) +@validate_transaction_errors(errors=["urls:Error"]) +@validate_transaction_metrics("urls:ViewError.get", scoped_metrics=_test_application_view_error_scoped_metrics) @validate_code_level_metrics("urls.ViewError", "get") def test_application_view_error(target_application): - target_application.get('/view_error/', status=500) + target_application.get("/view_error/", status=500) _test_application_view_handle_error_scoped_metrics = list(_scoped_metrics) -_test_application_view_handle_error_scoped_metrics.append( - 
('Function/urls:ViewHandleError.get', 1)) +_test_application_view_handle_error_scoped_metrics.append(("Function/urls:ViewHandleError.get", 1)) -@pytest.mark.parametrize('status,should_record', [(418, True), (200, False)]) -@pytest.mark.parametrize('use_global_exc_handler', [True, False]) +@pytest.mark.parametrize("status,should_record", [(418, True), (200, False)]) +@pytest.mark.parametrize("use_global_exc_handler", [True, False]) @validate_code_level_metrics("urls.ViewHandleError", "get") -def test_application_view_handle_error(status, should_record, - use_global_exc_handler, target_application): - errors = ['urls:Error'] if should_record else [] +def test_application_view_handle_error(status, should_record, use_global_exc_handler, target_application): + errors = ["urls:Error"] if should_record else [] @validate_transaction_errors(errors=errors) - @validate_transaction_metrics('urls:ViewHandleError.get', - scoped_metrics=_test_application_view_handle_error_scoped_metrics) + @validate_transaction_metrics( + "urls:ViewHandleError.get", scoped_metrics=_test_application_view_handle_error_scoped_metrics + ) def _test(): - response = target_application.get( - '/view_handle_error/%s/%s/' % (status, use_global_exc_handler), - status=status) + response = target_application.get("/view_handle_error/%s/%s/" % (status, use_global_exc_handler), status=status) if use_global_exc_handler: - response.mustcontain('exception was handled global') + response.mustcontain("exception was handled global") else: - response.mustcontain('exception was handled not global') + response.mustcontain("exception was handled not global") _test() -_test_api_view_view_name_get = 'urls:wrapped_view.get' +_test_api_view_view_name_get = "urls:wrapped_view.get" _test_api_view_scoped_metrics_get = list(_scoped_metrics) -_test_api_view_scoped_metrics_get.append( - ('Function/%s' % _test_api_view_view_name_get, 1)) +_test_api_view_scoped_metrics_get.append(("Function/%s" % _test_api_view_view_name_get, 1)) 
@validate_transaction_errors(errors=[]) -@validate_transaction_metrics(_test_api_view_view_name_get, - scoped_metrics=_test_api_view_scoped_metrics_get) -@validate_code_level_metrics("urls.WrappedAPIView" if six.PY3 else "urls", "wrapped_view") +@validate_transaction_metrics(_test_api_view_view_name_get, scoped_metrics=_test_api_view_scoped_metrics_get) +@validate_code_level_metrics("urls.WrappedAPIView", "wrapped_view", py2_namespace="urls") def test_api_view_get(target_application): - response = target_application.get('/api_view/') - response.mustcontain('wrapped_view response') + response = target_application.get("/api_view/") + response.mustcontain("wrapped_view response") -_test_api_view_view_name_post = 'urls:wrapped_view.http_method_not_allowed' +_test_api_view_view_name_post = "urls:wrapped_view.http_method_not_allowed" _test_api_view_scoped_metrics_post = list(_scoped_metrics) -_test_api_view_scoped_metrics_post.append( - ('Function/%s' % _test_api_view_view_name_post, 1)) +_test_api_view_scoped_metrics_post.append(("Function/%s" % _test_api_view_view_name_post, 1)) -@validate_transaction_errors( - errors=['rest_framework.exceptions:MethodNotAllowed']) -@validate_transaction_metrics(_test_api_view_view_name_post, - scoped_metrics=_test_api_view_scoped_metrics_post) +@validate_transaction_errors(errors=["rest_framework.exceptions:MethodNotAllowed"]) +@validate_transaction_metrics(_test_api_view_view_name_post, scoped_metrics=_test_api_view_scoped_metrics_post) def test_api_view_method_not_allowed(target_application): - target_application.post('/api_view/', status=405) + target_application.post("/api_view/", status=405) def test_application_view_agent_disabled(target_application): settings = global_settings() - @override_generic_settings(settings, {'enabled': False}) - @function_not_called('newrelic.core.stats_engine', - 'StatsEngine.record_transaction') + @override_generic_settings(settings, {"enabled": False}) + 
@function_not_called("newrelic.core.stats_engine", "StatsEngine.record_transaction") def _test(): - response = target_application.get('/view/') + response = target_application.get("/view/") assert response.status_int == 200 - response.mustcontain('restframework view response') + response.mustcontain("restframework view response") _test() diff --git a/tests/component_flask_rest/test_application.py b/tests/component_flask_rest/test_application.py index d463a0205..67d4825a1 100644 --- a/tests/component_flask_rest/test_application.py +++ b/tests/component_flask_rest/test_application.py @@ -31,8 +31,6 @@ from newrelic.core.config import global_settings from newrelic.packages import six -TEST_APPLICATION_PREFIX = "_test_application.create_app." if six.PY3 else "_test_application" - @pytest.fixture(params=["flask_restful", "flask_restx"]) def application(request): @@ -62,7 +60,7 @@ def application(request): ] -@validate_code_level_metrics(TEST_APPLICATION_PREFIX + ".IndexResource", "get") +@validate_code_level_metrics("_test_application.create_app..IndexResource", "get", py2_namespace="_test_application.IndexResource") @validate_transaction_errors(errors=[]) @validate_transaction_metrics("_test_application:index", scoped_metrics=_test_application_index_scoped_metrics) def test_application_index(application): @@ -88,7 +86,7 @@ def test_application_index(application): ], ) def test_application_raises(exception, status_code, ignore_status_code, propagate_exceptions, application): - @validate_code_level_metrics(TEST_APPLICATION_PREFIX + ".ExceptionResource", "get") + @validate_code_level_metrics("_test_application.create_app..ExceptionResource", "get", py2_namespace="_test_application.ExceptionResource") @validate_transaction_metrics("_test_application:exception", scoped_metrics=_test_application_raises_scoped_metrics) def _test(): try: @@ -118,4 +116,4 @@ def test_application_outside_transaction(application): def _test(): 
application.get("/exception/werkzeug.exceptions:HTTPException/404", status=404) - _test() + _test() \ No newline at end of file diff --git a/tests/component_graphqlserver/__init__.py b/tests/component_graphqlserver/__init__.py new file mode 100644 index 000000000..8030baccf --- /dev/null +++ b/tests/component_graphqlserver/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/component_graphqlserver/_target_schema_async.py b/tests/component_graphqlserver/_target_schema_async.py new file mode 100644 index 000000000..aff587bc8 --- /dev/null +++ b/tests/component_graphqlserver/_target_schema_async.py @@ -0,0 +1,155 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +from graphql import ( + GraphQLArgument, + GraphQLField, + GraphQLInt, + GraphQLList, + GraphQLNonNull, + GraphQLObjectType, + GraphQLSchema, + GraphQLString, + GraphQLUnionType, +) + +from ._target_schema_sync import books, libraries, magazines + +storage = [] + + +async def resolve_library(parent, info, index): + return libraries[index] + + +async def resolve_storage_add(parent, info, string): + storage.append(string) + return string + + +async def resolve_storage(parent, info): + return [storage.pop()] + + +async def resolve_search(parent, info, contains): + search_books = [b for b in books if contains in b["name"]] + search_magazines = [m for m in magazines if contains in m["name"]] + return search_books + search_magazines + + +Author = GraphQLObjectType( + "Author", + { + "first_name": GraphQLField(GraphQLString), + "last_name": GraphQLField(GraphQLString), + }, +) + +Book = GraphQLObjectType( + "Book", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "isbn": GraphQLField(GraphQLString), + "author": GraphQLField(Author), + "branch": GraphQLField(GraphQLString), + }, +) + +Magazine = GraphQLObjectType( + "Magazine", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "issue": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + }, +) + + +Library = GraphQLObjectType( + "Library", + { + "id": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + "book": GraphQLField(GraphQLList(Book)), + "magazine": GraphQLField(GraphQLList(Magazine)), + }, +) + +Storage = GraphQLList(GraphQLString) + + +async def resolve_hello(root, info): + return "Hello!" 
+ + +async def resolve_echo(root, info, echo): + return echo + + +async def resolve_error(root, info): + raise RuntimeError("Runtime Error!") + + +hello_field = GraphQLField(GraphQLString, resolver=resolve_hello) +library_field = GraphQLField( + Library, + resolver=resolve_library, + args={"index": GraphQLArgument(GraphQLNonNull(GraphQLInt))}, +) +search_field = GraphQLField( + GraphQLList(GraphQLUnionType("Item", (Book, Magazine), resolve_type=resolve_search)), + args={"contains": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +echo_field = GraphQLField( + GraphQLString, + resolver=resolve_echo, + args={"echo": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +storage_field = GraphQLField( + Storage, + resolver=resolve_storage, +) +storage_add_field = GraphQLField( + GraphQLString, + resolver=resolve_storage_add, + args={"string": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +error_field = GraphQLField(GraphQLString, resolver=resolve_error) +error_non_null_field = GraphQLField(GraphQLNonNull(GraphQLString), resolver=resolve_error) +error_middleware_field = GraphQLField(GraphQLString, resolver=resolve_hello) + +query = GraphQLObjectType( + name="Query", + fields={ + "hello": hello_field, + "library": library_field, + "search": search_field, + "echo": echo_field, + "storage": storage_field, + "error": error_field, + "error_non_null": error_non_null_field, + "error_middleware": error_middleware_field, + }, +) + +mutation = GraphQLObjectType( + name="Mutation", + fields={ + "storage_add": storage_add_field, + }, +) + +target_schema = GraphQLSchema(query=query, mutation=mutation) diff --git a/tests/component_graphqlserver/_test_graphql.py b/tests/component_graphqlserver/_test_graphql.py index 50b5621f9..7a29b3a8f 100644 --- a/tests/component_graphqlserver/_test_graphql.py +++ b/tests/component_graphqlserver/_test_graphql.py @@ -12,15 +12,18 @@ # See the License for the specific language governing permissions and # limitations under the License. 
+from flask import Flask +from sanic import Sanic import json - import webtest -from flask import Flask -from framework_graphql._target_application import _target_application as schema + +from testing_support.asgi_testing import AsgiTest +from framework_graphql._target_schema_sync import target_schema as schema from graphql_server.flask import GraphQLView as FlaskView from graphql_server.sanic import GraphQLView as SanicView -from sanic import Sanic -from testing_support.asgi_testing import AsgiTest + +# Sanic +target_application = dict() def set_middlware(middleware, view_middleware): @@ -95,5 +98,4 @@ def flask_execute(query, middleware=None): return response - target_application["Flask"] = flask_execute diff --git a/tests/component_graphqlserver/test_graphql.py b/tests/component_graphqlserver/test_graphql.py index e5566047e..098f50970 100644 --- a/tests/component_graphqlserver/test_graphql.py +++ b/tests/component_graphqlserver/test_graphql.py @@ -12,16 +12,21 @@ # See the License for the specific language governing permissions and # limitations under the License. + import importlib import pytest from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_errors import validate_transaction_errors -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics from testing_support.validators.validate_span_events import validate_span_events from testing_support.validators.validate_transaction_count import ( validate_transaction_count, ) +from testing_support.validators.validate_transaction_errors import ( + validate_transaction_errors, +) +from testing_support.validators.validate_transaction_metrics import ( + validate_transaction_metrics, +) from newrelic.common.object_names import callable_name @@ -36,7 +41,7 @@ def is_graphql_2(): @pytest.fixture(scope="session", params=("Sanic", "Flask")) def target_application(request): - import _test_graphql + from . 
import _test_graphql framework = request.param version = importlib.import_module(framework.lower()).__version__ @@ -186,7 +191,7 @@ def test_middleware(target_application): _test_middleware_metrics = [ ("GraphQL/operation/GraphQLServer/query//hello", 1), ("GraphQL/resolve/GraphQLServer/hello", 1), - ("Function/test_graphql:example_middleware", 1), + ("Function/component_graphqlserver.test_graphql:example_middleware", 1), ] # Base span count 6: Transaction, View, Operation, Middleware, and 1 Resolver and Resolver function @@ -220,7 +225,7 @@ def test_exception_in_middleware(target_application): _test_exception_rollup_metrics = [ ("Errors/all", 1), ("Errors/allWeb", 1), - ("Errors/WebTransaction/GraphQL/test_graphql:error_middleware", 1), + ("Errors/WebTransaction/GraphQL/component_graphqlserver.test_graphql:error_middleware", 1), ] + _test_exception_scoped_metrics # Attributes @@ -237,7 +242,7 @@ def test_exception_in_middleware(target_application): } @validate_transaction_metrics( - "test_graphql:error_middleware", + "component_graphqlserver.test_graphql:error_middleware", "GraphQL", scoped_metrics=_test_exception_scoped_metrics, rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, @@ -257,7 +262,7 @@ def test_exception_in_resolver(target_application, field): framework, version, target_application = target_application query = "query MyQuery { %s }" % field - txn_name = "framework_graphql._target_application:resolve_error" + txn_name = "framework_graphql._target_schema_sync:resolve_error" # Metrics _test_exception_scoped_metrics = [ @@ -488,7 +493,7 @@ def _test(): def test_deepest_unique_path(target_application, query, expected_path): framework, version, target_application = target_application if expected_path == "/error": - txn_name = "framework_graphql._target_application:resolve_error" + txn_name = "framework_graphql._target_schema_sync:resolve_error" else: txn_name = "query/%s" % expected_path diff --git 
a/tests/datastore_bmemcached/test_memcache.py b/tests/datastore_bmemcached/test_memcache.py index c4bd3fb66..2f87da113 100644 --- a/tests/datastore_bmemcached/test_memcache.py +++ b/tests/datastore_bmemcached/test_memcache.py @@ -23,6 +23,8 @@ from newrelic.api.background_task import background_task from newrelic.api.transaction import set_background_task +from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics + DB_SETTINGS = memcached_settings()[0] MEMCACHED_HOST = DB_SETTINGS["host"] diff --git a/tests/datastore_firestore/test_async_batching.py b/tests/datastore_firestore/test_async_batching.py index fd4afea8c..39e532a04 100644 --- a/tests/datastore_firestore/test_async_batching.py +++ b/tests/datastore_firestore/test_async_batching.py @@ -70,4 +70,4 @@ def test_firestore_async_write_batch_trace_node_datastore_params(loop, exercise_ def _test(): loop.run_until_complete(exercise_async_write_batch()) - _test() + _test() \ No newline at end of file diff --git a/tests/datastore_firestore/test_async_client.py b/tests/datastore_firestore/test_async_client.py index 703fbc4c2..1c7518bf0 100644 --- a/tests/datastore_firestore/test_async_client.py +++ b/tests/datastore_firestore/test_async_client.py @@ -84,4 +84,4 @@ def test_firestore_async_client_trace_node_datastore_params(loop, exercise_async def _test(): loop.run_until_complete(exercise_async_client()) - _test() + _test() \ No newline at end of file diff --git a/tests/datastore_firestore/test_async_collections.py b/tests/datastore_firestore/test_async_collections.py index 45ac691b9..214ee2939 100644 --- a/tests/datastore_firestore/test_async_collections.py +++ b/tests/datastore_firestore/test_async_collections.py @@ -91,4 +91,4 @@ def test_firestore_async_collections_trace_node_datastore_params(loop, exercise_ def _test(): loop.run_until_complete(exercise_async_collections()) - _test() + _test() \ No newline at end of file diff --git 
a/tests/datastore_firestore/test_async_documents.py b/tests/datastore_firestore/test_async_documents.py index 8a4ecd7bb..c90693208 100644 --- a/tests/datastore_firestore/test_async_documents.py +++ b/tests/datastore_firestore/test_async_documents.py @@ -105,4 +105,4 @@ def test_firestore_async_documents_trace_node_datastore_params(loop, exercise_as def _test(): loop.run_until_complete(exercise_async_documents()) - _test() + _test() \ No newline at end of file diff --git a/tests/datastore_firestore/test_async_query.py b/tests/datastore_firestore/test_async_query.py index 8c29841e9..1bc579b7f 100644 --- a/tests/datastore_firestore/test_async_query.py +++ b/tests/datastore_firestore/test_async_query.py @@ -246,4 +246,4 @@ def test_firestore_async_collection_group_trace_node_datastore_params( def _test(): loop.run_until_complete(exercise_async_collection_group()) - _test() + _test() \ No newline at end of file diff --git a/tests/datastore_firestore/test_client.py b/tests/datastore_firestore/test_client.py index a5fd1b37f..81fbd181c 100644 --- a/tests/datastore_firestore/test_client.py +++ b/tests/datastore_firestore/test_client.py @@ -80,4 +80,4 @@ def test_firestore_client_trace_node_datastore_params(exercise_client, instance_ def _test(): exercise_client() - _test() + _test() \ No newline at end of file diff --git a/tests/framework_ariadne/__init__.py b/tests/framework_ariadne/__init__.py new file mode 100644 index 000000000..8030baccf --- /dev/null +++ b/tests/framework_ariadne/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/framework_ariadne/_target_application.py b/tests/framework_ariadne/_target_application.py index 94bc0710f..fef782608 100644 --- a/tests/framework_ariadne/_target_application.py +++ b/tests/framework_ariadne/_target_application.py @@ -12,140 +12,125 @@ # See the License for the specific language governing permissions and # limitations under the License. -import os - -from ariadne import ( - MutationType, - QueryType, - UnionType, - load_schema_from_path, - make_executable_schema, + +import asyncio +import json + +from framework_ariadne._target_schema_async import ( + target_asgi_application as target_asgi_application_async, +) +from framework_ariadne._target_schema_async import target_schema as target_schema_async +from framework_ariadne._target_schema_sync import ( + target_asgi_application as target_asgi_application_sync, +) +from framework_ariadne._target_schema_sync import target_schema as target_schema_sync +from framework_ariadne._target_schema_sync import ( + target_wsgi_application as target_wsgi_application_sync, ) -from ariadne.asgi import GraphQL as GraphQLASGI -from ariadne.wsgi import GraphQL as GraphQLWSGI +from framework_ariadne._target_schema_sync import ariadne_version_tuple +from graphql import MiddlewareManager -schema_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "schema.graphql") -type_defs = load_schema_from_path(schema_file) - - -authors = [ - { - "first_name": "New", - "last_name": "Relic", - }, - { - "first_name": "Bob", - "last_name": "Smith", - }, - { - "first_name": "Leslie", - "last_name": 
"Jones", - }, -] -books = [ - { - "id": 1, - "name": "Python Agent: The Book", - "isbn": "a-fake-isbn", - "author": authors[0], - "branch": "riverside", - }, - { - "id": 2, - "name": "Ollies for O11y: A Sk8er's Guide to Observability", - "isbn": "a-second-fake-isbn", - "author": authors[1], - "branch": "downtown", - }, - { - "id": 3, - "name": "[Redacted]", - "isbn": "a-third-fake-isbn", - "author": authors[2], - "branch": "riverside", - }, -] -magazines = [ - {"id": 1, "name": "Reli Updates Weekly", "issue": 1, "branch": "riverside"}, - {"id": 2, "name": "Reli Updates Weekly", "issue": 2, "branch": "downtown"}, - {"id": 3, "name": "Node Weekly", "issue": 1, "branch": "riverside"}, -] +def check_response(query, success, response): + if isinstance(query, str) and "error" not in query: + assert success and "errors" not in response, response + assert response.get("data", None), response + else: + assert "errors" in response, response -libraries = ["riverside", "downtown"] -libraries = [ - { - "id": i + 1, - "branch": branch, - "magazine": [m for m in magazines if m["branch"] == branch], - "book": [b for b in books if b["branch"] == branch], - } - for i, branch in enumerate(libraries) -] +def run_sync(schema): + def _run_sync(query, middleware=None): + from ariadne import graphql_sync -storage = [] + if ariadne_version_tuple < (0, 18): + if middleware: + middleware = MiddlewareManager(*middleware) + success, response = graphql_sync(schema, {"query": query}, middleware=middleware) + check_response(query, success, response) -mutation = MutationType() + return response.get("data", {}) + return _run_sync -@mutation.field("storage_add") -def mutate(self, info, string): - storage.append(string) - return {"string": string} +def run_async(schema): + def _run_async(query, middleware=None): + from ariadne import graphql -item = UnionType("Item") + # Later versions of ariadne accept a list of middleware directly, while older versions require a MiddlewareManager + if 
ariadne_version_tuple < (0, 18): + if middleware: + middleware = MiddlewareManager(*middleware) + loop = asyncio.get_event_loop() + success, response = loop.run_until_complete(graphql(schema, {"query": query}, middleware=middleware)) + check_response(query, success, response) -@item.type_resolver -def resolve_type(obj, *args): - if "isbn" in obj: - return "Book" - elif "issue" in obj: # pylint: disable=R1705 - return "Magazine" + return response.get("data", {}) - return None + return _run_async -query = QueryType() +def run_wsgi(app): + def _run_wsgi(query, middleware=None): + if not isinstance(query, str) or "error" in query: + expect_errors = True + else: + expect_errors = False -@query.field("library") -def resolve_library(self, info, index): - return libraries[index] + app.app.middleware = middleware + response = app.post( + "/", json.dumps({"query": query}), headers={"Content-Type": "application/json"}, expect_errors=expect_errors + ) + body = json.loads(response.body.decode("utf-8")) + if expect_errors: + assert body["errors"] + else: + assert "errors" not in body or not body["errors"] -@query.field("storage") -def resolve_storage(self, info): - return storage + return body.get("data", {}) + return _run_wsgi -@query.field("search") -def resolve_search(self, info, contains): - search_books = [b for b in books if contains in b["name"]] - search_magazines = [m for m in magazines if contains in m["name"]] - return search_books + search_magazines +def run_asgi(app): + def _run_asgi(query, middleware=None): + if ariadne_version_tuple < (0, 16): + app.asgi_application.middleware = middleware -@query.field("hello") -def resolve_hello(self, info): - return "Hello!" 
+ # In ariadne v0.16.0, the middleware attribute was moved from the GraphQL class to the http_handler + elif ariadne_version_tuple >= (0, 16): + app.asgi_application.http_handler.middleware = middleware + response = app.make_request( + "POST", "/", body=json.dumps({"query": query}), headers={"Content-Type": "application/json"} + ) + body = json.loads(response.body.decode("utf-8")) -@query.field("echo") -def resolve_echo(self, info, echo): - return echo + if not isinstance(query, str) or "error" in query: + try: + assert response.status != 200 + except AssertionError: + assert body["errors"] + else: + assert response.status == 200 + assert "errors" not in body or not body["errors"] + return body.get("data", {}) -@query.field("error_non_null") -@query.field("error") -def resolve_error(self, info): - raise RuntimeError("Runtime Error!") + return _run_asgi -_target_application = make_executable_schema(type_defs, query, mutation, item) -_target_asgi_application = GraphQLASGI(_target_application) -_target_wsgi_application = GraphQLWSGI(_target_application) +target_application = { + "sync-sync": run_sync(target_schema_sync), + "async-sync": run_async(target_schema_sync), + "async-async": run_async(target_schema_async), + "wsgi-sync": run_wsgi(target_wsgi_application_sync), + "asgi-sync": run_asgi(target_asgi_application_sync), + "asgi-async": run_asgi(target_asgi_application_async), +} diff --git a/tests/framework_ariadne/_target_schema_async.py b/tests/framework_ariadne/_target_schema_async.py new file mode 100644 index 000000000..076475628 --- /dev/null +++ b/tests/framework_ariadne/_target_schema_async.py @@ -0,0 +1,94 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from ariadne import (
+    MutationType,
+    QueryType,
+    UnionType,
+    load_schema_from_path,
+    make_executable_schema,
+)
+from ariadne.asgi import GraphQL as GraphQLASGI
+from framework_graphql._target_schema_sync import books, magazines, libraries
+
+from testing_support.asgi_testing import AsgiTest
+
+schema_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "schema.graphql")
+type_defs = load_schema_from_path(schema_file)
+
+storage = []
+
+mutation = MutationType()
+
+
+@mutation.field("storage_add")
+async def resolve_storage_add(self, info, string):
+    storage.append(string)
+    return string
+
+
+item = UnionType("Item")
+
+
+@item.type_resolver
+async def resolve_type(obj, *args):
+    if "isbn" in obj:
+        return "Book"
+    elif "issue" in obj:  # pylint: disable=R1705
+        return "Magazine"
+
+    return None
+
+
+query = QueryType()
+
+
+@query.field("library")
+async def resolve_library(self, info, index):
+    return libraries[index]
+
+
+@query.field("storage")
+async def resolve_storage(self, info):
+    return [storage.pop()]
+
+
+@query.field("search")
+async def resolve_search(self, info, contains):
+    search_books = [b for b in books if contains in b["name"]]
+    search_magazines = [m for m in magazines if contains in m["name"]]
+    return search_books + search_magazines
+
+
+@query.field("hello")
+@query.field("error_middleware")
+async def resolve_hello(self, info):
+    return "Hello!"
+
+
+@query.field("echo")
+async def resolve_echo(self, info, echo):
+    return echo
+
+
+@query.field("error_non_null")
+@query.field("error")
+async def resolve_error(self, info):
+    raise RuntimeError("Runtime Error!")
+
+
+target_schema = make_executable_schema(type_defs, query, mutation, item)
+target_asgi_application = AsgiTest(GraphQLASGI(target_schema))
diff --git a/tests/framework_ariadne/_target_schema_sync.py b/tests/framework_ariadne/_target_schema_sync.py
new file mode 100644
index 000000000..8860e71ac
--- /dev/null
+++ b/tests/framework_ariadne/_target_schema_sync.py
@@ -0,0 +1,106 @@
+# Copyright 2010 New Relic, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
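The async schema file above duplicates the sync schema with every resolver declared `async`. The practical difference is that calling an async resolver yields a coroutine the executor must await, rather than a value; a minimal illustration (hypothetical resolvers, no ariadne required):

```python
import asyncio
import inspect

def resolve_hello_sync(obj, info):
    return "Hello!"

async def resolve_hello_async(obj, info):
    return "Hello!"

# The sync resolver returns its value directly.
assert resolve_hello_sync(None, None) == "Hello!"

# The async resolver is a coroutine function; its result must be awaited,
# which is what the GraphQL executor (or an event loop) does for it.
assert inspect.iscoroutinefunction(resolve_hello_async)
result = asyncio.run(resolve_hello_async(None, None))
assert result == "Hello!"
```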
+
+import os
+import webtest
+
+from ariadne import (
+    MutationType,
+    QueryType,
+    UnionType,
+    load_schema_from_path,
+    make_executable_schema,
+)
+from ariadne.wsgi import GraphQL as GraphQLWSGI
+from framework_graphql._target_schema_sync import books, magazines, libraries
+
+from testing_support.asgi_testing import AsgiTest
+from framework_ariadne.test_application import ARIADNE_VERSION
+
+ariadne_version_tuple = tuple(map(int, ARIADNE_VERSION.split(".")))
+
+if ariadne_version_tuple < (0, 16):
+    from ariadne.asgi import GraphQL as GraphQLASGI
+elif ariadne_version_tuple >= (0, 16):
+    from ariadne.asgi.graphql import GraphQL as GraphQLASGI
+
+
+schema_file = os.path.join(os.path.dirname(os.path.realpath(__file__)), "schema.graphql")
+type_defs = load_schema_from_path(schema_file)
+
+storage = []
+
+mutation = MutationType()
+
+
+
+@mutation.field("storage_add")
+def resolve_storage_add(self, info, string):
+    storage.append(string)
+    return string
+
+
+item = UnionType("Item")
+
+
+@item.type_resolver
+def resolve_type(obj, *args):
+    if "isbn" in obj:
+        return "Book"
+    elif "issue" in obj:  # pylint: disable=R1705
+        return "Magazine"
+
+    return None
+
+
+query = QueryType()
+
+
+@query.field("library")
+def resolve_library(self, info, index):
+    return libraries[index]
+
+
+@query.field("storage")
+def resolve_storage(self, info):
+    return [storage.pop()]
+
+
+@query.field("search")
+def resolve_search(self, info, contains):
+    search_books = [b for b in books if contains in b["name"]]
+    search_magazines = [m for m in magazines if contains in m["name"]]
+    return search_books + search_magazines
+
+
+@query.field("hello")
+@query.field("error_middleware")
+def resolve_hello(self, info):
+    return "Hello!"
+
+
+@query.field("echo")
+def resolve_echo(self, info, echo):
+    return echo
+
+
+@query.field("error_non_null")
+@query.field("error")
+def resolve_error(self, info):
+    raise RuntimeError("Runtime Error!")
+
+
+target_schema = make_executable_schema(type_defs, query, mutation, item)
+target_asgi_application = AsgiTest(GraphQLASGI(target_schema))
+target_wsgi_application = webtest.TestApp(GraphQLWSGI(target_schema))
\ No newline at end of file
diff --git a/tests/framework_ariadne/conftest.py b/tests/framework_ariadne/conftest.py
index 93623a685..42b08faba 100644
--- a/tests/framework_ariadne/conftest.py
+++ b/tests/framework_ariadne/conftest.py
@@ -12,10 +11,11 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-import pytest
 import six
-from testing_support.fixtures import collector_agent_registration_fixture, collector_available_fixture  # noqa: F401; pylint: disable=W0611
-
+from testing_support.fixtures import (  # noqa: F401; pylint: disable=W0611
+    collector_agent_registration_fixture,
+    collector_available_fixture,
+)
 
 _default_settings = {
     "transaction_tracer.explain_threshold": 0.0,
@@ -31,12 +32,5 @@
 )
 
 
-@pytest.fixture(scope="session")
-def app():
-    from _target_application import _target_application
-
-    return _target_application
-
-
 if six.PY2:
     collect_ignore = ["test_application_async.py"]
diff --git a/tests/framework_ariadne/schema.graphql b/tests/framework_ariadne/schema.graphql
index 4c76e0b88..8bf64af51 100644
--- a/tests/framework_ariadne/schema.graphql
+++ b/tests/framework_ariadne/schema.graphql
@@ -33,7 +33,7 @@ type Magazine {
 }
 
 type Mutation {
-    storage_add(string: String!): StorageAdd
+    storage_add(string: String!): String
 }
 
 type Query {
@@ -44,8 +44,5 @@
     echo(echo: String!): String
     error: String
     error_non_null: String!
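_target_schema_sync.py above gates its GraphQLASGI import on a version tuple parsed from the installed ariadne release, since the ASGI app moved from `ariadne.asgi` to `ariadne.asgi.graphql` in v0.16. A sketch of that comparison with a hard-coded version string (the patch obtains the real one via `get_package_version("ariadne")`):

```python
ARIADNE_VERSION = "0.16.1"  # illustrative; real code reads the installed package version

# Same parse as the patch; note it assumes purely numeric dot-separated
# components (a pre-release suffix like "0.17.0b1" would raise ValueError).
ariadne_version_tuple = tuple(map(int, ARIADNE_VERSION.split(".")))

if ariadne_version_tuple < (0, 16):
    asgi_import_path = "ariadne.asgi"           # pre-0.16 home of the ASGI GraphQL app
else:
    asgi_import_path = "ariadne.asgi.graphql"   # location from ariadne 0.16 onward

print(asgi_import_path)  # ariadne.asgi.graphql
```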
-} - -type StorageAdd { - string: String + error_middleware: String } diff --git a/tests/framework_ariadne/test_application.py b/tests/framework_ariadne/test_application.py index cf8501a7a..0b7bf2489 100644 --- a/tests/framework_ariadne/test_application.py +++ b/tests/framework_ariadne/test_application.py @@ -11,526 +11,27 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. - import pytest -from testing_support.fixtures import dt_enabled, override_application_settings -from testing_support.validators.validate_span_events import validate_span_events -from testing_support.validators.validate_transaction_count import ( - validate_transaction_count, -) -from testing_support.validators.validate_transaction_errors import ( - validate_transaction_errors, -) -from testing_support.validators.validate_transaction_metrics import ( - validate_transaction_metrics, -) - -from newrelic.api.background_task import background_task -from newrelic.common.object_names import callable_name -from newrelic.common.package_version_utils import get_package_version_tuple - - -@pytest.fixture(scope="session") -def is_graphql_2(): - from graphql import __version__ as version - - major_version = int(version.split(".")[0]) - return major_version == 2 - - -@pytest.fixture(scope="session") -def graphql_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - - def execute(schema, query, *args, **kwargs): - from ariadne import graphql_sync - - return graphql_sync(schema, {"query": query}, *args, **kwargs) - - return execute - - -def to_graphql_source(query): - def delay_import(): - try: - from graphql import Source - except ImportError: - # Fallback if Source is not implemented - return query - - from graphql import __version__ as version - - # For graphql2, Source objects aren't acceptable input - major_version = int(version.split(".")[0]) - if 
major_version == 2: - return query - - return Source(query) - - return delay_import - - -def example_middleware(next, root, info, **args): # pylint: disable=W0622 - return_value = next(root, info, **args) - return return_value - - -def error_middleware(next, root, info, **args): # pylint: disable=W0622 - raise RuntimeError("Runtime Error!") - - -_runtime_error_name = callable_name(RuntimeError) -_test_runtime_error = [(_runtime_error_name, "Runtime Error!")] -_graphql_base_rollup_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 1), - ("GraphQL/allOther", 1), - ("GraphQL/Ariadne/all", 1), - ("GraphQL/Ariadne/allOther", 1), -] - - -def test_basic(app, graphql_run): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/Ariadne/None", 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - rollup_metrics=_graphql_base_rollup_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @background_task() - def _test(): - ok, response = graphql_run(app, "{ hello }") - assert ok and not response.get("errors") - - _test() - - -@dt_enabled -def test_query_and_mutation(app, graphql_run): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/Ariadne/None", 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage", 1), - ("GraphQL/resolve/Ariadne/storage_add", 1), - ("GraphQL/operation/Ariadne/query//storage", 1), - ("GraphQL/operation/Ariadne/mutation//storage_add.string", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/Ariadne/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/Ariadne/allOther", 2), - ] + _test_mutation_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - 
_expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "StorageAdd", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - ok, response = graphql_run(app, 'mutation { storage_add(string: "abc") { string } }') - assert ok and not response.get("errors") - ok, response = graphql_run(app, "query { storage }") - assert ok and not response.get("errors") - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response["data"]) - assert "abc" in str(response["data"]) - - _test() - - -@dt_enabled -def test_middleware(app, graphql_run, is_graphql_2): - _test_middleware_metrics = [ - ("GraphQL/operation/Ariadne/query//hello", 1), - ("GraphQL/resolve/Ariadne/hello", 1), - ("Function/test_application:example_middleware", 1), - ] - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - scoped_metrics=_test_middleware_metrics, - rollup_metrics=_test_middleware_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 5: 
Transaction, Operation, Middleware, and 1 Resolver and Resolver function - @validate_span_events(count=5) - @background_task() - def _test(): - from graphql import MiddlewareManager - - middleware = ( - [example_middleware] - if get_package_version_tuple("ariadne") >= (0, 18) - else MiddlewareManager(example_middleware) - ) +from framework_graphql.test_application import * - ok, response = graphql_run(app, "{ hello }", middleware=middleware) - assert ok and not response.get("errors") - assert "Hello!" in str(response["data"]) +from newrelic.common.package_version_utils import get_package_version - _test() +ARIADNE_VERSION = get_package_version("ariadne") +ariadne_version_tuple = tuple(map(int, ARIADNE_VERSION.split("."))) -@dt_enabled -def test_exception_in_middleware(app, graphql_run): - query = "query MyQuery { hello }" - field = "hello" - - # Metrics - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Ariadne/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/Ariadne/%s" % field, 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/test_application:error_middleware", 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_resolver_attributes = { - "graphql.field.name": field, - "graphql.field.parentType": "Query", - "graphql.field.path": field, - "graphql.field.returnType": "String", - } - _expected_exception_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - "test_application:error_middleware", - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_span_events(exact_agents=_expected_exception_resolver_attributes) - 
@validate_transaction_errors(errors=_test_runtime_error) - @background_task() - def _test(): - from graphql import MiddlewareManager - - middleware = ( - [error_middleware] - if get_package_version_tuple("ariadne") >= (0, 18) - else MiddlewareManager(error_middleware) - ) - - _, response = graphql_run(app, query, middleware=middleware) - assert response["errors"] - - _test() - - -@pytest.mark.parametrize("field", ("error", "error_non_null")) -@dt_enabled -def test_exception_in_resolver(app, graphql_run, field): - query = "query MyQuery { %s }" % field - txn_name = "_target_application:resolve_error" - - # Metrics - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Ariadne/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/Ariadne/%s" % field, 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_resolver_attributes = { - "graphql.field.name": field, - "graphql.field.parentType": "Query", - "graphql.field.path": field, - "graphql.field.returnType": "String!" 
if "non_null" in field else "String", - } - _expected_exception_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_span_events(exact_agents=_expected_exception_resolver_attributes) - @validate_transaction_errors(errors=_test_runtime_error) - @background_task() - def _test(): - _, response = graphql_run(app, query) - assert response["errors"] - - _test() - - -@dt_enabled -@pytest.mark.parametrize( - "query,exc_class", - [ - ("query MyQuery { missing_field }", "GraphQLError"), - ("{ syntax_error ", "graphql.error.syntax_error:GraphQLSyntaxError"), - ], +@pytest.fixture( + scope="session", params=["sync-sync", "async-sync", "async-async", "wsgi-sync", "asgi-sync", "asgi-async"] ) -def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_class): - if "syntax" in query: - txn_name = "graphql.language.parser:parse" - else: - if is_graphql_2: - txn_name = "graphql.validation.validation:validate" - else: - txn_name = "graphql.validation.validate:validate" - - # Import path differs between versions - if exc_class == "GraphQLError": - from graphql.error import GraphQLError - - exc_class = callable_name(GraphQLError) - - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Ariadne///", 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_operation_attributes = { - "graphql.operation.type": "", - "graphql.operation.name": "", - "graphql.operation.query": query, - } - - 
@validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_transaction_errors(errors=[exc_class]) - @background_task() - def _test(): - _, response = graphql_run(app, query) - assert response["errors"] - - _test() - - -@dt_enabled -def test_operation_metrics_and_attrs(app, graphql_run): - operation_metrics = [("GraphQL/operation/Ariadne/query/MyQuery/library", 1)] - operation_attrs = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - } - - @validate_transaction_metrics( - "query/MyQuery/library", - "GraphQL", - scoped_metrics=operation_metrics, - rollup_metrics=operation_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 16: Transaction, Operation, and 7 Resolvers and Resolver functions - # library, library.name, library.book - # library.book.name and library.book.id for each book resolved (in this case 2) - @validate_span_events(count=16) - @validate_span_events(exact_agents=operation_attrs) - @background_task() - def _test(): - ok, response = graphql_run(app, "query MyQuery { library(index: 0) { branch, book { id, name } } }") - assert ok and not response.get("errors") - - _test() - - -@dt_enabled -def test_field_resolver_metrics_and_attrs(app, graphql_run): - field_resolver_metrics = [("GraphQL/resolve/Ariadne/hello", 1)] - graphql_attrs = { - "graphql.field.name": "hello", - "graphql.field.parentType": "Query", - "graphql.field.path": "hello", - "graphql.field.returnType": "String", - } - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - scoped_metrics=field_resolver_metrics, - rollup_metrics=field_resolver_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 4: Transaction, Operation, and 1 Resolver and Resolver 
function - @validate_span_events(count=4) - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - ok, response = graphql_run(app, "{ hello }") - assert ok and not response.get("errors") - assert "Hello!" in str(response["data"]) - - _test() - - -_test_queries = [ - ("{ hello }", "{ hello }"), # Basic query extraction - ("{ error }", "{ error }"), # Extract query on field error - ("{ library(index: 0) { branch } }", "{ library(index: ?) { branch } }"), # Integers - ('{ echo(echo: "123") }', "{ echo(echo: ?) }"), # Strings with numerics - ('{ echo(echo: "test") }', "{ echo(echo: ?) }"), # Strings - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Aliases - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Variables - ( # Fragments - '{ ...MyFragment } fragment MyFragment on Query { echo(echo: "test") }', - "{ ...MyFragment } fragment MyFragment on Query { echo(echo: ?) }", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,obfuscated", _test_queries) -def test_query_obfuscation(app, graphql_run, query, obfuscated): - graphql_attrs = {"graphql.operation.query": obfuscated} - - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - ok, response = graphql_run(app, query) - if not isinstance(query, str) or "error" not in query: - assert ok and not response.get("errors") - - _test() - - -_test_queries = [ - ("{ hello }", "/hello"), # Basic query - ("{ error }", "/error"), # Extract deepest path on field error - ('{ echo(echo: "test") }', "/echo"), # Fields with arguments - ( - "{ library(index: 0) { branch, book { isbn branch } } }", - "/library", - ), # Complex Example, 1 level - ( - "{ library(index: 0) { book { author { first_name }} } }", - "/library.book.author.first_name", - ), # Complex Example, 2 levels - ("{ library(index: 0) { id, book { name } } }", "/library.book.name"), # Filtering - ('{ TestEcho: echo(echo: "test") }', "/echo"), # Aliases - ( 
- '{ search(contains: "A") { __typename ... on Book { name } } }', - "/search.name", - ), # InlineFragment - ( - '{ hello echo(echo: "test") }', - "", - ), # Multiple root selections. (need to decide on final behavior) - # FragmentSpread - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { name id }", # Fragment filtering - "/library.book.name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { author { first_name } }", - "/library.book.author.first_name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } magazine { ...MagFragment } } } fragment MyFragment on Book { author { first_name } } fragment MagFragment on Magazine { name }", - "/library", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,expected_path", _test_queries) -def test_deepest_unique_path(app, graphql_run, query, expected_path): - if expected_path == "/error": - txn_name = "_target_application:resolve_error" - else: - txn_name = "query/%s" % expected_path - - @validate_transaction_metrics( - txn_name, - "GraphQL", - background_task=True, - ) - @background_task() - def _test(): - ok, response = graphql_run(app, query) - if "error" not in query: - assert ok and not response.get("errors") - - _test() - +def target_application(request): + from ._target_application import target_application -@pytest.mark.parametrize("capture_introspection_setting", (True, False)) -def test_introspection_transactions(app, graphql_run, capture_introspection_setting): - txn_ct = 1 if capture_introspection_setting else 0 + target_application = target_application[request.param] - @override_application_settings( - {"instrumentation.graphql.capture_introspection_queries": capture_introspection_setting} - ) - @validate_transaction_count(txn_ct) - @background_task() - def _test(): - ok, response = graphql_run(app, "{ __schema { types { name } } }") - assert ok and not response.get("errors") + param = request.param.split("-") + is_background 
= param[0] not in {"wsgi", "asgi"} + schema_type = param[1] + extra_spans = 4 if param[0] == "wsgi" else 0 - _test() + assert ARIADNE_VERSION is not None + return "Ariadne", ARIADNE_VERSION, target_application, is_background, schema_type, extra_spans diff --git a/tests/framework_ariadne/test_application_async.py b/tests/framework_ariadne/test_application_async.py deleted file mode 100644 index ada34ffad..000000000 --- a/tests/framework_ariadne/test_application_async.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import asyncio - -import pytest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - -from newrelic.api.background_task import background_task - - -@pytest.fixture(scope="session") -def graphql_run_async(): - """Wrapper function to simulate framework_graphql test behavior.""" - - def execute(schema, query, *args, **kwargs): - from ariadne import graphql - - return graphql(schema, {"query": query}, *args, **kwargs) - - return execute - - -@dt_enabled -def test_query_and_mutation_async(app, graphql_run_async): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/Ariadne/None", 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage", 1), - ("GraphQL/resolve/Ariadne/storage_add", 1), - ("GraphQL/operation/Ariadne/query//storage", 1), - ("GraphQL/operation/Ariadne/mutation//storage_add.string", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/Ariadne/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/Ariadne/allOther", 2), - ] + _test_mutation_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "StorageAdd", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - 
@validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - async def coro(): - ok, response = await graphql_run_async(app, 'mutation { storage_add(string: "abc") { string } }') - assert ok and not response.get("errors") - ok, response = await graphql_run_async(app, "query { storage }") - assert ok and not response.get("errors") - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.get("data")) - assert "abc" in str(response.get("data")) - - loop = asyncio.new_event_loop() - loop.run_until_complete(coro()) - - _test() diff --git a/tests/framework_ariadne/test_asgi.py b/tests/framework_ariadne/test_asgi.py deleted file mode 100644 index 861f2aa93..000000000 --- a/tests/framework_ariadne/test_asgi.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import json - -import pytest -from testing_support.asgi_testing import AsgiTest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - - -@pytest.fixture(scope="session") -def graphql_asgi_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - from _target_application import _target_asgi_application - - app = AsgiTest(_target_asgi_application) - - def execute(query): - return app.make_request( - "POST", "/", headers={"Content-Type": "application/json"}, body=json.dumps({"query": query}) - ) - - return execute - - -@dt_enabled -def test_query_and_mutation_asgi(graphql_asgi_run): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/Ariadne/None", 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage_add", 1), - ("GraphQL/operation/Ariadne/mutation//storage_add.string", 1), - ] - _test_query_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage", 1), - ("GraphQL/operation/Ariadne/query//storage", 1), - ] - _test_unscoped_metrics = [ - ("WebTransaction", 1), - ("GraphQL/all", 1), - ("GraphQL/Ariadne/all", 1), - ("GraphQL/allWeb", 1), - ("GraphQL/Ariadne/allWeb", 1), - ] - _test_mutation_unscoped_metrics = _test_unscoped_metrics + _test_mutation_scoped_metrics - _test_query_unscoped_metrics = _test_unscoped_metrics + _test_query_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "StorageAdd", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - 
"graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_query_scoped_metrics, - rollup_metrics=_test_query_unscoped_metrics + FRAMEWORK_METRICS, - ) - @validate_transaction_metrics( - "mutation//storage_add.string", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - index=-2, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes, index=-2) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes, index=-2) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - def _test(): - response = graphql_asgi_run('mutation { storage_add(string: "abc") { string } }') - assert response.status == 200 - response = json.loads(response.body.decode("utf-8")) - assert not response.get("errors") - - response = graphql_asgi_run("query { storage }") - assert response.status == 200 - response = json.loads(response.body.decode("utf-8")) - assert not response.get("errors") - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.get("data")) - assert "abc" in str(response.get("data")) - - _test() diff --git a/tests/framework_ariadne/test_wsgi.py b/tests/framework_ariadne/test_wsgi.py deleted file mode 100644 index 9ce2373d4..000000000 --- a/tests/framework_ariadne/test_wsgi.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import pytest -import webtest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - - -@pytest.fixture(scope="session") -def graphql_wsgi_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - from _target_application import _target_wsgi_application - - app = webtest.TestApp(_target_wsgi_application) - - def execute(query): - return app.post_json("/", {"query": query}) - - return execute - - -@dt_enabled -def test_query_and_mutation_wsgi(graphql_wsgi_run): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/Ariadne/None", 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage_add", 1), - ("GraphQL/operation/Ariadne/mutation//storage_add.string", 1), - ] - _test_query_scoped_metrics = [ - ("GraphQL/resolve/Ariadne/storage", 1), - ("GraphQL/operation/Ariadne/query//storage", 1), - ] - _test_unscoped_metrics = [ - ("WebTransaction", 1), - ("Python/WSGI/Response", 1), - ("GraphQL/all", 1), - ("GraphQL/Ariadne/all", 1), - ("GraphQL/allWeb", 1), - ("GraphQL/Ariadne/allWeb", 1), - ] - _test_mutation_unscoped_metrics = _test_unscoped_metrics + _test_mutation_scoped_metrics - _test_query_unscoped_metrics = _test_unscoped_metrics + _test_query_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - 
"graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "StorageAdd", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_query_scoped_metrics, - rollup_metrics=_test_query_unscoped_metrics + FRAMEWORK_METRICS, - ) - @validate_transaction_metrics( - "mutation//storage_add.string", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - index=-2, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes, index=-2) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes, index=-2) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - def _test(): - response = graphql_wsgi_run('mutation { storage_add(string: "abc") { string } }') - assert response.status_code == 200 - response = response.json_body - assert not response.get("errors") - - response = graphql_wsgi_run("query { storage }") - assert response.status_code == 200 - response = response.json_body - assert not response.get("errors") - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.get("data")) - assert "abc" in str(response.get("data")) - - _test() diff --git a/tests/framework_graphene/__init__.py b/tests/framework_graphene/__init__.py new file mode 100644 index 
000000000..8030baccf --- /dev/null +++ b/tests/framework_graphene/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/framework_graphene/_target_application.py b/tests/framework_graphene/_target_application.py index 50acc776f..3f4b23e57 100644 --- a/tests/framework_graphene/_target_application.py +++ b/tests/framework_graphene/_target_application.py @@ -11,150 +11,45 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
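The rewritten `_target_application.py` below replaces the monolithic graphene app with a dict of runners keyed by `"<runner>-<schema>"` plus a shared `check_response` helper that decides, from the query text alone, whether errors are expected. A minimal stdlib sketch of that helper's contract (the `ExecutionResult` namedtuple here is a hypothetical stand-in for graphene's result object):

```python
from collections import namedtuple

# Hypothetical stand-in for graphene's ExecutionResult; the real object
# comes from graphene/graphql-core and has .errors and .data attributes.
ExecutionResult = namedtuple("ExecutionResult", ["errors", "data"])


def check_response(query, response):
    # Queries exercising an error field are expected to fail; everything
    # else must succeed with data and no errors (same logic as the patch).
    if isinstance(query, str) and "error" not in query:
        assert not response.errors, response
        assert response.data
    else:
        assert response.errors, response


ok = ExecutionResult(errors=None, data={"hello": "Hello!"})
bad = ExecutionResult(errors=["Runtime Error!"], data=None)
check_response("{ hello }", ok)   # no errors expected, none present
check_response("{ error }", bad)  # errors expected, errors present
```

Note the string check means any query whose text contains "error" is treated as an expected failure; the shared test suite relies on that naming convention.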
-from graphene import Field, Int, List -from graphene import Mutation as GrapheneMutation -from graphene import NonNull, ObjectType, Schema, String, Union +from ._target_schema_async import target_schema as target_schema_async +from ._target_schema_sync import target_schema as target_schema_sync +from framework_graphene.test_application import GRAPHENE_VERSION -class Author(ObjectType): - first_name = String() - last_name = String() +def check_response(query, response): + if isinstance(query, str) and "error" not in query: + assert not response.errors, response + assert response.data + else: + assert response.errors, response -class Book(ObjectType): - id = Int() - name = String() - isbn = String() - author = Field(Author) - branch = String() +def run_sync(schema): + def _run_sync(query, middleware=None): + response = schema.execute(query, middleware=middleware) + check_response(query, response) + return response.data -class Magazine(ObjectType): - id = Int() - name = String() - issue = Int() - branch = String() + return _run_sync -class Item(Union): - class Meta: - types = (Book, Magazine) +def run_async(schema): + import asyncio + def _run_async(query, middleware=None): + loop = asyncio.get_event_loop() + response = loop.run_until_complete(schema.execute_async(query, middleware=middleware)) + check_response(query, response) -class Library(ObjectType): - id = Int() - branch = String() - magazine = Field(List(Magazine)) - book = Field(List(Book)) + return response.data + return _run_async -Storage = List(String) +target_application = { + "sync-sync": run_sync(target_schema_sync), + "async-sync": run_async(target_schema_sync), + "async-async": run_async(target_schema_async), + } -authors = [ - Author( - first_name="New", - last_name="Relic", - ), - Author( - first_name="Bob", - last_name="Smith", - ), - Author( - first_name="Leslie", - last_name="Jones", - ), -] - -books = [ - Book( - id=1, - name="Python Agent: The Book", - isbn="a-fake-isbn", - author=authors[0], 
- branch="riverside", - ), - Book( - id=2, - name="Ollies for O11y: A Sk8er's Guide to Observability", - isbn="a-second-fake-isbn", - author=authors[1], - branch="downtown", - ), - Book( - id=3, - name="[Redacted]", - isbn="a-third-fake-isbn", - author=authors[2], - branch="riverside", - ), -] - -magazines = [ - Magazine(id=1, name="Reli Updates Weekly", issue=1, branch="riverside"), - Magazine(id=2, name="Reli Updates Weekly", issue=2, branch="downtown"), - Magazine(id=3, name="Node Weekly", issue=1, branch="riverside"), -] - - -libraries = ["riverside", "downtown"] -libraries = [ - Library( - id=i + 1, - branch=branch, - magazine=[m for m in magazines if m.branch == branch], - book=[b for b in books if b.branch == branch], - ) - for i, branch in enumerate(libraries) -] - -storage = [] - - -class StorageAdd(GrapheneMutation): - class Arguments: - string = String(required=True) - - string = String() - - def mutate(self, info, string): - storage.append(string) - return String(string=string) - - -class Query(ObjectType): - library = Field(Library, index=Int(required=True)) - hello = String() - search = Field(List(Item), contains=String(required=True)) - echo = Field(String, echo=String(required=True)) - storage = Storage - error = String() - - def resolve_library(self, info, index): - return libraries[index] - - def resolve_storage(self, info): - return storage - - def resolve_search(self, info, contains): - search_books = [b for b in books if contains in b.name] - search_magazines = [m for m in magazines if contains in m.name] - return search_books + search_magazines - - def resolve_hello(self, info): - return "Hello!" 
- - def resolve_echo(self, info, echo): - return echo - - def resolve_error(self, info): - raise RuntimeError("Runtime Error!") - - error_non_null = Field(NonNull(String), resolver=resolve_error) - - -class Mutation(ObjectType): - storage_add = StorageAdd.Field() - - -_target_application = Schema(query=Query, mutation=Mutation, auto_camelcase=False) diff --git a/tests/framework_graphene/_target_schema_async.py b/tests/framework_graphene/_target_schema_async.py new file mode 100644 index 000000000..39905f2f9 --- /dev/null +++ b/tests/framework_graphene/_target_schema_async.py @@ -0,0 +1,72 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +from graphene import Field, Int, List +from graphene import Mutation as GrapheneMutation +from graphene import NonNull, ObjectType, Schema, String, Union + +from ._target_schema_sync import Author, Book, Magazine, Item, Library, Storage, authors, books, magazines, libraries + + +storage = [] + + +async def resolve_library(self, info, index): + return libraries[index] + +async def resolve_storage(self, info): + return [storage.pop()] + +async def resolve_search(self, info, contains): + search_books = [b for b in books if contains in b.name] + search_magazines = [m for m in magazines if contains in m.name] + return search_books + search_magazines + +async def resolve_hello(self, info): + return "Hello!" 
+ +async def resolve_echo(self, info, echo): + return echo + +async def resolve_error(self, info): + raise RuntimeError("Runtime Error!") + +async def resolve_storage_add(self, info, string): + storage.append(string) + return StorageAdd(string=string) + + +class StorageAdd(GrapheneMutation): + class Arguments: + string = String(required=True) + + string = String() + mutate = resolve_storage_add + + +class Query(ObjectType): + library = Field(Library, index=Int(required=True), resolver=resolve_library) + hello = String(resolver=resolve_hello) + search = Field(List(Item), contains=String(required=True), resolver=resolve_search) + echo = Field(String, echo=String(required=True), resolver=resolve_echo) + storage = Field(Storage, resolver=resolve_storage) + error = String(resolver=resolve_error) + error_non_null = Field(NonNull(String), resolver=resolve_error) + error_middleware = String(resolver=resolve_hello) + + +class Mutation(ObjectType): + storage_add = StorageAdd.Field() + + +target_schema = Schema(query=Query, mutation=Mutation, auto_camelcase=False) diff --git a/tests/framework_graphene/_target_schema_sync.py b/tests/framework_graphene/_target_schema_sync.py new file mode 100644 index 000000000..b59179065 --- /dev/null +++ b/tests/framework_graphene/_target_schema_sync.py @@ -0,0 +1,162 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
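The graphene `run_async` helper above drives `schema.execute_async(...)` to completion from synchronous test code via `loop.run_until_complete`. A self-contained sketch of that same wrapper pattern, with a hypothetical `execute_async` coroutine in place of the schema call; note that on Python 3.10+ `asyncio.run` is the non-deprecated way to get the behavior `get_event_loop().run_until_complete` provides here:

```python
import asyncio


async def execute_async(query):
    # Hypothetical coroutine standing in for schema.execute_async(query).
    await asyncio.sleep(0)
    return {"data": {"echo": query}}


def run_async(query):
    # Same shape as the patch's _run_async: synchronously drive the
    # coroutine to completion. asyncio.run creates and closes a fresh
    # event loop, avoiding the deprecated get_event_loop pattern.
    return asyncio.run(execute_async(query))


result = run_async("{ echo }")
# result is {"data": {"echo": "{ echo }"}}
```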
+from graphene import Field, Int, List +from graphene import Mutation as GrapheneMutation +from graphene import NonNull, ObjectType, Schema, String, Union + + +class Author(ObjectType): + first_name = String() + last_name = String() + + +class Book(ObjectType): + id = Int() + name = String() + isbn = String() + author = Field(Author) + branch = String() + + +class Magazine(ObjectType): + id = Int() + name = String() + issue = Int() + branch = String() + + +class Item(Union): + class Meta: + types = (Book, Magazine) + + +class Library(ObjectType): + id = Int() + branch = String() + magazine = Field(List(Magazine)) + book = Field(List(Book)) + + +Storage = List(String) + + +authors = [ + Author( + first_name="New", + last_name="Relic", + ), + Author( + first_name="Bob", + last_name="Smith", + ), + Author( + first_name="Leslie", + last_name="Jones", + ), +] + +books = [ + Book( + id=1, + name="Python Agent: The Book", + isbn="a-fake-isbn", + author=authors[0], + branch="riverside", + ), + Book( + id=2, + name="Ollies for O11y: A Sk8er's Guide to Observability", + isbn="a-second-fake-isbn", + author=authors[1], + branch="downtown", + ), + Book( + id=3, + name="[Redacted]", + isbn="a-third-fake-isbn", + author=authors[2], + branch="riverside", + ), +] + +magazines = [ + Magazine(id=1, name="Reli Updates Weekly", issue=1, branch="riverside"), + Magazine(id=2, name="Reli Updates Weekly", issue=2, branch="downtown"), + Magazine(id=3, name="Node Weekly", issue=1, branch="riverside"), +] + + +libraries = ["riverside", "downtown"] +libraries = [ + Library( + id=i + 1, + branch=branch, + magazine=[m for m in magazines if m.branch == branch], + book=[b for b in books if b.branch == branch], + ) + for i, branch in enumerate(libraries) +] + +storage = [] + + +def resolve_library(self, info, index): + return libraries[index] + +def resolve_storage(self, info): + return [storage.pop()] + +def resolve_search(self, info, contains): + search_books = [b for b in books if contains in 
b.name] + search_magazines = [m for m in magazines if contains in m.name] + return search_books + search_magazines + +def resolve_hello(self, info): + return "Hello!" + +def resolve_echo(self, info, echo): + return echo + +def resolve_error(self, info): + raise RuntimeError("Runtime Error!") + +def resolve_storage_add(self, info, string): + storage.append(string) + return StorageAdd(string=string) + + +class StorageAdd(GrapheneMutation): + class Arguments: + string = String(required=True) + + string = String() + mutate = resolve_storage_add + + +class Query(ObjectType): + library = Field(Library, index=Int(required=True), resolver=resolve_library) + hello = String(resolver=resolve_hello) + search = Field(List(Item), contains=String(required=True), resolver=resolve_search) + echo = Field(String, echo=String(required=True), resolver=resolve_echo) + storage = Field(Storage, resolver=resolve_storage) + error = String(resolver=resolve_error) + error_non_null = Field(NonNull(String), resolver=resolve_error) + error_middleware = String(resolver=resolve_hello) + + +class Mutation(ObjectType): + storage_add = StorageAdd.Field() + + +target_schema = Schema(query=Query, mutation=Mutation, auto_camelcase=False) diff --git a/tests/framework_graphene/test_application.py b/tests/framework_graphene/test_application.py index fd02d992a..838f3b515 100644 --- a/tests/framework_graphene/test_application.py +++ b/tests/framework_graphene/test_application.py @@ -13,518 +13,25 @@ # limitations under the License. 
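The rewritten `test_application.py` below delegates all test bodies to the shared `framework_graphql` suite and keeps only a parametrized session fixture that picks a runner/schema combination, skips unsupported ones, and derives per-framework flags from the param string. A sketch of that dispatch logic decoupled from pytest (the `Skip` exception here is a hypothetical stand-in for `pytest.skip`):

```python
class Skip(Exception):
    """Stand-in for pytest.skip, which raises to abort the test."""


# Maps "<runner>-<schema>" to a runner callable, as in the patch.
target_application = {
    "sync-sync": lambda q: "sync runner, sync schema",
    "async-sync": lambda q: "async runner, sync schema",
    "async-async": lambda q: "async runner, async schema",
}


def select(param):
    app = target_application.get(param)
    if app is None:
        raise Skip("Unsupported combination.")
    runner, schema_type = param.split("-")
    # WSGI/ASGI runners produce web transactions; everything else is a
    # background task. WSGI adds 4 framework spans in these tests.
    is_background = runner not in {"wsgi", "asgi"}
    extra_spans = 4 if runner == "wsgi" else 0
    return app, is_background, schema_type, extra_spans


app, is_background, schema_type, extra_spans = select("async-sync")
```

The real fixture returns the framework name and version alongside these values so the shared tests can build metric names per framework.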
import pytest -import six -from testing_support.fixtures import dt_enabled, override_application_settings -from testing_support.validators.validate_span_events import validate_span_events -from testing_support.validators.validate_transaction_count import ( - validate_transaction_count, -) -from testing_support.validators.validate_transaction_errors import ( - validate_transaction_errors, -) -from testing_support.validators.validate_transaction_metrics import ( - validate_transaction_metrics, -) -from newrelic.api.background_task import background_task -from newrelic.common.object_names import callable_name +from framework_graphql.test_application import * +from newrelic.common.package_version_utils import get_package_version +GRAPHENE_VERSION = get_package_version("graphene") -@pytest.fixture(scope="session") -def is_graphql_2(): - from graphql import __version__ as version - major_version = int(version.split(".")[0]) - return major_version == 2 +@pytest.fixture(scope="session", params=["sync-sync", "async-sync", "async-async"]) +def target_application(request): + from ._target_application import target_application + target_application = target_application.get(request.param, None) + if target_application is None: + pytest.skip("Unsupported combination.") + return -@pytest.fixture(scope="session") -def graphql_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - - def execute(schema, *args, **kwargs): - return schema.execute(*args, **kwargs) - - return execute - - -def to_graphql_source(query): - def delay_import(): - try: - from graphql import Source - except ImportError: - # Fallback if Source is not implemented - return query - - from graphql import __version__ as version - - # For graphql2, Source objects aren't acceptable input - major_version = int(version.split(".")[0]) - if major_version == 2: - return query - - return Source(query) - - return delay_import - - -def example_middleware(next, root, info, **args): # pylint: 
disable=W0622 - return_value = next(root, info, **args) - return return_value - - -def error_middleware(next, root, info, **args): # pylint: disable=W0622 - raise RuntimeError("Runtime Error!") - - -_runtime_error_name = callable_name(RuntimeError) -_test_runtime_error = [(_runtime_error_name, "Runtime Error!")] -_graphql_base_rollup_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 1), - ("GraphQL/allOther", 1), - ("GraphQL/Graphene/all", 1), - ("GraphQL/Graphene/allOther", 1), -] - - -def test_basic(app, graphql_run): - from graphql import __version__ as version - - from newrelic.hooks.framework_graphene import framework_details - - FRAMEWORK_METRICS = [ - ("Python/Framework/Graphene/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - rollup_metrics=_graphql_base_rollup_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @background_task() - def _test(): - response = graphql_run(app, "{ hello }") - assert not response.errors - - _test() - - -@dt_enabled -def test_query_and_mutation(app, graphql_run): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Graphene/storage", 1), - ("GraphQL/resolve/Graphene/storage_add", 1), - ("GraphQL/operation/Graphene/query//storage", 1), - ("GraphQL/operation/Graphene/mutation//storage_add.string", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/Graphene/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/Graphene/allOther", 2), - ] + _test_mutation_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - 
"graphql.field.path": "storage_add", - "graphql.field.returnType": "StorageAdd", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - response = graphql_run(app, 'mutation { storage_add(string: "abc") { string } }') - assert not response.errors - response = graphql_run(app, "query { storage }") - assert not response.errors - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.data) - assert "abc" in str(response.data) - - _test() - - -@dt_enabled -def test_middleware(app, graphql_run, is_graphql_2): - _test_middleware_metrics = [ - ("GraphQL/operation/Graphene/query//hello", 1), - ("GraphQL/resolve/Graphene/hello", 1), - ("Function/test_application:example_middleware", 1), - ] - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - scoped_metrics=_test_middleware_metrics, - rollup_metrics=_test_middleware_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 5: Transaction, Operation, Middleware, and 1 Resolver and 1 Resolver Function - @validate_span_events(count=5) - @background_task() - def _test(): - response = 
graphql_run(app, "{ hello }", middleware=[example_middleware]) - assert not response.errors - assert "Hello!" in str(response.data) - - _test() - - -@dt_enabled -def test_exception_in_middleware(app, graphql_run): - query = "query MyQuery { hello }" - field = "hello" - - # Metrics - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Graphene/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/Graphene/%s" % field, 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/test_application:error_middleware", 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_resolver_attributes = { - "graphql.field.name": field, - "graphql.field.parentType": "Query", - "graphql.field.path": field, - "graphql.field.returnType": "String", - } - _expected_exception_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - "test_application:error_middleware", - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_span_events(exact_agents=_expected_exception_resolver_attributes) - @validate_transaction_errors(errors=_test_runtime_error) - @background_task() - def _test(): - response = graphql_run(app, query, middleware=[error_middleware]) - assert response.errors - - _test() - - -@pytest.mark.parametrize("field", ("error", "error_non_null")) -@dt_enabled -def test_exception_in_resolver(app, graphql_run, field): - query = "query MyQuery { %s }" % field - - if six.PY2: - txn_name = "_target_application:resolve_error" - else: - txn_name = "_target_application:Query.resolve_error" - - # Metrics - _test_exception_scoped_metrics = [ - 
("GraphQL/operation/Graphene/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/Graphene/%s" % field, 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_resolver_attributes = { - "graphql.field.name": field, - "graphql.field.parentType": "Query", - "graphql.field.path": field, - "graphql.field.returnType": "String!" if "non_null" in field else "String", - } - _expected_exception_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_span_events(exact_agents=_expected_exception_resolver_attributes) - @validate_transaction_errors(errors=_test_runtime_error) - @background_task() - def _test(): - response = graphql_run(app, query) - assert response.errors - - _test() - - -@dt_enabled -@pytest.mark.parametrize( - "query,exc_class", - [ - ("query MyQuery { missing_field }", "GraphQLError"), - ("{ syntax_error ", "graphql.error.syntax_error:GraphQLSyntaxError"), - ], -) -def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_class): - if "syntax" in query: - txn_name = "graphql.language.parser:parse" - else: - if is_graphql_2: - txn_name = "graphql.validation.validation:validate" - else: - txn_name = "graphql.validation.validate:validate" - - # Import path differs between versions - if exc_class == "GraphQLError": - from graphql.error import GraphQLError - - exc_class = callable_name(GraphQLError) - - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Graphene///", 1), - ] - 
_test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_operation_attributes = { - "graphql.operation.type": "", - "graphql.operation.name": "", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_transaction_errors(errors=[exc_class]) - @background_task() - def _test(): - response = graphql_run(app, query) - assert response.errors - - _test() - - -@dt_enabled -def test_operation_metrics_and_attrs(app, graphql_run): - operation_metrics = [("GraphQL/operation/Graphene/query/MyQuery/library", 1)] - operation_attrs = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - } - - @validate_transaction_metrics( - "query/MyQuery/library", - "GraphQL", - scoped_metrics=operation_metrics, - rollup_metrics=operation_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 16: Transaction, Operation, and 7 Resolvers and Resolver functions - # library, library.name, library.book - # library.book.name and library.book.id for each book resolved (in this case 2) - @validate_span_events(count=16) - @validate_span_events(exact_agents=operation_attrs) - @background_task() - def _test(): - response = graphql_run(app, "query MyQuery { library(index: 0) { branch, book { id, name } } }") - assert not response.errors - - _test() - - -@dt_enabled -def test_field_resolver_metrics_and_attrs(app, graphql_run): - field_resolver_metrics = [("GraphQL/resolve/Graphene/hello", 1)] - graphql_attrs = { - "graphql.field.name": "hello", - "graphql.field.parentType": "Query", - "graphql.field.path": 
"hello", - "graphql.field.returnType": "String", - } - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - scoped_metrics=field_resolver_metrics, - rollup_metrics=field_resolver_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 4: Transaction, Operation, and 1 Resolver and Resolver function - @validate_span_events(count=4) - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - response = graphql_run(app, "{ hello }") - assert not response.errors - assert "Hello!" in str(response.data) - - _test() - - -_test_queries = [ - ("{ hello }", "{ hello }"), # Basic query extraction - ("{ error }", "{ error }"), # Extract query on field error - (to_graphql_source("{ hello }"), "{ hello }"), # Extract query from Source objects - ("{ library(index: 0) { branch } }", "{ library(index: ?) { branch } }"), # Integers - ('{ echo(echo: "123") }', "{ echo(echo: ?) }"), # Strings with numerics - ('{ echo(echo: "test") }', "{ echo(echo: ?) }"), # Strings - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Aliases - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Variables - ( # Fragments - '{ ...MyFragment } fragment MyFragment on Query { echo(echo: "test") }', - "{ ...MyFragment } fragment MyFragment on Query { echo(echo: ?) 
}", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,obfuscated", _test_queries) -def test_query_obfuscation(app, graphql_run, query, obfuscated): - graphql_attrs = {"graphql.operation.query": obfuscated} - - if callable(query): - query = query() - - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - response = graphql_run(app, query) - if not isinstance(query, str) or "error" not in query: - assert not response.errors - - _test() - - -_test_queries = [ - ("{ hello }", "/hello"), # Basic query - ("{ error }", "/error"), # Extract deepest path on field error - ('{ echo(echo: "test") }', "/echo"), # Fields with arguments - ( - "{ library(index: 0) { branch, book { isbn branch } } }", - "/library", - ), # Complex Example, 1 level - ( - "{ library(index: 0) { book { author { first_name }} } }", - "/library.book.author.first_name", - ), # Complex Example, 2 levels - ("{ library(index: 0) { id, book { name } } }", "/library.book.name"), # Filtering - ('{ TestEcho: echo(echo: "test") }', "/echo"), # Aliases - ( - '{ search(contains: "A") { __typename ... on Book { name } } }', - "/search.name", - ), # InlineFragment - ( - '{ hello echo(echo: "test") }', - "", - ), # Multiple root selections. 
(need to decide on final behavior) - # FragmentSpread - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { name id }", # Fragment filtering - "/library.book.name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { author { first_name } }", - "/library.book.author.first_name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } magazine { ...MagFragment } } } fragment MyFragment on Book { author { first_name } } fragment MagFragment on Magazine { name }", - "/library", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,expected_path", _test_queries) -def test_deepest_unique_path(app, graphql_run, query, expected_path): - if expected_path == "/error": - if six.PY2: - txn_name = "_target_application:resolve_error" - else: - txn_name = "_target_application:Query.resolve_error" - else: - txn_name = "query/%s" % expected_path - - @validate_transaction_metrics( - txn_name, - "GraphQL", - background_task=True, - ) - @background_task() - def _test(): - response = graphql_run(app, query) - if "error" not in query: - assert not response.errors - - _test() - - -@pytest.mark.parametrize("capture_introspection_setting", (True, False)) -def test_introspection_transactions(app, graphql_run, capture_introspection_setting): - txn_ct = 1 if capture_introspection_setting else 0 - - @override_application_settings( - {"instrumentation.graphql.capture_introspection_queries": capture_introspection_setting} - ) - @validate_transaction_count(txn_ct) - @background_task() - def _test(): - response = graphql_run(app, "{ __schema { types { name } } }") - assert not response.errors - - _test() + param = request.param.split("-") + is_background = param[0] not in {"wsgi", "asgi"} + schema_type = param[1] + extra_spans = 4 if param[0] == "wsgi" else 0 + assert GRAPHENE_VERSION is not None + return "Graphene", GRAPHENE_VERSION, target_application, is_background, schema_type, extra_spans diff --git 
a/tests/framework_graphql/__init__.py b/tests/framework_graphql/__init__.py new file mode 100644 index 000000000..8030baccf --- /dev/null +++ b/tests/framework_graphql/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/framework_graphql/_target_application.py b/tests/framework_graphql/_target_application.py index 7bef5e975..91da5d767 100644 --- a/tests/framework_graphql/_target_application.py +++ b/tests/framework_graphql/_target_application.py @@ -12,228 +12,55 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from graphql import ( - GraphQLArgument, - GraphQLField, - GraphQLInt, - GraphQLList, - GraphQLNonNull, - GraphQLObjectType, - GraphQLSchema, - GraphQLString, - GraphQLUnionType, -) - -authors = [ - { - "first_name": "New", - "last_name": "Relic", - }, - { - "first_name": "Bob", - "last_name": "Smith", - }, - { - "first_name": "Leslie", - "last_name": "Jones", - }, -] - -books = [ - { - "id": 1, - "name": "Python Agent: The Book", - "isbn": "a-fake-isbn", - "author": authors[0], - "branch": "riverside", - }, - { - "id": 2, - "name": "Ollies for O11y: A Sk8er's Guide to Observability", - "isbn": "a-second-fake-isbn", - "author": authors[1], - "branch": "downtown", - }, - { - "id": 3, - "name": "[Redacted]", - "isbn": "a-third-fake-isbn", - "author": authors[2], - "branch": "riverside", - }, -] - -magazines = [ - {"id": 1, "name": "Reli Updates Weekly", "issue": 1, "branch": "riverside"}, - {"id": 2, "name": "Reli Updates Weekly", "issue": 2, "branch": "downtown"}, - {"id": 3, "name": "Node Weekly", "issue": 1, "branch": "riverside"}, -] - - -libraries = ["riverside", "downtown"] -libraries = [ - { - "id": i + 1, - "branch": branch, - "magazine": [m for m in magazines if m["branch"] == branch], - "book": [b for b in books if b["branch"] == branch], - } - for i, branch in enumerate(libraries) -] - -storage = [] - - -def resolve_library(parent, info, index): - return libraries[index] - - -def resolve_storage_add(parent, info, string): - storage.append(string) - return string - - -def resolve_storage(parent, info): - return storage - - -def resolve_search(parent, info, contains): - search_books = [b for b in books if contains in b["name"]] - search_magazines = [m for m in magazines if contains in m["name"]] - return search_books + search_magazines - - -Author = GraphQLObjectType( - "Author", - { - "first_name": GraphQLField(GraphQLString), - "last_name": GraphQLField(GraphQLString), - }, -) - -Book = GraphQLObjectType( - "Book", - { - "id": GraphQLField(GraphQLInt), - 
"name": GraphQLField(GraphQLString), - "isbn": GraphQLField(GraphQLString), - "author": GraphQLField(Author), - "branch": GraphQLField(GraphQLString), - }, -) - -Magazine = GraphQLObjectType( - "Magazine", - { - "id": GraphQLField(GraphQLInt), - "name": GraphQLField(GraphQLString), - "issue": GraphQLField(GraphQLInt), - "branch": GraphQLField(GraphQLString), - }, -) - - -Library = GraphQLObjectType( - "Library", - { - "id": GraphQLField(GraphQLInt), - "branch": GraphQLField(GraphQLString), - "book": GraphQLField(GraphQLList(Book)), - "magazine": GraphQLField(GraphQLList(Magazine)), - }, -) - -Storage = GraphQLList(GraphQLString) - - -def resolve_hello(root, info): - return "Hello!" - - -def resolve_echo(root, info, echo): - return echo - - -def resolve_error(root, info): - raise RuntimeError("Runtime Error!") - - -try: - hello_field = GraphQLField(GraphQLString, resolver=resolve_hello) - library_field = GraphQLField( - Library, - resolver=resolve_library, - args={"index": GraphQLArgument(GraphQLNonNull(GraphQLInt))}, - ) - search_field = GraphQLField( - GraphQLList(GraphQLUnionType("Item", (Book, Magazine), resolve_type=resolve_search)), - args={"contains": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - echo_field = GraphQLField( - GraphQLString, - resolver=resolve_echo, - args={"echo": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - storage_field = GraphQLField( - Storage, - resolver=resolve_storage, - ) - storage_add_field = GraphQLField( - Storage, - resolver=resolve_storage_add, - args={"string": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - error_field = GraphQLField(GraphQLString, resolver=resolve_error) - error_non_null_field = GraphQLField(GraphQLNonNull(GraphQLString), resolver=resolve_error) - error_middleware_field = GraphQLField(GraphQLString, resolver=resolve_hello) -except TypeError: - hello_field = GraphQLField(GraphQLString, resolve=resolve_hello) - library_field = GraphQLField( - Library, - resolve=resolve_library, - 
args={"index": GraphQLArgument(GraphQLNonNull(GraphQLInt))}, - ) - search_field = GraphQLField( - GraphQLList(GraphQLUnionType("Item", (Book, Magazine), resolve_type=resolve_search)), - args={"contains": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - echo_field = GraphQLField( - GraphQLString, - resolve=resolve_echo, - args={"echo": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - storage_field = GraphQLField( - Storage, - resolve=resolve_storage, - ) - storage_add_field = GraphQLField( - GraphQLString, - resolve=resolve_storage_add, - args={"string": GraphQLArgument(GraphQLNonNull(GraphQLString))}, - ) - error_field = GraphQLField(GraphQLString, resolve=resolve_error) - error_non_null_field = GraphQLField(GraphQLNonNull(GraphQLString), resolve=resolve_error) - error_middleware_field = GraphQLField(GraphQLString, resolve=resolve_hello) - -query = GraphQLObjectType( - name="Query", - fields={ - "hello": hello_field, - "library": library_field, - "search": search_field, - "echo": echo_field, - "storage": storage_field, - "error": error_field, - "error_non_null": error_non_null_field, - "error_middleware": error_middleware_field, - }, -) - -mutation = GraphQLObjectType( - name="Mutation", - fields={ - "storage_add": storage_add_field, - }, -) - -_target_application = GraphQLSchema(query=query, mutation=mutation) +from graphql.language.source import Source + +from ._target_schema_async import target_schema as target_schema_async +from ._target_schema_sync import target_schema as target_schema_sync + + +def check_response(query, response): + if isinstance(query, str) and "error" not in query or isinstance(query, Source) and "error" not in query.body: + assert not response.errors, response.errors + assert response.data + else: + assert response.errors + + +def run_sync(schema): + def _run_sync(query, middleware=None): + try: + from graphql import graphql_sync as graphql + except ImportError: + from graphql import graphql + + response = graphql(schema, 
query, middleware=middleware) + + check_response(query, response) + + return response.data + + return _run_sync + + +def run_async(schema): + import asyncio + + from graphql import graphql + + def _run_async(query, middleware=None): + coro = graphql(schema, query, middleware=middleware) + loop = asyncio.get_event_loop() + response = loop.run_until_complete(coro) + + check_response(query, response) + + return response.data + + return _run_async + + +target_application = { + "sync-sync": run_sync(target_schema_sync), + "async-sync": run_async(target_schema_sync), + "async-async": run_async(target_schema_async), +} diff --git a/tests/framework_graphql/_target_schema_async.py b/tests/framework_graphql/_target_schema_async.py new file mode 100644 index 000000000..aad4eb271 --- /dev/null +++ b/tests/framework_graphql/_target_schema_async.py @@ -0,0 +1,155 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
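Aside on the pattern above: the patch replaces the old `graphql_run` fixture with `run_sync`/`run_async` factories, where `run_async` drives graphql-core's `graphql()` coroutine to completion on an event loop and returns plain response data. A minimal sketch of that driver pattern, using a stand-in coroutine in place of graphql-core's `graphql()` so it runs standalone (the `fake_graphql` name and its response shape are illustrative, not from the patch):

```python
import asyncio


async def fake_graphql(schema, query, middleware=None):
    # Stand-in for graphql-core's async graphql() entry point;
    # resolves immediately with a canned response.
    return {"data": {"hello": "Hello!"}, "errors": None}


def run_async(schema):
    # Mirror of the factory in the patch: close over the schema and
    # return a *synchronous* callable that tests can invoke directly.
    def _run_async(query, middleware=None):
        coro = fake_graphql(schema, query, middleware=middleware)
        loop = asyncio.new_event_loop()
        try:
            response = loop.run_until_complete(coro)
        finally:
            loop.close()
        return response["data"]

    return _run_async


runner = run_async(schema=None)
result = runner("{ hello }")
```

The patch itself uses `asyncio.get_event_loop()`, which is fine inside a pytest process; the sketch creates and closes a fresh loop per call, which avoids deprecation warnings on newer Python versions.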
+ +from graphql import ( + GraphQLArgument, + GraphQLField, + GraphQLInt, + GraphQLList, + GraphQLNonNull, + GraphQLObjectType, + GraphQLSchema, + GraphQLString, + GraphQLUnionType, +) + +from ._target_schema_sync import books, libraries, magazines + +storage = [] + + +async def resolve_library(parent, info, index): + return libraries[index] + + +async def resolve_storage_add(parent, info, string): + storage.append(string) + return string + + +async def resolve_storage(parent, info): + return [storage.pop()] + + +async def resolve_search(parent, info, contains): + search_books = [b for b in books if contains in b["name"]] + search_magazines = [m for m in magazines if contains in m["name"]] + return search_books + search_magazines + + +Author = GraphQLObjectType( + "Author", + { + "first_name": GraphQLField(GraphQLString), + "last_name": GraphQLField(GraphQLString), + }, +) + +Book = GraphQLObjectType( + "Book", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "isbn": GraphQLField(GraphQLString), + "author": GraphQLField(Author), + "branch": GraphQLField(GraphQLString), + }, +) + +Magazine = GraphQLObjectType( + "Magazine", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "issue": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + }, +) + + +Library = GraphQLObjectType( + "Library", + { + "id": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + "book": GraphQLField(GraphQLList(Book)), + "magazine": GraphQLField(GraphQLList(Magazine)), + }, +) + +Storage = GraphQLList(GraphQLString) + + +async def resolve_hello(root, info): + return "Hello!" 
+ + +async def resolve_echo(root, info, echo): + return echo + + +async def resolve_error(root, info): + raise RuntimeError("Runtime Error!") + + +hello_field = GraphQLField(GraphQLString, resolve=resolve_hello) +library_field = GraphQLField( + Library, + resolve=resolve_library, + args={"index": GraphQLArgument(GraphQLNonNull(GraphQLInt))}, +) +search_field = GraphQLField( + GraphQLList(GraphQLUnionType("Item", (Book, Magazine), resolve_type=resolve_search)), + args={"contains": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +echo_field = GraphQLField( + GraphQLString, + resolve=resolve_echo, + args={"echo": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +storage_field = GraphQLField( + Storage, + resolve=resolve_storage, +) +storage_add_field = GraphQLField( + GraphQLString, + resolve=resolve_storage_add, + args={"string": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +error_field = GraphQLField(GraphQLString, resolve=resolve_error) +error_non_null_field = GraphQLField(GraphQLNonNull(GraphQLString), resolve=resolve_error) +error_middleware_field = GraphQLField(GraphQLString, resolve=resolve_hello) + +query = GraphQLObjectType( + name="Query", + fields={ + "hello": hello_field, + "library": library_field, + "search": search_field, + "echo": echo_field, + "storage": storage_field, + "error": error_field, + "error_non_null": error_non_null_field, + "error_middleware": error_middleware_field, + }, +) + +mutation = GraphQLObjectType( + name="Mutation", + fields={ + "storage_add": storage_add_field, + }, +) + +target_schema = GraphQLSchema(query=query, mutation=mutation) diff --git a/tests/framework_graphql/_target_schema_sync.py b/tests/framework_graphql/_target_schema_sync.py new file mode 100644 index 000000000..302a6c66e --- /dev/null +++ b/tests/framework_graphql/_target_schema_sync.py @@ -0,0 +1,210 @@ +# Copyright 2010 New Relic, Inc. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from graphql import ( + GraphQLArgument, + GraphQLField, + GraphQLInt, + GraphQLList, + GraphQLNonNull, + GraphQLObjectType, + GraphQLSchema, + GraphQLString, + GraphQLUnionType, +) + +authors = [ + { + "first_name": "New", + "last_name": "Relic", + }, + { + "first_name": "Bob", + "last_name": "Smith", + }, + { + "first_name": "Leslie", + "last_name": "Jones", + }, +] + +books = [ + { + "id": 1, + "name": "Python Agent: The Book", + "isbn": "a-fake-isbn", + "author": authors[0], + "branch": "riverside", + }, + { + "id": 2, + "name": "Ollies for O11y: A Sk8er's Guide to Observability", + "isbn": "a-second-fake-isbn", + "author": authors[1], + "branch": "downtown", + }, + { + "id": 3, + "name": "[Redacted]", + "isbn": "a-third-fake-isbn", + "author": authors[2], + "branch": "riverside", + }, +] + +magazines = [ + {"id": 1, "name": "Reli Updates Weekly", "issue": 1, "branch": "riverside"}, + {"id": 2, "name": "Reli Updates Weekly", "issue": 2, "branch": "downtown"}, + {"id": 3, "name": "Node Weekly", "issue": 1, "branch": "riverside"}, +] + + +libraries = ["riverside", "downtown"] +libraries = [ + { + "id": i + 1, + "branch": branch, + "magazine": [m for m in magazines if m["branch"] == branch], + "book": [b for b in books if b["branch"] == branch], + } + for i, branch in enumerate(libraries) +] + +storage = [] + + +def resolve_library(parent, info, index): + return libraries[index] + + +def resolve_storage_add(parent,
info, string): + storage.append(string) + return string + + +def resolve_storage(parent, info): + return [storage.pop()] + + +def resolve_search(parent, info, contains): + search_books = [b for b in books if contains in b["name"]] + search_magazines = [m for m in magazines if contains in m["name"]] + return search_books + search_magazines + + +Author = GraphQLObjectType( + "Author", + { + "first_name": GraphQLField(GraphQLString), + "last_name": GraphQLField(GraphQLString), + }, +) + +Book = GraphQLObjectType( + "Book", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "isbn": GraphQLField(GraphQLString), + "author": GraphQLField(Author), + "branch": GraphQLField(GraphQLString), + }, +) + +Magazine = GraphQLObjectType( + "Magazine", + { + "id": GraphQLField(GraphQLInt), + "name": GraphQLField(GraphQLString), + "issue": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + }, +) + + +Library = GraphQLObjectType( + "Library", + { + "id": GraphQLField(GraphQLInt), + "branch": GraphQLField(GraphQLString), + "book": GraphQLField(GraphQLList(Book)), + "magazine": GraphQLField(GraphQLList(Magazine)), + }, +) + +Storage = GraphQLList(GraphQLString) + + +def resolve_hello(root, info): + return "Hello!" 
+ + +def resolve_echo(root, info, echo): + return echo + + +def resolve_error(root, info): + raise RuntimeError("Runtime Error!") + + +hello_field = GraphQLField(GraphQLString, resolve=resolve_hello) +library_field = GraphQLField( + Library, + resolve=resolve_library, + args={"index": GraphQLArgument(GraphQLNonNull(GraphQLInt))}, +) +search_field = GraphQLField( + GraphQLList(GraphQLUnionType("Item", (Book, Magazine), resolve_type=resolve_search)), + args={"contains": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +echo_field = GraphQLField( + GraphQLString, + resolve=resolve_echo, + args={"echo": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +storage_field = GraphQLField( + Storage, + resolve=resolve_storage, +) +storage_add_field = GraphQLField( + GraphQLString, + resolve=resolve_storage_add, + args={"string": GraphQLArgument(GraphQLNonNull(GraphQLString))}, +) +error_field = GraphQLField(GraphQLString, resolve=resolve_error) +error_non_null_field = GraphQLField(GraphQLNonNull(GraphQLString), resolve=resolve_error) +error_middleware_field = GraphQLField(GraphQLString, resolve=resolve_hello) + +query = GraphQLObjectType( + name="Query", + fields={ + "hello": hello_field, + "library": library_field, + "search": search_field, + "echo": echo_field, + "storage": storage_field, + "error": error_field, + "error_non_null": error_non_null_field, + "error_middleware": error_middleware_field, + }, +) + +mutation = GraphQLObjectType( + name="Mutation", + fields={ + "storage_add": storage_add_field, + }, +) + +target_schema = GraphQLSchema(query=query, mutation=mutation) diff --git a/tests/framework_graphql/conftest.py b/tests/framework_graphql/conftest.py index 4d9e06758..5302da2b8 100644 --- a/tests/framework_graphql/conftest.py +++ b/tests/framework_graphql/conftest.py @@ -13,10 +13,12 @@ # limitations under the License. 
import pytest -import six - -from testing_support.fixtures import collector_agent_registration_fixture, collector_available_fixture # noqa: F401; pylint: disable=W0611 +from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + collector_agent_registration_fixture, + collector_available_fixture, +) +from newrelic.packages import six _default_settings = { "transaction_tracer.explain_threshold": 0.0, @@ -32,11 +34,16 @@ ) -@pytest.fixture(scope="session") -def app(): - from _target_application import _target_application +@pytest.fixture(scope="session", params=["sync-sync", "async-sync", "async-async"]) +def target_application(request): + from ._target_application import target_application + + app = target_application.get(request.param, None) + if app is None: + pytest.skip("Unsupported combination.") + return - return _target_application + return "GraphQL", None, app, True, request.param.split("-")[1], 0 if six.PY2: diff --git a/tests/framework_graphql/test_application.py b/tests/framework_graphql/test_application.py index dd49ee37f..b5d78699d 100644 --- a/tests/framework_graphql/test_application.py +++ b/tests/framework_graphql/test_application.py @@ -13,6 +13,10 @@ # limitations under the License. 
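The conftest change above fans one session fixture out across the `"sync-sync"`, `"async-sync"`, and `"async-async"` runner/schema combinations and returns a 6-tuple of test parameters, while the graphene conftest decodes richer param strings like `"wsgi-sync"`. A small illustrative helper (not part of the patch; the name `decode_param` is invented) showing how that string is decoded:

```python
def decode_param(param_string):
    # Mirrors the parsing in the graphene conftest: params are
    # "<runner>-<schema_type>", e.g. "wsgi-sync" or "async-async".
    runner, schema_type = param_string.split("-")
    # Only wsgi/asgi runners execute inside a web transaction.
    is_background = runner not in {"wsgi", "asgi"}
    # The wsgi harness contributes 4 extra framework spans per request.
    extra_spans = 4 if runner == "wsgi" else 0
    return is_background, schema_type, extra_spans
```

Combinations absent from the `target_application` dict are skipped with `pytest.skip`, so adding a new runner only requires registering it in the dict.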
import pytest +from framework_graphql.test_application_async import ( + error_middleware_async, + example_middleware_async, +) from testing_support.fixtures import dt_enabled, override_application_settings from testing_support.validators.validate_code_level_metrics import ( validate_code_level_metrics, @@ -30,24 +34,18 @@ from newrelic.api.background_task import background_task from newrelic.common.object_names import callable_name +from newrelic.common.package_version_utils import get_package_version -@pytest.fixture(scope="session") -def is_graphql_2(): - from graphql import __version__ as version - - major_version = int(version.split(".")[0]) - return major_version == 2 +graphql_version = get_package_version("graphql-core") +def conditional_decorator(decorator, condition): + def _conditional_decorator(func): + if not condition: + return func + return decorator(func) -@pytest.fixture(scope="session") -def graphql_run(): - try: - from graphql import graphql_sync as graphql - except ImportError: - from graphql import graphql - - return graphql + return _conditional_decorator def to_graphql_source(query): @@ -58,13 +56,6 @@ def delay_import(): # Fallback if Source is not implemented return query - from graphql import __version__ as version - - # For graphql2, Source objects aren't acceptable input - major_version = int(version.split(".")[0]) - if major_version == 2: - return query - return Source(query) return delay_import @@ -79,66 +70,86 @@ def error_middleware(next, root, info, **args): raise RuntimeError("Runtime Error!") -def test_no_harm_no_transaction(app, graphql_run): +def test_no_harm_no_transaction(target_application): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + def _test(): - response = graphql_run(app, "{ __schema { types { name } } }") - assert not response.errors + response = target_application("{ __schema { types { name } } }") _test() +example_middleware = [example_middleware] +error_middleware = 
[error_middleware] + +example_middleware.append(example_middleware_async) +error_middleware.append(error_middleware_async) + _runtime_error_name = callable_name(RuntimeError) _test_runtime_error = [(_runtime_error_name, "Runtime Error!")] -_graphql_base_rollup_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 1), - ("GraphQL/allOther", 1), - ("GraphQL/GraphQL/all", 1), - ("GraphQL/GraphQL/allOther", 1), -] -def test_basic(app, graphql_run): - from graphql import __version__ as version +def _graphql_base_rollup_metrics(framework, version, background_task=True): + graphql_version = get_package_version("graphql-core") - FRAMEWORK_METRICS = [ - ("Python/Framework/GraphQL/%s" % version, 1), + metrics = [ + ("Python/Framework/GraphQL/%s" % graphql_version, 1), + ("GraphQL/all", 1), + ("GraphQL/%s/all" % framework, 1), ] + if background_task: + metrics.extend( + [ + ("GraphQL/allOther", 1), + ("GraphQL/%s/allOther" % framework, 1), + ] + ) + else: + metrics.extend( + [ + ("GraphQL/allWeb", 1), + ("GraphQL/%s/allWeb" % framework, 1), + ] + ) + + if framework != "GraphQL": + metrics.append(("Python/Framework/%s/%s" % (framework, version), 1)) + + return metrics + + +def test_basic(target_application): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application @validate_transaction_metrics( "query//hello", "GraphQL", - rollup_metrics=_graphql_base_rollup_metrics + FRAMEWORK_METRICS, - background_task=True, + rollup_metrics=_graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, "{ hello }") - assert not response.errors + response = target_application("{ hello }") + assert response["hello"] == "Hello!" 
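The `conditional_decorator` helper introduced above lets the same test body run either as a background task or inside a web transaction, applying `background_task()` only when `is_bg` is true. A self-contained sketch of the same helper, with a toy decorator standing in for New Relic's `background_task()` (the `shout` decorator and `greet_*` functions are illustrative only):

```python
def conditional_decorator(decorator, condition):
    # Apply `decorator` to the function only when `condition` is true;
    # otherwise return the function unchanged.
    def _conditional_decorator(func):
        if not condition:
            return func
        return decorator(func)

    return _conditional_decorator


def shout(func):
    # Toy decorator standing in for newrelic's background_task().
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()

    return wrapper


@conditional_decorator(shout, True)
def greet_bg():
    return "hello"


@conditional_decorator(shout, False)
def greet_web():
    return "hello"
```

With `condition=False` the function object is returned untouched, so no wrapper frame appears in traces for the web-transaction case.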
_test() @dt_enabled -def test_query_and_mutation(app, graphql_run, is_graphql_2): - from graphql import __version__ as version +def test_query_and_mutation(target_application): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + + mutation_path = "storage_add" if framework != "Graphene" else "storage_add.string" + type_annotation = "!" if framework == "Strawberry" else "" - FRAMEWORK_METRICS = [ - ("Python/Framework/GraphQL/%s" % version, 1), - ] _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/GraphQL/storage", 1), - ("GraphQL/resolve/GraphQL/storage_add", 1), - ("GraphQL/operation/GraphQL/query//storage", 1), - ("GraphQL/operation/GraphQL/mutation//storage_add", 1), + ("GraphQL/resolve/%s/storage_add" % framework, 1), + ("GraphQL/operation/%s/mutation//%s" % (framework, mutation_path), 1), + ] + _test_query_scoped_metrics = [ + ("GraphQL/resolve/%s/storage" % framework, 1), + ("GraphQL/operation/%s/query//storage" % framework, 1), ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/GraphQL/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/GraphQL/allOther", 2), - ] + _test_mutation_scoped_metrics _expected_mutation_operation_attributes = { "graphql.operation.type": "mutation", @@ -148,7 +159,7 @@ def test_query_and_mutation(app, graphql_run, is_graphql_2): "graphql.field.name": "storage_add", "graphql.field.parentType": "Mutation", "graphql.field.path": "storage_add", - "graphql.field.returnType": "[String]" if is_graphql_2 else "String", + "graphql.field.returnType": ("String" if framework != "Graphene" else "StorageAdd") + type_annotation, } _expected_query_operation_attributes = { "graphql.operation.type": "query", @@ -158,78 +169,108 @@ def test_query_and_mutation(app, graphql_run, is_graphql_2): "graphql.field.name": "storage", "graphql.field.parentType": "Query", "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", + 
"graphql.field.returnType": "[String%s]%s" % (type_annotation, type_annotation), } - @validate_code_level_metrics("_target_application", "resolve_storage") - @validate_code_level_metrics("_target_application", "resolve_storage_add") + @validate_code_level_metrics( + "framework_%s._target_schema_%s" % (framework.lower(), schema_type), "resolve_storage_add" + ) + @validate_span_events(exact_agents=_expected_mutation_operation_attributes) + @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) @validate_transaction_metrics( - "query//storage", + "mutation//%s" % mutation_path, "GraphQL", scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, + rollup_metrics=_test_mutation_scoped_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) + @conditional_decorator(background_task(), is_bg) + def _mutation(): + if framework == "Graphene": + query = 'mutation { storage_add(string: "abc") { string } }' + else: + query = 'mutation { storage_add(string: "abc") }' + response = target_application(query) + assert response["storage_add"] == "abc" or response["storage_add"]["string"] == "abc" + + @validate_code_level_metrics("framework_%s._target_schema_%s" % (framework.lower(), schema_type), "resolve_storage") @validate_span_events(exact_agents=_expected_query_operation_attributes) @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - response = graphql_run(app, 'mutation { storage_add(string: "abc") }') - assert not response.errors - response = graphql_run(app, "query { storage }") - assert not response.errors - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - 
assert "storage" in str(response.data) - assert "abc" in str(response.data) + @validate_transaction_metrics( + "query//storage", + "GraphQL", + scoped_metrics=_test_query_scoped_metrics, + rollup_metrics=_test_query_scoped_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, + ) + @conditional_decorator(background_task(), is_bg) + def _query(): + response = target_application("query { storage }") + assert response["storage"] == ["abc"] - _test() + _mutation() + _query() +@pytest.mark.parametrize("middleware", example_middleware) @dt_enabled -def test_middleware(app, graphql_run, is_graphql_2): +def test_middleware(target_application, middleware): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + + name = "%s:%s" % (middleware.__module__, middleware.__name__) + if "async" in name: + if schema_type != "async": + pytest.skip("Async middleware not supported in sync applications.") + _test_middleware_metrics = [ - ("GraphQL/operation/GraphQL/query//hello", 1), - ("GraphQL/resolve/GraphQL/hello", 1), - ("Function/test_application:example_middleware", 1), + ("GraphQL/operation/%s/query//hello" % framework, 1), + ("GraphQL/resolve/%s/hello" % framework, 1), + ("Function/%s" % name, 1), ] - @validate_code_level_metrics("test_application", "example_middleware") - @validate_code_level_metrics("_target_application", "resolve_hello") + # Span count 5: Transaction, Operation, Middleware, and 1 Resolver and Resolver Function + span_count = 5 + extra_spans + + @validate_code_level_metrics(*name.split(":")) + @validate_code_level_metrics("framework_%s._target_schema_%s" % (framework.lower(), schema_type), "resolve_hello") + @validate_span_events(count=span_count) @validate_transaction_metrics( "query//hello", "GraphQL", scoped_metrics=_test_middleware_metrics, - rollup_metrics=_test_middleware_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=_test_middleware_metrics 
+ _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) - # Span count 5: Transaction, Operation, Middleware, and 1 Resolver and Resolver Function - @validate_span_events(count=5) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, "{ hello }", middleware=[example_middleware]) - assert not response.errors - assert "Hello!" in str(response.data) + response = target_application("{ hello }", middleware=[middleware]) + assert response["hello"] == "Hello!" _test() +@pytest.mark.parametrize("middleware", error_middleware) @dt_enabled -def test_exception_in_middleware(app, graphql_run): - query = "query MyQuery { hello }" - field = "hello" +def test_exception_in_middleware(target_application, middleware): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + query = "query MyQuery { error_middleware }" + field = "error_middleware" + + name = "%s:%s" % (middleware.__module__, middleware.__name__) + if "async" in name: + if schema_type != "async": + pytest.skip("Async middleware not supported in sync applications.") # Metrics _test_exception_scoped_metrics = [ - ("GraphQL/operation/GraphQL/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/GraphQL/%s" % field, 1), + ("GraphQL/operation/%s/query/MyQuery/%s" % (framework, field), 1), + ("GraphQL/resolve/%s/%s" % (framework, field), 1), + ("Function/%s" % name, 1), ] _test_exception_rollup_metrics = [ ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/test_application:error_middleware", 1), + ("Errors/all%s" % ("Other" if is_bg else "Web"), 1), + ("Errors/%sTransaction/GraphQL/%s" % ("Other" if is_bg else "Web", name), 1), ] + _test_exception_scoped_metrics # Attributes @@ -246,39 +287,39 @@ def test_exception_in_middleware(app, graphql_run): } @validate_transaction_metrics( - "test_application:error_middleware", + name, "GraphQL", 
scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) @validate_span_events(exact_agents=_expected_exception_operation_attributes) @validate_span_events(exact_agents=_expected_exception_resolver_attributes) @validate_transaction_errors(errors=_test_runtime_error) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, query, middleware=[error_middleware]) - assert response.errors + response = target_application(query, middleware=[middleware]) _test() @pytest.mark.parametrize("field", ("error", "error_non_null")) @dt_enabled -def test_exception_in_resolver(app, graphql_run, field): +def test_exception_in_resolver(target_application, field): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application query = "query MyQuery { %s }" % field - txn_name = "_target_application:resolve_error" + txn_name = "framework_%s._target_schema_%s:resolve_error" % (framework.lower(), schema_type) # Metrics _test_exception_scoped_metrics = [ - ("GraphQL/operation/GraphQL/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/GraphQL/%s" % field, 1), + ("GraphQL/operation/%s/query/MyQuery/%s" % (framework, field), 1), + ("GraphQL/resolve/%s/%s" % (framework, field), 1), ] _test_exception_rollup_metrics = [ ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), + ("Errors/all%s" % ("Other" if is_bg else "Web"), 1), + ("Errors/%sTransaction/GraphQL/%s" % ("Other" if is_bg else "Web", txn_name), 1), ] + _test_exception_scoped_metrics # Attributes @@ -298,16 +339,15 @@ def test_exception_in_resolver(app, graphql_run, field): txn_name, "GraphQL", scoped_metrics=_test_exception_scoped_metrics, - 
rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) @validate_span_events(exact_agents=_expected_exception_operation_attributes) @validate_span_events(exact_agents=_expected_exception_resolver_attributes) @validate_transaction_errors(errors=_test_runtime_error) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, query) - assert response.errors + response = target_application(query) _test() @@ -316,18 +356,16 @@ def _test(): @pytest.mark.parametrize( "query,exc_class", [ - ("query MyQuery { missing_field }", "GraphQLError"), + ("query MyQuery { error_missing_field }", "GraphQLError"), ("{ syntax_error ", "graphql.error.syntax_error:GraphQLSyntaxError"), ], ) -def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_class): +def test_exception_in_validation(target_application, query, exc_class): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application if "syntax" in query: txn_name = "graphql.language.parser:parse" else: - if is_graphql_2: - txn_name = "graphql.validation.validation:validate" - else: - txn_name = "graphql.validation.validate:validate" + txn_name = "graphql.validation.validate:validate" # Import path differs between versions if exc_class == "GraphQLError": @@ -336,12 +374,12 @@ def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_clas exc_class = callable_name(GraphQLError) _test_exception_scoped_metrics = [ - # ('GraphQL/operation/GraphQL///', 1), + ("GraphQL/operation/%s///" % framework, 1), ] _test_exception_rollup_metrics = [ ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), + ("Errors/all%s" % ("Other" if is_bg else "Web"), 1), + ("Errors/%sTransaction/GraphQL/%s" % ("Other" 
if is_bg else "Web", txn_name), 1), ] + _test_exception_scoped_metrics # Attributes @@ -355,72 +393,77 @@ def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_clas txn_name, "GraphQL", scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) @validate_span_events(exact_agents=_expected_exception_operation_attributes) @validate_transaction_errors(errors=[exc_class]) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, query) - assert response.errors + response = target_application(query) _test() @dt_enabled -def test_operation_metrics_and_attrs(app, graphql_run): - operation_metrics = [("GraphQL/operation/GraphQL/query/MyQuery/library", 1)] +def test_operation_metrics_and_attrs(target_application): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + operation_metrics = [("GraphQL/operation/%s/query/MyQuery/library" % framework, 1)] operation_attrs = { "graphql.operation.type": "query", "graphql.operation.name": "MyQuery", } + # Span count 16: Transaction, Operation, and 7 Resolvers and Resolver functions + # library, library.name, library.book + # library.book.name and library.book.id for each book resolved (in this case 2) + span_count = 16 + extra_spans # WSGI may add 4 spans, other frameworks may add other amounts + @validate_transaction_metrics( "query/MyQuery/library", "GraphQL", scoped_metrics=operation_metrics, - rollup_metrics=operation_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=operation_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) - # Span count 16: Transaction, Operation, and 7 Resolvers and Resolver functions - # 
library, library.name, library.book - # library.book.name and library.book.id for each book resolved (in this case 2) - @validate_span_events(count=16) + @validate_span_events(count=span_count) @validate_span_events(exact_agents=operation_attrs) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, "query MyQuery { library(index: 0) { branch, book { id, name } } }") - assert not response.errors + response = target_application("query MyQuery { library(index: 0) { branch, book { id, name } } }") _test() @dt_enabled -def test_field_resolver_metrics_and_attrs(app, graphql_run): - field_resolver_metrics = [("GraphQL/resolve/GraphQL/hello", 1)] +def test_field_resolver_metrics_and_attrs(target_application): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + field_resolver_metrics = [("GraphQL/resolve/%s/hello" % framework, 1)] + + type_annotation = "!" if framework == "Strawberry" else "" graphql_attrs = { "graphql.field.name": "hello", "graphql.field.parentType": "Query", "graphql.field.path": "hello", - "graphql.field.returnType": "String", + "graphql.field.returnType": "String" + type_annotation, } + # Span count 4: Transaction, Operation, and 1 Resolver and Resolver function + span_count = 4 + extra_spans # WSGI may add 4 spans, other frameworks may add other amounts + @validate_transaction_metrics( "query//hello", "GraphQL", scoped_metrics=field_resolver_metrics, - rollup_metrics=field_resolver_metrics + _graphql_base_rollup_metrics, - background_task=True, + rollup_metrics=field_resolver_metrics + _graphql_base_rollup_metrics(framework, version, is_bg), + background_task=is_bg, ) - # Span count 4: Transaction, Operation, and 1 Resolver and Resolver function - @validate_span_events(count=4) + @validate_span_events(count=span_count) @validate_span_events(exact_agents=graphql_attrs) - @background_task() + @conditional_decorator(background_task(), is_bg) def 
_test(): - response = graphql_run(app, "{ hello }") - assert not response.errors - assert "Hello!" in str(response.data) + response = target_application("{ hello }") + assert response["hello"] == "Hello!" _test() @@ -443,18 +486,19 @@ def _test(): @dt_enabled @pytest.mark.parametrize("query,obfuscated", _test_queries) -def test_query_obfuscation(app, graphql_run, query, obfuscated): +def test_query_obfuscation(target_application, query, obfuscated): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application graphql_attrs = {"graphql.operation.query": obfuscated} if callable(query): + if framework != "GraphQL": + pytest.skip("Source query objects not tested outside of graphql-core") query = query() @validate_span_events(exact_agents=graphql_attrs) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, query) - if not isinstance(query, str) or "error" not in query: - assert not response.errors + response = target_application(query) _test() @@ -499,28 +543,28 @@ def _test(): @dt_enabled @pytest.mark.parametrize("query,expected_path", _test_queries) -def test_deepest_unique_path(app, graphql_run, query, expected_path): +def test_deepest_unique_path(target_application, query, expected_path): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application if expected_path == "/error": - txn_name = "_target_application:resolve_error" + txn_name = "framework_%s._target_schema_%s:resolve_error" % (framework.lower(), schema_type) else: txn_name = "query/%s" % expected_path @validate_transaction_metrics( txn_name, "GraphQL", - background_task=True, + background_task=is_bg, ) - @background_task() + @conditional_decorator(background_task(), is_bg) def _test(): - response = graphql_run(app, query) - if "error" not in query: - assert not response.errors + response = target_application(query) _test() 
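Throughout these tests the patch replaces a bare `@background_task()` with `conditional_decorator(background_task(), is_bg)`, so the web-transaction variants of each parametrized application are not wrapped in a background task. A minimal sketch of what such a helper looks like (the real one lives in the shared testing-support package; this version is illustrative only):

```python
def conditional_decorator(decorator, condition):
    """Return a decorator that applies `decorator` only when `condition` is truthy."""

    def _wrapper(func):
        if condition:
            return decorator(func)
        return func

    return _wrapper


# Demo decorator standing in for background_task(): uppercases the return value.
def shout(func):
    def inner(*args, **kwargs):
        return func(*args, **kwargs).upper()

    return inner


@conditional_decorator(shout, True)
def decorated():
    return "hello"


@conditional_decorator(shout, False)
def undecorated():
    return "hello"
```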
@pytest.mark.parametrize("capture_introspection_setting", (True, False)) -def test_introspection_transactions(app, graphql_run, capture_introspection_setting): +def test_introspection_transactions(target_application, capture_introspection_setting): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application txn_ct = 1 if capture_introspection_setting else 0 @override_application_settings( @@ -529,7 +573,6 @@ def test_introspection_transactions(app, graphql_run, capture_introspection_sett @validate_transaction_count(txn_ct) @background_task() def _test(): - response = graphql_run(app, "{ __schema { types { name } } }") - assert not response.errors + response = target_application("{ __schema { types { name } } }") _test() diff --git a/tests/framework_graphql/test_application_async.py b/tests/framework_graphql/test_application_async.py index 28b435c43..39c1871ef 100644 --- a/tests/framework_graphql/test_application_async.py +++ b/tests/framework_graphql/test_application_async.py @@ -12,99 +12,16 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import asyncio +from inspect import isawaitable -import pytest -from test_application import is_graphql_2 -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events -from newrelic.api.background_task import background_task +# Async Functions not allowed in Py2 +async def example_middleware_async(next, root, info, **args): + return_value = next(root, info, **args) + if isawaitable(return_value): + return await return_value + return return_value -@pytest.fixture(scope="session") -def graphql_run_async(): - from graphql import __version__ as version - from graphql import graphql - - major_version = int(version.split(".")[0]) - if major_version == 2: - - def graphql_run(*args, **kwargs): - return graphql(*args, return_promise=True, **kwargs) - - return graphql_run - else: - return graphql - - -@dt_enabled -def test_query_and_mutation_async(app, graphql_run_async, is_graphql_2): - from graphql import __version__ as version - - FRAMEWORK_METRICS = [ - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/GraphQL/storage", 1), - ("GraphQL/resolve/GraphQL/storage_add", 1), - ("GraphQL/operation/GraphQL/query//storage", 1), - ("GraphQL/operation/GraphQL/mutation//storage_add", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/GraphQL/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/GraphQL/allOther", 2), - ] + _test_mutation_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "[String]" if is_graphql_2 else "String", - 
} - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String]", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - async def coro(): - response = await graphql_run_async(app, 'mutation { storage_add(string: "abc") }') - assert not response.errors - response = await graphql_run_async(app, "query { storage }") - assert not response.errors - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.data) - assert "abc" in str(response.data) - - loop = asyncio.new_event_loop() - loop.run_until_complete(coro()) - - _test() +async def error_middleware_async(next, root, info, **args): + raise RuntimeError("Runtime Error!") diff --git a/tests/framework_starlette/test_graphql.py b/tests/framework_starlette/test_graphql.py deleted file mode 100644 index 24ec3ab38..000000000 --- a/tests/framework_starlette/test_graphql.py +++ /dev/null @@ -1,87 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import json - -import pytest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - - -def get_starlette_version(): - import starlette - - version = getattr(starlette, "__version__", "0.0.0").split(".") - return tuple(int(x) for x in version) - - -@pytest.fixture(scope="session") -def target_application(): - import _test_graphql - - return _test_graphql.target_application - - -@dt_enabled -@pytest.mark.parametrize("endpoint", ("/async", "/sync")) -@pytest.mark.skipif(get_starlette_version() >= (0, 17), reason="Starlette GraphQL support dropped in v0.17.0") -def test_graphql_metrics_and_attrs(target_application, endpoint): - from graphql import __version__ as version - - from newrelic.hooks.framework_graphene import framework_details - - FRAMEWORK_METRICS = [ - ("Python/Framework/Graphene/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_scoped_metrics = [ - ("GraphQL/resolve/Graphene/hello", 1), - ("GraphQL/operation/Graphene/query//hello", 1), - ] - _test_unscoped_metrics = [ - ("GraphQL/all", 1), - ("GraphQL/Graphene/all", 1), - ("GraphQL/allWeb", 1), - ("GraphQL/Graphene/allWeb", 1), - ] + _test_scoped_metrics - - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - "graphql.operation.query": "{ hello }", - } - _expected_query_resolver_attributes = { - "graphql.field.name": 
"hello", - "graphql.field.parentType": "Query", - "graphql.field.path": "hello", - "graphql.field.returnType": "String", - } - - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @validate_transaction_metrics( - "query//hello", - "GraphQL", - scoped_metrics=_test_scoped_metrics, - rollup_metrics=_test_unscoped_metrics + FRAMEWORK_METRICS, - ) - def _test(): - response = target_application.make_request( - "POST", endpoint, body=json.dumps({"query": "{ hello }"}), headers={"Content-Type": "application/json"} - ) - assert response.status == 200 - assert "Hello!" in response.body.decode("utf-8") - - _test() diff --git a/tests/framework_strawberry/__init__.py b/tests/framework_strawberry/__init__.py new file mode 100644 index 000000000..8030baccf --- /dev/null +++ b/tests/framework_strawberry/__init__.py @@ -0,0 +1,13 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. diff --git a/tests/framework_strawberry/_target_application.py b/tests/framework_strawberry/_target_application.py index e032fc27a..afba04873 100644 --- a/tests/framework_strawberry/_target_application.py +++ b/tests/framework_strawberry/_target_application.py @@ -12,185 +12,90 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-from typing import List, Union - -import strawberry.mutation -import strawberry.type -from strawberry import Schema, field -from strawberry.asgi import GraphQL -from strawberry.schema.config import StrawberryConfig -from strawberry.types.types import Optional - - -@strawberry.type -class Author: - first_name: str - last_name: str - - -@strawberry.type -class Book: - id: int - name: str - isbn: str - author: Author - branch: str - - -@strawberry.type -class Magazine: - id: int - name: str - issue: int - branch: str - -@strawberry.type -class Library: - id: int - branch: str - magazine: List[Magazine] - book: List[Book] +import asyncio +import json +import pytest +from framework_strawberry._target_schema_async import ( + target_asgi_application as target_asgi_application_async, +) +from framework_strawberry._target_schema_async import ( + target_schema as target_schema_async, +) +from framework_strawberry._target_schema_sync import ( + target_asgi_application as target_asgi_application_sync, +) +from framework_strawberry._target_schema_sync import target_schema as target_schema_sync -Item = Union[Book, Magazine] -Storage = List[str] +def run_sync(schema): + def _run_sync(query, middleware=None): + from graphql.language.source import Source -authors = [ - Author( - first_name="New", - last_name="Relic", - ), - Author( - first_name="Bob", - last_name="Smith", - ), - Author( - first_name="Leslie", - last_name="Jones", - ), -] - -books = [ - Book( - id=1, - name="Python Agent: The Book", - isbn="a-fake-isbn", - author=authors[0], - branch="riverside", - ), - Book( - id=2, - name="Ollies for O11y: A Sk8er's Guide to Observability", - isbn="a-second-fake-isbn", - author=authors[1], - branch="downtown", - ), - Book( - id=3, - name="[Redacted]", - isbn="a-third-fake-isbn", - author=authors[2], - branch="riverside", - ), -] + if middleware is not None: + pytest.skip("Middleware not supported in Strawberry.") -magazines = [ - Magazine(id=1, name="Reli Updates Weekly", 
issue=1, branch="riverside"), - Magazine(id=2, name="Reli: The Forgotten Years", issue=2, branch="downtown"), - Magazine(id=3, name="Node Weekly", issue=1, branch="riverside"), -] + response = schema.execute_sync(query) + if isinstance(query, str) and "error" not in query or isinstance(query, Source) and "error" not in query.body: + assert not response.errors + else: + assert response.errors -libraries = ["riverside", "downtown"] -libraries = [ - Library( - id=i + 1, - branch=branch, - magazine=[m for m in magazines if m.branch == branch], - book=[b for b in books if b.branch == branch], - ) - for i, branch in enumerate(libraries) -] + return response.data -storage = [] + return _run_sync -def resolve_hello(): - return "Hello!" +def run_async(schema): + def _run_async(query, middleware=None): + from graphql.language.source import Source + if middleware is not None: + pytest.skip("Middleware not supported in Strawberry.") -async def resolve_hello_async(): - return "Hello!" + loop = asyncio.get_event_loop() + response = loop.run_until_complete(schema.execute(query)) + if isinstance(query, str) and "error" not in query or isinstance(query, Source) and "error" not in query.body: + assert not response.errors + else: + assert response.errors -def resolve_echo(echo: str): - return echo + return response.data + return _run_async -def resolve_library(index: int): - return libraries[index] +def run_asgi(app): + def _run_asgi(query, middleware=None): + if middleware is not None: + pytest.skip("Middleware not supported in Strawberry.") -def resolve_storage_add(string: str): - storage.add(string) - return storage + response = app.make_request( + "POST", "/", body=json.dumps({"query": query}), headers={"Content-Type": "application/json"} + ) + body = json.loads(response.body.decode("utf-8")) + if not isinstance(query, str) or "error" in query: + try: + assert response.status != 200 + except AssertionError: + assert body["errors"] + else: + assert response.status == 200 + assert 
"errors" not in body or not body["errors"] -def resolve_storage(): - return storage + return body["data"] + return _run_asgi -def resolve_error(): - raise RuntimeError("Runtime Error!") - -def resolve_search(contains: str): - search_books = [b for b in books if contains in b.name] - search_magazines = [m for m in magazines if contains in m.name] - return search_books + search_magazines - - -@strawberry.type -class Query: - library: Library = field(resolver=resolve_library) - hello: str = field(resolver=resolve_hello) - hello_async: str = field(resolver=resolve_hello_async) - search: List[Item] = field(resolver=resolve_search) - echo: str = field(resolver=resolve_echo) - storage: Storage = field(resolver=resolve_storage) - error: Optional[str] = field(resolver=resolve_error) - error_non_null: str = field(resolver=resolve_error) - - def resolve_library(self, info, index): - return libraries[index] - - def resolve_storage(self, info): - return storage - - def resolve_search(self, info, contains): - search_books = [b for b in books if contains in b.name] - search_magazines = [m for m in magazines if contains in m.name] - return search_books + search_magazines - - def resolve_hello(self, info): - return "Hello!" 
- - def resolve_echo(self, info, echo): - return echo - - def resolve_error(self, info) -> str: - raise RuntimeError("Runtime Error!") - - -@strawberry.type -class Mutation: - @strawberry.mutation - def storage_add(self, string: str) -> str: - storage.append(string) - return str(string) - - -_target_application = Schema(query=Query, mutation=Mutation, config=StrawberryConfig(auto_camel_case=False)) -_target_asgi_application = GraphQL(_target_application) +target_application = { + "sync-sync": run_sync(target_schema_sync), + "async-sync": run_async(target_schema_sync), + "asgi-sync": run_asgi(target_asgi_application_sync), + "async-async": run_async(target_schema_async), + "asgi-async": run_asgi(target_asgi_application_async), +} diff --git a/tests/framework_strawberry/_target_schema_async.py b/tests/framework_strawberry/_target_schema_async.py new file mode 100644 index 000000000..373cef537 --- /dev/null +++ b/tests/framework_strawberry/_target_schema_async.py @@ -0,0 +1,84 @@ +# Copyright 2010 New Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
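The `run_async` wrapper above drives `schema.execute` to completion on an event loop so that synchronous test bodies can exercise async schemas. The same idea in a self-contained sketch (the schema and result object are stand-ins, and `asyncio.run` is used here as the modern equivalent of the patch's `get_event_loop().run_until_complete`):

```python
import asyncio


class FakeResult:
    # Minimal stand-in for an ExecutionResult: just data and errors.
    def __init__(self, data):
        self.data = data
        self.errors = None


async def fake_execute(query):
    # Stand-in for schema.execute(query), which async-capable
    # graphql-core / Strawberry versions expose as a coroutine.
    await asyncio.sleep(0)
    return FakeResult({"hello": "Hello!"})


def run_query(query):
    # Drive the coroutine from synchronous code, then assert success
    # the same way the patch's run_async wrapper does.
    response = asyncio.run(fake_execute(query))
    assert not response.errors
    return response.data


data = run_query("{ hello }")
```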
+ +from typing import List + +import strawberry.mutation +import strawberry.type +from framework_strawberry._target_schema_sync import ( + Item, + Library, + Storage, + books, + libraries, + magazines, +) +from strawberry import Schema, field +from strawberry.asgi import GraphQL +from strawberry.schema.config import StrawberryConfig +from strawberry.types.types import Optional +from testing_support.asgi_testing import AsgiTest + +storage = [] + + +async def resolve_hello(): + return "Hello!" + + +async def resolve_echo(echo: str): + return echo + + +async def resolve_library(index: int): + return libraries[index] + + +async def resolve_storage_add(string: str): + storage.append(string) + return string + + +async def resolve_storage(): + return [storage.pop()] + + +async def resolve_error(): + raise RuntimeError("Runtime Error!") + + +async def resolve_search(contains: str): + search_books = [b for b in books if contains in b.name] + search_magazines = [m for m in magazines if contains in m.name] + return search_books + search_magazines + + +@strawberry.type +class Query: + library: Library = field(resolver=resolve_library) + hello: str = field(resolver=resolve_hello) + search: List[Item] = field(resolver=resolve_search) + echo: str = field(resolver=resolve_echo) + storage: Storage = field(resolver=resolve_storage) + error: Optional[str] = field(resolver=resolve_error) + error_non_null: str = field(resolver=resolve_error) + + +@strawberry.type +class Mutation: + storage_add: str = strawberry.mutation(resolver=resolve_storage_add) + + +target_schema = Schema(query=Query, mutation=Mutation, config=StrawberryConfig(auto_camel_case=False)) +target_asgi_application = AsgiTest(GraphQL(target_schema)) diff --git a/tests/framework_strawberry/_target_schema_sync.py b/tests/framework_strawberry/_target_schema_sync.py new file mode 100644 index 000000000..34bff75b9 --- /dev/null +++ b/tests/framework_strawberry/_target_schema_sync.py @@ -0,0 +1,169 @@ +# Copyright 2010 New 
Relic, Inc. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from typing import List, Union + +import strawberry.mutation +import strawberry.type +from strawberry import Schema, field +from strawberry.asgi import GraphQL +from strawberry.schema.config import StrawberryConfig +from strawberry.types.types import Optional +from testing_support.asgi_testing import AsgiTest + + +@strawberry.type +class Author: + first_name: str + last_name: str + + +@strawberry.type +class Book: + id: int + name: str + isbn: str + author: Author + branch: str + + +@strawberry.type +class Magazine: + id: int + name: str + issue: int + branch: str + + +@strawberry.type +class Library: + id: int + branch: str + magazine: List[Magazine] + book: List[Book] + + +Item = Union[Book, Magazine] +Storage = List[str] + + +authors = [ + Author( + first_name="New", + last_name="Relic", + ), + Author( + first_name="Bob", + last_name="Smith", + ), + Author( + first_name="Leslie", + last_name="Jones", + ), +] + +books = [ + Book( + id=1, + name="Python Agent: The Book", + isbn="a-fake-isbn", + author=authors[0], + branch="riverside", + ), + Book( + id=2, + name="Ollies for O11y: A Sk8er's Guide to Observability", + isbn="a-second-fake-isbn", + author=authors[1], + branch="downtown", + ), + Book( + id=3, + name="[Redacted]", + isbn="a-third-fake-isbn", + author=authors[2], + branch="riverside", + ), +] + +magazines = [ + Magazine(id=1, name="Reli Updates Weekly", issue=1, branch="riverside"), + 
Magazine(id=2, name="Reli: The Forgotten Years", issue=2, branch="downtown"), + Magazine(id=3, name="Node Weekly", issue=1, branch="riverside"), +] + + +libraries = ["riverside", "downtown"] +libraries = [ + Library( + id=i + 1, + branch=branch, + magazine=[m for m in magazines if m.branch == branch], + book=[b for b in books if b.branch == branch], + ) + for i, branch in enumerate(libraries) +] + +storage = [] + + +def resolve_hello(): + return "Hello!" + + +def resolve_echo(echo: str): + return echo + + +def resolve_library(index: int): + return libraries[index] + + +def resolve_storage_add(string: str): + storage.append(string) + return string + + +def resolve_storage(): + return [storage.pop()] + + +def resolve_error(): + raise RuntimeError("Runtime Error!") + + +def resolve_search(contains: str): + search_books = [b for b in books if contains in b.name] + search_magazines = [m for m in magazines if contains in m.name] + return search_books + search_magazines + + +@strawberry.type +class Query: + library: Library = field(resolver=resolve_library) + hello: str = field(resolver=resolve_hello) + search: List[Item] = field(resolver=resolve_search) + echo: str = field(resolver=resolve_echo) + storage: Storage = field(resolver=resolve_storage) + error: Optional[str] = field(resolver=resolve_error) + error_non_null: str = field(resolver=resolve_error) + + +@strawberry.type +class Mutation: + storage_add: str = strawberry.mutation(resolver=resolve_storage_add) + + +target_schema = Schema(query=Query, mutation=Mutation, config=StrawberryConfig(auto_camel_case=False)) +target_asgi_application = AsgiTest(GraphQL(target_schema)) diff --git a/tests/framework_strawberry/conftest.py b/tests/framework_strawberry/conftest.py index 130866bcb..6345b3033 100644 --- a/tests/framework_strawberry/conftest.py +++ b/tests/framework_strawberry/conftest.py @@ -12,11 +12,10 @@ # See the License for the specific language governing permissions and # limitations under the License. 
-import pytest -import six - -from testing_support.fixtures import collector_agent_registration_fixture, collector_available_fixture # noqa: F401; pylint: disable=W0611 - +from testing_support.fixtures import ( # noqa: F401; pylint: disable=W0611 + collector_agent_registration_fixture, + collector_available_fixture, +) _default_settings = { "transaction_tracer.explain_threshold": 0.0, @@ -30,14 +29,3 @@ app_name="Python Agent Test (framework_strawberry)", default_settings=_default_settings, ) - - -@pytest.fixture(scope="session") -def app(): - from _target_application import _target_application - - return _target_application - - -if six.PY2: - collect_ignore = ["test_application_async.py"] diff --git a/tests/framework_strawberry/test_application.py b/tests/framework_strawberry/test_application.py index ac60a33e0..5a3f579ba 100644 --- a/tests/framework_strawberry/test_application.py +++ b/tests/framework_strawberry/test_application.py @@ -11,437 +11,36 @@ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
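The reworked `_target_application.py` exposes a dict of runner callables keyed by driver/schema combination ("sync-sync", "asgi-async", and so on), which a session-scoped parametrized fixture then feeds to the shared `framework_graphql` test suite. A toy version of that dispatch pattern (the runner bodies here are illustrative stand-ins for the real wrappers):

```python
# Each runner executes a query through one driver/schema combination;
# the real ones wrap schema.execute_sync, an event loop, or an ASGI client.
target_application = {
    "sync-sync": lambda query: {"hello": "Hello!"},
    "asgi-async": lambda query: {"hello": "Hello!"},
}

# A parametrized pytest fixture would hand each runner to the shared tests;
# here we simply push the same query through every combination.
results = {name: run("{ hello }") for name, run in sorted(target_application.items())}
```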
- import pytest -from testing_support.fixtures import dt_enabled, override_application_settings -from testing_support.validators.validate_span_events import validate_span_events +from framework_graphql.test_application import * +from testing_support.fixtures import override_application_settings from testing_support.validators.validate_transaction_count import ( validate_transaction_count, ) -from testing_support.validators.validate_transaction_errors import ( - validate_transaction_errors, -) -from testing_support.validators.validate_transaction_metrics import ( - validate_transaction_metrics, -) from newrelic.api.background_task import background_task -from newrelic.common.object_names import callable_name - - -@pytest.fixture(scope="session") -def is_graphql_2(): - from graphql import __version__ as version - - major_version = int(version.split(".")[0]) - return major_version == 2 - - -@pytest.fixture(scope="session") -def graphql_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - - def execute(schema, *args, **kwargs): - return schema.execute_sync(*args, **kwargs) - - return execute - - -def to_graphql_source(query): - def delay_import(): - try: - from graphql import Source - except ImportError: - # Fallback if Source is not implemented - return query - - from graphql import __version__ as version - - # For graphql2, Source objects aren't acceptable input - major_version = int(version.split(".")[0]) - if major_version == 2: - return query - - return Source(query) - - return delay_import - - -def example_middleware(next, root, info, **args): # pylint: disable=W0622 - return_value = next(root, info, **args) - return return_value - - -def error_middleware(next, root, info, **args): # pylint: disable=W0622 - raise RuntimeError("Runtime Error!") - - -_runtime_error_name = callable_name(RuntimeError) -_test_runtime_error = [(_runtime_error_name, "Runtime Error!")] -_graphql_base_rollup_metrics = [ - ("OtherTransaction/all", 1), - 
("GraphQL/all", 1), - ("GraphQL/allOther", 1), - ("GraphQL/Strawberry/all", 1), - ("GraphQL/Strawberry/allOther", 1), -] - - -def test_basic(app, graphql_run): - from graphql import __version__ as version - - from newrelic.hooks.framework_strawberry import framework_details +from newrelic.common.package_version_utils import get_package_version - FRAMEWORK_METRICS = [ - ("Python/Framework/Strawberry/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] +STRAWBERRY_VERSION = get_package_version("strawberry-graphql") - @validate_transaction_metrics( - "query//hello", - "GraphQL", - rollup_metrics=_graphql_base_rollup_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @background_task() - def _test(): - response = graphql_run(app, "{ hello }") - assert not response.errors - - _test() - - -@dt_enabled -def test_query_and_mutation(app, graphql_run): - from graphql import __version__ as version - from newrelic.hooks.framework_strawberry import framework_details +@pytest.fixture(scope="session", params=["sync-sync", "async-sync", "async-async", "asgi-sync", "asgi-async"]) +def target_application(request): + from ._target_application import target_application - FRAMEWORK_METRICS = [ - ("Python/Framework/Strawberry/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Strawberry/storage", 1), - ("GraphQL/resolve/Strawberry/storage_add", 1), - ("GraphQL/operation/Strawberry/query//storage", 1), - ("GraphQL/operation/Strawberry/mutation//storage_add", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/Strawberry/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/Strawberry/allOther", 2), - ] + _test_mutation_scoped_metrics + target_application = target_application[request.param] - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", 
- } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "String!", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String!]!", - } + is_asgi = "asgi" in request.param + schema_type = request.param.split("-")[1] - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - response = graphql_run(app, 'mutation { storage_add(string: "abc") }') - assert not response.errors - response = graphql_run(app, "query { storage }") - assert not response.errors - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.data) - assert "abc" in str(response.data) - - _test() - - -@pytest.mark.parametrize("field", ("error", "error_non_null")) -@dt_enabled -def test_exception_in_resolver(app, graphql_run, field): - query = "query MyQuery { %s }" % field - - txn_name = "_target_application:resolve_error" - - # Metrics - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Strawberry/query/MyQuery/%s" % field, 1), - ("GraphQL/resolve/Strawberry/%s" % field, 1), - ] - _test_exception_rollup_metrics = [ - 
("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + _test_exception_scoped_metrics - - # Attributes - _expected_exception_resolver_attributes = { - "graphql.field.name": field, - "graphql.field.parentType": "Query", - "graphql.field.path": field, - "graphql.field.returnType": "String!" if "non_null" in field else "String", - } - _expected_exception_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_span_events(exact_agents=_expected_exception_resolver_attributes) - @validate_transaction_errors(errors=_test_runtime_error) - @background_task() - def _test(): - response = graphql_run(app, query) - assert response.errors - - _test() - - -@dt_enabled -@pytest.mark.parametrize( - "query,exc_class", - [ - ("query MyQuery { missing_field }", "GraphQLError"), - ("{ syntax_error ", "graphql.error.syntax_error:GraphQLSyntaxError"), - ], -) -def test_exception_in_validation(app, graphql_run, is_graphql_2, query, exc_class): - if "syntax" in query: - txn_name = "graphql.language.parser:parse" - else: - if is_graphql_2: - txn_name = "graphql.validation.validation:validate" - else: - txn_name = "graphql.validation.validate:validate" - - # Import path differs between versions - if exc_class == "GraphQLError": - from graphql.error import GraphQLError - - exc_class = callable_name(GraphQLError) - - _test_exception_scoped_metrics = [ - ("GraphQL/operation/Strawberry///", 1), - ] - _test_exception_rollup_metrics = [ - ("Errors/all", 1), - ("Errors/allOther", 1), - ("Errors/OtherTransaction/GraphQL/%s" % txn_name, 1), - ] + 
_test_exception_scoped_metrics - - # Attributes - _expected_exception_operation_attributes = { - "graphql.operation.type": "", - "graphql.operation.name": "", - "graphql.operation.query": query, - } - - @validate_transaction_metrics( - txn_name, - "GraphQL", - scoped_metrics=_test_exception_scoped_metrics, - rollup_metrics=_test_exception_rollup_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - @validate_span_events(exact_agents=_expected_exception_operation_attributes) - @validate_transaction_errors(errors=[exc_class]) - @background_task() - def _test(): - response = graphql_run(app, query) - assert response.errors - - _test() - - -@dt_enabled -def test_operation_metrics_and_attrs(app, graphql_run): - operation_metrics = [("GraphQL/operation/Strawberry/query/MyQuery/library", 1)] - operation_attrs = { - "graphql.operation.type": "query", - "graphql.operation.name": "MyQuery", - } - - @validate_transaction_metrics( - "query/MyQuery/library", - "GraphQL", - scoped_metrics=operation_metrics, - rollup_metrics=operation_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 16: Transaction, Operation, and 7 Resolvers and Resolver functions - # library, library.name, library.book - # library.book.name and library.book.id for each book resolved (in this case 2) - @validate_span_events(count=16) - @validate_span_events(exact_agents=operation_attrs) - @background_task() - def _test(): - response = graphql_run(app, "query MyQuery { library(index: 0) { branch, book { id, name } } }") - assert not response.errors - - _test() - - -@dt_enabled -def test_field_resolver_metrics_and_attrs(app, graphql_run): - field_resolver_metrics = [("GraphQL/resolve/Strawberry/hello", 1)] - graphql_attrs = { - "graphql.field.name": "hello", - "graphql.field.parentType": "Query", - "graphql.field.path": "hello", - "graphql.field.returnType": "String!", - } - - @validate_transaction_metrics( - "query//hello", - "GraphQL", - 
scoped_metrics=field_resolver_metrics, - rollup_metrics=field_resolver_metrics + _graphql_base_rollup_metrics, - background_task=True, - ) - # Span count 4: Transaction, Operation, and 1 Resolver and Resolver function - @validate_span_events(count=4) - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - response = graphql_run(app, "{ hello }") - assert not response.errors - assert "Hello!" in str(response.data) - - _test() - - -_test_queries = [ - ("{ hello }", "{ hello }"), # Basic query extraction - ("{ error }", "{ error }"), # Extract query on field error - (to_graphql_source("{ hello }"), "{ hello }"), # Extract query from Source objects - ( - "{ library(index: 0) { branch } }", - "{ library(index: ?) { branch } }", - ), # Integers - ('{ echo(echo: "123") }', "{ echo(echo: ?) }"), # Strings with numerics - ('{ echo(echo: "test") }', "{ echo(echo: ?) }"), # Strings - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Aliases - ('{ TestEcho: echo(echo: "test") }', "{ TestEcho: echo(echo: ?) }"), # Variables - ( # Fragments - '{ ...MyFragment } fragment MyFragment on Query { echo(echo: "test") }', - "{ ...MyFragment } fragment MyFragment on Query { echo(echo: ?) 
}", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,obfuscated", _test_queries) -def test_query_obfuscation(app, graphql_run, query, obfuscated): - graphql_attrs = {"graphql.operation.query": obfuscated} - - if callable(query): - query = query() - - @validate_span_events(exact_agents=graphql_attrs) - @background_task() - def _test(): - response = graphql_run(app, query) - if not isinstance(query, str) or "error" not in query: - assert not response.errors - - _test() - - -_test_queries = [ - ("{ hello }", "/hello"), # Basic query - ("{ error }", "/error"), # Extract deepest path on field error - ('{ echo(echo: "test") }', "/echo"), # Fields with arguments - ( - "{ library(index: 0) { branch, book { isbn branch } } }", - "/library", - ), # Complex Example, 1 level - ( - "{ library(index: 0) { book { author { first_name }} } }", - "/library.book.author.first_name", - ), # Complex Example, 2 levels - ("{ library(index: 0) { id, book { name } } }", "/library.book.name"), # Filtering - ('{ TestEcho: echo(echo: "test") }', "/echo"), # Aliases - ( - '{ search(contains: "A") { __typename ... on Book { name } } }', - "/search.name", - ), # InlineFragment - ( - '{ hello echo(echo: "test") }', - "", - ), # Multiple root selections. 
(need to decide on final behavior) - # FragmentSpread - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { name id }", # Fragment filtering - "/library.book.name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } } } fragment MyFragment on Book { author { first_name } }", - "/library.book.author.first_name", - ), - ( - "{ library(index: 0) { book { ...MyFragment } magazine { ...MagFragment } } } fragment MyFragment on Book { author { first_name } } fragment MagFragment on Magazine { name }", - "/library", - ), -] - - -@dt_enabled -@pytest.mark.parametrize("query,expected_path", _test_queries) -def test_deepest_unique_path(app, graphql_run, query, expected_path): - if expected_path == "/error": - txn_name = "_target_application:resolve_error" - else: - txn_name = "query/%s" % expected_path - - @validate_transaction_metrics( - txn_name, - "GraphQL", - background_task=True, - ) - @background_task() - def _test(): - response = graphql_run(app, query) - if "error" not in query: - assert not response.errors - - _test() + assert STRAWBERRY_VERSION is not None + return "Strawberry", STRAWBERRY_VERSION, target_application, not is_asgi, schema_type, 0 @pytest.mark.parametrize("capture_introspection_setting", (True, False)) -def test_introspection_transactions(app, graphql_run, capture_introspection_setting): +def test_introspection_transactions(target_application, capture_introspection_setting): + framework, version, target_application, is_bg, schema_type, extra_spans = target_application + txn_ct = 1 if capture_introspection_setting else 0 @override_application_settings( @@ -450,7 +49,6 @@ def test_introspection_transactions(app, graphql_run, capture_introspection_sett @validate_transaction_count(txn_ct) @background_task() def _test(): - response = graphql_run(app, "{ __schema { types { name } } }") - assert not response.errors + response = target_application("{ __schema { types { name } } }") _test() diff --git 
a/tests/framework_strawberry/test_application_async.py b/tests/framework_strawberry/test_application_async.py deleted file mode 100644 index 1354c4c01..000000000 --- a/tests/framework_strawberry/test_application_async.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import asyncio - -import pytest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - -from newrelic.api.background_task import background_task - - -@pytest.fixture(scope="session") -def graphql_run_async(): - """Wrapper function to simulate framework_graphql test behavior.""" - - def execute(schema, *args, **kwargs): - return schema.execute(*args, **kwargs) - - return execute - - -_graphql_base_rollup_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 1), - ("GraphQL/allOther", 1), - ("GraphQL/Strawberry/all", 1), - ("GraphQL/Strawberry/allOther", 1), -] - - -loop = asyncio.new_event_loop() - - -def test_basic(app, graphql_run_async): - from graphql import __version__ as version - - from newrelic.hooks.framework_strawberry import framework_details - - FRAMEWORK_METRICS = [ - ("Python/Framework/Strawberry/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - - @validate_transaction_metrics( - "query//hello_async", - 
"GraphQL", - rollup_metrics=_graphql_base_rollup_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - @background_task() - def _test(): - async def coro(): - response = await graphql_run_async(app, "{ hello_async }") - assert not response.errors - - loop.run_until_complete(coro()) - - _test() - - -@dt_enabled -def test_query_and_mutation_async(app, graphql_run_async): - from graphql import __version__ as version - - from newrelic.hooks.framework_strawberry import framework_details - - FRAMEWORK_METRICS = [ - ("Python/Framework/Strawberry/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Strawberry/storage", 1), - ("GraphQL/resolve/Strawberry/storage_add", 1), - ("GraphQL/operation/Strawberry/query//storage", 1), - ("GraphQL/operation/Strawberry/mutation//storage_add", 1), - ] - _test_mutation_unscoped_metrics = [ - ("OtherTransaction/all", 1), - ("GraphQL/all", 2), - ("GraphQL/Strawberry/all", 2), - ("GraphQL/allOther", 2), - ("GraphQL/Strawberry/allOther", 2), - ] + _test_mutation_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - "graphql.field.returnType": "String!", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String!]!", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - background_task=True, - ) - 
@validate_span_events(exact_agents=_expected_mutation_operation_attributes) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - @background_task() - def _test(): - async def coro(): - response = await graphql_run_async(app, 'mutation { storage_add(string: "abc") }') - assert not response.errors - response = await graphql_run_async(app, "query { storage }") - assert not response.errors - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.data) - assert "abc" in str(response.data) - - loop.run_until_complete(coro()) - - _test() diff --git a/tests/framework_strawberry/test_asgi.py b/tests/framework_strawberry/test_asgi.py deleted file mode 100644 index 8acbaedfb..000000000 --- a/tests/framework_strawberry/test_asgi.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright 2010 New Relic, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
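The async tests deleted above drove coroutine-based GraphQL execution from synchronous test bodies with a module-level event loop and `loop.run_until_complete()`. A stripped-down sketch of that pattern — here `execute` is a stand-in coroutine, not Strawberry's actual `schema.execute`:

```python
# Sketch of the run_until_complete() pattern the deleted async tests used;
# execute() simulates an async schema.execute(query) call.
import asyncio

async def execute(query):
    await asyncio.sleep(0)  # simulate async resolver work
    return {"data": {"hello": "Hello!"}}

loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(execute("{ hello_async }"))
finally:
    loop.close()

print(result["data"]["hello"])  # Hello!
```

The consolidated suite no longer needs this scaffolding per test file: the event-loop handling moves behind the parametrized `target_application` variants.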
- -import json - -import pytest -from testing_support.asgi_testing import AsgiTest -from testing_support.fixtures import dt_enabled -from testing_support.validators.validate_transaction_metrics import validate_transaction_metrics -from testing_support.validators.validate_span_events import validate_span_events - - -@pytest.fixture(scope="session") -def graphql_asgi_run(): - """Wrapper function to simulate framework_graphql test behavior.""" - from _target_application import _target_asgi_application - - app = AsgiTest(_target_asgi_application) - - def execute(query): - return app.make_request( - "POST", - "/", - headers={"Content-Type": "application/json"}, - body=json.dumps({"query": query}), - ) - - return execute - - -@dt_enabled -def test_query_and_mutation_asgi(graphql_asgi_run): - from graphql import __version__ as version - - from newrelic.hooks.framework_strawberry import framework_details - - FRAMEWORK_METRICS = [ - ("Python/Framework/Strawberry/%s" % framework_details()[1], 1), - ("Python/Framework/GraphQL/%s" % version, 1), - ] - _test_mutation_scoped_metrics = [ - ("GraphQL/resolve/Strawberry/storage_add", 1), - ("GraphQL/operation/Strawberry/mutation//storage_add", 1), - ] - _test_query_scoped_metrics = [ - ("GraphQL/resolve/Strawberry/storage", 1), - ("GraphQL/operation/Strawberry/query//storage", 1), - ] - _test_unscoped_metrics = [ - ("WebTransaction", 1), - ("GraphQL/all", 1), - ("GraphQL/Strawberry/all", 1), - ("GraphQL/allWeb", 1), - ("GraphQL/Strawberry/allWeb", 1), - ] - _test_mutation_unscoped_metrics = _test_unscoped_metrics + _test_mutation_scoped_metrics - _test_query_unscoped_metrics = _test_unscoped_metrics + _test_query_scoped_metrics - - _expected_mutation_operation_attributes = { - "graphql.operation.type": "mutation", - "graphql.operation.name": "", - } - _expected_mutation_resolver_attributes = { - "graphql.field.name": "storage_add", - "graphql.field.parentType": "Mutation", - "graphql.field.path": "storage_add", - 
"graphql.field.returnType": "String!", - } - _expected_query_operation_attributes = { - "graphql.operation.type": "query", - "graphql.operation.name": "", - } - _expected_query_resolver_attributes = { - "graphql.field.name": "storage", - "graphql.field.parentType": "Query", - "graphql.field.path": "storage", - "graphql.field.returnType": "[String!]!", - } - - @validate_transaction_metrics( - "query//storage", - "GraphQL", - scoped_metrics=_test_query_scoped_metrics, - rollup_metrics=_test_query_unscoped_metrics + FRAMEWORK_METRICS, - ) - @validate_transaction_metrics( - "mutation//storage_add", - "GraphQL", - scoped_metrics=_test_mutation_scoped_metrics, - rollup_metrics=_test_mutation_unscoped_metrics + FRAMEWORK_METRICS, - index=-2, - ) - @validate_span_events(exact_agents=_expected_mutation_operation_attributes, index=-2) - @validate_span_events(exact_agents=_expected_mutation_resolver_attributes, index=-2) - @validate_span_events(exact_agents=_expected_query_operation_attributes) - @validate_span_events(exact_agents=_expected_query_resolver_attributes) - def _test(): - response = graphql_asgi_run('mutation { storage_add(string: "abc") }') - assert response.status == 200 - response = json.loads(response.body.decode("utf-8")) - assert not response.get("errors") - - response = graphql_asgi_run("query { storage }") - assert response.status == 200 - response = json.loads(response.body.decode("utf-8")) - assert not response.get("errors") - - # These are separate assertions because pypy stores 'abc' as a unicode string while other Python versions do not - assert "storage" in str(response.get("data")) - assert "abc" in str(response.get("data")) - - _test() diff --git a/tests/messagebroker_pika/test_pika_async_connection_consume.py b/tests/messagebroker_pika/test_pika_async_connection_consume.py index 4e44c7ed7..29b9d8ea4 100644 --- a/tests/messagebroker_pika/test_pika_async_connection_consume.py +++ b/tests/messagebroker_pika/test_pika_async_connection_consume.py @@ 
-49,20 +49,20 @@ from newrelic.api.background_task import background_task + DB_SETTINGS = rabbitmq_settings()[0] _message_broker_tt_params = { - "queue_name": QUEUE, - "routing_key": QUEUE, - "correlation_id": CORRELATION_ID, - "reply_to": REPLY_TO, - "headers": HEADERS.copy(), + 'queue_name': QUEUE, + 'routing_key': QUEUE, + 'correlation_id': CORRELATION_ID, + 'reply_to': REPLY_TO, + 'headers': HEADERS.copy(), } # Tornado's IO loop is not configurable in versions 5.x and up try: - class MyIOLoop(tornado.ioloop.IOLoop.configured_class()): def handle_callback_exception(self, *args, **kwargs): raise @@ -73,44 +73,38 @@ def handle_callback_exception(self, *args, **kwargs): connection_classes = [pika.SelectConnection, TornadoConnection] -parametrized_connection = pytest.mark.parametrize("ConnectionClass", connection_classes) +parametrized_connection = pytest.mark.parametrize('ConnectionClass', + connection_classes) _test_select_conn_basic_get_inside_txn_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, 1), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, 1), ] if six.PY3: _test_select_conn_basic_get_inside_txn_metrics.append( - ( - ( - "Function/test_pika_async_connection_consume:" - "test_async_connection_basic_get_inside_txn." - ".on_message" - ), - 1, - ) - ) + (('Function/test_pika_async_connection_consume:' + 'test_async_connection_basic_get_inside_txn.' 
+ '.on_message'), 1)) else: - _test_select_conn_basic_get_inside_txn_metrics.append(("Function/test_pika_async_connection_consume:on_message", 1)) + _test_select_conn_basic_get_inside_txn_metrics.append( + ('Function/test_pika_async_connection_consume:on_message', 1)) @parametrized_connection -@pytest.mark.parametrize("callback_as_partial", [True, False]) -@validate_code_level_metrics( - "test_pika_async_connection_consume" + (".test_async_connection_basic_get_inside_txn." if six.PY3 else ""), - "on_message", -) +@pytest.mark.parametrize('callback_as_partial', [True, False]) +@validate_code_level_metrics("test_pika_async_connection_consume.test_async_connection_basic_get_inside_txn.", "on_message", py2_namespace="test_pika_async_connection_consume") @validate_transaction_metrics( - ("test_pika_async_connection_consume:" "test_async_connection_basic_get_inside_txn"), - scoped_metrics=_test_select_conn_basic_get_inside_txn_metrics, - rollup_metrics=_test_select_conn_basic_get_inside_txn_metrics, - background_task=True, -) + ('test_pika_async_connection_consume:' + 'test_async_connection_basic_get_inside_txn'), + scoped_metrics=_test_select_conn_basic_get_inside_txn_metrics, + rollup_metrics=_test_select_conn_basic_get_inside_txn_metrics, + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() -def test_async_connection_basic_get_inside_txn(producer, ConnectionClass, callback_as_partial): +def test_async_connection_basic_get_inside_txn(producer, ConnectionClass, + callback_as_partial): def on_message(channel, method_frame, header_frame, body): assert method_frame assert body == BODY @@ -128,7 +122,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = ConnectionClass(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = ConnectionClass( + 
pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -139,8 +135,9 @@ def on_open_connection(connection): @parametrized_connection -@pytest.mark.parametrize("callback_as_partial", [True, False]) -def test_select_connection_basic_get_outside_txn(producer, ConnectionClass, callback_as_partial): +@pytest.mark.parametrize('callback_as_partial', [True, False]) +def test_select_connection_basic_get_outside_txn(producer, ConnectionClass, + callback_as_partial): metrics_list = [] @capture_transaction_metrics(metrics_list) @@ -163,8 +160,8 @@ def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) connection = ConnectionClass( - pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection - ) + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -181,24 +178,25 @@ def on_open_connection(connection): _test_select_conn_basic_get_inside_txn_no_callback_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] @pytest.mark.skipif( - condition=pika_version_info[0] > 0, reason="pika 1.0 removed the ability to use basic_get with callback=None" -) + condition=pika_version_info[0] > 0, + reason='pika 1.0 removed the ability to use basic_get with callback=None') @parametrized_connection @validate_transaction_metrics( - ("test_pika_async_connection_consume:" "test_async_connection_basic_get_inside_txn_no_callback"), + ('test_pika_async_connection_consume:' + 'test_async_connection_basic_get_inside_txn_no_callback'), scoped_metrics=_test_select_conn_basic_get_inside_txn_no_callback_metrics, 
rollup_metrics=_test_select_conn_basic_get_inside_txn_no_callback_metrics, - background_task=True, -) + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() -def test_async_connection_basic_get_inside_txn_no_callback(producer, ConnectionClass): +def test_async_connection_basic_get_inside_txn_no_callback(producer, + ConnectionClass): def on_open_channel(channel): channel.basic_get(callback=None, queue=QUEUE) channel.close() @@ -208,7 +206,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = ConnectionClass(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = ConnectionClass( + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -219,26 +219,27 @@ def on_open_connection(connection): _test_async_connection_basic_get_empty_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] @parametrized_connection -@pytest.mark.parametrize("callback_as_partial", [True, False]) +@pytest.mark.parametrize('callback_as_partial', [True, False]) @validate_transaction_metrics( - ("test_pika_async_connection_consume:" "test_async_connection_basic_get_empty"), - scoped_metrics=_test_async_connection_basic_get_empty_metrics, - rollup_metrics=_test_async_connection_basic_get_empty_metrics, - background_task=True, -) + ('test_pika_async_connection_consume:' + 'test_async_connection_basic_get_empty'), + scoped_metrics=_test_async_connection_basic_get_empty_metrics, + rollup_metrics=_test_async_connection_basic_get_empty_metrics, + background_task=True) 
@validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() -def test_async_connection_basic_get_empty(ConnectionClass, callback_as_partial): - QUEUE = "test_async_empty" +def test_async_connection_basic_get_empty(ConnectionClass, + callback_as_partial): + QUEUE = 'test_async_empty' def on_message(channel, method_frame, header_frame, body): - assert False, body.decode("UTF-8") + assert False, body.decode('UTF-8') if callback_as_partial: on_message = functools.partial(on_message) @@ -252,7 +253,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = ConnectionClass(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = ConnectionClass( + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -263,42 +266,33 @@ def on_open_connection(connection): _test_select_conn_basic_consume_in_txn_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] if six.PY3: _test_select_conn_basic_consume_in_txn_metrics.append( - ( - ( - "Function/test_pika_async_connection_consume:" - "test_async_connection_basic_consume_inside_txn." - ".on_message" - ), - 1, - ) - ) + (('Function/test_pika_async_connection_consume:' + 'test_async_connection_basic_consume_inside_txn.' 
+ '.on_message'), 1)) else: - _test_select_conn_basic_consume_in_txn_metrics.append(("Function/test_pika_async_connection_consume:on_message", 1)) + _test_select_conn_basic_consume_in_txn_metrics.append( + ('Function/test_pika_async_connection_consume:on_message', 1)) @parametrized_connection @validate_transaction_metrics( - ("test_pika_async_connection_consume:" "test_async_connection_basic_consume_inside_txn"), - scoped_metrics=_test_select_conn_basic_consume_in_txn_metrics, - rollup_metrics=_test_select_conn_basic_consume_in_txn_metrics, - background_task=True, -) -@validate_code_level_metrics( - "test_pika_async_connection_consume" - + (".test_async_connection_basic_consume_inside_txn." if six.PY3 else ""), - "on_message", -) + ('test_pika_async_connection_consume:' + 'test_async_connection_basic_consume_inside_txn'), + scoped_metrics=_test_select_conn_basic_consume_in_txn_metrics, + rollup_metrics=_test_select_conn_basic_consume_in_txn_metrics, + background_task=True) +@validate_code_level_metrics("test_pika_async_connection_consume.test_async_connection_basic_consume_inside_txn.", "on_message", py2_namespace="test_pika_async_connection_consume") @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() def test_async_connection_basic_consume_inside_txn(producer, ConnectionClass): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.basic_ack(method_frame.delivery_tag) channel.close() @@ -311,7 +305,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = ConnectionClass(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = ConnectionClass( + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ 
-322,67 +318,46 @@ def on_open_connection(connection): _test_select_conn_basic_consume_two_exchanges = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE_2, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE_2, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE_2, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE_2, None), ] if six.PY3: _test_select_conn_basic_consume_two_exchanges.append( - ( - ( - "Function/test_pika_async_connection_consume:" - "test_async_connection_basic_consume_two_exchanges." - ".on_message_1" - ), - 1, - ) - ) + (('Function/test_pika_async_connection_consume:' + 'test_async_connection_basic_consume_two_exchanges.' + '.on_message_1'), 1)) _test_select_conn_basic_consume_two_exchanges.append( - ( - ( - "Function/test_pika_async_connection_consume:" - "test_async_connection_basic_consume_two_exchanges." - ".on_message_2" - ), - 1, - ) - ) + (('Function/test_pika_async_connection_consume:' + 'test_async_connection_basic_consume_two_exchanges.' 
+ '.on_message_2'), 1)) else: _test_select_conn_basic_consume_two_exchanges.append( - ("Function/test_pika_async_connection_consume:on_message_1", 1) - ) + ('Function/test_pika_async_connection_consume:on_message_1', 1)) _test_select_conn_basic_consume_two_exchanges.append( - ("Function/test_pika_async_connection_consume:on_message_2", 1) - ) + ('Function/test_pika_async_connection_consume:on_message_2', 1)) @parametrized_connection @validate_transaction_metrics( - ("test_pika_async_connection_consume:" "test_async_connection_basic_consume_two_exchanges"), - scoped_metrics=_test_select_conn_basic_consume_two_exchanges, - rollup_metrics=_test_select_conn_basic_consume_two_exchanges, - background_task=True, -) -@validate_code_level_metrics( - "test_pika_async_connection_consume" - + (".test_async_connection_basic_consume_two_exchanges." if six.PY3 else ""), - "on_message_1", -) -@validate_code_level_metrics( - "test_pika_async_connection_consume" - + (".test_async_connection_basic_consume_two_exchanges." 
if six.PY3 else ""), - "on_message_2", -) + ('test_pika_async_connection_consume:' + 'test_async_connection_basic_consume_two_exchanges'), + scoped_metrics=_test_select_conn_basic_consume_two_exchanges, + rollup_metrics=_test_select_conn_basic_consume_two_exchanges, + background_task=True) +@validate_code_level_metrics("test_pika_async_connection_consume.test_async_connection_basic_consume_two_exchanges.", "on_message_1", py2_namespace="test_pika_async_connection_consume") +@validate_code_level_metrics("test_pika_async_connection_consume.test_async_connection_basic_consume_two_exchanges.", "on_message_2", py2_namespace="test_pika_async_connection_consume") @background_task() -def test_async_connection_basic_consume_two_exchanges(producer, producer_2, ConnectionClass): +def test_async_connection_basic_consume_two_exchanges(producer, producer_2, + ConnectionClass): global events_received events_received = 0 def on_message_1(channel, method_frame, header_frame, body): channel.basic_ack(method_frame.delivery_tag) - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY global events_received @@ -395,7 +370,7 @@ def on_message_1(channel, method_frame, header_frame, body): def on_message_2(channel, method_frame, header_frame, body): channel.basic_ack(method_frame.delivery_tag) - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY global events_received @@ -413,7 +388,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = ConnectionClass(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = ConnectionClass( + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -424,11 +401,12 @@ def on_open_connection(connection): # This should not create a transaction 
-@function_not_called("newrelic.core.stats_engine", "StatsEngine.record_transaction") -@override_application_settings({"debug.record_transaction_failure": True}) +@function_not_called('newrelic.core.stats_engine', + 'StatsEngine.record_transaction') +@override_application_settings({'debug.record_transaction_failure': True}) def test_tornado_connection_basic_consume_outside_transaction(producer): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.basic_ack(method_frame.delivery_tag) channel.close() @@ -441,7 +419,9 @@ def on_open_channel(channel): def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) - connection = TornadoConnection(pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection) + connection = TornadoConnection( + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() @@ -452,44 +432,31 @@ def on_open_connection(connection): if six.PY3: - _txn_name = ( - "test_pika_async_connection_consume:" - "test_select_connection_basic_consume_outside_transaction." - ".on_message" - ) + _txn_name = ('test_pika_async_connection_consume:' + 'test_select_connection_basic_consume_outside_transaction.' + '.on_message') _test_select_connection_consume_outside_txn_metrics = [ - ( - ( - "Function/test_pika_async_connection_consume:" - "test_select_connection_basic_consume_outside_transaction." - ".on_message" - ), - None, - ) - ] + (('Function/test_pika_async_connection_consume:' + 'test_select_connection_basic_consume_outside_transaction.' 
+ '.on_message'), None)] else: - _txn_name = "test_pika_async_connection_consume:on_message" + _txn_name = ( + 'test_pika_async_connection_consume:on_message') _test_select_connection_consume_outside_txn_metrics = [ - ("Function/test_pika_async_connection_consume:on_message", None) - ] + ('Function/test_pika_async_connection_consume:on_message', None)] # This should create a transaction @validate_transaction_metrics( - _txn_name, - scoped_metrics=_test_select_connection_consume_outside_txn_metrics, - rollup_metrics=_test_select_connection_consume_outside_txn_metrics, - background_task=True, - group="Message/RabbitMQ/Exchange/%s" % EXCHANGE, -) -@validate_code_level_metrics( - "test_pika_async_connection_consume" - + (".test_select_connection_basic_consume_outside_transaction." if six.PY3 else ""), - "on_message", -) + _txn_name, + scoped_metrics=_test_select_connection_consume_outside_txn_metrics, + rollup_metrics=_test_select_connection_consume_outside_txn_metrics, + background_task=True, + group='Message/RabbitMQ/Exchange/%s' % EXCHANGE) +@validate_code_level_metrics("test_pika_async_connection_consume.test_select_connection_basic_consume_outside_transaction.", "on_message", py2_namespace="test_pika_async_connection_consume") def test_select_connection_basic_consume_outside_transaction(producer): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.basic_ack(method_frame.delivery_tag) channel.close() @@ -503,8 +470,8 @@ def on_open_connection(connection): connection.channel(on_open_callback=on_open_channel) connection = pika.SelectConnection( - pika.ConnectionParameters(DB_SETTINGS["host"]), on_open_callback=on_open_connection - ) + pika.ConnectionParameters(DB_SETTINGS['host']), + on_open_callback=on_open_connection) try: connection.ioloop.start() diff --git a/tests/messagebroker_pika/test_pika_blocking_connection_consume.py 
b/tests/messagebroker_pika/test_pika_blocking_connection_consume.py index 7b41674a2..e097cfbe9 100644 --- a/tests/messagebroker_pika/test_pika_blocking_connection_consume.py +++ b/tests/messagebroker_pika/test_pika_blocking_connection_consume.py @@ -38,30 +38,32 @@ DB_SETTINGS = rabbitmq_settings()[0] _message_broker_tt_params = { - "queue_name": QUEUE, - "routing_key": QUEUE, - "correlation_id": CORRELATION_ID, - "reply_to": REPLY_TO, - "headers": HEADERS.copy(), + 'queue_name': QUEUE, + 'routing_key': QUEUE, + 'correlation_id': CORRELATION_ID, + 'reply_to': REPLY_TO, + 'headers': HEADERS.copy(), } _test_blocking_connection_basic_get_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, 1), - (("Function/pika.adapters.blocking_connection:" "_CallbackResult.set_value_once"), 1), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, 1), + (('Function/pika.adapters.blocking_connection:' + '_CallbackResult.set_value_once'), 1) ] @validate_transaction_metrics( - ("test_pika_blocking_connection_consume:" "test_blocking_connection_basic_get"), - scoped_metrics=_test_blocking_connection_basic_get_metrics, - rollup_metrics=_test_blocking_connection_basic_get_metrics, - background_task=True, -) + ('test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_get'), + scoped_metrics=_test_blocking_connection_basic_get_metrics, + rollup_metrics=_test_blocking_connection_basic_get_metrics, + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() def test_blocking_connection_basic_get(producer): - with pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() 
method_frame, _, _ = channel.basic_get(QUEUE) assert method_frame @@ -69,22 +71,23 @@ def test_blocking_connection_basic_get(producer): _test_blocking_connection_basic_get_empty_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] @validate_transaction_metrics( - ("test_pika_blocking_connection_consume:" "test_blocking_connection_basic_get_empty"), - scoped_metrics=_test_blocking_connection_basic_get_empty_metrics, - rollup_metrics=_test_blocking_connection_basic_get_empty_metrics, - background_task=True, -) + ('test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_get_empty'), + scoped_metrics=_test_blocking_connection_basic_get_empty_metrics, + rollup_metrics=_test_blocking_connection_basic_get_empty_metrics, + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() def test_blocking_connection_basic_get_empty(): - QUEUE = "test_blocking_empty-%s" % os.getpid() - with pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + QUEUE = 'test_blocking_empty-%s' % os.getpid() + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() channel.queue_declare(queue=QUEUE) @@ -100,7 +103,8 @@ def test_blocking_connection_basic_get_outside_transaction(producer): @capture_transaction_metrics(metrics_list) def test_basic_get(): - with pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() channel.queue_declare(queue=QUEUE) @@ -116,57 +120,46 @@ def test_basic_get(): 
_test_blocking_conn_basic_consume_no_txn_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] if six.PY3: - _txn_name = ( - "test_pika_blocking_connection_consume:" - "test_blocking_connection_basic_consume_outside_transaction." - ".on_message" - ) + _txn_name = ('test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_outside_transaction.' + '.on_message') _test_blocking_conn_basic_consume_no_txn_metrics.append( - ( - ( - "Function/test_pika_blocking_connection_consume:" - "test_blocking_connection_basic_consume_outside_transaction." - ".on_message" - ), - None, - ) - ) + (('Function/test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_outside_transaction.' + '.on_message'), None)) else: - _txn_name = "test_pika_blocking_connection_consume:" "on_message" + _txn_name = ('test_pika_blocking_connection_consume:' + 'on_message') _test_blocking_conn_basic_consume_no_txn_metrics.append( - ("Function/test_pika_blocking_connection_consume:on_message", None) - ) + ('Function/test_pika_blocking_connection_consume:on_message', None)) -@pytest.mark.parametrize("as_partial", [True, False]) -@validate_code_level_metrics( - "test_pika_blocking_connection_consume" - + (".test_blocking_connection_basic_consume_outside_transaction." 
if six.PY3 else ""), - "on_message", -) +@pytest.mark.parametrize('as_partial', [True, False]) +@validate_code_level_metrics("test_pika_blocking_connection_consume.test_blocking_connection_basic_consume_outside_transaction.", "on_message", py2_namespace="test_pika_blocking_connection_consume") @validate_transaction_metrics( - _txn_name, - scoped_metrics=_test_blocking_conn_basic_consume_no_txn_metrics, - rollup_metrics=_test_blocking_conn_basic_consume_no_txn_metrics, - background_task=True, - group="Message/RabbitMQ/Exchange/%s" % EXCHANGE, -) + _txn_name, + scoped_metrics=_test_blocking_conn_basic_consume_no_txn_metrics, + rollup_metrics=_test_blocking_conn_basic_consume_no_txn_metrics, + background_task=True, + group='Message/RabbitMQ/Exchange/%s' % EXCHANGE) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) -def test_blocking_connection_basic_consume_outside_transaction(producer, as_partial): +def test_blocking_connection_basic_consume_outside_transaction(producer, + as_partial): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.stop_consuming() if as_partial: on_message = functools.partial(on_message) - with pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() basic_consume(channel, QUEUE, on_message) @@ -178,51 +171,41 @@ def on_message(channel, method_frame, header_frame, body): _test_blocking_conn_basic_consume_in_txn_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), ] if six.PY3: 
_test_blocking_conn_basic_consume_in_txn_metrics.append( - ( - ( - "Function/test_pika_blocking_connection_consume:" - "test_blocking_connection_basic_consume_inside_txn." - ".on_message" - ), - 1, - ) - ) + (('Function/test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_inside_txn.' + '.on_message'), 1)) else: _test_blocking_conn_basic_consume_in_txn_metrics.append( - ("Function/test_pika_blocking_connection_consume:on_message", 1) - ) + ('Function/test_pika_blocking_connection_consume:on_message', 1)) -@pytest.mark.parametrize("as_partial", [True, False]) -@validate_code_level_metrics( - "test_pika_blocking_connection_consume" - + (".test_blocking_connection_basic_consume_inside_txn." if six.PY3 else ""), - "on_message", -) +@pytest.mark.parametrize('as_partial', [True, False]) +@validate_code_level_metrics("test_pika_blocking_connection_consume.test_blocking_connection_basic_consume_inside_txn.", "on_message", py2_namespace="test_pika_blocking_connection_consume") @validate_transaction_metrics( - ("test_pika_blocking_connection_consume:" "test_blocking_connection_basic_consume_inside_txn"), - scoped_metrics=_test_blocking_conn_basic_consume_in_txn_metrics, - rollup_metrics=_test_blocking_conn_basic_consume_in_txn_metrics, - background_task=True, -) + ('test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_inside_txn'), + scoped_metrics=_test_blocking_conn_basic_consume_in_txn_metrics, + rollup_metrics=_test_blocking_conn_basic_consume_in_txn_metrics, + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() def test_blocking_connection_basic_consume_inside_txn(producer, as_partial): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.stop_consuming() if as_partial: on_message = functools.partial(on_message) - with 
pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() basic_consume(channel, QUEUE, on_message) try: @@ -233,40 +216,33 @@ def on_message(channel, method_frame, header_frame, body): _test_blocking_conn_basic_consume_stopped_txn_metrics = [ - ("MessageBroker/RabbitMQ/Exchange/Produce/Named/%s" % EXCHANGE, None), - ("MessageBroker/RabbitMQ/Exchange/Consume/Named/%s" % EXCHANGE, None), - ("OtherTransaction/Message/RabbitMQ/Exchange/Named/%s" % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Produce/Named/%s' % EXCHANGE, None), + ('MessageBroker/RabbitMQ/Exchange/Consume/Named/%s' % EXCHANGE, None), + ('OtherTransaction/Message/RabbitMQ/Exchange/Named/%s' % EXCHANGE, None), ] if six.PY3: _test_blocking_conn_basic_consume_stopped_txn_metrics.append( - ( - ( - "Function/test_pika_blocking_connection_consume:" - "test_blocking_connection_basic_consume_stopped_txn." - ".on_message" - ), - None, - ) - ) + (('Function/test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_stopped_txn.' 
+ '.on_message'), None)) else: _test_blocking_conn_basic_consume_stopped_txn_metrics.append( - ("Function/test_pika_blocking_connection_consume:on_message", None) - ) + ('Function/test_pika_blocking_connection_consume:on_message', None)) -@pytest.mark.parametrize("as_partial", [True, False]) +@pytest.mark.parametrize('as_partial', [True, False]) @validate_transaction_metrics( - ("test_pika_blocking_connection_consume:" "test_blocking_connection_basic_consume_stopped_txn"), - scoped_metrics=_test_blocking_conn_basic_consume_stopped_txn_metrics, - rollup_metrics=_test_blocking_conn_basic_consume_stopped_txn_metrics, - background_task=True, -) + ('test_pika_blocking_connection_consume:' + 'test_blocking_connection_basic_consume_stopped_txn'), + scoped_metrics=_test_blocking_conn_basic_consume_stopped_txn_metrics, + rollup_metrics=_test_blocking_conn_basic_consume_stopped_txn_metrics, + background_task=True) @validate_tt_collector_json(message_broker_params=_message_broker_tt_params) @background_task() def test_blocking_connection_basic_consume_stopped_txn(producer, as_partial): def on_message(channel, method_frame, header_frame, body): - assert hasattr(method_frame, "_nr_start_time") + assert hasattr(method_frame, '_nr_start_time') assert body == BODY channel.stop_consuming() @@ -275,7 +251,8 @@ def on_message(channel, method_frame, header_frame, body): if as_partial: on_message = functools.partial(on_message) - with pika.BlockingConnection(pika.ConnectionParameters(DB_SETTINGS["host"])) as connection: + with pika.BlockingConnection( + pika.ConnectionParameters(DB_SETTINGS['host'])) as connection: channel = connection.channel() basic_consume(channel, QUEUE, on_message) try: diff --git a/tests/testing_support/validators/validate_code_level_metrics.py b/tests/testing_support/validators/validate_code_level_metrics.py index d5c4b5648..c3a880b35 100644 --- a/tests/testing_support/validators/validate_code_level_metrics.py +++ 
b/tests/testing_support/validators/validate_code_level_metrics.py
@@ -12,13 +12,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+from newrelic.packages import six
 from testing_support.validators.validate_span_events import validate_span_events
 from testing_support.fixtures import dt_enabled
 from newrelic.common.object_wrapper import function_wrapper
 
-def validate_code_level_metrics(namespace, function, builtin=False, count=1, index=-1):
+
+def validate_code_level_metrics(namespace, function, py2_namespace=None, builtin=False, count=1, index=-1):
     """Verify that code level metrics are generated for a callable."""
+    if six.PY2 and py2_namespace is not None:
+        namespace = py2_namespace
+
     if builtin:
         validator = validate_span_events(
             exact_agents={"code.function": function, "code.namespace": namespace, "code.filepath": "<builtin>"},
@@ -38,5 +43,4 @@ def validate_code_level_metrics(namespace, function, builtin=False, count=1, ind
     def wrapper(wrapped, instance, args, kwargs):
         validator(dt_enabled(wrapped))(*args, **kwargs)
-    return wrapper
-
+    return wrapper
\ No newline at end of file
diff --git a/tox.ini b/tox.ini
index 466db2cc5..2319c336e 100644
--- a/tox.ini
+++ b/tox.ini
@@ -131,14 +131,11 @@ envlist =
     python-framework_flask-{pypy27,py27}-flask0012,
     python-framework_flask-{pypy27,py27,py37,py38,py39,py310,py311,pypy38}-flask0101,
     ; temporarily disabling flaskmaster tests
-    python-framework_flask-{py37,py38,py39,py310,py311,pypy38}-flask{latest},
+    python-framework_flask-{py37,py38,py39,py310,py311,pypy38}-flasklatest,
     python-framework_graphene-{py37,py38,py39,py310,py311}-graphenelatest,
-    python-framework_graphene-{py27,py37,py38,py39,pypy27,pypy38}-graphene{0200,0201},
-    python-framework_graphene-{py310,py311}-graphene0201,
-    python-framework_graphql-{py27,py37,py38,py39,py310,py311,pypy27,pypy38}-graphql02,
-    python-framework_graphql-{py37,py38,py39,py310,py311,pypy38}-graphql03,
+    python-framework_graphql-{py37,py38,py39,py310,py311,pypy38}-graphqllatest,
     ; temporarily disabling graphqlmaster tests
-    python-framework_graphql-py37-graphql{0202,0203,0300,0301,0302},
+    python-framework_graphql-py37-graphql{0300,0301,0302},
     grpc-framework_grpc-py27-grpc0125,
     grpc-framework_grpc-{py37,py38,py39,py310,py311}-grpclatest,
     python-framework_pyramid-{pypy27,py27,py38}-Pyramid0104,
@@ -311,12 +308,7 @@ deps =
     framework_flask-flaskmaster: https://github.com/pallets/werkzeug/archive/main.zip
     framework_flask-flaskmaster: https://github.com/pallets/flask/archive/main.zip#egg=flask[async]
     framework_graphene-graphenelatest: graphene
-    framework_graphene-graphene0200: graphene<2.1
-    framework_graphene-graphene0201: graphene<2.2
-    framework_graphql-graphql02: graphql-core<3
-    framework_graphql-graphql03: graphql-core<4
-    framework_graphql-graphql0202: graphql-core<2.3
-    framework_graphql-graphql0203: graphql-core<2.4
+    framework_graphql-graphqllatest: graphql-core<4
     framework_graphql-graphql0300: graphql-core<3.1
     framework_graphql-graphql0301: graphql-core<3.2
     framework_graphql-graphql0302: graphql-core<3.3
@@ -348,7 +340,6 @@ deps =
     framework_sanic-saniclatest: sanic
     framework_sanic-sanic{1812,190301,1906}: aiohttp
     framework_sanic-sanic{1812,190301,1906,1912,200904,210300,2109,2112,2203,2290}: websockets<11
-    framework_starlette: graphene<3
     framework_starlette-starlette0014: starlette<0.15
     framework_starlette-starlette0015: starlette<0.16
     framework_starlette-starlette0019: starlette<0.20
@@ -492,6 +483,7 @@ changedir =
     template_jinja2: tests/template_jinja2
     template_mako: tests/template_mako
 
+
 [pytest]
 usefixtures =
     collector_available_fixture
@@ -500,10 +492,10 @@ usefixtures =
 
 [coverage:run]
 branch = True
 disable_warnings = couldnt-parse
-source = newrelic
+source = newrelic 
 
 [coverage:paths]
-source =
+source = 
     newrelic/
     .tox/**/site-packages/newrelic/
     /__w/**/site-packages/newrelic/
@@ -512,4 +504,4 @@ source =
 directory = ${TOX_ENV_DIR-.}/htmlcov
 
 [coverage:xml]
-output = ${TOX_ENV_DIR-.}/coverage.xml
+output = ${TOX_ENV_DIR-.}/coverage.xml
\ No newline at end of file
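The recurring `py2_namespace` argument threaded through the test decorators in this patch exists because Python 2 has no `__qualname__`: a nested callback such as `on_message` reports only its module as `code.namespace`, while Python 3 reports the enclosing function as well. A minimal sketch of that fallback, assuming a Python 3 interpreter; `resolve_expected_namespace` is a hypothetical helper name for illustration — the real `validate_code_level_metrics` applies the same check inline via the vendored `newrelic.packages.six`:

```python
import sys


def resolve_expected_namespace(namespace, py2_namespace=None):
    """Pick the code.namespace value a span should carry on this Python.

    Hypothetical helper mirroring the fallback added to
    validate_code_level_metrics in this patch: on Python 2, nested
    callbacks lack a qualified name, so the flatter module-level
    namespace is expected instead.
    """
    if sys.version_info[0] == 2 and py2_namespace is not None:
        return py2_namespace
    return namespace


# On Python 3 the qualified namespace is kept; on Python 2 the bare
# module name passed as py2_namespace would be used instead.
expected = resolve_expected_namespace(
    "test_pika_blocking_connection_consume.test_blocking_connection_basic_consume_inside_txn",
    py2_namespace="test_pika_blocking_connection_consume",
)
```

This is why the decorators in the diff pass both the dotted Python 3 namespace and a separate `py2_namespace`: a single decorator call can then validate the span attributes correctly on either interpreter.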