Apache DataFusion Comet

Apache DataFusion Comet is an Apache Spark plugin that uses Apache DataFusion as its native runtime, with the goal of improving query efficiency and reducing query runtime.

Comet runs Spark SQL queries using the native DataFusion runtime, which is typically faster and more resource-efficient than JVM-based runtimes.
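
To illustrate what this looks like in practice, below is a minimal sketch of enabling Comet on a SparkSession. The configuration keys and the plugin class name used here are assumptions for illustration only; see the DataFusion Comet User Guide for the exact settings for your Comet and Spark versions.

```scala
// A minimal sketch of enabling Comet on a SparkSession.
// The configuration keys and plugin class below are hypothetical examples;
// consult the DataFusion Comet User Guide for the exact settings.
import org.apache.spark.sql.SparkSession

object CometExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("comet-example")
      // Hypothetical keys: register the Comet plugin and turn it on.
      .config("spark.plugins", "org.apache.comet.CometPlugin")
      .config("spark.comet.enabled", "true")
      .config("spark.comet.exec.enabled", "true")
      .getOrCreate()

    // Queries submitted through Spark SQL run on the native DataFusion
    // runtime where supported; unsupported operators fall back to Spark.
    spark.read.parquet("/path/to/data").createOrReplaceTempView("t")
    spark.sql("SELECT COUNT(*) FROM t").show()

    spark.stop()
  }
}
```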

Comet aims to support:

  • a native Parquet implementation, including both reader and writer
  • full implementation of Spark operators, including Filter, Project, Aggregation, Join, Exchange, etc.
  • full implementation of Spark built-in expressions
  • a UDF framework for users to migrate their existing UDFs to native execution

Architecture

The following diagram illustrates the architecture of Comet:

Current Status

The project is currently integrated into Apache Spark 3.2, 3.3, and 3.4.

Feature Parity with Apache Spark

The project strives to keep feature parity with Apache Spark; that is, users should expect the same behavior (with respect to features, configurations, query results, etc.) with Comet turned on or off in their Spark jobs. In addition, the Comet extension should automatically detect unsupported features and fall back to the Spark engine.

To achieve this, in addition to Comet's own unit tests, we also re-use the Spark SQL tests and make sure they all pass with the Comet extension enabled.
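
One way to see the fallback behavior for a given query is to inspect its physical plan, as in the hedged sketch below (continuing the earlier SparkSession example). The Comet operator names mentioned in the comments are illustrative assumptions and may differ by version.

```scala
// A hedged sketch of checking whether Comet accelerated a query.
val df = spark.sql("SELECT a, b FROM t WHERE a > 10")

// If Comet supports every operator in the plan, the printed physical plan
// contains Comet-specific operators (illustrative names, e.g. CometScan,
// CometFilter); otherwise those stages show the regular Spark operators,
// meaning Comet fell back to the Spark engine for them.
df.explain()
```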

Supported Platforms

Linux, Apple macOS (Intel and M1)

Requirements

  • Apache Spark 3.2, 3.3, or 3.4
  • JDK 8, 11, or 17 (JDK 11 recommended because Spark 3.2 doesn't support JDK 17)
  • glibc 2.17 (CentOS 7) or later

Getting started

See the DataFusion Comet User Guide for installation instructions.

Contributing

See the DataFusion Comet Contribution Guide for information on how to get started contributing to the project.