Download Trial Version
- 1. Convert an existing Apache Spark 4.0.1 release installed on your machine to TabbyDB using the steps below.
- 2. From the /jars directory of your existing Spark 4.0.1 installation, remove the following jars and move them to a separate backup location:
- spark-catalyst_2.13-4.0.1.jar
- spark-connect_2.13-4.0.1.jar
- spark-hive_2.13-4.0.1.jar
- spark-sql-api_2.13-4.0.1.jar
- spark-common-utils_2.13-4.0.1.jar
- spark-core_2.13-4.0.1.jar
- spark-repl_2.13-4.0.1.jar
- spark-sql_2.13-4.0.1.jar
- 3. Unzip the downloaded tabbydb-jars.tar.gz and add the 8 jars it contains to the /jars location (a scripted sketch of steps 2 and 3 follows this list).
- The downloaded TabbyDB jars are compatible with Spark 4.0.1, and in most cases the above steps are sufficient. If you are using Spark Connect, you also need to download TabbyDB's Spark Connect client.
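The jar swap in steps 2 and 3 can be scripted. Below is a minimal sketch in Python; it assumes SPARK_HOME points at your Spark 4.0.1 installation, that tabbydb-jars.tar.gz sits in the current directory, and that the archive contains the jars at its top level. Adjust the paths and backup location to match your setup.

```python
import os
import shutil
import tarfile
from pathlib import Path

# Assumed locations -- adjust to your environment.
spark_jars = Path(os.environ["SPARK_HOME"]) / "jars"   # existing Spark 4.0.1 install
backup_dir = Path.home() / "spark-jars-backup"         # separate backup location
tabby_tarball = Path("tabbydb-jars.tar.gz")            # downloaded TabbyDB archive

# The 8 Spark jars to replace (step 2).
jars_to_replace = [
    "spark-catalyst_2.13-4.0.1.jar",
    "spark-connect_2.13-4.0.1.jar",
    "spark-hive_2.13-4.0.1.jar",
    "spark-sql-api_2.13-4.0.1.jar",
    "spark-common-utils_2.13-4.0.1.jar",
    "spark-core_2.13-4.0.1.jar",
    "spark-repl_2.13-4.0.1.jar",
    "spark-sql_2.13-4.0.1.jar",
]

# Step 2: move the original jars to the backup location.
backup_dir.mkdir(parents=True, exist_ok=True)
for jar in jars_to_replace:
    src = spark_jars / jar
    if src.exists():
        shutil.move(str(src), str(backup_dir / jar))

# Step 3: unpack the TabbyDB jars into the jars directory.
# If the archive nests the jars in a subdirectory, copy them into spark_jars afterwards.
with tarfile.open(tabby_tarball, "r:gz") as tar:
    tar.extractall(path=spark_jars)
```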
Remove any configuration property, if present, that excludes any Spark optimizer rule (for example, spark.sql.optimizer.excludedRules). Then run your complex use cases and check for performance gains!
If you have any questions or issues, please contact us at asif.shahid@kwikquery.com
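One quick way to confirm that no optimizer rules are excluded is to check the property at the start of a session. This is a minimal PySpark sketch assuming the built-in spark.sql.optimizer.excludedRules property is the one in question:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tabbydb-check").getOrCreate()

# spark.sql.optimizer.excludedRules holds a comma-separated list of optimizer
# rules that Spark should skip; TabbyDB works best with none excluded.
excluded = spark.conf.get("spark.sql.optimizer.excludedRules", None)
if excluded:
    print(f"Excluded rules found: {excluded} -- clearing them for this session.")
    spark.conf.unset("spark.sql.optimizer.excludedRules")
else:
    print("No optimizer rules are excluded.")
```

If the property is set in spark-defaults.conf or passed via --conf, remove it there as well so it is not reapplied on the next session.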
Frequently asked questions
Find clear answers to common questions about TabbyDB’s capabilities, performance, and integration to help you make the most of our advanced query engine.
What is TabbyDB?
TabbyDB is a specialized fork of Apache Spark designed to optimize complex queries. It significantly reduces compilation time and memory usage for queries with nested joins, complex case statements, and large query trees through intelligent compile-time and runtime enhancements.
Can TabbyDB handle very large and complex query structures?
Yes, TabbyDB is built to manage vast and intricate query structures. Its optimizations improve both speed and resource consumption, enabling faster execution of queries that would typically take hours to compile and run.
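To get a feel for the kind of workload this refers to, the hedged sketch below (synthetic data, made-up names) builds a query with many chained CASE-style expressions and a self-join, then times how long Spark takes to produce the optimized plan. Running it on stock Spark 4.0.1 and again on the TabbyDB-converted install gives a rough compile-time comparison.

```python
import time
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("compile-time-probe").getOrCreate()

# Synthetic base table -- stands in for a real fact table.
df = spark.range(1_000).withColumn("grp", F.col("id") % 10)

# Pile up chained CASE WHEN expressions and a self-join to grow the query tree.
for i in range(20):
    df = df.withColumn(f"c{i}", F.when(F.col("id") % (i + 2) == 0, i).otherwise(-i))
probe = df.alias("l").join(df.alias("r"), F.col("l.grp") == F.col("r.grp"))

# explain() forces analysis and optimization, so it approximates compile time.
start = time.time()
probe.explain(mode="formatted")
print(f"Plan compilation took {time.time() - start:.1f}s")
```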
Does TabbyDB work with existing Spark DataFrame code?
TabbyDB maintains full compatibility with Apache Spark’s DataFrame APIs, allowing corporate users to continue using their programmatic query methods while benefiting from enhanced performance without changing their existing codebase.
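Because the DataFrame API is unchanged, existing applications run as-is after the jar swap. The fragment below is only an illustrative sketch with hypothetical table and column names, showing that nothing in application code needs to change:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("existing-app").getOrCreate()

# Hypothetical tables -- the point is that this is plain Spark DataFrame API
# and runs unchanged whether the jars underneath are stock Spark or TabbyDB.
orders = spark.table("sales.orders")
customers = spark.table("sales.customers")

report = (
    orders.join(customers, "customer_id")
          .withColumn(
              "tier",
              F.when(F.col("total") > 10_000, "gold")
               .when(F.col("total") > 1_000, "silver")
               .otherwise("bronze"),
          )
          .groupBy("tier")
          .agg(F.sum("total").alias("revenue"))
)
report.show()
```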