
The Power of Partnership

Qubole: a simple, secure, and open data lake platform that accelerates machine learning, streaming, and ad hoc analytics on data lakes.

Ascend.io: a DataAware engineering platform for autonomous, self-optimizing data pipelines that unify your lakes, warehouses, streams, and more.

% Reduction in Infrastructure Cost

% Less Code

% Faster Build Velocity

Why Qubole + Ascend.io?

“The partnership between Ascend.io and Qubole will serve as a game changer for customers as they look to unlock their next level of data maturity. Customers will receive an end-to-end solution to simplify data engineering and reduce the time it takes to extract valuable business insights from real-time, large-scale analytics. The joint solution’s power and simplicity is impressive.

“Not only does it put time back into the hands of data teams who are under constant pressure to do more with data, but it presents a combination of extensive customization and configurability with easy-to-use, visual, low-code interfaces. It truly offers the best of both worlds regardless of an organization’s data maturity.”

Mike Leone

Senior Analyst, Enterprise Strategy Group

Benefits

SELF-SERVICE DATA PIPELINES:

Ascend for Qubole raises the productivity of data engineers, data scientists, and data analysts with self-service data pipelines, replacing the complexity of data engineering with low-code, declarative configurations.

REDUCED COSTS:

Ascend continually optimizes Spark usage by only processing incremental data and code. In concert, Qubole’s Heterogeneous Cluster Lifecycle and Intelligent Spot Management provide the most cost-effective combination of on-demand and spot VMs (virtual machines) to further minimize the cost of data processing.

INCREASED PERFORMANCE:

Qubole’s optimized open-source frameworks, such as Presto, Spark, and Hive, allow users to process and query data with industry-leading response times, all without changing their normal workflow or manually tuning their Ascend-based pipelines.

NATIVE CONNECTORS:

Qubole’s native connectors allow users to query unstructured or semi-structured data on any data lake regardless of the storage file format – CSV, JSON, Avro, or Parquet. Meanwhile, Ascend’s extensive connector framework can ingest data from industry-leading databases, warehouses, APIs, and more.

BUILT-IN GOVERNANCE:

Qubole’s advanced financial governance capabilities provide immediate visibility into platform usage and budget allocation, chargeback, monitoring, and control of cloud spend. Meanwhile, Ascend’s DataAware intelligence governs data lineage for all pipelines with full history linking data and code, and allocates costs to pipeline operations so data teams know exactly where to tune their code.

Wondering how Qubole’s Open Data Lake and Ascend’s Unified Data Engineering platforms can benefit your data team? Schedule a demo with an Ascend data engineer to learn more.

Learn More

New Feature: Dataflow JDBC/ODBC Connector

Today we’re excited to announce the general availability of our JDBC/ODBC Dataflow connector. This feature leverages the same intelligent persistence layer that backs Queryable Dataflows and Structured Data Lakes, and joins it (pun intended) with the SparkSQL Thrift JDBC/ODBC Server, so you can directly access and query your Dataflows from your favorite environment, whether that’s a BI tool like Looker or a SQL workbench.
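For illustration, here is a minimal sketch of what querying a Dataflow over a HiveServer2-compatible Thrift endpoint could look like from Python. The host, port, credentials, and table name are hypothetical placeholders; the actual connection details come from your Ascend environment.

```python
# Minimal sketch: querying a Dataflow through a SparkSQL Thrift
# (HiveServer2-compatible) JDBC/ODBC endpoint from Python.
# Host, credentials, and table name are hypothetical placeholders.
from pyhive import hive

conn = hive.connect(
    host="dataflows.example.com",  # hypothetical endpoint
    port=10000,                    # default Thrift server port
    username="analyst",
)
cur = conn.cursor()
cur.execute("SELECT order_id, total FROM orders_dataflow LIMIT 10")
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()
```

The same endpoint can be registered as a generic JDBC/ODBC connection in BI tools such as Looker, which is the kind of workflow the connector is meant to support.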

How-to: Redshift Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Redshift by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Scala & Java Transforms

Today we’re excited to formally announce support for Scala & Java transforms. Not only does this expand our support to two of the most popular languages among data engineers, but it also marries this capability with the advanced orchestration and optimizations provided by Ascend.

How-to: Snowflake Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Snowflake by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Credentials Vault

With a strong emphasis on data security and compliance, Ascend employs a flexible, role-based permission model, ensuring only authorized users have access to sensitive secrets and data. As our customers build and evolve their Dataflows, several patterns have emerged:...

New Feature: Recursive Transforms

At Ascend we see all sorts of different pipelines. One pattern we see quite often is that of change data capture (“CDC”) from databases and data warehouses, followed by data set reconstitution. Doing this data set reconstitution usually requires a full reduction — a transform in which you iterate over all records to find those representative of the latest state. This can become inefficient over time, however, as greater and greater percentages of any given data set become “stale”.
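To make the pattern concrete, here is a minimal PySpark sketch of the full-reduction step described above (not the recursive transform itself): given a CDC feed of row-level changes, keep only the most recent version of each record. The paths and column names (order_id, updated_at) are hypothetical.

```python
# Illustrative sketch of a full reduction over a CDC feed: keep only
# the latest version of each record. Paths and columns are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-latest-state").getOrCreate()

changes = spark.read.parquet("s3://my-bucket/cdc/orders/")  # hypothetical CDC feed

latest = (
    changes
    .withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
        ),
    )
    .filter(F.col("rn") == 1)  # most recent change per key
    .drop("rn")
)

latest.write.mode("overwrite").parquet("s3://my-bucket/reconstituted/orders/")
```

Because this reduction re-reads the entire change history on every run, its cost grows as the CDC feed accumulates stale records, which is exactly the inefficiency recursive transforms are designed to avoid.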

Data Lake ETL Tutorial: Using Ascend No- and Low-Code Connectors to Load Data

Now that we’ve extracted some data from S3 and cleaned it up using a SQL transform, we can start on the “L” of ETL and write our data back out to our data lake. Follow the guide below to learn how. 1. Under the build option we can see the variety of write connectors...

Data Lake ETL Tutorial: Transforming Data

Now that you’ve learned to Extract data with Ascend, this tutorial will give you an overview of the “T” in ETL, namely, how to start transforming your data before you load it into the final destination. We will use SQL in this example, but Ascend also supports...
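For a rough sense of what the “T” step does, here is a generic sketch of a SQL cleanup transform, expressed in plain Spark SQL with hypothetical table and column names rather than through the Ascend interface the tutorial uses.

```python
# Generic sketch of a SQL cleanup transform; table and column names
# are hypothetical, and the query stands in for the SQL you would
# write in the tutorial's transform step.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-transform-sketch").getOrCreate()

raw = spark.read.json("s3://my-bucket/raw/events/")  # hypothetical source
raw.createOrReplaceTempView("raw_events")

cleaned = spark.sql("""
    SELECT
        CAST(event_id AS BIGINT)  AS event_id,
        LOWER(TRIM(event_type))   AS event_type,
        TO_TIMESTAMP(event_time)  AS event_time
    FROM raw_events
    WHERE event_id IS NOT NULL
""")

cleaned.show(5)
```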

Data Lake ETL Tutorial: Using Ascend No- and Low-Code Connectors to Extract Data

Welcome to Ascend! This tutorial, the first in a three-part series on Data Lake ETL (Extract, Transform, Load), will give you a brief overview of how quickly and easily you can accomplish the “E” of ETL to ingest data from any data source with Ascend. Ascend...

New: support for 75 more SQL functions

SQL is one of the most well-established and powerful languages for working with data. It first emerged in the 1970s as a domain-specific language (DSL) for managing data in relational database management systems (RDBMS), and has become one of the longest...
