Using Ascend

How-tos, Tutorials, and Product Announcements.

New Release: Python SDK

With teams using Ascend.io to automate an ever-growing number of their data pipelines, programmatic creation of Ascend dataflows has become essential. For users familiar with Python and eager for programmatic access to Ascend dataflows, we are excited to announce the release of our new Python SDK! This SDK sits on top of Ascend’s public API, and is dynamically generated based upon the Protocol Buffer and gRPC definitions of each and every component found within the Ascend platform.
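As a rough illustration of what programmatic access can look like, here is a hypothetical sketch; the import path, client class, and method names below are illustrative placeholders rather than the SDK's documented API.

```python
# Hypothetical sketch of programmatic dataflow access.
# The client class, constructor arguments, and method names are assumptions
# made for illustration, not the SDK's actual interface.
from ascend.client import Client  # assumed import path

# Authenticate against an Ascend deployment (hostname and key names assumed).
client = Client(
    hostname="trial.ascend.io",
    access_key="YOUR_ACCESS_KEY",
    secret_key="YOUR_SECRET_KEY",
)

# Walk data services, dataflows, and their components.
for service in client.list_data_services():       # assumed method
    print(service.name)
    for flow in service.list_dataflows():          # assumed method
        print("  ", flow.name)
```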

New Feature: Dataflow JDBC/ODBC Connector

Today we’re excited to announce the general availability of our JDBC/ODBC Dataflow connector. This feature leverages the same intelligent persistence layer that backs Queryable Dataflows and Structured Data Lakes, and joins it (pun intended) with the SparkSQL Thrift JDBC/ODBC Server, so you can directly access and query your Dataflows from your environment of choice, whether that is a BI tool like Looker or a SQL workbench.
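Because the endpoint is served by the SparkSQL Thrift server, any client that speaks the HiveServer2 protocol should be able to connect. Here is a minimal sketch using PyHive; the hostname, credentials, and table name are placeholder assumptions, not a documented endpoint.

```python
# Minimal sketch of querying a Dataflow over the JDBC/ODBC endpoint.
# Host, username, and table name are placeholders; the Spark Thrift server
# speaks the HiveServer2 protocol, so PyHive can act as the client.
from pyhive import hive  # pip install "pyhive[hive]"

conn = hive.connect(
    host="sql.example.ascend.io",  # assumed endpoint hostname
    port=10000,                    # default Spark Thrift server port
    username="analyst",
)
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM my_dataflow.my_component")  # placeholder table
print(cursor.fetchone())
```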

How-to: Redshift Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Redshift by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Scala & Java Transforms

Today we’re excited to formally announce support for Scala & Java transforms. Not only does this expand our support to two of the most popular languages among data engineers, but it also marries this capability with the advanced orchestration and optimizations provided by Ascend.

How-to: Snowflake Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Snowflake by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Credentials Vault

With a strong emphasis on data security and compliance, Ascend employs a flexible, role-based permission model, ensuring only authorized users have access to sensitive secrets and data. As our customers build and evolve their Dataflows, several patterns have emerged:...

New Feature: Recursive Transforms

At Ascend we see all sorts of different pipelines. One pattern we see quite often is that of change data capture (“CDC”) from databases and data warehouses, followed by data set reconstitution. Doing this data set reconstitution usually requires a full reduction — a transform in which you iterate over all records to find those representative of the latest state. This can become inefficient over time, however, as greater and greater percentages of any given data set become “stale”.
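To make that pattern concrete, here is a minimal PySpark sketch of such a full reduction (not Ascend's internal implementation); the key and timestamp column names are illustrative assumptions.

```python
# Minimal sketch of the full reduction described above: for each primary key,
# keep only the most recent CDC record. Column names (id, updated_at) and the
# input path are illustrative assumptions.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-reconstitution").getOrCreate()
cdc = spark.read.parquet("s3://example-bucket/cdc_events/")  # placeholder path

latest = (
    cdc.withColumn(
        "rn",
        F.row_number().over(
            Window.partitionBy("id").orderBy(F.col("updated_at").desc())
        ),
    )
    .filter(F.col("rn") == 1)
    .drop("rn")
)
```

Re-running a reduction like this over the entire history on every update is exactly the cost that grows as more of the data set goes stale.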

Data Lake ETL Tutorial: Transforming Data

Now that you’ve learned to Extract data with Ascend, this tutorial will give you an overview of the “T” in ETL, namely, how to start transforming your data before you load it into the final destination. We will use SQL in this example, but Ascend also supports...
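As a rough preview of the kind of transform the tutorial walks through, here is a small sketch that runs a SQL aggregation with Spark; the table, columns, and input path are illustrative assumptions, not the tutorial's actual data set.

```python
# Sketch of a simple SQL transform: clean and aggregate extracted records
# before loading them downstream. All names and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-transform").getOrCreate()
spark.read.parquet("s3://example-bucket/raw_orders/").createOrReplaceTempView("raw_orders")

daily_totals = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM raw_orders
    WHERE amount IS NOT NULL
    GROUP BY order_date
""")
```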

New: support for 75 more SQL functions

SQL is one of the most well-established and powerful languages for working with data. It first emerged in the 1970s as a domain-specific language (DSL) for managing data in relational database management systems (RDBMS), and has become one of the longest...

How We’re Helping Drive Efficiency & Reduce Spend

“A penny saved is a penny invested” — invested in the longevity of our companies, of our customers, and of our employees. Today we’re rolling out a number of initiatives to help our customers do more while spending less.
