Ascend.io + Looker

Ascend is a game changer for data analysts and Looker users who need to access and query many more types of data sources, including those not available in relational formats. Check out this video to learn the top reasons why Looker users love Ascend.

Wondering how Looker and Ascend can specifically benefit your data team? Schedule a demo with an Ascend data engineer to learn more.

Benefits


DATA LINEAGE

Easily answer the question “Where did ‘net sales’ come from?” through visualization of lineage and of every calculation and operation applied to the data.

PIPELINE VISUALIZATIONS

Simplify complex queries into easy-to-understand sequential operations through a modern DAG-based GUI that provides useful metadata and state information at a glance!


ADAPTIVE INGESTION

Keep data flowing by intelligently cascading schema changes to the data warehouse. Alert the data team with configurable levels of notifications when breaking changes are detected.

ACCESS TO MODERN TECHNOLOGY INCLUDING DATA SCIENCE TOOLS

No matter your level of technical skill, we make the most modern capabilities instantly accessible to you! Play with Python, execute Spark jobs, and begin your DS/ML journey with Notebooks or MLlib, no training required.


PROCESS MANY MORE DATA SOURCES

Native connections to common data sources, plus the ability to write small Python snippets to bring in anything else. Apply analytics to JSON, Avro, XML, and custom byte-encoded data in parallel. Even extract information from audio, documents, and images by easily including ML/AI engines in the processing sequence.
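
As an illustration of the kind of small Python snippet this refers to, here is a minimal sketch that normalizes records from mixed formats; the payloads and field names are invented for the example:

```python
import json
import xml.etree.ElementTree as ET

# Minimal sketch only: normalize byte payloads from mixed formats
# into plain dicts. Payloads and field names are invented.

def parse_json_bytes(raw: bytes) -> dict:
    """Decode a JSON payload into a dict."""
    return json.loads(raw.decode("utf-8"))

def parse_xml_bytes(raw: bytes) -> dict:
    """Flatten a simple, one-level XML document into a dict."""
    root = ET.fromstring(raw)
    return {child.tag: child.text for child in root}

if __name__ == "__main__":
    print(parse_json_bytes(b'{"sku": "A1", "net_sales": 42.5}'))
    print(parse_xml_bytes(b"<order><sku>A1</sku><qty>3</qty></order>"))
```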


ENSURE DATA QUALITY

Easily explore and validate your data before handing it off to your reports and analysts! Create a validation component that will run every time new data appears.

Looker with Ascend

Learn More

New Release: Python SDK

With teams using Ascend.io to automate an increasingly large number of their data pipelines, programmatic creation of Ascend dataflows has become increasingly essential. For users familiar with Python and eager for programmatic access to Ascend dataflows, we are excited to announce the release of our new Python SDK! This SDK sits on top of Ascend’s public API and is dynamically generated from the Protocol Buffer and gRPC definitions of each and every component found within the Ascend platform.
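
While the SDK surface itself isn’t reproduced here, programmatic dataflow access might look roughly like the sketch below; the import path, constructor, and method names are hypothetical stand-ins rather than the documented API:

```python
# Hypothetical sketch only: the import path, constructor, and method
# names are illustrative stand-ins, not the SDK's documented API.
from ascend.sdk.client import Client  # assumed import path

client = Client(hostname="trial.ascend.io")  # assumed constructor

# List the dataflows in a data service and print their IDs.
for dataflow in client.list_dataflows(data_service_id="analytics"):
    print(dataflow.id)
```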

New Feature: Dataflow JDBC/ODBC Connector

Today we’re excited to announce the general availability of our JDBC/ODBC Dataflow connector. This feature leverages the same intelligent persistence layer that backs Queryable Dataflows and Structured Data Lakes, and joins it (pun intended) with the SparkSQL Thrift JDBC/ODBC server so you can directly access and query your Dataflows from your environment of choice, whether that’s a BI tool like Looker or a SQL workbench.
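
Because the connector exposes the standard Spark Thrift (HiveServer2) protocol, any compatible client can connect. As one example, here is a minimal sketch using PyHive from Python; the host, port, username, and table names are assumptions:

```python
# Minimal sketch: query a Dataflow through the SparkSQL Thrift server
# using PyHive. Host, port, username, and table names are assumptions.
from pyhive import hive

conn = hive.connect(host="trial.ascend.io", port=10000, username="analyst")
cursor = conn.cursor()
cursor.execute(
    "SELECT sku, SUM(net_sales) AS net_sales FROM orders GROUP BY sku"
)
for row in cursor.fetchall():
    print(row)

cursor.close()
conn.close()
```

Any JDBC/ODBC-capable tool, Looker included, can reach the same endpoint.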

How-to: Redshift Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Redshift by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Scala & Java Transforms

Today we’re excited to formally announce support for Scala & Java transforms. Not only does this expand our support to two of the most popular languages among data engineers, it also marries this capability with the advanced orchestration and optimizations provided by Ascend.

How-to: Snowflake Data Ingest & ETL with Ascend.io

This How-to will provide you with an overview of how to ingest data into Snowflake by building a production pipeline that automatically keeps data up to date, retries failures, and notifies upon any irrecoverable issues.

New Feature: Credentials Vault

With a strong emphasis on data security and compliance, Ascend employs a flexible, role-based permission model, ensuring only authorized users have access to sensitive secrets and data. As our customers build and evolve their Dataflows, several patterns have emerged:...

New Feature: Recursive Transforms

At Ascend we see all sorts of different pipelines. One pattern we see quite often is that of change data capture (“CDC”) from databases and data warehouses, followed by data set reconstitution. Doing this data set reconstitution usually requires a full reduction — a transform in which you iterate over all records to find those representative of the latest state. This can become inefficient over time, however, as greater and greater percentages of any given data set become “stale”.
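
To ground the pattern, here is a minimal PySpark sketch of that full reduction: keeping only the latest record per key from a change log. The table and column names (cdc_changes, order_id, updated_at) are assumptions for illustration:

```python
# Full reduction over a CDC change log: keep only the most recent
# record per key. Table and column names are assumptions.
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()
changes = spark.table("cdc_changes")  # assumed change-log table

# Rank each key's records newest-first, then keep rank 1.
latest_first = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
current_state = (
    changes.withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
```

Recursive transforms target exactly that inefficiency: rather than re-reducing the full history on every run, they can fold newly arrived changes into the previously reduced state.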

Data Lake ETL Tutorial: Using Ascend No- and Low-Code Connectors to Load Data

Now that we’ve extracted some data from S3 and cleaned it up using a SQL transform, we can start on the “L” of ETL and write our data back out to our data lake. Follow the guide below to learn how. 1. Under the build option we can see the variety of write connectors...

Data Lake ETL Tutorial: Transforming Data

Now that you’ve learned to extract data with Ascend, this tutorial will give you an overview of the “T” in ETL, namely, how to start transforming your data before you load it into the final destination. We will use SQL in this example, but Ascend also supports...
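
As a taste of what that looks like, here is an illustrative cleanup transform expressed in SQL (run through PySpark here to keep these examples in one language); the table and column names are invented:

```python
# Illustrative only: the shape of a simple SQL cleanup transform.
# Assumes a raw_orders table is registered; names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
cleaned = spark.sql("""
    SELECT
        CAST(order_id AS BIGINT)  AS order_id,
        TRIM(LOWER(email))        AS email,
        CAST(net_sales AS DOUBLE) AS net_sales
    FROM raw_orders
    WHERE order_id IS NOT NULL
""")
cleaned.show()
```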

Data Lake ETL Tutorial: Using Ascend No- and Low-Code Connectors to Extract Data

Welcome to Ascend! This tutorial, the first in a three-part series on Data Lake ETL (Extract, Transform, Load), will give you a brief overview of how quickly and easily you can accomplish the “E” of ETL and ingest data from any data source with Ascend. Ascend...
