The Build Plane
Program your data pipelines end-to-end on a single pane of glass.
What Is The Build Plane?
Unlike other data tech stacks, intelligent data pipelines are built entirely end-to-end on a single pane of glass. You can use the rich UI to program ingestion and transformation logic, visualize lineage, and monitor operations across development, test, and production. You can also develop pipelines programmatically with the data engineering functions in the CLI/SDK, and incorporate them into your CI/CD workflows.
Loading data is the first step for any data pipeline. The Ascend ingestion framework provides over 300 connectors to get any data from anywhere, and load it into your data clouds.
All ingestion is incremental by default, including database CDC, with full resync also available. The ingestion process generates metadata that the control plane uses to automate the downstream pipelines.
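The incremental pattern can be illustrated with a minimal high-watermark loader. This is a generic sketch, not the Ascend ingestion API: the `IncrementalLoader` class, its `version` field, and the `full_resync` method are illustrative names only.

```python
from dataclasses import dataclass, field

@dataclass
class IncrementalLoader:
    """Generic high-watermark incremental loader (illustrative sketch only)."""
    watermark: int = 0                        # highest row version ingested so far
    loaded: list = field(default_factory=list)

    def ingest(self, source_rows):
        # Pull only rows newer than the stored watermark, then advance it.
        new_rows = [r for r in source_rows if r["version"] > self.watermark]
        if new_rows:
            self.watermark = max(r["version"] for r in new_rows)
            self.loaded.extend(new_rows)
        return new_rows

    def full_resync(self, source_rows):
        # A full resync discards the watermark state and reloads everything.
        self.watermark = 0
        self.loaded = []
        return self.ingest(list(source_rows))
```

Running `ingest` twice on the same source returns no rows the second time, which is what keeps incremental loads cheap; `full_resync` deliberately forgets the watermark and reloads from scratch.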
Use a Dedicated deployment to securely access private data sources in your network.
Data flows seamlessly from ingestion into the DAG of intelligent data pipelines. Use the SDK and the intuitive UI to program the data logic in each of the transformation steps. The control plane autonomously runs the pipelines by sending the transformation logic to the data planes in the right order. The data never leaves your data clouds, where you can query it at any point in the pipelines, anytime.
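"In the right order" here means a topological order of the DAG: every transform runs only after all of its upstream inputs. A minimal sketch of that sequencing logic, using Kahn's algorithm (a simplified stand-in for how a control plane might schedule workloads, with assumed node names):

```python
from collections import deque

def execution_order(dag):
    """Return a valid run order for a DAG given as {node: [upstream deps]}."""
    indegree = {n: len(deps) for n, deps in dag.items()}
    downstream = {n: [] for n in dag}
    for node, deps in dag.items():
        for dep in deps:
            downstream[dep].append(node)
    # Start with nodes that have no upstream dependencies (e.g. ingestion).
    ready = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in downstream[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(dag):
        raise ValueError("cycle detected: a pipeline DAG must be acyclic")
    return order
```

For a pipeline like `{"ingest": [], "clean": ["ingest"], "join": ["clean", "ingest"], "report": ["join"]}`, every returned order runs `ingest` before `clean`, `clean` before `join`, and `join` before `report`.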
The UI automatically generates lineage from the transform logic the user provides as SQL, Python, or Java. Following this lineage, the control plane self-orchestrates the sequence of workloads needed to propagate each incremental change through the pipelines. The user controls key orchestration parameters such as refresh rates, data set repartitioning rules, and actionable data quality rules.
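To make the idea of deriving lineage from transform logic concrete, here is a deliberately naive sketch that pulls upstream table names out of a SQL statement with a regex. Real lineage generation parses the full statement; the function names here are hypothetical.

```python
import re

def upstream_tables(sql):
    """Extract table names referenced after FROM/JOIN (naive regex sketch)."""
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)
    return sorted(set(pattern.findall(sql)))

def build_lineage(transforms):
    """Map each transform name to the tables its SQL reads from."""
    return {name: upstream_tables(sql) for name, sql in transforms.items()}
```

Given a transform `"SELECT * FROM raw.orders JOIN raw.customers ON o.cid = c.id"`, this yields the upstream edges `raw.orders` and `raw.customers`, which is exactly the information an orchestrator needs to know what must refresh first.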
Ascend pipelines can also be connected to external orchestrators, which can control pipeline flow and receive signals about pipeline execution.
Intelligent data pipelines can be tapped at any point to share data. They serve as the foundation for data mesh implementations.
First, they can be linked to other data pipelines. These links guarantee continuity of lineage, schema, and orchestration across huge webs of pipelines, even when those webs span multiple data clouds.
Second, they can be written to any number of external systems, a pattern commonly known as reverse ETL. The data in every destination is kept in sync with the data pipeline, so multiple teams are always working from one shared version of the truth.
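Keeping a destination in sync with the pipeline amounts to mirroring: upsert rows that are new or changed, and delete rows the pipeline no longer has. A minimal sketch of that behavior, assuming a dict-like destination keyed by `id` (a real reverse-ETL connector would batch calls against the destination's API):

```python
def sync_destination(pipeline_rows, destination):
    """Mirror pipeline state into a destination keyed by row id (sketch)."""
    pipeline_state = {row["id"]: row for row in pipeline_rows}
    # Upsert anything new or changed in the pipeline.
    for key, row in pipeline_state.items():
        if destination.get(key) != row:
            destination[key] = dict(row)
    # Delete rows that no longer exist upstream.
    for key in list(destination):
        if key not in pipeline_state:
            del destination[key]
    return destination
```

After a sync, the destination holds exactly the pipeline's current rows, so every consuming team reads the same version of the data regardless of which destination it queries.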