Explore the pivotal organizational changes necessary for successful implementation of the data mesh model in your business.
Discover the impact of AI on data engineering and how it is redefining roles, improving efficiency, and driving innovation.
The ‘post-modern data stack’ aims to remedy the complexity and inefficiencies of the ‘modern data stack,’ enabling engineering teams to focus on creating value and improving productivity.
Sean Knapp predicts six data engineering trends for this year based on customer discussions and conversations with other leaders.
Sean Knapp explores the evolution of the cloud data platform from his time at Google to Ascend to understand what’s happening in the data world today.
What Our Series B Signals, and Why 2022 is the Year of Automation
Three Mistakes Data Engineering Managers Make That Slow Down Development (And How to Speed It Back Up)
Leading data teams is challenging. Few technological domains have undergone such rapid change over the past few years. Yet the vast majority of data teams, 96% to be exact, are at or over capacity.
Dive into data automation: its pivotal role in modern data engineering, benefits, and the journey from imperative to declarative approaches.
By the numbers, this year was nothing short of remarkable. Developers on Ascend grew 5x in 2021. This was driven not only by new customers, but also by consistent growth in adoption across existing teams, with the number of developers per team expanding by more than 2.4x.
Not only are we seeing tremendous growth in builders, but builders are investing more time than ever (95% more per builder, in fact) in their Ascend-powered solutions.
Credentials Vault is a centralized place to store and manage the secrets used by your dataflows. The feature makes it even easier to collaborate with others to quickly ingest from, and write to, external data systems. It also gives site administrators an interface to audit and control all credentials in use by the Ascend platform.
A recursive transform is a transform that uses the output of its previous run as an input to the next run. This pattern is often used to incrementally aggregate data. In cases where the historical data is substantially larger than the aggregated data, this pattern can yield a significant reduction in processing time and compute resources.
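The idea can be illustrated with a minimal sketch: each run merges only the newly arrived records into the aggregate produced by the previous run, so the full history never needs to be re-scanned. The function name and the shape of the aggregate here are illustrative assumptions, not Ascend's actual API.

```python
def recursive_transform(prior_aggregate, new_records):
    """Merge a batch of (key, value) records into the previous run's totals.

    prior_aggregate: dict produced by the previous run ({} on the first run).
    new_records: only the data that arrived since the last run.
    """
    totals = dict(prior_aggregate)  # start from last run's output
    for key, value in new_records:
        totals[key] = totals.get(key, 0) + value
    return totals

# First run: no prior output, so start from an empty aggregate.
run1 = recursive_transform({}, [("a", 1), ("b", 2)])
# Second run: feed run1's output back in alongside the new batch;
# only the new batch is processed, not the full history.
run2 = recursive_transform(run1, [("a", 3)])
```

Because each run touches only the incremental batch plus the (small) prior aggregate, the cost per run stays roughly constant even as the raw history grows.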