
2021 Look Back (5X User Growth!) and 2022 Look Forward (The Year of Data Automation)

2021 Recap

What an exciting end to 2021! Not only did we deliver on an incredible 2021 roadmap with more than 300 new features and enhancements, but we launched Ascend on Snowflake just in time for the winter season!

"The scale and power of Snowflake combined with the acceleration and automation of Ascend enable us to not only take on new data challenges and workloads, but deliver results faster than ever before."
Laurent Bride, Chief Technology Officer at Komodo Health
"By leveraging Snowflake's scale in combination with Ascend's unified data and analytics capabilities, data teams can accelerate productivity and time to value for data workloads, and more simply achieve true data engineering success."
Tarik Dwiek, Head of Technology Alliances at Snowflake

By the numbers, this year was nothing short of remarkable. Developers on Ascend grew 5x in 2021. This was driven not only by new customers, but also by consistent growth in adoption across existing teams, with the number of developers per team expanding by more than 2.4x.

The growth didn’t stop there as the number of hours spent building in Ascend also grew—by more than 2x per user! What’s most exciting is how fast users are able to build—with the introduction of our new SDK and a plethora of product enhancements, developer productivity increased 3x in 2021! (yes, we actually have a stat for that 🤓)

What’s most important, however, is the impact this has had on our customers, a story that is best told in their own words:

"Ascend substantially simplified our data engineering efforts."
Ankur Potdar, Principal Data Engineer at Drivewealth
"Ascend's declarative engine is a game-changer for our team's productivity... there is no going back to a traditional imperative approach to Data Engineering."
Remi Raulin, Data Architect at PrestaShop
"The Ascend Platform is a phenomenal application for data engineering... I’ve searched for something like Ascend for a long time."
Paolo Esposto, Chief Data Officer at BePower

Perhaps this is why we were named a Gartner Cool Vendor and an Outperformer in GigaOm’s 2021 Data Pipeline Radar!

In short, there is certainly a lot to celebrate. As we head into a new year, however, it’s also important to take a step back and reflect on where we are on our respective journeys.

To that end, I thought I’d share my thoughts on industry trends, and an important prediction for 2022.

Crossing the Chasm

Five years ago, the vast majority of data teams ran their own infrastructure—at the time, usually Hadoop and/or Spark clusters, sometimes in the cloud, sometimes on-prem. Today, the vast majority of companies use Snowflake, BigQuery, Databricks, or Redshift. New initiatives wouldn’t dare run on “bare” infrastructure, and teams are eager to move existing systems off of brittle old infrastructure.

For those familiar with Geoffrey Moore, this aligns with Crossing the Chasm, where the mainstream demands more complete solutions. The sheer size of the mainstream also makes it viable to introduce commercial solutions, whose capabilities often greatly surpass those of early bespoke solutions, as commercial providers can devote far greater resources to enhancing the underlying technologies. In an ecosystem as far-reaching as data, the sheer demand for innovation only accelerates this process.

It’s important to note this doesn’t happen just once, but is the continual cycle of innovation as teams work “up the stack” to solve increasingly impactful and complex problems at scale.

Avoiding Accidental Ransomware & Choosing What Not To Do

Against this backdrop of crossing the chasm and accelerated productization cycles, we’ve seen a trend emerge that we call “accidental ransomware.” Experienced technology leaders have seen this pattern before: last year’s strategic advantage becomes the thing that holds you hostage, hindering your next wave of innovation.

This is not an uncommon phenomenon, and it particularly afflicts early adopters. Whether you’re a data team leader struggling to increase output, or a data engineer who just wants enough time to fix that pipeline that keeps paging you at 3 a.m., know you’re not alone. As we saw in our 2nd annual industry survey on data team productivity, 96% of data teams are at or over capacity.

What was the #1 drain on individuals’ time? Maintenance. Supporting, troubleshooting, and triaging existing systems. Yet, ironically, when you ask most data leaders today whether those systems are strategic differentiators to their business, they will tell you no. 

So how do you keep pace with increasing demand and finite resources? Steve Jobs famously taught us that “deciding what not to do is as important as deciding what to do.” Find something you’re doing today, and either stop doing it or offload it to regain capacity. Get out of the business of custom, bespoke systems at layers of the stack that no longer differentiate your business, free your team to innovate, and focus those efforts where it counts. It can be a painful process to go through at times, but it pales in comparison to being trapped by legacy systems.

The Year of Data Automation

In 2011, Marc Andreessen stated that “software is eating the world.” By 2019, Satya Nadella declared that “every company is now a software company.” Mission accomplished. So what happens next?

Modern software not only emits data, but is fueled by it. The need for data advancements is ubiquitous across industries and geographies. We’ve seen it firsthand from our customers, spanning Education to IoT, Healthcare to Finance, and around the globe, from New Zealand to France.

So why do I believe 2022 is the year of data automation? Because the preceding problems have been solved, and those solutions are now in the process of ubiquitous adoption.

Data has always been about scale, but the challenges of scaling have continually evolved. Cloud solved the first scale challenge (infrastructure) by providing nearly limitless storage and compute capacity. The second scale challenge, how to actually process the data, has been solved by the likes of Snowflake and Databricks, which take advantage of the affordances of the cloud to introduce capabilities like the decoupling of storage and compute, near-instant auto-scaling, and more.

Developer productivity is the next challenge. Demand for data teams continues to greatly outpace supply, exacerbated by a shortage of talent. This is why we see engineering teams turning their focus to what they do best: automation.

As noted in the accidental ransomware discussion above, this also presents an exciting opportunity for data leaders. Whether you have an early adopter team that has already built a platform on top of orchestration and scheduling tools, or you have new use cases with an opportunity to leapfrog the competition, how you navigate your team through what to build vs. buy, and what to do vs. not do, will define your pace of innovation.

Success in 2022

Just as we saw the adoption of microservice-based architectures and DevOps as software ate the world, we are now seeing adoption of data mesh architectures and DataOps as data increasingly fuels this software-driven world. Success in 2022 will be defined by those who embrace the rapid innovation cycles of the data ecosystem, ruthlessly focus their efforts on differentiated technology, and invest heavily in this new wave of automation.