Pennies Saved

“A penny saved is a penny earned.” Words of wisdom; something my grandparents would be proud to hear me say. Ironically, I can’t recall a single utterance of this saying in the past 15+ years of my career. That’s unfortunate. Perhaps we simply need to modernize this saying: “A penny saved is a penny invested” — invested in the longevity of our companies, of our customers, and of our employees.

As we all navigate these uncertain times, one thing has never been more certain: the need to reduce spend. Dollars saved directly equate to employees kept, strategic investments continued, and savings passed on to customers. Finding unnecessary spend and inefficiencies across the business is more important than ever before.

Many of Ascend’s customers rely on us to keep their costs low by automating and optimizing their data pipelines, and the systems to which they connect. We pride ourselves on features that ensure big data pipelines run faster, and with fewer resources. Data & job deduplication guarantees the same work is never done twice; incremental data processing ensures efficient propagation of new data; Spark parameter optimization continuously adapts for maximum utilization; and the list goes on.

In the past 6 months, Ascend has invested heavily in platform optimization and, despite our growth, has reduced our cloud spend by 70%. Many of these benefits have already been passed on to our customers through pricing, but we are far from done. Today we’re rolling out a number of additional initiatives.

Efficiency: Doing More With Less

Ascend may already be the most efficient platform for building and running data pipelines, but we’re always looking for ways to improve. In the coming weeks, we will begin rolling out a series of optimizations to customer environments. Realized gains will depend on the nature of your use cases, but our lab tests have produced up to a 2x increase in efficiency (and up to a 50% reduction in $$$) for a variety of scenarios. Best of all, you don’t need to do anything to benefit from this. As we roll out these optimizations, you’ll simply start using fewer resources, and pay us less (please don’t tell our investors).

Flexibility: New Pricing Models

We’re introducing a number of changes to our pricing that better reflect the needs of our customers:

  • Zero Commit: these are uncertain times, and you shouldn’t have to make large commits based on uncertain forecasts. Customers can now buy Ascend’s platform On Demand / Pay as you Go with no annual contract.
  • Cluster Pricing: Ascend has always charged following a Serverless model: pay only for the slices of machines you use, and only for the seconds you use them. This works great for bursty and unpredictable workloads, but it comes at a markup that generally becomes less cost-effective at scale. We’re fixing that with the introduction of Cluster pricing. For our customers with systems that average >20% utilization, this is likely the more economical option, and we’ll work with you to convert your contracts if desired (see the break-even sketch after this list).
  • Transparent Pricing: like most others in the industry, we charge on a unit of consumption, in our case a Dataflow Unit (“DFU”). While there are many benefits to charging for units (such as portability across server types), it also introduces potential confusion. We’re clearing that up, and have translated our pricing down to cents per vCPU-hour for both Serverless and Cluster deployments.
  • Volume Discounts: building on the theme of transparency, and for our customers with significant volumes who wish to commit for further cost reductions, we’re publishing our volume discounts. The math is simple: pre-purchase $25,000 of usage for 1 year, get a 5% discount. Double the commit, get another 5%. Want to commit to something that isn’t $25,000 * 2^n where n is an integer? You can do that too. Here’s the formula:
        what_you_pay = commit * 0.95^(1 + log2(commit / 25000))
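
To make that math concrete, here’s a minimal sketch of the formula in Python. The function name, default tier parameters, and example commit amounts are ours, purely for illustration:

    import math

    def what_you_pay(commit: float, base: float = 25_000, step: float = 0.05) -> float:
        """Published volume discount: 5% off at a $25,000 annual commit,
        compounding another 5% each time the commit doubles."""
        if commit < base:
            return commit  # below the first tier, no discount applies
        exponent = 1 + math.log2(commit / base)
        return commit * (1 - step) ** exponent

    # $25,000 gets 5% off; $50,000 compounds to roughly 9.75% off; and so on.
    for commit in (25_000, 50_000, 100_000, 180_000):
        print(f"${commit:>9,.0f} commit -> you pay ${what_you_pay(commit):,.2f}")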
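
And going back to the Cluster Pricing point above, here’s a back-of-the-envelope sketch of why average utilization is the deciding factor between the two models. The per-vCPU-hour rates below are made-up placeholders, not Ascend’s actual prices; the takeaway is simply that a dedicated cluster wins once utilization exceeds the ratio between the two rates:

    HOURS_PER_MONTH = 730
    CLUSTER_RATE = 0.04     # $/vCPU-hour on a dedicated cluster (hypothetical)
    SERVERLESS_RATE = 0.20  # $/vCPU-hour on demand (hypothetical 5x markup)

    def serverless_cost(vcpu_hours_used: float) -> float:
        """Serverless: pay only for the vCPU-hours actually consumed."""
        return vcpu_hours_used * SERVERLESS_RATE

    def cluster_cost(cluster_vcpus: int) -> float:
        """Cluster: pay for the whole cluster all month, used or not."""
        return cluster_vcpus * HOURS_PER_MONTH * CLUSTER_RATE

    vcpus = 64
    for utilization in (0.10, 0.20, 0.40):
        used_vcpu_hours = vcpus * HOURS_PER_MONTH * utilization
        print(f"{utilization:.0%} utilization: "
              f"serverless ${serverless_cost(used_vcpu_hours):,.0f} "
              f"vs cluster ${cluster_cost(vcpus):,.0f}")
    # Break-even sits at utilization == CLUSTER_RATE / SERVERLESS_RATE, i.e. 20% here.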

BYO: Leverage What You’ve Got

Many of the teams we talk to have already made significant investments, and in many cases commitments, that need to be utilized. As a result, we’re formalizing support for:

  • BYO Infra: not only do a number of our customers run in their private VPCs for security reasons, but they also have economic incentives to do so. There are often large enterprise-wide commits that have been made (and must be used), as well as the enterprise-wide discounts that accompany them. We’re also helping our customers stretch these dollars even further, as we run almost exclusively on Spot & Preemptible VMs (AWS & GCP, with Azure Spot coming soon!), which offer up to a 90% reduction in cost.
  • BYO Spark: a number of the people we talk to are happy Databricks customers looking to optimize their use of the platform. Running all of your pipelines on an All Purpose (f.k.a. Interactive) Cluster is a surefire way to burn through your Databricks credits, and likely not the wisest use of your money. Ascend has added capabilities that help you get more out of your Databricks investment, bringing advanced autoscaling, cluster reuse, interactive queries, and more to even their most cost-effective Jobs Light Cluster.

Extra Hands: Free Optimization & Migration Services

We hear from so many teams that they’re now being told to do twice as much with half the resources: upgrade systems and lower the overall maintenance and COGS burden, all while continuing to execute on the broader roadmap. Oftentimes what’s needed is an extra pair of hands to push things up and over the hill, and we’re here to help. We’re offering free optimization & migration services to all of our customers to help them quickly migrate off the legacy systems weighing their teams down, so they can focus on building towards the future during this critical time.

Sharing the Knowledge

We’ve learned a lot about optimizing infrastructure, data pipelines, and everything in between. Much of what we’ve learned and built is valuable whether you’re an Ascend customer or not, and we want to share the wealth. As a result, we’re launching a series of videos hosted by our infrastructure team to share the tips & tricks they’ve learned over the past 4+ years running, automating, and scaling 100+ Kubernetes clusters across 3 different clouds. If infrastructure optimization isn’t your cup of tea, odds are you know somebody for whom it is, and we’d be delighted if you helped share the wealth too.

Sponsorship

While we may not be expert data scientists, we do fancy ourselves pretty good data engineers, and our forte is ensuring those experts get access to live, continuously updating data, across clouds, platforms, and tools. To that end, we’re helping out wherever we can, and providing Ascend’s platform free for teams helping to battle COVID-19. We’re currently sponsoring Lumiata’s Global AI Hackathon, and are eager to help others with similar missions. If this is you, please contact us at [email protected] — we’d love to learn more.

Looking Forward

The thing about big data is, well, that it’s big, and right now that spend needs to get a lot smaller. While we’re far from done, I’m extremely proud of the work we’re doing to help our customers reduce spend and make each and every dollar count.

Regards,
Sean Knapp