I’ve had the good fortune to work at or start companies that were breaking new ground.

Back in 2004, I got to work with MapReduce at Google years before Apache Hadoop was even released, using it on a nearly daily basis to analyze user activity on web search and measure the efficacy of user experiments. In 2007, I co-founded a company predicated on the idea that highly personalized signals would revolutionize the TV industry as we moved from “broadcast to broadband,” shifting from a one-to-many to a one-to-one model of engagement — it’s truly amazing how far things have come.

And now, I’m betting that there is a new data challenge on the horizon: the classic scale challenge of storing and processing data will soon become table stakes, and a new problem of scaling engineering productivity will emerge. Engineering teams will need a tool like Ascend to increase productivity as they build more stable, reliable data products faster than ever before.

While being ahead of the curve has made everything from fundraising to explaining what I do at work challenging, it does have its perks. One distinct advantage is that it gives me a bird’s-eye view of macro trends and how they develop over time.

So in this piece, I’ll give my take on the evolution of the cloud data platform, starting all the way back with my days at Google.

I didn’t know it yet, but big data would be a big deal

Google was my first position out of college.

In 2004, there was no such thing as a “frontend engineer,” so I was hired as a “Java developer,” working on the UI for web search (which, ironically, was written in C++). Because my Master’s degree at Stanford concentrated on human-computer interaction, this was my dream job.

What I didn’t realize was that working at Google, on the web search team, at this precise point in time would become every HCI and data nerd’s ideal first job. On a nearly daily basis, we got to “push pixels” (i.e., iterate on awesome new user experience ideas), collect the results, and release the changes that improved our target metrics to the entire user base. In 2006, our team experimented and iterated so much that we released a flurry of improvements that increased revenue by nearly 10% while improving customer satisfaction metrics all at the same time.

Our internal processes were highly efficient at handling such massive amounts of distributed data, but I thought our team was just doing what everyone else was doing. In reality, we were early beneficiaries not only of technologies such as MapReduce (which would later inspire Hadoop), but of entire large-scale trends around data-driven product development that would play out for years to follow.

Although I didn’t know it yet, we were at the forefront of something special. Big data would be a big deal. Helping companies manage big data and draw actionable insights from it would radically transform more traditional industries like TV, banking, and insurance.

Becoming subconsciously data-first

In 2007, my two colleagues and I left Google and started Ooyala.

Although it seemed like an odd, even risky, move at the time, we saw a gap in the TV market that we could fill with our cutting-edge big data expertise. As more and more content consumption moved online, we knew the unit economics of online distribution simply wouldn’t work for broadcast companies.

Major networks needed a tool to help them make that transition, recommending the right online content, personalizing ads, and even predicting the exact point to prompt users to convert to a paid subscription.

So we built an engine that could do just that: one that was cloud-native (a term that didn’t even exist yet), ran on Amazon Web Services (we were one of the first companies ever to use it), and leveraged a Hadoop cluster (something almost no startups were using at the time) as the backbone of our data & analytics offering.

We wouldn’t have described it this way back then, but we were establishing a data-first company — mostly because that’s all we knew. When I look back, our ability to do big data analytics and processing, as a 20-person startup 15 years ago, is what put us ahead of the pack.

We grew Ooyala to about 500 people serving over 600 customers and more than 200 million consumers worldwide, before ultimately being acquired by Telstra, an Australian telecommunications company. Knowing we got that big by embracing a data-first strategy made me wonder how I could help other companies get there, too.

The third wave of data infrastructure

That thought kept popping into my head. And for more than a year, I spent a good deal of time reconciling my cloud data platform experiences with the tools available to data engineers.

While I had been at Ooyala, it seemed like two major things happened:

  1. Other people realized big data was a really big deal.
  2. Companies were trying to capitalize on this trend.

And this is where the Goldilocks-like story of data adoption began.

First, data infrastructure tools were really, really technical. Only the most talented engineers were able to operate in the bowels of the tech stack, tuning the JVM to squeeze out the last bit of scale and performance, and it created a culture where going that deep into the system to fix an odd issue or run an emergency job became a point of pride.
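To give a sense of what that knob-turning looked like, here is a minimal sketch of a Hadoop 1.x-era mapred-site.xml entry of the kind engineers hand-tuned. The property name is real, but the values are purely illustrative, not recommendations:

  <!-- JVM options passed to each map/reduce task child process -->
  <!-- Example values only: heap size and garbage collector choice -->
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m -XX:+UseConcMarkSweepGC</value>
  </property>

Getting values like these wrong could mean out-of-memory failures or long garbage collection pauses across an entire cluster, which is why this kind of tuning stayed in the hands of a few specialists.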

While the technology they were using was powerful, it was tough to understand, especially for business folks. So the pendulum swung the other way. Low-code and no-code solutions began to crop up, enabling BI engineers and analytics-savvy product managers to access data on the fly. With a new wave of cloud-first data platforms such as Snowflake and Databricks, scale challenges were largely taken off the table, and teams were free to pursue all sorts of new opportunities with data.

The Goldilocks dilemma, however, is that companies can’t rely on no-code solutions alone — there’s still a substantial amount of standard data engineering work to be done. Businesses are complex, and as a wise CTO once told me, “your platform may make 95% of my life so much easier, but if you make the last 5% impossible, I can’t use you.”

My main takeaway was that we’re due for a third wave in the data platform world, one that will introduce a new set of tools catering to the modern engineer. And that was my inspiration for Ascend.

Building a platform for the modern engineer

As we enter the third big wave, organizations should look for ways to support modern engineers. These engineers can’t spend their time deep in the weeds, nor can they do their best work outside of a coding terminal.

Modern data teams need to think beyond storing and processing data — something that platforms like Snowflake and Databricks already accomplish quite well. They also need to think beyond cobbling together tools to create the next platform for other teams. Instead, they should be focused on building impactful data products that move the needle on company revenue.

If this idea catches your attention, stay tuned. I’ll be writing about this topic and others in the coming months.
