Ep 15 – Negotiating the Release From Accidental Ransomware

About this Episode

In this episode, Sean and I discuss the concept of accidental ransomware—or when a team is slowed down by the burden of managing and maintaining outdated, commoditized, or otherwise depreciating software architectures or codebases. Learn how to spot the signs of accidental ransomware and negotiate your release in this week’s episode of DataAware.


Leslie Denson: Have you ever accidentally been held hostage by accidental ransomware? Chances are, even if you don’t know what it is, you probably actually have. Sean Knapp and I sat down to talk about what accidental ransomware is and how you can actually get out of it in this episode of DataAware, a podcast about all things data engineering.

LD: Hey, everybody, and welcome back to another episode of the DataAware podcast. I am back, once again, joined by… I was about to call him my trusty sidekick, but I feel like I actually should be his trusty sidekick. So…

Sean Knapp: I would take trusty sidekick. 

LD: Okay, well, so, he’s my trusty sidekick, it’s Sean Knapp, everybody. I’d say, “Give him a round of applause.” But he wouldn’t hear you guys if you did, so, welcome, Sean.

SK: I’m just gonna assume it’s there, and I am really happy to be here again.

LD: Standing ovation. There you go. Today we have what I’m gonna call “Our very special episode of the DataAware podcast”. This is our after-school special. “The More You Know” special episode, for those of you who are ’80s and ’90s… Maybe not ’90s, ’80s babies out there who know “The More You Know”. Remember? With the star on NBC. Remember? Sean’s like, “I have no idea what you’re talking about.” You should remember what I’m talking about.

SK: My parents didn’t let me watch TV.

LD: Oh, gosh. Okay, well, that explains a lot about you, so…

SK: I gotta go outside and play with some rocks and sticks and…

LD: Okay, well that explains a lot. I’ll send you the GIF later. But, anyway, so…

LD: This is a topic that we talk about all the time internally, and Sean and I can literally riff on it for hours, so we may do that here, except we have meetings after this. And if you’ve heard us talk, you’ve probably heard us mention it in other podcasts, and you’ve probably seen us talk about it on our blog. There’s no end to where you could have heard us talk about this, and we are so excited. It is the topic of accidental ransomware. So, what do you think, Sean? Shall we talk accidental ransomware?

SK: Yes, please.

LD: There are probably people out there going, “What in the world are they talking about?” Because I know what we’re talking about, and you obviously know what we’re talking about, but I have heard it called a multitude of other things at other companies. “Accidental ransomware” isn’t the universal term for it; I think everybody has their own spin on what this is. So I will give you the honors of really explaining to the crowd what accidental ransomware actually is.

SK: Yeah, absolutely, and I think you’re right, a lot of different people call it by a lot of different names. In short, it’s when you and your team, with all the best intentions, end up being held hostage by your very own software. We see it manifest in a lot of different ways, and I’m sure we’re gonna go through many of those variations. But it’s one of those things where, oftentimes when we dive into building something new, we’re like, “We wanna get away from this other thing, or escape the traps of our previous design, or get away from this vendor that was charging us so much money, so we’re gonna go build this new system.” And oftentimes we’re so excited to get away from something that we neglect to look at what we’re going towards, only to find we’ve repeated that same cycle and we’re trapped by the next thing.

SK: And I think the reason it’s particularly salient today is that we talk to a ton of companies in the space who, right now, are trapped by those previous architectures and designs. So we use this term “accidental ransomware” as shorthand for quickly connecting with those data leads and data managers who are having to deal with the “Gosh, we were moving so fast and now it just feels like we’re trying to swim through molasses. We’re not getting anywhere.”

LD: So, I guess the question that I always have is, as somebody who is not an engineer, as I think I’ve made incredibly clear over the last two years, if anybody out there hasn’t realized that yet, or if this is their first podcast: haven’t people learned their lesson? Why is this still happening? In my best Valley girl voice.

SK: With the inflection at the end?

LD: Yeah. Why is it still happening?

SK: It keeps happening because, I think, it is a natural by-product of the innovation cycle itself. As technology matures, there’s this constant pattern of what we often call “moving up the stack”. We tend to innovate further and further, whether it’s up the stack or just on the peripheries. And over time, the same pattern plays out pretty much everywhere: what was early innovation first finds fast followers, then slow followers. Or, in Crossing the Chasm terms, you go from the innovators to the early adopters, to the mainstream, etcetera. What used to be innovation becomes mainstream and standardized. And when that happens, it now makes sense, and is viable, for companies to enter the space and say, “Hey, as more and more people are doing this, let me build a product that standardizes it.”

SK: You see the convergence of patterns, and as a result you start to see different vendors, whether it’s the cloud vendors, start-ups, or big companies expanding their product portfolios, introducing new products that support that standardization, or even open source technologies that support it, and that work then becomes basically not differentiated anymore.

SK: This is like the 101 of software development cycles, and we should definitely get into why it’s even more salient for the data ecosystem. But the reason we continue to see this happen time and time again is, frankly, that I think it’s inevitable. The challenge is making sure that as you pursue innovation, you don’t trap yourself, because I do think it’s escapable: despite the inevitability of these patterns, you can still find ways to get out. And the way I think about this is, in any new innovative domain, there is no standardization, the tools are really rough, and you have to build so much yourself. So you get your best and brightest, put them in the war room, you just start cranking, and you do a lot of really cool things.

SK: But oftentimes, as that standardization arrives, we just forget to jettison the things we’ve built, and as a result we start to get trapped by that maintenance. It becomes an encumbrance, and we just keep building more and more on top instead of cycling back through and jettisoning the things that are no longer differentiated. And it afflicts different types of organizations differently. It certainly afflicts the earliest adopters and the innovators, as they tend to, out of necessity, build a lot themselves, and it’s inevitable that they end up with very bespoke systems. But it also afflicts some of the mainstream adopters of new domains. Oftentimes, as they try to catch up, they play the game of “do what the innovators and early adopters have done” instead of looking to leapfrog and leverage the standardizations and common patterns that now exist. So there are different behavioral characteristics, but they still drive the same outcome for folks.

LD: We have once again come to the time in the podcast where I go, “Oh, wait, I’ve seen this happen in the marketing organization.” I would almost argue that I see this happen a lot with marketing automation systems, that’s why you hear me talk about this a lot, in larger, more established organizations with larger, more established products. You’re trying not to do this, but you’ve totally over-complicated the situation and completely backed yourself into a corner. Now you’ve got one person in charge of the system who knows they’re the only person who knows how to use it, and to some degree it’s their job security. Not saying that’s always the way it happens, but I’ve seen it happen. I’ve also seen it be where somebody looks at it and goes, “We have no idea how to use this. This is the only way we know how to use it, so we have no idea how to come off of it now. We’re too big, we’re doing too many things, and it’s too complex and too interwoven to try and change it now.” That’s where I see big orgs having this problem in my domain, and I can only imagine the same thing happens in other ways as well.

SK: I totally agree. If we zoom out, in fast-moving industries, which tend to have fast-moving teams, the efficacy and success of a team is largely aligned with its overall agility: its ability to rapidly adapt to and execute upon the needs of the business and the team. Oftentimes, as we get bigger and bigger systems, we get silos and specialties and so on, and all these encumbrances get introduced into those teams and organizations. If the fundamental goal is the ability to adapt and execute, and in these fast-moving industries we’ve embraced the fact that change is inevitable and we’ll have to change significantly, iterate, and innovate, then the measure of success often boils down to: how many people have to get involved to affect an outcome and enact some level of change, and what skill sets are required? The idea, honestly, being that the person responsible for driving an outcome can do so efficiently and independently, without depending on anybody else. That’s where you take all of the latency and all of the slack out of the system, and they can move very quickly.

SK: And I think this works with your marketing teams, with engineering teams, with data teams: how do you move as quickly as possible? One of the things we were talking about on the engineering team just yesterday is what defines great software. There are a lot of different arguments: simplicity, scale, performance, efficiency, etcetera, etcetera. These are all the standard ones, and our team had a really fun conversation on this, but we actually think the definition of great software is its ability to change. Not self-adapting or self-writing software, though that would also be super cool.

LD: Kinda creepy, but…

SK: And we’ll leave room for the unknown, yes, hopefully after I retire, because ML is gonna start writing my code and so on. But the adaptability to change: if you embrace the fact, which I think in the data industry we are absolutely neck deep in, that there’s overwhelming demand for what we all do, there’s no shortage of work to be done. In our last data automation survey, 96% of teams were at or over capacity, and 81% of them reported that demand for what they do is continuing to outpace their ability to grow their team. So things are continuing to amplify.

SK: If we embrace the fact that we have to be able to change, adapt, and iterate very quickly, and you use that as your north star, as we actually do on our own software, it drives those behaviors of simplicity, elegance, and maintainability, and even the ability to take a brand-new person, drop them into the system, and shorten their time to productivity until they’re incredibly effective. So when we think about this notion of accidental ransomware, it is often the counter, the opposite, of this: it is the thing preventing your team from adapting and changing, the thing locking you into a design and an architecture that was made, you’d hope, a year or two ago, but more often than not it’s even longer than that. And gosh, think about two years ago in our industry, how much has changed.

SK: We have entirely new databases that are dominant, we’ve seen the surge of the underlying data infrastructure providers, we’ve seen the ebbs and the flows and the ebbs again, of streaming to batch to hybridized data flows, the entire world changes so fast. I think that’s… Really kinda starting to get to that crux of accidental ransomware.

LD: So, in the vein of being nimble and agile and willing to change, at what point can you look at your code base or your architecture or your “insert thing here” and say, “Uh-oh, we are about to pass the point of no return,” or can you look at it and say that, or is it really only something that you can see in the rear view mirror? Like, is there a point at which you can go, “I’m at a crossroads and I have to choose the right direction.” And I think it’s… I think your answer is probably a little bit of both there, but I’d be interested to know.

SK: Yeah, so I think a few things. Based on my observations working with a lot of companies across industries, I would first say it is a spectrum, and I would generally contend that nearly every company is far too deep on the accidental ransomware end of that spectrum. Folks are much further along it than they should be, because they don’t believe in, or haven’t yet been exposed to, the other end of that spectrum: rapid, fluid, iterative development. I think it’s starting to change as we get more software engineering influence in the data world, in particular the DevOps, iterative, agile models, which is pushing things to move a little bit faster.

SK: But as we engage with companies and teams, there are some really basic questions to ask around “How trapped are you?” Basic questions like, “How long would it take for you to integrate new data systems?” Say your team is all of a sudden working with new data: you have a bunch of data in Salesforce, but you’re also pulling data from HubSpot, or customer data from Zuora, or you have a lot of big data coming into S3, but now you’re grabbing all your Google Ads data out of BigQuery. How hard is it just to integrate a new technology into your system? Even bigger, I think you should ask the question, “How long would it take for you to move clouds?” If you wanted to move from Azure to AWS, how long would that take you? Weeks, months, quarters? Most teams would say quarters, maybe even years. And those are big changes. But think through how long it would take you to introduce some change.

SK: I think the next question is really two-part. First, how long would it take a person to accomplish a particular task? How long would it take for, let’s say, a new engineer out of college, a fresh grad, maybe with a couple of internships, who comes in with a pretty blank slate, to build something new in your system, a new data pipeline, a new report? And even more importantly, and this is part two, how many people would they depend upon? How many people in a chain are required to accomplish something? Because that’s usually where we start to see this: if we have systems that are not comfortable with change, not adaptable, or too complex, you may have one person trying to drive an outcome who needs help from three, four, or five other people. And especially in our new geo-distributed world, now you’re pulling in folks from different time zones, you have all sorts of meetings, and you’re introducing all of this inefficiency into your organization.

SK: We see this time and time again, where you’re looking at it and going, “Man, this one kid could accomplish this thing in two hours.” They really should be able to accomplish it that fast, but it feels like we’re throwing 50-100 hours of collective human time at solving what are very pedestrian problems. So those are some of the questions to start asking. And as the industry gains more and more mature data leaders, they know when their Spidey sense goes off. You ask, “Just ballpark it for me, what’s the level of effort on this?” You’re expecting to hear hours or days, and somebody comes back with weeks or months. I think that sense is correct, and it’s increasingly tuned and refined among the emerging data leaders.
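The gap Sean describes, a two-hour task ballooning into 50-100 hours of collective time, is easy to sanity-check with rough arithmetic. All of the numbers below are hypothetical assumptions chosen purely for illustration, not figures from the episode:

```python
# Illustrative only: how a 2-hour task becomes ~50+ collective hours
# once a chain of helpers is involved. Every figure here is made up.
solo_hours = 2                  # what the task "should" take one unblocked engineer

helpers = 4                     # people the owner must depend on
handoff_hours_per_helper = 3    # meetings and context-sharing per helper
waiting_days = 5                # calendar days lost to time zones and queues
hours_lost_to_waiting = waiting_days * 8  # treat each lost day as a workday

collective_hours = (
    solo_hours
    + helpers * handoff_hours_per_helper
    + hours_lost_to_waiting
)
print(collective_hours)  # 54 collective hours for a nominally 2-hour task
```

Even with these made-up inputs, the shape of the result matches the point above: the dominant cost isn’t the work itself, it’s the hand-offs and the waiting.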

LD: So if most companies are erring on the side of being too ingrained, too held hostage by accidental ransomware, we’ll call it, what can you do? All isn’t lost. You can get out of it, you don’t have to pay a ransom. You can get yourself out of this mess. What do you do? You don’t have to throw the whole system out, if I’m not mistaken. I mean, sometimes maybe you want to, but you don’t have to. You can get out of this.

SK: Yeah, you can. What you should do is undergo a multi-year project that’ll build the next generation that will offer you more flexibility. [chuckle] I’m kidding, but you would be surprised for how many people could…

LD: What the world didn’t see was me roll my eyes and put my head down.

SK: You would be shocked how many teams for which that is the actual answer. And it is crazy. Look, I feel for a lot of data teams, their leads, their managers, because in such an exciting, fast-growing space, the demand put upon these teams is so heavy. And oftentimes, especially the innovators and early adopters, they hand-crafted their systems because there was nothing else out there, and now they’re stuck maintaining something that literally is no longer special or differentiated. It is just what everybody else does, except now everybody else actually does it better, because theirs is more modern.

SK: And so you’re stuck in a bit of a trap. We see a lot of teams go, “Oh, shoot, I don’t wanna lose my team,” because they’re really awesome engineers who are really frustrated with this old system and really wanna go build something. So oftentimes they look right in front of them, at the team right in front of them, and go, “Alright, I’ll let the team rebuild this, because I at least don’t wanna lose my team.” And the reality is these projects usually fail. Most of these mega projects, the “we’re gonna re-architect the entire platform” ones: you think it’s gonna take a quarter and it takes you a year or more, you get all the pressure from the business because it’s been too long since you’ve delivered new incremental value, and then you start to cut corners and get stuck in that same spot all over again. You end up with an incrementally better platform that had to get rushed and pushed out the door because you just weren’t afforded enough time. It happens every time.

SK: And then your engineers are right back, stuck in the same spot, the consumers of your platform are right back in the same spot, and this is the classic platform cycle you see. You’re stuck because you’re trying to find a balance. One of the exercises we like to run here at Ascend, and this will lead into the second part of my answer, is a couple of really simple, basic questions. You ask the team, “What would it take for us to 2x our output and our productivity, with constraints: you can’t hire anybody new, and you can’t work any additional hours?” The second part is very important, otherwise you’re just a very, maybe, tone-deaf manager. We’re a start-up; people are already working really hard.

SK: So let’s assume they’ve given everything they’ve got already. The classic answer for every org is, basically, add more people, which I always feel is a horrible default answer, because if you’re not at the same time getting rid of other things that are no longer special or differentiated, you’re approaching 100% of your organization just maintaining old crap. As you add more and more people, you add more and more systems, and eventually the vast majority of your team is just maintaining non-differentiated work. You have your best and brightest, who are very expensive on your balance sheet, literally doing stuff that doesn’t differentiate your business. I would contend this is the reason there’s a talent shortage: it’s because we have a lot of people doing non-differentiated stuff, not that we don’t actually have enough raw talent in the industry.

SK: And so when we ask our questions, they’re actually three. What would it take for you to double your productivity? You’re not allowed to increase your time investment, but you are allowed to stop doing things and get rid of things. So: what do we get rid of, and what do we stop doing? We usually frame it that way because I think it’s the most intellectually honest and empathetic path: “Hey, let’s assume the things we did two years ago are just no longer differentiated. We may be smart, we may be innovators, but gosh, we’re not that far ahead of everybody else. So, alright, let’s just assume with all humility that there are a lot of things that just aren’t special anymore. What can we get rid of? What can we just stop doing?”

SK: And that usually leads to a really powerful exercise for teams to figure out what they can start to jettison. When we run this exercise, we go and look around the market and ask, what can we buy? We all know the fully loaded headcount cost of a data engineer, a data scientist, a data analyst, etcetera, inside our orgs. So we know what the cost is, we know we can’t hire enough of them, we can’t hire them fast enough, and we can’t onboard them fast enough. So how do we get them out of the non-differentiated stuff and focus our efforts on the things that really do matter and differentiate us going forward? We run this exercise on a very regular basis, both for the software we run and to help our customers go through those same exercises.

LD: Yeah, makes sense. And it sounds like, if somebody wants to avoid getting into this situation from the get-go, what they should be doing is, to your point, having that conversation on a regular basis. And it’s probably different for every company: for some it may be monthly, for some every quarter, for some every six months, shoot, for some it may be weekly. You find whatever your cadence is, and you have that conversation with the right group of people. And I think that’s also a thing to note: it has to be the right group of people. Find what works.

LD: ‘Cause you hit on something we’ve also been talking a lot about internally lately, which is: there’s a lot of conversation about the cost and expense of infrastructure, of scaling infrastructure, of buying new platforms, or buying this, that, and the other, which can be expensive, absolutely. Totally understand that. But a data engineer is expensive too. And why would you have them sitting around doing something that is not value-add to the business when it could be offloaded? I’m stealing from you, you’ve said that multiple times. And again, it’s a conversation we’ve had a lot internally.

LD: Shoot, we have that conversation about the marketing team, too. We ask internally, “What’s the ROI of me doing that versus just outsourcing it to something else?” That’s a conversation every team should be having. It is not that expensive to bring on X, Y, or Z, compared to having me spend 10 hours a week of my time doing something. That is the same conversation every engineering manager should be having with themselves and their team.

SK: Totally. I can’t tell you how many times we talk to companies where, literally, the math ignores the orders of magnitude. They’ll say, “We’re spending a million dollars a year on AWS, or with Snowflake, or with Databricks. It’s so much money.” And the follow-up question I always ask is, “Well, how big’s your team?” “We have about 20, 30 engineers working on these systems.” It’s just not hard to do the back-of-the-envelope math and realize we’re talking 5x, 10x the cost in headcount compared to the infrastructure. We see this a lot, especially with engineers, because we’re trained for performance and efficiency: “Oh, I can squeeze a little bit more out of this system and save that $5,000 a month.” Which is a lot of money, and that’s important. It’s fantastic, and you should save it. But what’s the cost of doing so? Not just the cost of your time long-term, but the opportunity cost. What’s the cost of complexity? What’s the downstream cost to your ability to adapt and change? And oftentimes, because these come out of different budgets, a lot of orgs are allocated headcount as opposed to pure financial budgets.
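The back-of-the-envelope math Sean describes can be sketched in a few lines. The figures below (a $1M annual infrastructure bill, 25 engineers, $250K fully loaded cost per engineer) are illustrative assumptions, not numbers from the episode:

```python
# Hypothetical figures for the headcount-vs-infrastructure comparison above.
infra_spend = 1_000_000              # annual cloud/warehouse bill, in dollars
engineers = 25                       # engineers working on these systems
loaded_cost_per_engineer = 250_000   # assumed fully loaded annual cost each

headcount_spend = engineers * loaded_cost_per_engineer
ratio = headcount_spend / infra_spend

print(f"Headcount spend:      ${headcount_spend:,}/yr")      # $6,250,000/yr
print(f"Infrastructure spend: ${infra_spend:,}/yr")          # $1,000,000/yr
print(f"Headcount is {ratio:.2f}x the infrastructure bill")  # 6.25x
```

Even with these made-up inputs, the order-of-magnitude gap is the point: shaving $5,000 a month off the infrastructure line moves far less money than a modest productivity change on the headcount side.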

LD: Right.

SK: People are thinking more about “How do I reduce my infrastructure cost?” versus “What if I could take the most valuable, and more importantly, the most expensive, assets I have, my data team, and actually 2x, 3x, 4x their productivity? What is that worth?” Ask the CFO. I’m pretty sure the CFO would give you a very succinct and clear answer. And I think that’s where teams often end up in accidental ransomware: they’re micro-optimizing, and at the cost of that, they’re hamstringing and dampening the output and productivity of their most expensive and highest-leverage resources.

LD: Yeah. I mean, you mentioned that survey that we did last year, which, plug, we’re doing again this year, so keep an eye out. And I was working on a piece of content yesterday, so I have the numbers right in front of me. 96% of data teams are at or over capacity. In the same survey, only about 20% of organizations still had any issues with data scale. The rest of them said, “Solved issue for me. Don’t have a problem with that.” To some degree, it makes sense, because scale has been such a long-standing problem for teams, for engineering teams and developer teams, for so, so, so long that it is just…

LD: It is basically their knee-jerk reaction to be worried about that. To your point, they look at a budget and they’re looking at OPEX, or CAPEX, excuse me: they’re not necessarily looking at the operating expense of their people. They’re looking at the capital expense of how much their servers are costing, or their AWS bill, or their whatever bill. But what they’re not looking at is the next item down the list, which is that 74% of those same respondents said their need for data products is growing at a faster pace than their team sizes.

LD: So, to the point that we’ve been making this whole time, you have to find a way to make those engineers more productive. Get rid of the work that doesn’t matter. Stop worrying about the problem that’s already solved. Stop being held back by accidental ransomware, and let engineers be engineers, let them do the work that matters so they can solve these problems. If 96% of teams are already under water, and 74% said their need for data products is growing faster than their team size, but we’re so worried about the cost of our servers, that doesn’t spell good news for engineering teams. They’re gonna be even more under water, and you’re going to lose them anyway.

SK: Yeah, I agree. In this climate of talent shortage, which I keep reading about on TechCrunch and Business Insider and so on, I do think we need to be investing. Oftentimes teams do this wrong: they go, “Oh, just let my team work on that piece we’re passionate about, ’cause that will help satiate them.” But it doesn’t really fix the root cause, which is, oftentimes, that they’re overworked and spending too much of their time on crappy stuff.

SK: We want to create things that have impact on the world, and that’s actually one of the rewards and fulfillments we get: knowing that it matters. As technology ecosystems evolve, things we used to do are no longer special or differentiated, and if we’re trapped in that space, yeah, it’s kind of fun to tinker around and build random things here and there, but in reality we want to do things that actually have impact and matter. And to do that, we have to get out of the things we were doing in the quarters and years prior and get to the new wave of things.

SK: And so when we think about how to properly invest in teams, happiness is actually aligned with productivity: freeing your teams from being held hostage by previous architectures and previous designs, embracing the fact that you need systems and designs that can change rapidly, and continuing to pull yourselves out of the muck that used to be innovative a year or two ago so you keep getting into the new things that really matter.

LD: Okay. So if you could… If there’s somebody out there that’s listening, which I’m sure, I’m sure there’s somebody out there listening, but if there’s somebody out there that’s listening, that is like, “Oh, this is me they’re speaking, they’re looking into my soul, and they’re speaking directly to me,” what is the one piece of advice that you would give to say, “You can do this, you can get out of this.” Give them a pep talk, Sean.

SK: It is very, very possible. The one piece of advice, and honestly this goes whether you’re a hands-on data engineer, a data analyst, or an engineering director: incrementalism, in this case, is a good thing. Break the cycle of re-architecture and re-design. They are too long and too slow of cycles. Every time we encounter teams doing a re-architecture, it just reeks of 1990s, waterfall-style software development: slow, glacial. The world is changing, and the data world has historically been a very waterfall-esque model. Data platforms for the last decade-plus have all been these multi-year build exercises.

SK: I think if you propose a re-architecture to the rest of your team, it is more likely now than ever before to be dead on arrival. Most teams understand that and will patently reject big, massive re-architectures. So my one piece of advice is: find a way to start fast and alleviate short-term pains. It may be a new component, it may be a new platform you want to introduce, but don’t run what we call a horizontal strategy of re-platforming from the bottom up. Figure out a way to run things in parallel and incrementally migrate parts of your technology over as they produce pain, and go for the things that create the most pain first, the things that consume the most of your time.

SK: ‘Cause if something is consuming 20%, 30% of your time in maintenance, or consuming 20%, 30% of your time because somebody else could probably accomplish it but can’t use the same tools you have, and you can find a way to incrementally introduce something new to offload that biggest time burden, you’ll start to get those hours back. Then you can start to make bigger bets and go after more and more things. So be ruthless about that time, and figure out how to deliver incremental value in weeks, not months or quarters, and push yourself to work in a much more iterative way.

LD: I like it. I can appreciate it. Hopefully others can as well. I’m sure they will. Alright, well, this is likely not the last time we’ll talk about this topic, ’cause, again, we do actually like talking about it quite a bit. So thank you, Sean.

SK: You’re welcome.

LD: Appreciate it.

SK: My pleasure. Thank you Leslie.

LD: Well, there you have it, folks. As you could probably tell, this is a topic that we at Ascend care about quite a bit, and engineering productivity is always, always, always at the very, very, very top of our mind. So if you’d like to learn more, you can always visit us at Ascend.io or reach out to us on Twitter or LinkedIn; you can find those links at Ascend.io. Welcome to a new era of data engineering.