[00:00:05] Jay: We get up, go into the office and we start getting some calls. [The] helpdesk started fielding phone calls, where [it was] blue screen of death on people’s machines.
[00:00:15] Jonathan Crowe: Hello, and welcome. Please come in. Join me. I’m Jonathan Crowe, Director of Community at NinjaOne and this is IT Horror Stories. Brought to you by NinjaOne, the leader in automated endpoint management.
Introduction
[00:00:31] Jonathan: Hey everyone, welcome to IT Horror Stories. I’m your host, Jonathan Crowe, Director of Community at NinjaOne, and I have with me Jay Abbott.
[00:00:38] So, Jay, you are NinjaOne’s very own IT Director. Tell us a little bit more about yourself and what you do at Ninja.
[00:00:46] Jay: Yeah, I’m the Director of IT at Ninja. I’ve been here for about three years now. I’ve been in IT for a very long time, since before the first dot-com bust back in the early 2000s. I’ve worked in everything from pharmaceuticals and higher education to satellite communications, oil and gas, and now software. So I’ve seen my fair share of horror stories.
[00:01:11] Jonathan: There are things that you cannot unsee. Well, thank you so much for being brave and coming in to share your story. Before we dive into it, let’s set the scene a little bit.
Drilling into Jay’s Background
[00:01:22] Jay: The year is 2010, right? So, you know, 14 years ago. April of 2010, in fact, and I was freshly a supervisor slash manager – depending on who you talked to at the company at the time – at an oil and gas company. We were a midsize upstream exploration and production company with about 2,500 employees all over the world. And, you know, one of the fun things about oil and gas is that they typically don’t put it in nice places.
[00:01:50] You’re generally offshore, in rough areas, in countries that are difficult to get to. So it’s a lot of fun. It’s very interesting. There are a lot of challenges with that.
[00:02:03] Jonathan: Before you dive into it – you’re describing the oil and gas industry and how it’s unique in a lot of ways. Were there any things, when you were applying to this job in IT, that they kind of drilled in on – sorry for the pun – that made this different from other IT positions? Were there things they were calling out ahead of time that were going to be different from other jobs you may have had?
[00:02:27] Jay: In oil and gas, like in pharmaceuticals or in defense, the scale of the amount of money that you’re talking about is truly something that is hard to wrap your head around. Like how much money you lose on a daily basis for being down, how much an outage costs, how much a deep-water drilling program costs.
[00:02:47] A friend of mine used to say, "These numbers don’t make sense. Give it to me in something I can understand. How many tacos will that buy me?" And I had to explain to him, I was like, no, Fred, you’re missing the idea. It’s not how many tacos will this buy you, it’s how many Taco Bells will this buy you, right?
[00:03:05] And the answer is all of them, like literally all of them. We’re talking, the numbers are just, they’re completely crazy when you compare it to most other industries. An ultra deep-water drill program might cost $180 million. And you have no return on that. It’s spent money and you found nothing.
[00:03:25] So it’s, the numbers just boggle your mind when you talk about a million dollars a day for something that you’re working on. A satellite connection costing $120,000 a month for two megabytes of bandwidth. It was just that idea and getting your head wrapped around that is really, really different than a lot of other places.
[00:03:46] Jonathan: Is that something that they made clear when you were going in to interview for this job? And what was the position that you were going in for anyway?
[00:03:54] Jay: So I originally started working as the Senior Linux Systems Administrator. I found that job, it sounded interesting and I was there for a very long time.
[00:04:03] Jonathan: Let’s get into the day of the incident. How long had you been there?
[00:04:07] Jay: I’d been there just under four years.
[00:04:13] Jonathan: So you’d been there for a while. You’d had your chance to get your feet underneath you. You know how things work. You’re not brand new to the job. You’re feeling pretty comfortable?
[00:04:24] Jay: Yeah. Yeah. Oh, yeah, absolutely. So I had, I had moved from senior Linux sysadmin to the manager of the infrastructure team and we were responsible for – at the time, we didn’t have anybody who was responsible for endpoints specifically, so my team just kind of took that on because we owned the backend infrastructure that ran all of that.
[00:04:42] So, all the licensing servers, all of the centralized management servers for the client tools, right? This is before Ninja. This is before the cloud was a thing. We didn’t call it the cloud then. It was just the internet, and nobody hosted servers there, right? You had a co-location facility or you had your own data center. There wasn’t really a third option.
[00:05:00] Jonathan: And how big was the company? Sorry, this is my last background question, but, how big was the company? How big was your team?
[00:05:06] Jay: So, my team was four people – five total, including me. The company was 2,500 people. My team did not include the help desk or application support team. In total, the IT department was 18 or 19 people, and we had 2,500 employees and about $14 billion in revenue.
[00:05:25] Jonathan: Do you think IT was treated, as a valuable resource there?
[00:05:30] Jay: At the time, no. We were just starting this push into the digital oil field and digital transformation – into making IT work for the business instead of the other way around. Up to that point, IT happened to people.
[00:05:44] We didn’t do things for people. So we’d been making all these big efforts to modernize things and make it easier for the employees to do their business – easier for them to travel, to run new software, to pilot new technologies. And that was part of why we were, at the time, very well regarded. We had a ridiculously high success rate in our exploration programs.
Jay’s BSOD Horror Story
[00:06:10] Jonathan: And so you go into this, this fateful day, undergoing this transition of, digital transformation, IT getting more involved with the business operations, kind of selling the value.
[00:06:23] Jay: We were working very hard on proving the business value that IT can bring and the capabilities that we can bring to bear. And then the morning – we get up, go into the office and we start getting some calls. Helpdesk started fielding phone calls where blue screen of death [popped up] on people’s machines. First one or two. Then a few more, then a few more, because a corrupted or a bad virus definition file was deployed by our centralized management server to every device in our fleet. So that’s 2,500 people plus all of the extra, you know, secondary devices that run conference rooms, things of that nature.
[00:07:08] And every one of those was a Windows machine – at the time, Windows XP. And they all started blue screening, one after the other. And we had no idea what was going on. Nobody had any idea what was going on. This started at around 8 a.m. Central Time. The definition file had been released just before 6, hit all of our distribution servers, and was pushed out globally to everybody. So we started by declaring an emergency.
[00:07:40] "Hey, there’s a big problem. We don’t know what’s causing it. Nobody knows what’s causing it." We finally get a couple of devices to look at, and we see that some critical Windows system files have been flagged as viruses and quarantined. Which makes the computer no worky. So it’s like, "Okay, big problem.
[00:07:58] But now we see that the problem is coming from the antivirus software. Let’s solve that." Let’s stop it from pushing out anything new, and let’s figure out how we manually touch all these systems and downgrade them. Can we do it remotely? No, because they’re blue screened. Okay. So we had to build a process around how we could touch a device, get logged in, get the old DAT file reinstated, and get the machine back up and running so people could do their work – because we were now impacting 100 percent of the company. Except for the people in the Far East, who were already in bed. China was the only place that wasn’t really impacted.
[00:08:40] Jonathan: And at this point, how much time has passed? You go in, you show up for what is, as far as you know, just another normal workday. Then it hits you, seemingly pretty immediately, that no, it is not a normal day. When did the full impact of this really dawn on you and your team?
[00:08:56] Jay: We started getting the calls just after eight, and by about 9 a.m. we were seeing a very substantial number of people in the Central time zone having significant problems. By that point, the people on Mountain time were starting to come online – our second-largest area in the U.S. was outside of Denver – and they started having problems. And then Europe and the Middle East started having problems. They took longer only because we had slower links, so it took longer to push the virus definitions out to all their computers.
[00:09:33] Jonathan: When you’re getting these calls, are they initially just confusion? Is it just a normal kind of support ticket? Is there urgency and anger involved at all?
[00:09:41] Jay: So it depends on who was calling. [For] the people in the offices, mainly it was just, "Hey, my computer blue screened, this is a problem." Oh, okay, cool. We’ll submit the ticket, we get it, we start working through it, start triaging. The people who were involved in operations on a drill ship, or on a drilling platform, or on a production platform, or who were running active drill campaigns – they had much more urgent issues, right? Because you’re talking about safety equipment. You’re talking about monitoring equipment – while you’re actively drilling, you need to be constantly monitoring what’s happening. And some of the people doing that active monitoring are offshore.
[00:10:20] They’re connected over satellite links – very high-latency, low-speed connections. So getting stuff fixed was hard. I mean, even getting information from them was very challenging.
Dead in the water
[00:10:33] Jonathan: And so at this point, everything is dead in the water.
[00:10:38] Jay: Yeah, largely. Some servers are still running, which was great because the servers, we did not use the same antivirus on our servers that we use on our endpoints.
[00:10:46] Jonathan: At this scale, I imagine – okay, yeah, you can troubleshoot a few tickets that come in. [But] when it’s everyone in the company, it must be a completely different situation.
[00:10:55] Jay: So not only was it a completely different situation, but it really highlighted that nobody plans for this scale of a problem. When you do disaster recovery or business continuity planning, you say, okay, we’ve lost a facility, we’ve lost a data center. Here, all of our data was available, all of our servers were available – but every single thing we used to access them was dead in the water.
[00:11:19] Nobody ever plans on that. So it highlighted a problem in our process – and in our whole way of thinking about building DR and BC plans. It was like, "Oh, wow, this is really a problem." So once we were able to identify the issue, we were able to work out a checklist for fixing it. Then we had to figure out, how do we deploy this?
[00:11:41] How do we deploy it at scale? And it turns out there’s not a good way to do it at scale when your machines are blue screening, because you can’t get them back up. The only way to get them up is to boot them in safe mode. Safe mode doesn’t have networking, so you can’t even have people boot into safe mode and then remotely fix the problem…
[00:12:00] You have to go and touch each machine, which is what we ended up doing. We called all hands on deck – everybody that was in IT at all, didn’t matter what role they had: some sysadmins, some database admins. Didn’t matter. Here is a printout with a step-by-step list. Here is a flash drive with what you need.
[00:12:20] Go and start working on computers. Here is the checklist of places you need to go first. And then we would even, we even co-opted some of the people who we knew who were more technically minded in the business units. It was like, if you want to come over and help out, we’re here for it.
[00:12:36] We had to schedule people to go offshore. We had to schedule people to go to remote offices – and we had a lot of remote offices, onshore and offshore, both in the U.S. and around the world.
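That kind of fix usually boils down to a short, mechanical routine a tech runs from the flash drive after booting into safe mode. Here is a rough, purely illustrative sketch of that logic in Python – the paths, file names, and quarantine layout are all hypothetical, and in 2010 this would more likely have been a batch file – but it shows the shape of the step-by-step list Jay describes: roll the definitions back, put back the quarantined system files, and log the machine as done.

```python
# Purely illustrative sketch -- every path and file name here is hypothetical,
# and in 2010 this would more likely have been a batch file on the flash drive.
import platform
import shutil
from pathlib import Path

USB = Path("E:/fix")                                  # the flash drive the techs carried
DEFS_DIR = Path("C:/Program Files/AV/Definitions")    # hypothetical AV definition folder
SYSTEM32 = Path("C:/WINDOWS/system32")                # where the quarantined files belong

def restore_machine() -> None:
    # 1. Roll the definitions back to the last known-good DAT file.
    shutil.copy2(USB / "known_good.dat", DEFS_DIR / "current.dat")

    # 2. Put back clean copies of the critical system files the AV quarantined.
    for clean_file in (USB / "system_files").iterdir():
        shutil.copy2(clean_file, SYSTEM32 / clean_file.name)

    # 3. Record the machine so it can be checked off on the whiteboard later.
    with open(USB / "fixed_machines.txt", "a") as log:
        log.write(platform.node() + "\n")

if __name__ == "__main__":
    restore_machine()
    print("Done -- reboot out of safe mode and verify the machine comes up clean.")
```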
Prioritizing and actioning at scale
[00:12:49] Jonathan: What did that look like in terms of prioritization? You’ve got everyone in the company, was there a clear path of we need to get these folks up and running first? How did that work?
[00:13:00] Jay: Yeah, so once we figured out what the problem was, how widespread it was going to be, and how intensive it was going to be to work on, we made a few phone calls. I called our Senior Vice President of International: "Hey, who’s actively drilling right now? Who’s actively producing right now? Who do we need to go touch first?"
[00:13:18] Then we called onshore: "Who’s actively working? Who do we need to prioritize?" The executives were all very cool. They were like, "We are not the people you need to be prioritizing." These are the sites that are live right now. Here are the sites that we need up and running.
[00:13:34] Here are the sites that absolutely need the first touch. So we triaged those and got people shipping out – getting on helicopters, or boats, or in cars. And once we stemmed the immediate bleeding, then it was more of a, okay, now we start from the top and work our way down.
[00:13:55] Jonathan: Gotcha. And so in this case, I mean, you mentioned the priorities and going back to the amount of money in play. You’ve got these drilling operations that are going on that now can’t do anything, they can’t operate, right?
[00:14:08] Jay: Our engineers always had the plan of falling back to paper. They were not convinced of these newfangled computers in 2010. They were always prepared to go back to pen and paper if they needed to – you know, manual well logs. So it wasn’t like we were completely down, but some of our other systems were absolutely down and could not be down. So we definitely had to prioritize fixing those first.
[00:14:36] Jonathan: You and your team had to diagnose a problem under high pressure – a problem that you quickly realized was at scale and very urgent. You had to troubleshoot it and understand it from a technical perspective. But at the same time, you’re also having to deal with people, right? You’re having to get management involved. You’re having to get a plan together, delegate, and work with your team. What did that look like from your perspective? How were you juggling all those different priorities in your head?
[00:15:10] Jay: The good news was that our leadership really understood – we were able to make them understand very early and very quickly – that this was not something we could have prevented. Right? We couldn’t not distribute new virus definition files, because that leaves us open to all of the viruses and problems that come with that.
[00:15:34] It was a policy signed off by our board of directors that we deploy virus definitions immediately. So there was no way we could have avoided this. They were thrilled that we had the foresight to run different systems on our servers versus our endpoints.
[00:15:50] That was great – we got a lot of kudos for that in the root cause analysis afterwards. And they were really great: they made sure they were available to help push on anybody who wouldn’t prioritize this, or who would argue – you know, the few people insisting, "This is absolutely critical.
[00:16:09] It’s more critical than you think it is." And it’s like, you’re not helping the problem. You’re just part of the problem. You’re making the problem worse. So you need to be out of our way. And thankfully our leadership was really good about standing there with us on that. In terms of keeping it all straight: a giant whiteboard in the biggest conference room we had. Everybody would just come back when they finished their section.
[00:16:30] They would come back, check off what they’d done on the whiteboard, grab another piece of paper, and put their initials on the whiteboard next to the floors and areas they were working. And we had a couple of the executive assistants who would just sit on the phone with the other sites; when people there finished, they’d call it in, and the assistants would mark the whiteboard off: this site, this floor is done; this site, this floor is done.
[00:16:54] And honestly, it was just a lot of human labor. Human capital, just getting that working.
[00:17:03] Jonathan: And so what you’re describing – it sounds like this was mission control. You had the largest conference room. Everyone’s getting together. You’re dispatching people out to various locations. You’re getting updates. Tell us a little bit more about, first of all, how you were feeling in this situation. Was there an immediate kind of panic – wishing that you had called in that day? Was there a cold sweat? Or once you started moving, did it kind of go away, and you were just focused on solving one problem at a time?
“A good plan violently executed today is better than a perfect plan executed next week.”
[00:17:30] Jay: I mean, there’s always, of course, you’re always like, I should have been home. I should have, like, taken vacation. I should have been on a cruise where no one could have gotten a hold of me. But in fact, I actually had a meeting with our senior executives that day. So I was wearing a suit. I was, like, all ready to go.
[00:17:47] And that is not a place you want to be when the spotlight just immediately pivots and throws that heat on you – wearing a three-piece suit, getting ready for something else. So I’m like, okay, well, this is how it goes. But in IT, there are so many fires so often. You don’t get into IT if you don’t like pressure. That’s just kind of how it is. Once you start moving – the hardest part is getting moving, right? Taking that first step, because you always want more information than you have. When we were looking at it, it’s like, okay, well, is this a virus problem? Is it a Microsoft problem? Is it a corruption problem?
[00:18:26] Like, where is it? And you just want to keep looking for something else – the magic fix, the magic elixir that will just let you press a button and everything fixes itself. But that doesn’t exist, right? You can’t let perfect be the enemy of good, and, to quote General Patton in a sort of way, "a good plan violently executed today is better than a perfect plan executed next week."
[00:18:52] So, we got our good plan, and we violently executed it.
[00:18:57] Jonathan: And so how long did this plan take to roll out? I mean you’ve diagnosed the problem. You know what has to be done. People are acting on it now. How long is everyone operating?
[00:19:09] Jay: From the time we actually started sending people out with flash drives and pieces of paper to the time the last machine was done was about 13 hours. But the majority of the company was fixed within six hours. There were just some stragglers, because there are places it takes a real long time to get out to – you know, Hays, Kansas. There’s nothing close to that, it turns out.
[00:19:34] Jonathan: Was there anything you guys were doing to keep morale up – to keep people focused, but also to encourage them during this all-hands-on-deck kind of crisis?
[00:19:45] Jay: So we started making it kind of a game. We would send some people out to buy a bunch of sodas, a bunch of snacks, treats, whatnot, and whenever people would get back to the conference room, it was like, okay, you did this whole floor, so you get something from pile A and pile B. Or, you only did half of your list, so you only get something from pile A. So we tried to make it fun. And when we were all done, we brought in a ton of pizza and beer and stuff to say thank you to everybody for the absolute Herculean effort of running around and touching every device that we had.
[00:20:19] We just tried to make it as fun as it could be. Everyone understood it was a serious problem, right? But you have to be able to laugh at it. You’ve got to be able to make that joke. It may seem to some other people like we’re not taking it seriously, but if we didn’t, we would be so…
[00:20:37] So wound up and so stressed out that we would probably end up snapping, and it just wouldn’t be pretty.
[00:20:43] Jonathan: Absolutely. I mean, I imagine at some point adrenaline does kick in, and even though it’s a terrible issue that you certainly wouldn’t have wished on anyone – or yourself – there’s got to be something to it when you’re putting out those fires and you’re all coming together and executing like that. That’s got to get the blood pumping, I guess.
[00:21:05] Jay: It does. And it is great when you see everybody going, and you’re moving, and people are cycling in and out, getting new lists, checking things off. It’s like a well-oiled machine. It’s like clockwork. It’s really nice to see. And the guys see it too, right? They all see each other coming in and, you know, "Oh, I just did 50 machines."
[00:21:24] "Oh yeah, I did 60." And they start getting into it. You also know you don’t have time to complain about it. You know, I was not thrilled crawling under desks in a suit. That was not super fun for me. But it is what it is. We all came together, we all did it, and we all felt accomplished.
[00:21:45] And the executives recognized it – we got a really nice shout-out at the next board meeting for how well we handled that problem.
[00:21:53] Jay: By the way, we were not the only company that got hit with this. Almost everybody who used this particular antivirus software, and who wasn’t staging it and doing delayed rollouts, had a very similar problem. And within our peer group, ours was the best response to that situation.
[00:22:15] Jonathan: It must be a great thing to show to leadership: hey, we’re not alone here – everyone’s dealing with this, and the way we dealt with it was really the best we could do under the circumstances. How were you able to confirm that you were actually resolving things – that taking the USB drive and rebooting the machines was fixing everything?
[00:22:36] Did you have any kind of feeling of like, okay, we have to be cautious before we declare our mission accomplished here?
[00:22:43] Jay: Honestly, by the time we had the first, say, 100 machines fixed, we were not really concerned about it anymore. It was obvious that nothing else was going wrong. It wasn’t spinning back out. We had already paused the file distribution. So after the first 50 or 100, we were like, okay, this is not a problem.
[00:23:03] It’s a problem in that it’s a large scale, but it’s not a complex fix. It’s simply a time-consuming fix, because you’ve got to touch absolutely everything.
Mission accomplished
[00:23:13] Jonathan: Have we arrived at a happy ending here? Was it all happily ever after? Were there any things that were lingering, doubts, concerns, risks?
[00:23:21] Jay: We were told in no uncertain terms that we don’t get to use this software anymore. We had to rip it out of our entire environment, and the CIO said this is just not a thing we will ever use again. So, you know, happy ending? Successful ending, I’ll say that. Successful ending. We measured our losses
[00:23:38] in only the hundreds of thousands of dollars – not, like some of our peers, in the millions and tens of millions of dollars. For a completely catastrophic, every-device-is-broken event, to lose in the hundreds of thousands and not even in the millions – I consider that an absolute win.
[00:23:58] Jonathan: Do you recall any adjustments you all made to your incident response plans, or your preparation, or anything like that because of this incident?
[00:24:07] Jay: Yeah, absolutely. We staged a lot more. We kept a lot more spare computers warehoused in-house, so that in the event of a problem we would have devices we could roll out to those critical stations without needing to wait. Once we identified that something bad had been pushed out automatically,
[00:24:24] it was: here’s a new device that has your software on it that you can run. So our production operations would no longer be impacted by that. That was one thing. The second thing was that we made the choice to stage our virus definition rollout. We had a group of users who would get it immediately when it was released by the vendor. Then it would wait for at least three hours before we rolled it out to another tranche of users, and then another tranche. Previously, we had
[00:24:58] just rolled it out to everybody – it just pushed. And that was actually a problem with the particular antivirus software we were using at the time: it didn’t allow that granular a rollout. It was, you roll it out to everybody all at once, or you roll it out to nobody until you release it. So switching antivirus allowed us to gain that additional flexibility. We also changed our incident response plan completely to allow for the scenario where our data center is fine, our data is fine – we simply have no way to access that data.
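As a rough illustration of that staged rollout, here is a minimal sketch in Python. The group names, hold times, and release time are made-up examples rather than anyone’s actual configuration; the point is simply that a small canary group gets the update first and each later tranche waits behind the one before it.

```python
from datetime import datetime, timedelta

# Hypothetical tranches: a small canary group gets new definitions immediately,
# and each later tranche waits a few hours behind the previous one.
TRANCHES = [
    ("canary (IT + volunteers)", timedelta(hours=0)),
    ("tranche 1 (one office)",   timedelta(hours=3)),
    ("tranche 2 (rest of U.S.)", timedelta(hours=6)),
    ("tranche 3 (global)",       timedelta(hours=9)),
]

def rollout_schedule(released_at: datetime) -> list[tuple[str, datetime]]:
    """Return when each group should receive a definition file released at `released_at`."""
    return [(name, released_at + delay) for name, delay in TRANCHES]

if __name__ == "__main__":
    # Example: a definition file the vendor releases at 6:00 a.m.
    for group, when in rollout_schedule(datetime(2010, 4, 1, 6, 0)):
        print(f"{group:26} -> {when:%H:%M}")
```

The practical benefit is exactly what Jay describes: a bad definition file only ever hits the canary group before anyone notices and pauses the later tranches.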
[00:25:31] Jonathan: So here we are, you now have a battle tested team. You have a more resilient incident response plan. The leadership’s giving you kudos. You must have had, no other further issues after this day, forward, right?
[00:25:46] Jay: No further issues with that. But it’s IT – there was another fire the next day. I mean, that fire was much smaller, and for an entire year, nobody complained about an outage again. People weren’t happy if we had an outage, but everybody was like, this is nothing compared to that last outage. And it’s like, yeah, exactly.
[00:26:04] It’s like the glass is always half full, right?
Lessons learned
[00:26:07] Jonathan: Yup. Yup. It puts things in perspective for sure. So Jay, thanks for sharing that particular story. I want to move on to another segment where we look at where you are today. Looking back on that, was there anything that really stuck with you, that you still find yourself thinking about in your approach to your IT position now?
[00:26:27] Jay: So the biggest thing that sticks with me actually does stem from that. Which is: when you’re doing incident response and disaster recovery plans, try to think about things whose odds of happening are very low – or that aren’t even something you would normally entertain – and always look at your plans with an eye toward those issues.
[00:26:54] You know, people rarely think about – and I hate to even bring this up – but before 9/11, nobody thought about the idea that an entire building would be gone. Nobody did. In disaster recovery, nobody. And I worked in data centers and telecommunications at the time. None of us did. We were like, oh no, that data center will always be there.
[00:27:16] One hundred percent – we may lose something inside of it, but the data center will be there. And when the data center wasn’t there anymore, the only people who did okay were the ones who had considered the idea that maybe that data center might disappear one day. So there’s always something to learn, and there’s always something you didn’t think about, no matter how many smart people you have in a room. You know, they [say] the best-laid plans of mice and men often go astray.
[00:27:43] Always look at it from whatever angle – whatever weird angle – you can come at it from.
[00:27:48] Jonathan: You know, it’s not just about testing recovering from one backup – it’s recovering from all of them at the same time.
[00:27:52] Jay: Exactly. And those are some of the things we’re seeing right now – this resurgence of data centers, of companies owning data centers instead of using cloud for things like backup – because if you ever have to restore a lot of data at once, your bandwidth limitation is your internet access.
[00:28:10] Versus your on-premises LAN, which is typically going to be an order of magnitude faster than whatever your internet access is. It’s interesting how everything old is new again, but…
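To put rough numbers on that order-of-magnitude point, here is a small back-of-the-envelope calculation. All of the figures are illustrative assumptions, not from the episode: 10 TB of backups, a 1 Gbps internet link versus a 10 Gbps LAN, and about 70 percent sustained throughput on either link.

```python
# Back-of-the-envelope restore times: 10 TB of backups over a 1 Gbps internet
# link versus a 10 Gbps on-premises LAN. All numbers are illustrative assumptions.

def restore_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    bits = data_tb * 8 * 1000**4               # terabytes -> bits (decimal units)
    usable_bps = link_gbps * 1e9 * efficiency  # realistic sustained throughput
    return bits / usable_bps / 3600

if __name__ == "__main__":
    print(f"Internet (1 Gbps): {restore_hours(10, 1):.1f} hours")   # ~31.7 hours
    print(f"LAN (10 Gbps):     {restore_hours(10, 10):.1f} hours")  # ~3.2 hours
```

Under those assumptions, a full restore drops from more than a day over the internet to a few hours on the local network – the gap Jay is pointing at.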
What’s in your IT survival kit?
[00:28:21] Jonathan: Absolutely. There are cycles to everything for sure. Well, now that you’re here at Ninja and you have your own team, are there any things you guys focus on that are designed to help your survival rate when it comes to IT horror stories? We like to say there are rules to surviving an IT horror story,
[00:28:39] just like there are rules to surviving horror movies, right? The cliché things – the things you have to do to prepare, and the things to avoid. What is in your team’s IT survival kit? Are there crucial things?
[00:28:50] Jay: Absolutely. So the first thing is obviously Ninja[One], of course. We use Ninja very heavily – we rely on it every day. It is how we deploy every device we have at the company. It’s how we deploy software to every user. It’s how we make sure the fleet is still running well. We have Ninja set up along with our MDM solution to give us the ability to pull a laptop out of a box, hand it to somebody, and they can log in.
[00:29:21] And all of their data, all of their applications, everything will be available, and we don’t have to do anything. Which means in the event of a massive problem – like another DAT file that crushes people’s laptops – we don’t have to try and fix Windows. We can just reinstall Windows, you re-login, and you’re fine.
[00:29:38] Everything is still there. You’ve lost nothing. Right? That’s huge – it changes the scope of what problems you even need to think about, because I don’t need you to be in an office where I can restore your local backup. I don’t need you to have a specific computer.
[00:29:56] I can do this on any device. Anywhere. Oh, you’re in Bora? Well, it’s going to take you a while to download all of the stuff from Bora because their internet access isn’t so good, but you can still do it. That’s a huge thing for us. Secondarily, we’re a cloud first company because our software is cloud based.
[00:30:12] We do as many things as we can cloud-based. You could probably do 80 percent of your job with just a web browser, which means you could do it on an iPad, on an Android phone, on Windows, Mac, Linux – it doesn’t matter. So we’re planning for those types of scenarios where we’re not reliant on a local Ninja office.
[00:30:37] We’re not reliant on, you know, remote connectivity into a specific location. And we have tools designed to allow us to manage things at scale. Even if we’re going to deploy, you know, 10,000 new devices. That’s fine. We’ve designed around that problem.
[00:30:55] Jonathan: Any parting words for IT leaders – maybe even IT leaders who are newer to their jobs – in terms of getting ready? Because undoubtedly they will face their own IT horror story at some point. Any advice for them?
Closing: “It’s a matter of when, not if.”
[00:31:14] Jay: It is a matter of when, not if, right? We will all have one. Some of them will be our fault too, right? In fact, a lot of them are going to be our fault, if I’m really honest. But sometimes it’s completely out of your hands. Communicate with your users. Your users are a lot nicer than you want to give them credit for, as long as they understand what the problem is and what you have to do to go about fixing it.
[00:31:43] That’s a big one. Don’t let indecision paralyze you. At some point you have to move forward. You’ve got to start working on the problem in order to get it resolved. So look at your incident response plans. Make sure you understand them. Make sure you have copies of them that aren’t just on a device that may get stolen, lost, blown up, whatever.
[00:32:00] Make sure you can always access your incident response plan from wherever you are. Those would be my big recommendations for you.
[00:32:08] Jonathan: Well, Jay, thank you so much. Thank you for sharing – and reliving – your IT horror story with us here, and thank you so much for that effort, and for everything that you and your team do for us at NinjaOne.
[00:32:20] Jay: Absolutely. Thank you guys. Appreciate it.
[00:32:22] Jonathan Crowe: Thanks for listening to this week’s episode of IT Horror Stories. For even more information and resources on how you can beat IT misery and transform your IT, check out www.NinjaOne.com. Or pop by our IT Leadership Lab community at TheITLeadershipLab.com. There you can connect with other IT leaders, talk shop, and get access to the latest guides, templates, and documents from other experts in the space.
[00:32:46] Remember, whatever your IT horror story, just know you don’t have to go it alone. That’s all for this week. We’ll be back soon with more. Thanks for listening!