IT Horror Stories

The scariest stories in IT.

About This Episode

In this episode, CIO of Avery Dennison Corp., Nick Colisto, joins host Jonathan Crowe to discuss a horror story Nick faced as a first-time CIO. The two explore how Nick and his team worked tirelessly to ensure the business stayed afloat throughout a natural disaster, lessons learned along the way, and why strong vendor relationships are key to ensuring business continuity in the worst of times.

Host

Jonathan Crowe

Director of Community, NinjaOne

Guest

Nick Colisto

SVP and CIO, Avery Dennison

About Nick Colisto

Nick Colisto is the senior vice president and chief information officer for Avery Dennison Corporation. He joined Avery Dennison in March 2018. In his role, Nick is responsible for driving and executing an enterprise IT strategy for the company, including shepherding the overall company strategy with respect to information technology trends, driving efficiencies across the organization, improving the delivery of IT services and products to the business, and building on the company’s existing culture of operational excellence.

Audio Transcript

[00:00:00] Nick: And the storm caused a power outage that lasted for more than a week. Initially, we were able to run on generator power. But as the outage dragged on, Jonathan, you can imagine, you know, we faced a shortage in fuel.

[00:00:18] Jonathan Crowe: Hello, and welcome. Please come in. Join me. I’m Jonathan Crowe, Director of Community at NinjaOne and this is IT Horror Stories. Brought to you by NinjaOne, the leader in automated endpoint management.

Introduction

[00:00:33] Jonathan: Welcome to IT Horror Stories. I’m your host, Jonathan Crowe, Director of Community at NinjaOne, and I’m sitting down with Nick Colisto, Senior Vice President and CIO at Avery Dennison, to talk about some of the IT Horror Stories that he has experienced in his career. Nick, welcome to the show.

[00:00:50] Nick: Thank you, Jonathan. Thank you for having me.

[00:00:51] Jonathan: Before we get into things, the nitty-gritty details, we always like to learn a little bit more about our protagonists so the audience can care about you, and really cringe when they hear the terrible things that you’ve gone through. So what should the audience know about you?

[00:01:07] Nick: Since 2018, I have had the privilege of serving as the CIO of Avery Dennison Corporation. We are a $9 billion Fortune 500 company that specializes in industrial manufacturing. In my role, I lead a diverse team across the globe, focusing on using digital technology to drive profitable growth and productivity.

[00:01:27] Before joining Avery Dennison, I held IT leadership roles across the industrial goods, consumer products, pharmaceuticals, construction, and high-tech industries. My career spans over 30 years, much of it spent directing digital business transformation in large, global organizations as well as startups, through lots of different business cycles like high growth, downturns, spinoffs, and acquisitions. I’m the author of The CIO Playbook, and I’m actually working on my second book now. I also recently joined NinjaOne’s CxO Advisory Board, and I’m just thrilled to be part of the winning team.

[00:02:04] Jonathan: Well, we are thrilled for you to be on that team too, and for you to be joining us here. You’ve written books, you’re sharing your stories warts and all, and helping people learn from them, so we really appreciate that. With all that in mind, let’s get into it.

[00:02:18] Tell us about your IT horror story that you’re going to be sharing. Set the scene for us.

Nick’s Hurricane Horror Story

[00:02:23] Nick: The year was 2012. I was a first-time CIO at a large national organization that operated across the U.S. The company’s headquarters was located on the East Coast, and it also housed our primary data center, which was a critical hub for processing company operations nationwide. It was late October, and Hurricane Sandy was forecasted to make landfall. We had a disaster recovery plan, but the storm’s unprecedented scale left a lot of uncertainty about how well our systems and our team would hold up.

[00:03:01] Jonathan: So when this was coming, there’s preparation. As you mentioned, you had your plan laid out. You mentioned there being uncertainty there. What did that look like or what did that feel like in the buildup to this?

[00:03:11] Nick: So during the buildup, we were watching the Weather Channel quite a bit. In IT, many of our employees were based at the headquarters facility.

[00:03:20] We had that sort of centralization of IT, so we were very nervous about it. We had a lot of early-career professionals as well who had never experienced something like this before. We were just very concerned about what this might mean. And then, as you know, it turned out to be quite the disaster.

[00:03:37] Jonathan: Absolutely. I mean, so many people were affected by it. Tell us a little bit more about your specific role there. What stage of your career were you at when this happened?

[00:03:46] Nick: I was four years into my first CIO role, at a company called Hovnanian Enterprises. It was one of the largest residential homebuilders in the U.S., headquartered in New Jersey. Hovnanian Enterprises was, and still is, a national homebuilder, designing, building, and selling homes across the country, and as CIO, I was responsible for ensuring that IT-enabled business continuity was in place.

[00:04:12] My team consisted of about 150 professionals, all highly committed, but also relatively new to facing a crisis of this magnitude. As I mentioned, many were early-career professionals, very talented individuals, but this was sort of a new thing for all of us, really.

[00:04:30] Heading into the event, the mindset was cautiously optimistic. We had a disaster recovery plan in place. We had a secondary site as well. We definitely anticipated that Hurricane Sandy would be severe, but the extent of the damage was much more challenging, much more eventful than anything we’d prepared for. We were about to face one of the most challenging events in the company’s history.

[00:04:55] Jonathan: So let’s talk a little bit about the stakes. There are the stakes for the company, and there are also the stakes for you personally. You’re in your first CIO role. You’ve been there for a few years now, so you’ve got your feet underneath you. But like you mentioned, this is your charge.

[00:05:08] You are responsible for ensuring uptime during scenarios like this. What would downtime have looked like, and what would it have meant for the company?

[00:05:18] Nick: The stakes were high. I had joined the company as the VP of Strategic Systems years before and was promoted into the CIO role. This was the big role for me, and I was still learning how to be a successful CIO at that time. So yeah, I felt a lot of pressure that way, and I was just really concerned about the ramifications of this hurricane.

[00:05:38] Jonathan: So let’s talk about, as things shift over from that anticipation moment.

[00:05:43] Nick: Yeah.

[00:05:44] Jonathan: To, now Hurricane Sandy’s here.

[00:05:46] Nick: Wow. It’s here, right?

[00:05:48] Nick: Well, okay. Hurricane Sandy struck, and it hit our headquarters facility in Red Bank, New Jersey, pretty hard. Our headquarters was the home of our primary data center, where we processed critical business operations for the entire company. Everything got processed out of Red Bank, New Jersey.

“The storm caused a power outage that lasted for more than a week.”

[00:06:06] All of our systems that supported sales, home production, quality assurance, homeowner service, and finance got processed out of that building. And the storm caused a power outage that lasted for more than a week. Initially, we were able to run on generator power; we had a generator in the building. But as the outage dragged on, Jonathan, you can imagine, we faced a shortage of fuel.

[00:06:33] And on top of that, I personally lost power in my home. And my family relocated, my wife and two children and the dog all went up to New York to stay with relatives. So I stayed home without heat or hot water. And I wanted to keep an eye on the home, because we didn’t know if all of a sudden there would be looting going on or something.

[00:06:54] And I was just going back and forth to the office constantly. The executive team was understandably very concerned. Without our data center, the national business risked grinding to a halt. So my team quickly collected the latest backup tapes and began transporting them to our disaster recovery center in Philadelphia, which fortunately wasn’t hit as hard. The whole East Coast was just devastated.

[00:07:19] Jonathan: Let’s talk a little bit about the conditions there. A lot of us remember Sandy, right? And remember watching the news and seeing the footage there. You’ve got your job that you’re concerned about. You and your employees, your team, also have their personal situations. So tell us a little bit more about the conditions that were out there, and getting to and from the office and how was communication going during this time?

[00:07:41] Nick: Yeah, so communication was about zero. Think about it: heavy wind, heavy rain, the initial brunt of the hurricane. Tons of rain, tons of wind. Trees down everywhere, cars abandoned, power out. And it was cold, actually cold; it was late October, and it was unseasonably cold as well.

[00:08:02] We had to go buy food. You know, did you prepare well enough to get water and food for your home? I didn’t have contact with my team, so I was concerned about them and how they were faring. Maybe we should have had satellite phones and all these different things, now that you think about it, right?

[00:08:19] It was pretty scary, and I didn’t know if they had been harmed by the storm or had relocated to a different state. The uncertainty made it even worse.

Running low on diesel

[00:08:27] Jonathan: And so, when you were going to check in on the data center, it sounds like that was one of your first priorities, right? How were you assessing what was going on there? You mentioned running on diesel. Is this something that had been covered in your disaster recovery plan? How much diesel did you have, and how long were you prepared to keep the generator running?

[00:08:47] Nick: Yeah, I’m trying to remember exactly, and the technology has changed somewhat since then, but I think we had enough for about a week or so.

[00:08:53] So this was a brand new building; I think we had been in it for maybe a year or two, max. And we had the data center there. It wasn’t in the basement; it was on a raised floor, the second floor up, so it was protected that way. But we were also on the bank of the Navesink River, and now the river is flooding. We had this cul-de-sac outside, and the water is creeping into the cul-de-sac and making its way into our garage. We weren’t too concerned, because even if it got into the garage, the garage sloped down, so the water would go down into the lower levels of the garage. The chances of it affecting our data center were pretty low.

[00:09:24] We were more worried about the wind coming off the river, because there was nothing to block it, and it was a big glass building, right? So, yeah, it was definitely a scary moment, but we were confident the generator was running well. We could definitely take our tapes down to our secondary site, but to plug in the tapes and get them restored back to where we needed to be would have taken a couple of days. Being out for a couple of days wouldn’t have been great for the business, but that was the technology we had.

[00:09:55] Jonathan: I’m just imagining this brand new building, all brand new technology, and it’s all being run on a generator, a diesel generator. I don’t know if this generator is making the sound I’m imagining in my head or not…

[00:10:10] Nick: Oh, they’re loud. And I remember one moment where I was standing outside with the CFO. As I mentioned, the CEO was in Manhattan; that’s where he lived. The CFO and I were standing outside looking at this little round mechanical gauge that displayed the level of fuel left in the tank.

[00:10:26] We were several days into this now, and the gauge read a quarter of a tank. I wish I had a photo of that moment, because you could just see our faces, just white. We were really concerned at that point. And literally, you look behind you and you see the water coming up from the Navesink River. We thought we were done, right, and that we’d probably be out for a couple of days by the time we had the tapes restored, hoping that would go well, too.

[00:10:54] And don’t forget, these people are driving to Philadelphia, and they’re driving through situations that were dangerous, right? Because you had trees down, the highways were closed, you know, it was not an easy ride.

[00:11:05] Jonathan: And so, you see the diesel gauge going down, and you have the tapes on their way. So you’ve got that covered: if you really need to, you’ll be able to restore that way. But what do you do about the fuel situation at this point? Did you have any sense of when power was going to be restored, anything like that?

Staying focused and finding solutions

[00:11:27] Nick: No, no. I remember experiencing a mix of fear and urgency, but also determination. I knew we had to stay focused and find solutions. So we tackled the issue on two fronts. We started transporting the tapes to Philadelphia, to make sure that we could restore operations in that data center if necessary.

[00:11:47] Again, we didn’t have any kind of timetable. Homes were destroyed, buildings were destroyed, people lost their lives. This was an incredibly traumatic event. The second thing we did was, I think, a very unconventional approach.

[00:12:00] We were running out of diesel, and that was not an option, you know, so I personally drove to the offices of the local fuel company. I had to maneuver around all these fallen trees and all this debris from the homes, because you had shingles blown off the homes and vinyl siding all over the place.

[00:12:16] So you had all that going on. And I remember finding this office, and it was like something out of a home. It was the fuel company’s office, and this elderly woman came to the door, and she was shocked to see anybody actually at the door. And I essentially, Jonathan, begged her to prioritize us to refill our tanks. We were very fortunate that Hovnanian Enterprises was very much into the community. The founder of the company built children’s hospitals in New Jersey, and there was a lot of volunteerism by employees. So it was a pretty well-known brand in the area, probably the biggest company in Red Bank. I don’t know if that helped, Jonathan, but it probably did a little bit.

[00:12:57] Jonathan: Yeah. I mean, it certainly couldn’t have hurt. I can’t imagine that was in your training, or something you thought you’d be doing when you took on the role of CIO at this company.

[00:13:07] Jonathan: As you mentioned, you had that mindset of, okay, there’s a situation here, but we’re going to act, we’re going to find solutions. You took that on yourself, driving out to get the fuel, so it was obviously all hands on deck. Talk about your team a little bit and the communication with the other execs. We’re talking about days here of being on that kind of exhausting, always-on sprint. How were you all handling that?

[00:13:35] Nick: Yeah, again, there was very little communication, and there weren’t many people in the office. It was me, the CFO, a few others from finance, and a couple of my IT people, because we had to watch the data center. I think for the team, the experience really strengthened our resolve and our collaboration. I’ll just get to the outcome: refueling the tanks bought us time to get our tapes to the disaster recovery center in Philadelphia, and ultimately the power came back.

[00:14:03] It turned out to be fine from a business perspective; not fine for the area and the people whose lives were impacted, but it could have been a catastrophic disruption to our business. I think the storm really tested us. It also underscored the value of preparation, and of being very adaptable.

Lessons learned

[00:14:24] Jonathan: So the outcome from a continuity perspective: mission accomplished there. You stayed online. In terms of recovery, I’m sure there were other elements that had to get attention, but from that point on, did you have an official debriefing process, a look back on here’s what happened and here’s what we can do in terms of planning moving forward?

[00:14:44] Nick: Yeah, we had a lot of lessons learned from the event. I think the event taught us that IT resilience isn’t just about technology; it’s about people, about process, and about the partnerships and relationships that you have. You know, if we hadn’t treated that fuel company well, or hadn’t prepared by having our secondary site in place... It was a cold cutover. We didn’t pay for a hot, resilient site at the time. Since then, things have changed with the cloud, right? But at that time we were paying for a cold cutover, so we basically had to get machines and servers in place for the restoration.

[00:15:21] So we learned about that. When you’re experiencing a major incident and normal escalation with a vendor isn’t working, and we were trying to get in touch with the vendor in other ways, you’ve got to take the wheel, you know, and just go to the top.

[00:15:36] It’s something that I didn’t really recognize until that moment: how important your technology partners are in a crisis, and how you’ll need to depend on them. And if you can’t get what you need through the normal course of action, you need to go right to the top, either physically go to the site or, in some cases, make phone calls to the CEOs of vendors, right?

[00:15:59] And I’ve gotten that reputation now. We treat our technology partners really well, I think we do, and we have good relationships. But when we have major events going on around the world and I’m not getting what I need from our assigned partner, it’s my job to escalate up and go to the CEO of that company, and it works well. You don’t want to abuse that privilege, because if you overuse it, you start to destroy relationships, right?

[00:16:30] With your partners, but you have to be able to use that card at the appropriate time. And I hadn’t even thought about doing that at that time.

[00:16:36] Jonathan: So is it safe to say that you came out of this incident stronger, in the sense that you identified another way to evolve as an IT leader?

[00:16:45] Nick: Sure. We learned a lot of lessons. Plan beyond information technology: you have to think about the physical dependencies, like I was explaining. Fuel contracts, for example; you need to have fuel contracts and make sure they’re solid, right? You need to have alternate power sources, different ways of getting power to that site. Also, empower your team. Make sure your team has the decision rights to make decisions and take action, right?

[00:17:10] Solve problems that you probably didn’t anticipate; you’re always going to learn something new from these events. Be very transparent with communications. Even though communication was difficult, it wasn’t impossible. If we needed to talk to someone, some of the telecom providers were working and some were not, I didn’t mention that earlier, so we could get in touch with some people and not others, right? Usually you can talk to people during those events, though not always. Just be very transparent about what’s going on. Don’t try to sugarcoat it, because if people are informed, Jonathan, I think it really does help to reduce the panic.

[00:17:46] It keeps everyone aligned on what to do, and they help you do that. So we learned a lot from that experience.

[00:17:52] Jonathan: Based on these experiences, what’s in your IT horror story survival pack now? You can take this literally, you mentioned the sat phones before, or as more of a figurative question: what are the things you’ve now made sure you have in place to be able to really act when you need to?

[00:18:13] Nick: So first of all, cloud-based systems can of course reduce dependencies on physical data centers. Most companies have some history to them. Now, I’m not talking about net-new companies, the startups; they’re all in the cloud, right?

[00:18:25] But a lot of companies that have been around for 10, 20, 30, 40, 50 years are going to have some form of a data center. Maybe they’ve gotten away from it completely, but I would say for the majority, there’s some form of a data center, and in that case they probably have a hybrid cloud environment, right?

[00:18:40] So the idea is obviously to reduce that more and more: make sure the data center is solid and you’ve got good recovery plans in place, and then migrate more of your workloads to the cloud, right? That way you reduce your dependency on a physical data center that you have to manage. That’s one. When you do have a data center: redundant power, fuel contracts, multiple lines going into it. Make sure that you really spec out the needs you have for that center and that you’ve got the full resilience you need in case of an outage.

[00:19:08] You find situations where lines are cut all the time, so you want redundancy in those lines into the data center, and even into your office buildings and your factories, the same thing if you’re a manufacturing company. The other thing we took away from that is practicing disaster recovery drills, including standard communication templates.

[00:19:27] So you have all the templates ready to inform your employees, and also your customers and your suppliers, and you keep leadership really engaged in that. Those are some of the technical things we took away from that event, but also some of the processes we’ve had to evolve since then.

What else is keeping you up at night?

[00:19:47] Jonathan: This story, wow, what an amazing one, and really a story of its time, right, given the ways the industry has evolved since then, moving more to the cloud and everything.

[00:19:58] What are some of the things that are keeping you up at night now, or that you’re trying to prepare for?

[00:20:02] Nick: Probably the main area, and I think a lot of CIOs would say the same, is cybersecurity. I think that’s the key one. Ransomware attacks can be just as paralyzing as a natural disaster, if not more so. There are some things in common with the story I just described.

[00:20:17] One example is testing, right? Constantly doing security incident response tests, hiring white hat hackers to try to penetrate, and learning from that. You’re always learning from that, and you learn from real events as well. Make sure the leadership team is very engaged with you, even up to the board, so that they’re also part of that process. They get experience through these tabletop exercises, so that when and if it does occur, they’re prepared, along with you, to manage through it. So I think that’s what keeps me up at night. Of course, you have your other topics, like a bad enterprise application implementation that just goes awry, going live too early, and having to deal with the aftermath of that.

[00:21:03] That sort of thing, with lots of issues. But I think the main one, anything to do with infrastructure and security, is what keeps me up at night.

[00:21:12] Jonathan: It’s interesting hearing you talk about these things, and it’s very clear from the story you told: sure, there are the technical components of it, but as you pointed out, a lot of times equally if not more important are the people, the processes, and making sure you’re training people to be prepared for the unknown, right?

[00:21:31] Because who could have guessed that you would be forced to go knock on a door to ask someone to prioritize a fuel delivery? As much as you prepare, there’s always going to be some element of the unknown, right? So do you have any suggestions for other IT leaders out there, or anything you’ve implemented yourself when training others, to help raise that ability to adapt and think quickly on the spot?

“Building strong relationships can make all the difference when a disaster strikes.”

[00:21:58] Nick: It’s about investing in your team and, as I mentioned a few times, in the relationships that you have with your suppliers; I think that’s very important. So it’s about people, whether that’s the executive team or your suppliers, the global suppliers you have but also the local ones. Building strong relationships can make all the difference when a disaster strikes, right? That’s why I said earlier that when I escalate, it’s only when I need to. I don’t like to do it, but I will if I need to get the response I need, and I think that’s important.

[00:22:28] I think there are a lot of leaders out there who don’t want to pull that trigger, or think they can’t get in touch with the top leaders at technology providers. And, you know, LinkedIn makes it really quite easy to do that, by the way. So I think that’s one of the biggest lessons I learned.

[00:22:44] It came from that moment of, well, I’m going to run out of fuel. I could have given up. If I had given up at that point, we would have run out of fuel; there’s no doubt in my mind, because that truck was not coming. So I’m not the hero in this.

[00:22:59] The hero in this is the idea to go do that, not me personally. Anybody could have gone over there. It was the idea to say, wow, I need to escalate, I need to get to a place where I can get fuel delivered to this facility. So think about what happens when there’s a crisis, whether it’s a ransomware event or an infrastructure outage.

[00:23:22] You need to make sure that you’ve got the A team from your suppliers on it with you, right? Because you’re going to be working 24 by 7, and you want them there as well, and you want their A team. So I think the biggest piece of advice I can offer is to maintain really great relationships, so that when an event happens, you automatically get that type of service.

[00:23:43] And if not, your team is waiting, looking to you, to take that step.

[00:23:52] Jonathan: You need to have it all planned out. You need to have plans in place. At the same time, you need to be able to go off script when you need to. And that needs to be encouraged. You need to empower people to do that.

[00:24:01] Nick: Right. That’s great. Exactly.

Closing

[00:24:03] Jonathan: Nick, thank you so much for sharing your story, for revisiting and reliving that moment with us. Is there anything else you’d like to add as we sign off?

[00:24:13] Nick: Yeah, I just want to thank you for having me. Reflecting on this experience reminds me why resilience and adaptability are really at the heart of IT leadership. I hope this story inspires others to prepare for the unexpected horrors.

[00:24:29] Jonathan: Well, that’s all the time we have today, Nick. Thank you so much again for joining me on the IT Horror Stories podcast, and we very much appreciate your time. Listeners, we thank you as well, and we’ll see you for the next episode.
