Why Facebook Went Down – And How to Prevent the Same Happening to Your Business
Earlier this month, Facebook went down, taking Instagram and WhatsApp down with it. While hilarity ensued on Twitter and people made jokes about returning to the Dark Ages and finally getting youths off their social media, the reality was much more significant than many people realised. Let’s take a look at the cost of this event, why this outage happened, and how it could’ve been prevented.
In the space of just over six hours offline, Facebook lost an estimated US$100 million in ad revenue based on current earnings. The share price fell almost 5% as shareholders sold stock in panic, and Mark Zuckerberg’s personal fortune slipped to 5th position on the global rich list.
Of course, it’s not always easy to feel sorry for billionaires losing a portion of their fortune, but there are other victims to consider. Because these platforms are driven by ad revenue, the real losers were the more than 10 million brands and businesses that use them to connect with consumers. According to CNBC, creators and small businesses lost anywhere from a few hundred US dollars to over US$5,000 during the outage.
Businesses couldn’t make sales over the platforms or communicate with customers; for many of these companies, operations effectively ground to a halt for the duration – and no one had any idea how long the outage would last.
So, what caused the outage? After all, this is one of the biggest, wealthiest, and most powerful tech companies in the world, so it must have been something sophisticated, dangerous, and exceptional. Wrong. It was a simple maintenance error and networking issue.
What happened, according to Facebook, is that during routine maintenance the company’s engineers issued a command that accidentally cut the data centres off from the rest of the internet, taking offline the backbone that connects the data in your app with everyone else. The same command also took down the internal tools and access that engineers needed to assess, investigate, and reconnect the data centres. Because remote engineers relied on that same network, staff were locked out just as thoroughly as you and me, which meant someone had to travel to the sites and repair the issue in person. And because the data centres are exceptionally well secured, it took a long time for engineers even to get onsite and reach the servers to fix the problem.
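To make the idea concrete, here is a minimal sketch of the kind of pre-flight “blast radius” check that is meant to stop a command like this before it is applied. All names here are hypothetical – this is an illustration of the principle, not Facebook’s actual tooling (Facebook has said its own audit tool existed but failed to catch the command due to a bug):

```python
# Illustrative sketch (hypothetical names): a pre-flight check that refuses
# any maintenance command which would disconnect every backbone link at once.

def links_remaining(active_links, links_to_disable):
    """Return the backbone links that would stay up after the change."""
    disabled = set(links_to_disable)
    return [link for link in active_links if link not in disabled]

def validate_change(active_links, links_to_disable, min_links=1):
    """Reject any change that leaves fewer than min_links links online."""
    remaining = links_remaining(active_links, links_to_disable)
    if len(remaining) < min_links:
        raise RuntimeError(
            f"Change rejected: only {len(remaining)} backbone link(s) would remain"
        )
    return remaining

# A change that would take down every link should be blocked before it runs.
active = ["dc1-uplink-a", "dc1-uplink-b", "dc2-uplink-a"]
try:
    validate_change(active, active)  # attempting to disable all links
except RuntimeError as err:
    print(err)  # the guard fires instead of the network going dark
```

The point isn’t the code itself but the design principle: a destructive command should be simulated and sanity-checked before it touches production, and the check itself has to be tested as rigorously as the systems it protects.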
No tech is infallible, and the Facebook example shows just how easily a single point of failure can have massive repercussions, even when the intent behind the design is solid. The solution lies in testing for these scenarios and continually building and adapting responses so situations like this don’t arise again. For everyday businesses, it’s a chance to ask your IT provider about taking a more proactive approach to network and data centre management, learning from the missteps of others to better protect your own business.
At Otto, we pride ourselves on our human touch. Along with the most advanced tech solutions, security, and support, we offer our clients a strong personal relationship, an understanding of their business, and a commitment to keep our tech simple. Chat to us today about how we can assist your business through innovative IT solutions for humans.