Was Delta’s Global Outage Last Month Caused By A Cyber Attack?

Early last month Delta suffered an outage that crippled its operations, as all of its computer systems went down globally. There was some initial miscommunication about what caused the meltdown. First the airline claimed it was due to a power outage, though their electric company quickly confirmed that there were no reported outages.

Then Delta claimed that a switchgear malfunctioned, and that the real issue that caused the catastrophe was that the backup systems didn’t kick in as they were supposed to.


I’ll be the first to admit that I’m not very good with technology (I even struggle to use my MacBook Air, at times), but it looks like there’s another theory as to what may have happened, which some claim is a near sure bet. Per observer.com, here’s why some believe this was more than an outage or a malfunctioning switchgear:

Generally speaking, huge companies that rely on their computers have backup as well as multiple alternative electrical sources to make certain that something like a power outage does not happen to them. A hack, however, is much harder to fix.  Even the Delta information boards were not showing old information that was stored in the cache which is supposed to go into default mode in the event of a malfunction or a reset.

It should have been obvious from the get go that this was more than a power shortage that needed a reset. No global company, Delta Airlines included, maintains all of their servers and routers in a single place.  And wherever they are, they are located deep beneath the ground. And each of these locations has several independent backup electricity systems in the event of a blackout.

In addition, because of that ubiquitous storage system called “the cloud” everything should be immediately or almost immediately accessible through other access points.

What do they think happened?

A more likely scenario: malware was inserted into Delta computers months ago. Then, on command, the spyware shut down Delta’s computers and blocked emergency protocols from automatically kicking in to protect the company.  Without a safety plan in action, there was no way for Delta to function. They could not even do something as simple as hand write boarding passes because they could not confirm seats.

Of course, a Delta spokesperson says that this definitely wasn’t a cyber attack. That could very well be true, or it could be that they’re trying to cover it up, since a hack would be quite embarrassing.

So, people who are better at technology than I am: is this a crazy conspiracy theory, or is there a very real chance that Delta was in fact hacked?

(Tip of the hat to Point Me to the Plane)


  1. I’ll let others who aren’t on mobile devices post longer responses. But..

    Yes, it most certainly could’ve been a cyber attack. However, what’s quoted isn’t really all that accurate.

    Yes, there are backup systems and sites, but even with a company as big as DL it’s not all completely seamless, automatic, or even instant.

    In short, I’d actually bet on a cyber attack of some kind, but not as described in that article.

  2. I know the reason and can’t say much due to an NDA, but I can tell you without a doubt it was not a cyber attack.

  3. Cyber attack conspiracy theory written by non-tech people…
    It’s true that most large corporations have multiple data centers, but they are certainly not “deep beneath the ground” like in sci-fi movies. Most corporate networks sit behind firewalls, and having spyware execute commands on exactly the right server inside an isolated network is far more complicated than it sounds. Large corporate infrastructure also tends to be a diverse IT environment, meaning multiple OSes (RHEL/Ubuntu/Debian/CentOS/Solaris/BSD) at different versions would be in play. Typically, from what I’ve seen, this type of outage happens when there’s an issue with internal DNS. Different corporations do DNS differently, but a corrupted zone file that gets pushed to all sites can instantly bring down the whole company. And because local DNS caches expire at different times, the effect may not be seen right away, not until DNS at every site fails to respond properly. Diagnosing such an issue typically takes quite some time, as you rule out the other potential causes (code, network, data center), until it’s too late. Several major .coms have had similar outages, and believe me, their site infrastructure is far more advanced and complex than Delta’s.

  4. I doubt it was an external attack. More likely, IT did something dumb or a bug wiped out a core system – or both! (I know a consumer products company that was kept from shipping for a week due to a known bug wiping their system and backups).

    My reasoning:
    1 – If an external party had it out for Delta, they would have done far more damage than what could be corrected in a day.
    2 – In IT, it is never the sexy answer.

    A day for backups to be put in place seems about right. The Observer’s claim that “the cloud” should have made data immediately accessible is baseless – Delta’s not exactly backing up photos and mp3s. Security-focused companies aren’t yet switching core business processes to the cloud. HR, yes. Booking information? I doubt it.

    I’m sure a large number of your audience works with enterprise IT. Looking forward to seeing their thoughts.

  5. Many people are saying this, just something I heard. I don’t know if it’s true.

    Promoting whackjob conspiracy nutjobs is beyond irresponsible. Get a grip.

  6. What RK said is accurate. But it could have also been a routing error that propagated to all the Delta routers. Years ago that happened at AT&T and brought their entire network to its knees.

    That outage caused major headaches for organizations that used AT&T for their data circuits.

  7. I look for the simplest thing that would explain the outage. Poorly configured or poorly tested backup systems would be enough to explain it.

    I wouldn’t bother with conspiracy theories.


  8. Propagating conspiracy theories built around “facts” that demonstrate the author has no understanding whatsoever of large-scale IT infrastructure or operations. Wonderful.

    Seriously – The Cloud is not something magic, and no airline is going to run its operations using public cloud services. Nor are they going to have data centers hosted underground. Maybe 0.05% of total data center space is underground, in caves, or in abandoned missile silos. Achieving seamless and automatic failover of systems is extraordinarily difficult and requires dozens of systems/processes to work properly when they’re called upon. I’ve personally witnessed total power failures in supposedly triple-redundant data center facilities.

    Could it have been a cyber attack? Sure. But the simpler answer is they had a design flaw in one or more systems that caused a series of cascading failures.

    Please try to be more responsible than to spread misinformation on topics that you admittedly know nothing about.

  9. As an actual IT executive, I’ll weigh in here. Yes, data centers often have redundant power coming into their facilities. Yes, they have backup power: a system of batteries that kicks in until a diesel-powered generator can come online in a few seconds. But these systems sometimes fail. I once had my data center go dark because we lost the physical connection between the data center and primary power, which was also the route for connectivity to backup power. We found out the hard way that we had a single point of failure in our architecture. I recently saw a data center at another company go down hard when a switchgear exploded. Hacking wasn’t involved in either case. A fire (which was reported in this case) in the right piece of equipment could cause this scenario. No hacking required. Could there have been a hack? Sure. But in spite of what you see on TV, hackers don’t overload physical systems and cause fires. So it’s pretty darn unlikely. Unless, of course, you think Delta is lying about it all. Go ahead if you want to, but there’s enough bad luck and stupid design in the technology world that you don’t actually need a conspiracy or cover-up to have problems.

  10. Whatever the reason, all I know is we got two first class award tickets for 19,500 miles each from PHX-PVR during this time. It was pricing at 30,000 OW the day before and then all of a sudden went way down, so we grabbed the seats and the booking was honored.

  11. I most likely agree with @Kimberly. As a database professional, I would bet that this was self-inflicted. They probably pushed an update that rolled across and affected the integrity of the data.
    “The cloud should have made things available” is a ridiculous statement … it’s purely a cached data store for reporting and for applications needing read-only access, refreshed on some type of interval.
    They could have also had a single point of failure somewhere that reared its ugly head – a switch, something without redundancy or not configured correctly – or a maintenance window that had to be extended due to a physical failure.

    But my money is on a self-inflicted update.

  12. In my long experience (not managing a data center, but 35-ish years as a user of data centers), the “Letterman-style” top 10 list is topped by these:

    3) Failed software upgrade or configuration change
    2) Squirrels and Raccoons
    1) Backhoes

    Number 3 is why there are “change freezes” during critical periods (like the end-of-quarter sales crunch and the time around producing the annual report).

    Item 2 is supposed to be avoided by backups and redundancy. But until you’ve installed UPS systems that wait for the power to drop to ground before kicking on (yeah, that happened back in the ’80s), or you’ve had the backup generator snap its crankshaft when the starter kicked on (yeah, that happened as well), or your redundant power feed to your site turns out to go through a common switchyard, you don’t realize how fragile those plans can be.

    Let’s just say a squirrel in an underground conduit chewing on 7.2kv cables does not ever end well (and the water in the conduit exploded out into the panels taking out all of the other feeds to the site.) Neither does a raccoon in an underground vault finding a transformer to be a nice warm place to sleep/die.

    Number 1 … well, backhoes are the internet’s enemy #1. They have an insatiable appetite for fiber. They must. eat. fiber.
