On June 8, an event affected service to all customers between approximately 13:00 and 14:00 UTC. An immediate review of our monitoring and systems data suggested that the network between our systems and our database services suddenly stopped functioning. Our systems repeatedly tried to reestablish communication with these resources without immediate success, and became operational again after approximately one hour.
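For context, reconnection attempts of the kind described above are typically implemented as a retry loop with exponential backoff and jitter, so that many clients do not hammer a recovering service in lockstep. The sketch below is illustrative only; the function name, parameters, and the use of `ConnectionError` are assumptions for the example, not our production configuration.

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.5, max_delay=30.0):
    """Retry a flaky network operation with exponential backoff and jitter.

    `operation` is any zero-argument callable that raises ConnectionError
    (or a similar transient error) on failure. All names and values here
    are illustrative assumptions, not a description of our actual systems.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff capped at max_delay, with full jitter
            # to spread reconnection attempts across clients.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(random.uniform(0, delay))
```

During an outage like this one, such a loop fails repeatedly until the upstream network recovers, which matches the behavior we observed: repeated reconnection attempts, then normal operation resuming on its own.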
Further review of the available data indicated that communications between our systems and other Amazon Web Services (AWS) cloud services were not functioning reliably during this time. This presented as sporadic errors and temporary communication failures between key systems. Because these network systems are outside our domain of control, we do not have access to the detailed data needed to confirm our conclusions with absolute certainty. Our systems were operating normally until network communication was suddenly lost.
We subsequently reported this to AWS Support, which advised that there was elevated API latency around the time of the event. While this aligns with our evidence, AWS Support could not immediately confirm that this was the specific issue we experienced. We have requested additional information.
Because this event was external to our systems and outside our control, no action on our part could have predicted or prevented it.
External network or Internet issues can and will affect access to our systems; however, such events are rare and are usually resolved quickly.