DBS Singapore’s handling of outage offers no clear explanation

Problem with access control servers.

On 24 November, DBS Singapore suffered an outage, which the bank attributed to a problem with its access control servers, leaving many customers unable to log in.

Shee Tse Koon, DBS Singapore country head, said that the bank had been working round the clock, together with its “third-party engineering providers, to fix the problem and services were restored at 2am”.

“Unfortunately this morning, the same problem recurred and while the situation is less severe than yesterday, we know that many of you are still unable to get access,” he said. “We acknowledge the gravity of the situation and as we work to resolve matters, we seek your patience and understanding.”

However, Kevin Reed, CISO at Acronis, wrote that the whole situation was a “disaster”. In a LinkedIn post, Reed said that DBS’s online banking had been fully or partially unavailable for about two days and that the bank didn’t “offer any explanation on why did it happen”, except for a video non-explainer by Shee Tse Koon, DBS Singapore country head.

“They also mentioned, the issue lies with ‘access control service’ responsible for logging on, and they are working with ‘third party engineering providers’ to address it,” he wrote. “To me, it sounds odd, that such a core system, apparently handling all authentication for the online bank, is managed by a third party provider. If true, this looks like DBS not having enough in-house tech expertise to manage their tech stack. This is a sign of systemic security risk within DBS. If they indeed lack the needed tech skills, it is not surprising, it took them so long to resolve the issue.”

Reed noted that the last times he had been involved in a “customer-facing issue” were the Lazada Indonesia problem with displaying images during one of its 11/11 events, and Yandex going down due to a BGP misconfiguration.

At the time, the Lazada issue was caused by Akamai not routing upstream requests as expected, and it was fixed within about five hours.

“Yandex, which had a complete internal network meltdown, went back online in about 6 hours. In a similar recent Facebook issue, they recovered in six hours as well. Mind it, this is a planetary-scale network, not a single service. This was possible because teams were on top of problems and knew their stuff. This does not seem to be the case with DBS,” Reed wrote.

“Besides the length of the downtime and apparent lack of technical expertise, DBS (lack of) communication deserve a special mention. We will still have to see, if the bank will provide a technical postmortem, a practice that is now widely accepted, but the communication during the crisis was close to nonexistent. There was no regular updates, no ETAs, no status page, nothing. Clearly, their BCP plan does not have a crisis communication chapter.”
